More than one world leader set aside mounting evidence that a pandemic was about to threaten their people. Indeed, many people had been following Johns Hopkins University's coronavirus resource map well before some in power acknowledged its existence. As that map revealed, the data was spreading almost as fast as the virus itself.
Now, for these same world leaders to resuscitate the global economy before it spins so far off its axis that it can no longer be recovered, they must acknowledge and validate the very data they've contradicted, or whose very existence they've denied.
In March, during a time closer to the world we once knew, Imperial College London proposed an approach it called adaptive triggering: countries and states allow their economies to reopen in staggered, week-long windows until hospitals become stressed again. When the concept, with its sawtooth graph, was first sprung upon the world, it was greeted like an ugly, alien substance -- an anomaly amid a realm of reason and clarity. Adaptive triggering continuously compares the number of hospital beds occupied in a region against that region's capacity, sounding the alarm and sending everyone back inside when there's an overflow and people begin dying again.
Whether they know it or not, states, provinces, and countries have already commenced their own de facto experiments with adaptive triggering, as they grapple with the problem of how to provide some sustenance to an economy that was never intended to last this long in lockdown.
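The mechanism lends itself to a short sketch. The toy simulation below is not Imperial College's model; the growth factors, thresholds, and capacity figure are invented purely to show how the on/off trigger produces the sawtooth:

```python
# Toy illustration of "adaptive triggering": lockdown toggles on when
# hospital occupancy crosses an upper threshold, and off again once it
# falls below a lower one. All numbers here are invented.

def simulate(weeks=52, capacity=1000, on_at=0.9, off_at=0.3):
    occupied = 100.0          # hospital beds currently in use
    locked_down = False
    history = []
    for _ in range(weeks):
        # Invented dynamics: caseloads grow in open weeks, shrink in lockdown.
        growth = 0.7 if locked_down else 1.4
        occupied = min(occupied * growth, capacity)
        if not locked_down and occupied >= on_at * capacity:
            locked_down = True     # hospitals stressed: send everyone home
        elif locked_down and occupied <= off_at * capacity:
            locked_down = False    # pressure relieved: reopen
        history.append((round(occupied), locked_down))
    return history

history = simulate()
```

Plotting the first column of `history` yields the familiar sawtooth: occupancy climbs toward capacity, the trigger fires, occupancy falls, and the cycle repeats.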
Governors and mayors need a reliable forecast of the future -- anything to give them a foundation for making rational, often bold, policy decisions. They are simultaneously responsible for the lives of their citizens by way of public health, and their livelihoods by way of economic health. Progress for one compartment requires sacrifices from the other.
In the world of private enterprise -- which also seems to exist back in the world we once knew -- executives' business decisions were informed by math every day. Perhaps there's a certain logic in letting the most terrible decision an executive can make be informed by the best math available.
"Let me speak as a scientist: As a policymaker, I would be very careful about basing my decisions on a single model," remarked Prof. George Barbastathis of MIT's Dept. of Mechanical Engineering. "That includes, of course, our own model. The reason is that there's many factors that may be different in each country, and also the policy implications are very, very complex, and they cannot be captured by a model that is purely epidemiological like ours."
Prof. Barbastathis leads a group of MIT students whose reasoning ran like this: Neural networks spot patterns in functions that classical math cannot adequately explain. On the other hand, training an AI algorithm with a set of epidemiological curves would not enable it to reproduce a curve whose functional integrity could be trusted with people's lives. But if a neural net could substitute for the factors a typical epidemiological model cannot account for -- due to the complexity or unfathomability of the formula, or simply a lack of historical data -- then perhaps the curves an otherwise classical model does produce could more closely fit reality.
Epidemiology is a science that does not necessarily incorporate any clinical knowledge of disease or medicine. Indeed, the fascination with the whole subject (at least until we all became inundated with it daily) is that it may be able to describe the spread of a viral disease throughout a population using no relevant sciences other than statistics and analytics.
The basic concept is this: During an epidemic or pandemic, the members of a population may be divided among a small handful of compartments. In the most rudimentary models, there are three: the set S of people who are susceptible to the disease, the set I of those infected by it, and the set R of those who have recovered from it (which typically excludes those who have succumbed). These three variables make up the basic SIR compartmental model.
For analytical purposes, the three variables are initially populated by inputs (injections into a compartment, such as adding the product of an examined area's birth rate into S) and transfers (reapportioning a percentage of one compartment to another). One kind of interaction a compartment may have with another is reproduction, the most common example being the estimate of how many susceptible people one infected person will infect -- the number of "secondary cases."
In the simplest model, the basic reproductive rate is referred to as R0 (pronounced "R-naught," not to be confused with "are not"). What makes a virus "viral," to borrow a term from SEO, is this factor of multiplication. In a more complex, and thus more realistic, model, the effective reproductive rate R (not to be confused with the recovery set R) is the number of secondary cases on account of one infected person, when the susceptible population is only a fraction of the population as a whole. When the value of R is less than 1, each infected person produces fewer than one new case, and the virus' spread is diminishing; held below 1 for long enough, the outbreak dies out.
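The basic machinery can be sketched in a few lines. The parameter values below (beta, gamma, the population size) are illustrative assumptions, not fitted to any real outbreak; in this standard formulation, R0 is simply beta divided by gamma:

```python
# Minimal SIR model, integrated with simple forward-Euler daily steps.
# beta (contact rate x transmission probability) and gamma (recovery
# rate) are illustrative values only; R0 = beta / gamma.

def sir(beta=0.3, gamma=0.2, days=200, n=1_000_000, i0=100):
    s, i, r = n - i0, float(i0), 0.0
    peak_i = i
    for _ in range(days):
        new_infections = beta * s * i / n   # transfer S -> I
        new_recoveries = gamma * i          # transfer I -> R
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

r0 = 0.3 / 0.2              # basic reproductive rate: R0 = 1.5
s_end, i_end, r_end, peak = sir()
```

Note that the effective rate R equals R0 times the susceptible fraction s/n, so once s/n falls below 1/R0, R drops under 1 and the infected curve begins to shrink on its own.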
If all epidemics behaved the same way, then SIR would apply to every infectious disease in history. In practice, models for each new real-world epidemic require some degree of adaptation to be applicable to reality. In one instance, only a fraction of the infected population may themselves be infectious, in which case that fraction must be multiplied into the transfer -- or perhaps the I compartment should be subdivided into two or more parts, such as age groups. Perhaps a disease affects women and men differently, or old and young. Maybe immunity, once achieved, is lost over time. Modifications to epidemiological models are routine, so long as they can be described by a simple mathematical relationship: a fraction, a percentage, or a multiple.
Anyone who's ever tried to predict the contents of the right side of a graph given only the left side, or attempted to simulate stock market behavior using spreadsheet formulas, knows that linear regressions and best-fit curves cannot account for the peculiarities of any real-world system's behavior. For SARS-CoV-2, which has impacted us all both socially and sociologically, one adaptation used quite commonly is SEIR, where compartment E receives a transfer from S. It represents the fraction of the population who have been exposed to the virus but are not yet infectious -- those still incubating it. The longer a virus takes to incubate, the longer people linger in E before moving to I.
The effectiveness of social distancing policies has a direct effect on the eventual value of E. The tremendous difference between the early, almost apocalyptic estimates of global deaths from the novel coronavirus and the milder predictions presently indicated may come entirely from the introduction of E.
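The SEIR idea can be sketched as a small extension of the classical SIR equations. All parameter values here are illustrative assumptions; sigma stands for the reciprocal of the mean incubation period, so a slower-incubating virus means a smaller sigma and a longer stay in E:

```python
# SEIR: S -> E (exposed, incubating) -> I (infectious) -> R (recovered).
# sigma = 1 / mean incubation period. All values are illustrative, not
# fitted to SARS-CoV-2.

def seir(beta=0.3, sigma=0.2, gamma=0.2, days=300, n=1_000_000, i0=100):
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    for _ in range(days):
        exposures = beta * s * i / n    # S -> E: driven down by distancing
        onsets = sigma * e              # E -> I: incubation completes
        recoveries = gamma * i          # I -> R
        s -= exposures
        e += exposures - onsets
        i += onsets - recoveries
        r += recoveries
    return s, e, i, r
```

Social distancing acts on the S-to-E flow by reducing beta, which is why policy choices show up in this model as changes to the exposed compartment.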
When R is 1
It is the grand object of all theory to make these irreducible elements as simple and as few in number as possible, without having to renounce the adequate representation of any empirical content whatever.
-- Albert Einstein, On the Method of Theoretical Physics
There may be no more effective indicator of the staggering differences among the world's cultures -- and certainly among America's subcultures -- than the varying impact of E on their respective case counts and death counts. New Zealand, led by Prime Minister Jacinda Ardern, studied the SIR model very closely at the outset of its crisis. Its government responded by imposing perhaps the strictest lockdown policy of any country on the planet. By the final week of April, New Zealand had reduced its number of new cases per day to the single digits. Ms. Ardern then began formal discussions about gradually lifting some economic and social suspensions.
South Korea was applauded early on for its extremely timely response to the epidemic threat. This from a country that has spent every day since the middle of the 20th century under some degree of threat of barrage from its northern neighbor. The government of South Africa also adopted a strict lockdown policy from the outset, and according to reports, held its death toll as of April 30 down to 103. South Africa is currently building a sophisticated analytical model of its economy, in an effort to determine which business sectors have suffered most on account of the lockdown (thus far, construction, creative arts, mining, ocean management, and tourism lead that list). The government there is devising what it calls a risk-adjusted strategy, weighing the stress some sectors feel against the risk those sectors would pose to the rest of the population should their workers be allowed to resume.
It's difficult, though, to credit a SIR model with much of these successes. The case could be made that these countries' leaders acted impulsively, just with the proper impulse. What cannot be determined yet is whether their economies suffered catastrophic damage -- we won't know until the moment they attempt a jump-start. Here's where a SIR model, or something like it, becomes a critical necessity.
"We really can't safely loosen social distance, and lockdowns, until we've got really serious, extensive testing," stated Dr. Sherman Robinson, Senior Fellow at the Peterson Institute for International Economics. "That will make an enormous difference, once we really know the incidence. And you're going to need a lot of testing, and then social distancing until you can manage that."
Next, Dr. Robinson asserts, all countries will require some means of tracing. "You will still have cases, no question. Then presumably, you will have controlled it so that the R0 transmission rate is less than 1 -- the number of new cases arising from one new case. You get that under 1, that helps -- that's the whole idea of the lockdowns and the social distancing. Once you have that, then contact tracing is feasible. You can't contact-trace if it's just spreading like crazy."
In other words, any contact tracing methodology may be prone to error until after the R0 variable is brought under control. You can't know when that happens until you have a reliable epidemiological model. And if SIR is only a best-guess estimate, you may as well be basing your analytics on fractions with divide-by-zero errors.
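The arithmetic behind that threshold is simple enough to sketch. Assuming, purely for illustration, that each case produces exactly R secondary cases per generation:

```python
# Expected new cases per generation of transmission, given reproduction
# rate R: each case yields R secondary cases, so generation g contains
# initial * R**g cases. Purely illustrative arithmetic.

def cases_by_generation(r, generations=10, initial=100):
    return [initial * r ** g for g in range(generations + 1)]

growing = cases_by_generation(1.5)    # R > 1: caseload compounds upward
shrinking = cases_by_generation(0.8)  # R < 1: each generation is smaller
```

With R at 1.5, ten generations turn 100 cases into several thousand -- far beyond what any tracing team can follow. With R at 0.8, the same ten generations leave only a handful, which is exactly when contact tracing becomes feasible.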
"Our model was the first to use data from COVID-19 itself," Prof. Barbastathis told ZDNet, "in order to inform the model. Other models published before the beginning of April were using data from previous epidemics -- for example, SARS."
The MIT team found a way, the professor told us, to combine the functions of the neural network into the analytical process. Put another way, it's not a stand-alone neural net that produces values for E. In that sense, MIT's neural net is not a "black box" -- a secret stash of unknown math that produces highly probable results for mostly unknown reasons. Barbastathis maintains his team's model is 100% verifiable, by forcing the neural net to satisfy the transparency requirements of analytics.
According to a paper Barbastathis and graduate student Raj Dandekar released on April 6, the inspiration for this transparency method was a paper published just last January [PDF] by a global team of AI researchers, including MIT's Christopher Rackauckas. That paper introduced a technique they call Physics-Informed Neural Networks (PINN): a way of training a neural network under the constraint that its output must satisfy a governing differential equation, so that even a modest series of data behaves as though it were produced by one. As the Rackauckas team wrote:
While some areas of science have begun to generate the large amounts of data required to train deep learning models, notably bioinformatics, in many areas the expense of scientific experiments has prohibited the effectiveness of these ground breaking techniques. In these domains, such as aerospace engineering, quantitative systems pharmacology, and macroeconomics, mechanistic models which synthesize the knowledge of the scientific literature are still predominantly deployed due to the inaccuracy of deep learning techniques with small training datasets. While these types of low-parameter models are constrained to be predictive by utilizing prior structural knowledge conglomerated throughout the scientific literature, the data-driven approach of machine learning can be more flexible and allow for dropping simplifying assumptions.
Conventional epidemiology would have us produce a new adaptation of SIR for each new pandemic that comes down the pike. An augmented model using PINN, incorporating enough data pertinent to the conditions of the virus being studied, could conceivably be imprinted with enough of that virus' characteristics to adapt itself. On the one hand, such a model would not have to wait for an epidemiologist or mathematician to reconstruct it from the ground up before becoming an effective forecaster. On the other, it would have to wait for enough data to be compiled.
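The structural idea -- classical compartmental equations with one term supplied by a neural function -- can be sketched as follows. This is not the MIT team's actual model: the tiny network below is untrained, its weights are arbitrary placeholders, and the quarantine term q(t) merely stands in for the kind of data-driven component their paper fits to real case counts:

```python
import math

# Structural sketch of a neural-augmented SIR model: the classical
# equations stay intact, but one term -- a time-varying quarantine
# strength q(t) -- comes from a tiny neural function instead of a fixed
# constant. Weights are arbitrary placeholders; in the MIT approach
# this component would be trained against observed case data.

W1 = [0.5, -0.3]          # hypothetical first-layer weights
W2 = [0.8, 0.6]           # hypothetical output-layer weights

def q(t):
    """Tiny 1-2-1 network: quarantine strength in (0, 1) at day t."""
    hidden = [math.tanh(w * t / 100.0) for w in W1]
    raw = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-raw))   # squash to (0, 1)

def augmented_sir(beta=0.3, gamma=0.1, days=150, n=1_000_000, i0=100):
    s, i, r, t_q = n - i0, float(i0), 0.0, 0.0   # t_q: total quarantined
    for t in range(days):
        infections = beta * s * i / n
        recoveries = gamma * i
        quarantined = q(t) * i    # neural term removes infectious people
        s -= infections
        i += infections - recoveries - quarantined
        r += recoveries
        t_q += quarantined
    return s, i, r, t_q
```

Because q(t) is an explicit function embedded in otherwise classical equations, every number it contributes can be inspected -- the transparency property Barbastathis describes, in miniature.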
"For our model to train properly, we do need data of approximately 500 infections," noted Barbastathis. "At the very, very early stages of the outbreak, where perhaps there's a dozen or so, we wouldn't trust the neural network with the result of the transform numbers."
The charts above, taken from the MIT team's report, show the PINN model's success at estimating the curve values for Italy, at the point where its 500th infectious case was reported. In (a), the solid line represents the model's prediction; the dots are actual posted data. From these predictions, the model was able to derive, for (b), the relative positive impact of quarantine Q on the Italian population's health. Chart (c) shows the point at which the effective reproduction rate Rt reached the magic number 1.0. Italy started to beat the novel coronavirus 27 days after it imposed quarantine measures. The model predicted this, and that's what happened.
"Though no simpler"
If a neural-augmented model were to incorporate data from a broader array of categories, could it establish a reliable enough pattern with fewer than 500 cases? An astonishingly timely 2019 study [PDF] by a team of civil engineers, including Johns Hopkins University's Prof. Lauren Gardner (more on her momentarily), suggests it could. Using time-series data compiled during the 2013-14 Zika virus outbreak, the team found that a dynamic neural network (DNN) -- one that carefully adjusts the weights between neuron layers as processing proceeds, a common architecture since the last century -- was remarkably accurate at forecasting which of the world's countries were at high or low risk of infection after a set number of weeks, for any given period.
This chart, taken from the Gardner team's report, shows the relative accuracy of the DNN model's predictions of high- and low-risk countries, for 1, 2, 4, 8, and 12 weeks after the initial data was collected, in panels B, C, D, E, and F, respectively. Panel A represents the actual risk results for Zika at week 40. The model predicted high-risk levels with 94.34% accuracy for the one-week window, declining to 77.36% for the 12-week window.
If the name Lauren Gardner rings a bell, it's because she is the co-creator of the now-iconic Coronavirus Resource Map, which has tracked the spread of the virus worldwide since January.
The Gardner team's DNN, according to their report, was as effective as it was in (belatedly) predicting the impact of the Zika virus because of the additional factors it took into account for each region it forecast: incoming and outgoing travel volumes for its airports within a given period, gross domestic product (GDP) per capita, population density per square kilometer, and an aggregate rating of the region's climatological suitability for the virus. Sensibly enough, adding more categories to the data enhances the context of the results.
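As a rough illustration of how such features might feed a risk score, here is a minimal stand-in: a single logistic unit rather than a deep network, with entirely hypothetical weights and two invented regions. It shows only the shape of the computation, not the Gardner team's trained model:

```python
import math

# Stand-in for a feature-driven risk score, in the spirit of the
# Gardner team's inputs: travel volume, GDP per capita, population
# density, climatological suitability. Weights and regions are
# hypothetical; a real DNN would have multiple trained layers.

FEATURES = ["air_travel", "gdp_per_capita",
            "pop_density", "climate_suitability"]
WEIGHTS = [0.9, -0.4, 0.6, 1.1]    # hypothetical learned weights
BIAS = -0.5

def risk_score(region):
    """Logistic score in (0, 1); higher means higher predicted risk."""
    z = BIAS + sum(w * region[f] for w, f in zip(WEIGHTS, FEATURES))
    return 1.0 / (1.0 + math.exp(-z))

# Two invented regions, feature values normalized to the 0-1 range.
hub = {"air_travel": 0.9, "gdp_per_capita": 0.7,
       "pop_density": 0.8, "climate_suitability": 0.9}
remote = {"air_travel": 0.1, "gdp_per_capita": 0.3,
          "pop_density": 0.2, "climate_suitability": 0.4}
```

A dense, well-connected region scores higher than a sparse, remote one; thresholding such scores is what turns the model's output into the high-risk/low-risk country labels described above.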
Could that mean the Gardner team's DNN, which requires deeper investigation, would be more reliable than the Barbastathis team's PINN, which merely relies on a supplemental neural net to "inform" the transfer between compartments? Perhaps South Africa has an idea here: Its government proposes a 9-factor model for rating each region's relative risk of virus transmission, applied against a 29-tier breakdown of the nation's primary industries: some 261 total subcompartments.
Or is the MIT model accurate enough already, to the point where adding more and more constituent variables (how about obesity rates, or lung cancer incidence, or months since the last flu shot?) would be wasted effort?
In his response, Barbastathis cited Albert Einstein, whose 1933 lecture at Oxford University attempted to reconcile the emerging quantum physics model with common sense. Condensed into a form digestible by SEO, Einstein said theory should be made as simple as possible, though no simpler.
"In science, one always tries to keep the models simple enough that one can derive meaningful conclusions, and not be overwhelmed by an extreme level of complexity," remarked the MIT professor. "But at the same time, one does not want to over-simplify the situation, because then, the important details would start to get missed. It's really difficult to make that call, especially when you are operating in real-time with a very pressing problem of public health and security. A policymaker, in particular, should never rely on one single model. They really need to take account of different ones, like [DNN]."
Perhaps two neural network-enhanced models, exploring the same issue from separate angles, may at some future date in history provide irrefutable evidence of impending danger. Future dates in history are nice to talk about, especially to remind ourselves that they will, at one point, exist. For now, we know 2 is greater than 1. But there was a time so many, many weeks ago when 1 was a great deal greater than 0.
The theme for this edition of Scale uses a photograph of The Great Wave Off Kanagawa by Katsushika Hokusai, from the Metropolitan Museum of Art, in the public domain.