Common themes from complex systems research can give us better tools to think about the pandemic and help make more informed decisions.
This post was written by Jon Klein, co-founder at Verifiablee.
I have a Master’s degree in Complex Adaptive Systems. And when I tell people that, there’s typically a short pause, followed by the obvious question: “What are complex adaptive systems?” So I normally launch into a long-winded spiel about biological systems or social interactions or turtles on lily pads or traffic jams, and usually this helps people kind-of-sort-of understand what complex systems are all about.
Over the past 3 months though, explaining complex adaptive systems has become a lot easier: you know that global pandemic that we’re all living and breathing, that has changed every aspect of our daily lives? Yeah — that one. That is a complex adaptive system.
Simply stated, a complex system is a system made up of lots of interacting pieces, in which looking at the behaviors of individuals does not readily explain the behavior of the system as a whole. Pandemics certainly fit the bill, and some of the most lucid thinking on managing this pandemic (and others before it) has come from people and institutions studying Complex Systems. As the public debate on how and when to “reopen” unfolds, an understanding of complex systems reveals some themes that can help us make better decisions.
Pandemics as Complex Systems
One of the defining features of complex systems is that they often give rise to emergent behaviors: unexpected phenomena that emerge out of simple interactions. In the “Game of Life”, for example, simple interactions between individual “cells” give rise to surprising higher level patterns like islands of stability, oscillating patterns, and even patterns that travel through space and time. Similarly, all sorts of emergent phenomena can arise when looking at a pandemic.
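The Game of Life makes a nice minimal illustration in code. Here’s a sketch in Python using the standard rules (a live cell survives with two or three live neighbors; a dead cell with exactly three neighbors comes alive):

```python
from collections import Counter

# Conway's Game of Life: each cell lives or dies based only on its eight
# neighbors, yet stable islands, oscillators, and gliders emerge.

def step(live_cells):
    """Advance one generation. live_cells is a set of (x, y) tuples."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}  # vertical
assert step(step(blinker)) == blinker             # back to horizontal
```

None of the higher-level patterns appear anywhere in those few lines of rules, which is exactly what “emergent” means here.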
To dig into how pandemics can be viewed as complex systems, let’s start with the most fundamental aspect: the spread of the disease. To understand the spread of the virus, we look at behaviors of the individual pieces (personal interactions, basic reproduction number of the virus, course of the disease); and we seek to understand the big picture emergent behaviors (the “curve” of infections, hospitalizations & fatalities). Changes to the behaviors at the individual level can have a profound impact on the behavior of the system. For example, reducing personal interactions, as we have done through social distancing, has had a clear impact on flattening the curve of infections.
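That individual-to-system link can be sketched with a toy SIR compartmental model — a standard textbook model, not the specific models epidemiologists have used for COVID-19, and the parameter values here are purely illustrative:

```python
# Toy SIR model: a population moves from Susceptible to Infected to
# Recovered. Lowering beta (the contact rate), e.g. via social
# distancing, flattens the infection peak at the system level.

def sir_peak(beta, gamma=0.1, days=365):
    """Return the peak fraction of the population infected at once."""
    s, i, r = 0.999, 0.001, 0.0   # initial population fractions
    peak = i
    for _ in range(days):
        new_infections = beta * s * i   # contacts between S and I
        new_recoveries = gamma * i      # infected recover at rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Same virus, different individual-level behavior:
print(f"peak with beta=0.30: {sir_peak(0.30):.1%}")  # sharp peak
print(f"peak with beta=0.15: {sir_peak(0.15):.1%}")  # flattened curve
```

A single individual-level parameter change (halving the contact rate) dramatically reshapes the system-level curve, which is the whole logic behind “flattening the curve.”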
Looking beyond just the spread of the disease, there are actually several other connected complex sub-systems we might want to analyze: the economy, supply-chains, health care systems, even the biological mechanisms of the virus in the body. Depending on what aspect of the pandemic we seek to understand, we might analyze some of these things in isolation, or look at how they impact each other. And some common themes from complex systems research can help us in the process.
Theme #1: All models are wrong, some models are useful
One of the core tools of understanding complex systems is modeling, in which we attempt to simulate the various components and interactions in a system in order to see how the system as a whole behaves. There are all sorts of different simulation techniques one can apply to study complex systems (such as mathematical, statistical, agent-based, etc), but they all share a goal of giving a better understanding of how a system will behave given the behaviors of its components.
Though modeling is sometimes used to attempt to predict specific outcomes, it is arguably more useful as a tool to understand how complex systems work. Modeling lets us test our assumptions about a system, and once we’re confident in the model, we can change assumptions about individuals & their behaviors and see how it impacts the system as a whole. For example, we might build models around the virus that are capable of helping us answer questions like: “how many more infections will we see if we re-open schools next week”, or “how many lives would be saved if we could hospitalize patients one day earlier?”
Some of the best known models for the spread of COVID-19 are those from the Institute for Health Metrics and Evaluation. These models have been used to predict the number of cases and fatalities broken down by state and country, and have been widely referenced in making policy decisions around the pandemic. Some people have also been critical of the many updates and changes to the projections, even at times when the underlying assumptions don’t seem to have changed, and have questioned their utility: if the models get so much wrong, what are they good for?
Though it’s reasonable to question the way any model is constructed, let’s be careful not to throw the baby out with the bathwater. Models can be “wrong” and still be useful. In this context, I use the word “wrong” meaning “not predictive of a precise outcome” — a model does not need to accurately predict a precise outcome of a system to be helpful in understanding the system and how the system behaves in response to changes in the underlying behaviors. Models can and should be updated to incorporate new data and changes in underlying assumptions. As we learn more about the virus and as policies and behaviors change, we should expect the models to need updating.
This is of course not intended to defend poorly built models or suggest that any arbitrary model is “useful”. Instead, it means to recognize that models are not perfect representations of the real world and that even a well-constructed model will need to change in response to new information. Models help us understand how complex systems work, along with their trends & their behaviors — even though all models are wrong, some models are, indeed, very useful.
Theme #2: The data isn’t just noisy — sometimes it’s unknowable
One of the biggest challenges in dealing with these types of systems is the fact that the data can be unreliable, and in many cases, it’s simply not knowable. Two of the most fundamental pieces of data in this pandemic are how many people are infected, and how many people have died. And in both cases, the actual values are currently unknowable and the data we do observe is subject to debate and interpretation.
In the case of the number of infected, the most important factor in the number of observed infections is the number of tests being done. And as is now abundantly clear, we are not doing enough testing. Even though we’re not doing enough testing, surely we can extrapolate from the tests we are doing, right? We can try, but then we have to think about who gets tested. The people who get tested the most are sick people, and often those who are sick enough to go to the hospital. The people getting tested are not a representative sample, so extrapolating from the number of positive tests is not straightforward and subject to interpretation. Antibody testing can help us get a better picture of the number of people who have been exposed to the virus, but it too suffers from the same sampling biases, not to mention wildly varying accuracy values.
Likewise, the number of fatalities, which seems like it would be simple to track, is not. The fatalities that are counted, generally speaking, are those of patients being treated for a confirmed positive case of the virus. What about people who died outside of a hospital setting? What about people who died with “suspected” cases but were never confirmed with a positive test? What about terminally ill patients whom the virus pushed over the edge? What about all of the “excess deaths” that may or may not be due to the virus, and which may or may not be counted in those other categories?
These may sound like edge cases, but the numbers are significant. Because of these issues and others in accurately counting fatalities, the definition and number of COVID-19 deaths are constantly evolving, even retroactively as when New York City reported 3,778 additional deaths after the fact. While it’s easy to attribute inaccurate numbers to coverups by dishonest governments (which in some cases is quite plausible), more often than not I suspect that it’s because getting an accurate & meaningful count is just really really hard.
Beyond these two fundamental statistics, there are many other things we cannot possibly know with certainty right now (in some cases because they are derived from these already unknowable values):
- the basic reproduction number (R0) of the virus
- the infection fatality rate (IFR) or case fatality rate (CFR) of the disease
- the economic impact of the shutdown
- the economic impact of the virus itself
All of these numbers are active areas of research and hotly debated by well-informed and well-intentioned researchers (though not all contributions to the debate are well-informed and well-intentioned). As time goes on, the estimates of all of these values should improve. After the fact — perhaps years from now — we may have reasonably accurate values for some of them and a better understanding of how the pandemic unfolded. For the time being, however, these and other important values cannot be definitively determined — and yet we need to make use of this data to make reasonable decisions on how to act.
Theme #3: Embrace the Uncertainty — Outcomes are Probability Distributions
Given that we’re using imperfect models, built on not-fully-knowable numbers, we should get used to treating predictions with a great deal of uncertainty. Just like a weather forecast (the weather itself being a complex system), models of complex systems typically produce probability distributions of outcomes instead of absolutes. Well-constructed formal models, like the IHME ones, have probability distributions built-in — we should be careful to include probability distributions in our own thinking and beliefs about the pandemic as well.
Rather than making policy decisions based solely on our current fixed set of assumptions about the world, we need to make decisions that acknowledge the uncertainty about those assumptions. We need to ask all sorts of questions like, “what would the result of this policy be if my assumption about the IFR is off by 50%?” or “what if we’re wrong about the number of asymptomatic carriers?”. In many cases, this kind of thinking can cause good, sound decisions to look like over-reaction. It also means that we must continually observe the results of policy decisions and be prepared to change in response to changes in the data. This kind of uncertainty may be unsettling and unpopular, but it is also necessary to properly navigate a complex system.
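One way to build that habit is to treat an uncertain assumption as a range rather than a point estimate. Here’s a toy Monte Carlo sketch in Python; the infection count and IFR range are made up for illustration, not real estimates:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def projected_deaths(infections, ifr):
    """Trivial projection: deaths = infections * infection fatality rate."""
    return infections * ifr

# Instead of one fixed IFR, sample it from a range of plausible values
# (hypothetical numbers, purely for illustration).
samples = sorted(
    projected_deaths(1_000_000, random.uniform(0.002, 0.010))
    for _ in range(10_000)
)

# Report the central 95% of outcomes, not a single number.
low, high = samples[250], samples[-250]
print(f"projected deaths: {low:,.0f} to {high:,.0f}")
```

The output is a range, not an answer — and a policy that only makes sense at one end of that range is a fragile policy.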
Theme #4: Look Out for Phase Transitions, aka Tipping Points
Phase transitions are states in a complex system in which the system-level behaviors can change suddenly and severely, often with a sharp and unexpected transition. A phase transition we’re all familiar with is that of water freezing to become ice, which happens at 32 degrees Fahrenheit at normal atmospheric pressure. The behavior of water at 31 degrees and at 33 degrees is fundamentally different, and for all practical purposes is an entirely different material.
What’s more, as we change the pressure the water is subjected to, we get an even more interesting picture.
The phase diagram of water shows the state of matter we observe at different pressure and temperature values. Lines between the states represent the phase transitions, and there’s even a “triple point” where gas, liquid, and solid all meet. Around this point, tiny changes in temperature and pressure can push the system into any of three entirely different states of matter. Just like phase transitions of matter, many complex systems have phase transition points at which the behavior of the system can suddenly and drastically change in response to what appear to be small changes in conditions.
There’s one critically important phase transition in the pandemic context that you’re probably already quite familiar with: the R0 number, or basic reproduction number. It’s a measure of — on average — how many new infections occur from a single case of the virus. The R0 number is another one of those noisy and not-fully-knowable values and is constantly changing, but importantly, there’s a major phase transition that happens at R0 = 1. With R0 greater than 1, the pandemic is expanding, and if it’s under 1, the spread of the disease is waning.
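How sharp that transition is shows up even in a crude deterministic sketch (the case counts and generation counts are purely illustrative):

```python
# A tiny change in R0 around 1 separates exponential decay from
# exponential growth: the defining feature of this phase transition.

def cases_after(r0, generations=30, initial_cases=100.0):
    """Deterministic branching: each case causes r0 new cases per generation."""
    cases = initial_cases
    for _ in range(generations):
        cases *= r0
    return cases

for r0 in (0.95, 1.00, 1.05):
    print(f"R0 = {r0:.2f}: ~{cases_after(r0):,.0f} cases in generation 30")
```

A 10% swing in R0 — well within the measurement noise discussed above — is the difference between an outbreak that shrinks to a fifth of its size and one that more than quadruples.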
Looking for and understanding these phase transitions helps us understand how & when changes in underlying conditions give rise to important changes in systemic behaviors. There are many aspects of the pandemic which may be subject to unexpected tipping points, among them:
- hospital utilization & capacity
- economic impact on businesses & individuals
- supply-chain stability
- long-lasting behavioral changes post-shutdown
- the course of the disease in individual patients
All of these areas are experiencing major impacts already, but there’s a risk of sharp phase transition points giving rise to “sudden” and more drastic changes in behavior that we need to be aware of.
As we navigate the “reopening” debate, how and when to act is the question we’re all asking. And many people have already “picked sides” in the debate based on some belief about the fundamental nature of the virus, which is unfortunate. Though complex systems don’t give us any easy answers, complex systems thinking does reveal some common themes that can help our understanding. Most of all, it can help us stay flexible in our thinking and recognize that the behaviors of complex systems can and do change drastically over time, often in unexpected ways, and that we need to adapt.