This is a living document, as the Ebola epidemic is rapidly evolving. Presented below is a preliminary modeling approach to the crisis, originally conceived as an illustration of the spatial SEIR model family generally, and of the capabilities of the rapidly developing (and totally unfinished) libspatialSEIR software library particularly. This is not yet peer reviewed research, or even a particularly complete analysis. Nevertheless, we hope that the initial exploration given below is instructive and useful.
All revisions of and changes to this document are saved on the gh-pages branch of the libspatialSEIR git repository. Several past versions are also cached in an external repository:
Additional analyses are underway. An informal look at prediction performance over time is now available.
The 2014 Ebola outbreak in West Africa is an ongoing public health crisis, which has killed hundreds of people so far. The cross-border nature of this epidemic, which has emerged in Guinea, Liberia, and Sierra Leone, has complicated mitigation efforts, as has the poor health infrastructure in the region. While there has been much analysis and speculation about the factors at play in the spread of the virus, to our knowledge there aren't any specific predictions about the expected duration and severity of this particular epidemic. In the (certainly temporary) absence of epidemic forecasts, this document explores a simple spatial SEIR model to make some initial predictions.
A summary of the WHO case reports is very helpfully compiled on Wikipedia. It can be easily read into R with the XML library:
With data in hand, let's begin where every analysis should begin: graphs.
These represent cumulative counts, but because case reports can be revised downward due to non-Ebola illnesses, the graphs are not monotone. A quick but effective solution to this problem is to simply "un-cumulate"* the data and bound it at zero to get a rough estimate of new case counts over time.
*Unlike uncumulate, decumulate is actually a word. Unfortunately it just means "to decrease", and so was unsuitable for use here. There should probably be a word for uncumulating things, perhaps uncumulate.
For better graphical representation, the "un-cumulated" counts are scaled to represent average number of infections per day, and linearly interpolated. The process is a bit noisier from this perspective when compared to the original cumulative counts.
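As a concrete sketch of the un-cumulation step (the counts and report dates below are made up for illustration, not taken from the WHO data):

```r
# Illustrative sketch: "un-cumulate" cumulative case counts, bound the
# differences at zero, and scale to average new cases per day.
cumulative  <- c(86, 103, 101, 127, 158)            # reports can be revised downward
reportDates <- as.Date("2014-03-22") + c(0, 4, 6, 9, 13)

# Differences between consecutive reports, bounded below at zero
newCases <- pmax(diff(cumulative), 0)

# Divide by the number of days between reports to get new cases per day
daysBetween    <- as.numeric(diff(reportDates))
newCasesPerDay <- newCases / daysBetween
```

Note that bounding at zero discards the (negative) revisions entirely, which is crude but adequate for a rough look at the epidemic curve.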
Now that the data is read in (and now that we have several plots to suggest that we haven't done anything too terribly stupid with it), let's do some compartmental epidemic modeling. Not only has Ebola been well modeled in the past using compartmental modeling techniques, but this author happens to be working on a software library designed to fit compartmental models in the spatial SEIRS family. What a strange coincidence! Specifically, we'll be using hierarchical Bayesian estimation methods to fit a spatial SEIR model to the data.
While a full treatment of this field of epidemic modeling is (far) beyond the scope of this writing, the basic idea is pretty intuitive. In order to come up with a simplified model of a disease process, discrete disease states (aka, compartments) are defined. The most common of these are S, E, I, and R, which stand for:

* S: susceptible individuals, who may contract the disease
* E: exposed individuals, who have been infected but are not yet infectious
* I: infectious individuals, who can transmit the disease to susceptibles
* R: removed (or recovered) individuals, who can no longer be infected
This sequence, traversed by members of a population (S to E to I to R), forms what we might call the temporal process model of our analysis. This analysis belongs to the stochastic branch of the compartmental modeling family, which has its roots in deterministic systems of ordinary and partial differential equations. In the stochastic framework, transitions between the compartments occur according to unknown probabilities. It is the S to E probability, which captures infection activity, into which we introduce spatial structure. Some details of this are given as comments to the code below, and more information than you probably want on the statistical particulars is available in this pdf document. For now, suffice it to say that we'll place a simple spatial structure on the epidemic process, one which allows disease to spread among the three nations involved, and we'll try to estimate the strength of that relationship. Many other potential structures are possible, limited primarily by the amount of additional research and data compilation one is willing to do.
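To make the idea concrete, here is a rough sketch of a chain-binomial style exposure (S to E) probability. Everything here — the function, its arguments, and the crude "everyone mixes with everyone" spatial structure — is illustrative only, and does not reproduce libspatialSEIR's exact specification; see the linked pdf for that.

```r
# Sketch of a chain-binomial S-to-E probability with a simple spatial term.
# Names and functional form are illustrative, not libspatialSEIR internals.
exposureProb <- function(I, N, X, beta, rho, offset = 1) {
  # X %*% beta: log epidemic intensity (intercepts plus temporal basis)
  # I / N:      proportion infectious in each location
  # rho:        spatial dependence parameter mixing the locations
  intensity    <- exp(X %*% beta)
  ownTerm      <- intensity * (I / N)
  neighborTerm <- rho * (sum(I / N) - I / N)   # contribution from other locations
  1 - exp(-offset * (ownTerm + neighborTerm))  # probability of S -> E transition
}

# Example with three locations and made-up counts and parameters:
p <- exposureProb(I = c(100, 50, 10), N = c(1e7, 4e6, 6e6),
                  X = diag(3), beta = c(-4.2, -2.2, -2.9), rho = 0.17)
```

The `1 - exp(-eta)` link guarantees a valid probability however large the intensity gets, which is why it shows up so often in this model family.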
For the purposes of this analysis, we will not do anything fancy with demographic information or public health intervention dates. Demographic parameters are relatively difficult to estimate here, as there are only three spatial units which are all from the same region. Intervention dates are more promising, but their inclusion requires much more background research than we have time for here. In the interest of simplicity and estimability, we'll just fit a different disease intensity parameter for each of the three countries to capture aggregate differences in Ebola susceptibility in addition to using a set of basis functions to capture the temporal trend.
There are some things we need to define before we can start fitting models and making predictions.
Compartment starting values follow the usual convention of letting the entire initial population be divided into susceptible and infectious individuals. The starting value for the number of infectious individuals was 86, the first infection count available in the data. Offsets are actually calculated in the first code block (above) as the differences between the report times. For temporal basis functions, orthogonal polynomials of degree four were used. Prior parameters for the E to I and I to R transitions were chosen based on well documented values for the average latent and infectious times, and the rest of the prior parameters were left vague. These decisions are addressed in more detail as comments to the code below.
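In code, the quantities just described look something like the following sketch. The population figures are rough 2014 estimates included purely for illustration, and the report dates are invented; neither is taken from the analysis itself.

```r
# Sketch of the model set-up described above (illustrative values only).
N  <- c(Guinea = 11.8e6, Liberia = 4.4e6, SierraLeone = 7.1e6)  # rough 2014 figures
I0 <- 86           # first reported infection count in the data
E0 <- 0            # no initially exposed individuals
removed0 <- 0      # no initially removed individuals
S0 <- N - I0 - E0 - removed0   # everyone else starts susceptible

# Offsets: days between successive case reports (example dates)
reportDates <- as.Date("2014-03-22") + c(0, 4, 6, 9, 13)
offsets     <- diff(as.numeric(reportDates))

# Temporal basis: orthogonal polynomials in time, one column per
# polynomial component (linear through quartic in the fitted output)
timeIndex <- cumsum(c(0, offsets))
X_time    <- poly(timeIndex, degree = 4)
```

The orthogonal polynomial basis (rather than raw powers of time) keeps the basis columns from being highly correlated, which helps the MCMC sampler considerably.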
With the set up out of the way, we can finally build the models. In order to assess convergence, we'll make three model objects - one for each MCMC run.
With the model objects created, we may perform some short runs in order to choose sensible Metropolis tuning parameters. The following script uses the runSimulation function defined in the previous code block to do just that.
Using these tuning parameters, we can now run the chains until convergence. As before, we'll adjust the tuning parameters along the way. Astute readers may notice the frankly inconvenient number of samples requested, and will correctly infer that autocorrelation is currently a major problem for this library. While such autocorrelation does not impact the validity of estimates based on the converged chains, it does increase the required computation time. This is an area of active development for libspatialSEIR.
As this is a Bayesian analysis in which the posterior distribution is sampled using MCMC techniques, we really need some indication that the samplers have indeed converged to the posterior distribution in order to make any inferences about the problem at hand. In the code below, we'll read in the MCMC output files created so far, plot the three chains for each of several important parameters, and take a look at the Gelman and Rubin convergence diagnostic (which should be close to 1 if the chains have converged.)
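For the curious, the univariate version of the diagnostic is simple enough to sketch in base R (coda's gelman.diag adds a degrees-of-freedom correction omitted here):

```r
# Bare-bones univariate potential scale reduction factor (PSRF).
# Values near 1 suggest the chains have converged to a common distribution.
gelmanRubin <- function(chains) {            # chains: list of numeric vectors
  n <- length(chains[[1]])
  W <- mean(sapply(chains, var))             # mean within-chain variance
  B <- n * var(sapply(chains, mean))         # between-chain variance
  Vhat <- (n - 1) / n * W + B / n            # pooled posterior variance estimate
  sqrt(Vhat / W)                             # PSRF
}

# Three well-mixed chains drawn from the same distribution give PSRF near 1
set.seed(123)
chains <- list(rnorm(2507), rnorm(2507), rnorm(2507))
psrf <- gelmanRubin(chains)
```

When the chains disagree, the between-chain variance inflates Vhat relative to W and the statistic climbs above 1, which is exactly the behavior we want to rule out before interpreting the estimates.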
The convergence looks quite reasonable, so let's dissect the estimates a bit.
```
## Output from coda library summary:
## ########################
##
## Iterations = 1:2507
## Thinning interval = 1
## Number of chains = 3
## Sample size per chain = 2507
##
## 1. Empirical mean and standard deviation for each variable,
##    plus standard error of the mean:
##
##                                Mean      SD Naive SE Time-series SE
## Guinea Intercept             -4.178 0.20874 2.41e-03       0.007494
## Liberia Intercept            -2.202 0.17308 2.00e-03       0.006196
## Sierra Leone Intercept       -2.909 0.17445 2.01e-03       0.006492
## Linear Time Component         6.478 1.19195 1.37e-02       0.046300
## Quadratic Time Component     -6.447 0.94374 1.09e-02       0.038107
## Cubic Time Component          3.678 0.65568 7.56e-03       0.025362
## Quartic Time Component        0.879 0.36120 4.16e-03       0.018616
## Spatial Dependence Parameter  0.168 0.01739 2.00e-04       0.000745
## E to I probability            0.121 0.00733 8.45e-05       0.000326
## I to R probability            0.105 0.00872 1.01e-04       0.000427
##
## 2. Quantiles for each variable:
##
##                                 2.5%     25%    50%    75%  97.5%
## Guinea Intercept             -4.5864 -4.3216 -4.176 -4.029 -3.785
## Liberia Intercept            -2.5522 -2.3178 -2.198 -2.083 -1.881
## Sierra Leone Intercept       -3.2593 -3.0288 -2.906 -2.788 -2.579
## Linear Time Component         4.2554  5.6380  6.459  7.281  8.880
## Quadratic Time Component     -8.3018 -7.0949 -6.433 -5.796 -4.632
## Cubic Time Component          2.3900  3.2319  3.680  4.129  4.948
## Quartic Time Component        0.1827  0.6307  0.878  1.118  1.593
## Spatial Dependence Parameter  0.1354  0.1564  0.168  0.179  0.204
## E to I probability            0.1069  0.1161  0.121  0.126  0.136
## I to R probability            0.0889  0.0987  0.105  0.111  0.123
```
The average time spent in a particular disease compartment is just one divided by the probability of a transition out of that compartment. The units here are days, so we can see that the average latent time is estimated to be between 7.4 and 9.4 days, while the average infectious time is most likely between 8.1 and 11.2 days (95% credible intervals). In reality, there is a lot of variability in these times for Ebola, but these seem like reasonable estimates for the average values.
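These intervals come straight from the reciprocals of the posterior quantiles printed above:

```r
# Mean dwell time in a compartment = 1 / (exit probability), in days.
# Quantiles taken from the coda summary printed above.
pEI <- c(0.1069, 0.136)   # E-to-I probability: 2.5% and 97.5% quantiles
pIR <- c(0.0889, 0.123)   # I-to-R probability: 2.5% and 97.5% quantiles

latentCI     <- sort(1 / pEI)   # mean latent time interval, roughly 7.4 to 9.4 days
infectiousCI <- sort(1 / pIR)   # mean infectious time interval, roughly 8.1 to 11.2 days
```

Note the `sort`: taking reciprocals flips the order of the interval endpoints, since a larger exit probability means a shorter dwell time.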
We also notice that there is definitely non-zero spatial dependence (the distribution of the spatial dependence parameter is well separated from zero), indicating significant mixing between the populations. This is unsurprising, as the disease has in fact spread between all three nations.
It also appears that Guinea has the lowest estimated epidemic intensity, with Sierra Leone and then Liberia estimated to have progressively higher intercept parameters.
A common tool for describing the evolution of an epidemic is a quantity known as the basic reproductive number, the basic reproductive ratio, or one of several other variants on that theme. The basic idea is to quantify how many secondary infections a single infectious individual is expected to cause in a large, fully susceptible population. Naturally, when this ratio exceeds one we expect the epidemic to spread. Conversely, a basic reproductive number less than one indicates that a pathogen is more likely to die out. This software library doesn't yet compute the ratio automatically, but does provide what's known as the "next generation matrix" which can be used to quickly calculate the quantity.
In addition to coming up with a point estimate of the ratio, it is helpful to quantify the uncertainty in the estimates obtained. The code below draws additional samples from the posterior distribution in order to estimate this uncertainty.
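The calculation itself is just a spectral radius. The matrix below is made up for illustration; in the analysis the next generation matrix comes from the library, and the computation is repeated over posterior draws to get a credible interval rather than a single point estimate.

```r
# Basic reproductive number from a next generation matrix: the spectral
# radius, i.e. the largest eigenvalue modulus. This 3x3 matrix is invented
# for illustration (rows/columns indexed by the three countries).
ngm <- matrix(c(1.20, 0.05, 0.08,
                0.04, 1.60, 0.06,
                0.07, 0.05, 1.40), nrow = 3, byrow = TRUE)

R0 <- max(Mod(eigen(ngm)$values))   # spectral radius; here, above 1
```

Repeating this for each posterior sample of the matrix and summarizing the resulting draws of R0 gives the uncertainty quantification described above.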
Currently the simulation required for epidemic prediction must be done "manually" by writing a bunch of R code. As the library develops, a simpler prediction interface is a high priority.
Below, we will attempt to predict the course of the epidemic through early fall. We must be cautious when making predictions about a chaotic process this far into the future. We must be particularly cautious because the basis chosen for the temporal trend in the epidemic intensity process was polynomial. While polynomial bases often provide a good fit to the data, they can behave unreasonably outside the range over which the model was fit (quadratic and cubic terms can get large very quickly).
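A toy example makes the danger concrete: fit a cubic to noisy data on [0, 1], and predictions just beyond that interval quickly become absurd.

```r
# Why polynomial bases extrapolate badly: fit a cubic to noisy data on
# [0, 1], then predict well outside the fitted range.
set.seed(1)
x <- seq(0, 1, length.out = 50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.1)
fit <- lm(y ~ poly(x, 3))

inRange  <- predict(fit, newdata = data.frame(x = 0.5))  # sensible
outRange <- predict(fit, newdata = data.frame(x = 3))    # dominated by the cubic term
```

Within the data the cubic tracks the curve reasonably well, but at x = 3 the prediction is orders of magnitude larger than anything observed, purely because the leading polynomial term takes over.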
It looks like our worries about polynomial basis functions were well founded. While these predictions are likely acceptable for several days or weeks after the currently available data, they clearly become dominated by higher order polynomial terms as time goes on. The behavior of the epidemic so far does not support these large swings in infection counts, so we can be fairly certain that these long term predictions are extrapolation errors. Analysis 2 will consider the results of using a natural spline basis for this process instead. Spline bases extrapolate linearly, and so are less prone to extreme extrapolation errors.
The problem with polynomial basis functions is that they extrapolate poorly, exhibiting extreme behavior under prediction. On the other hand, they often perform quite well for estimation purposes and prediction within the range of observed data. For this reason, and because spline basis coefficients are somewhat difficult to interpret, analysis 2 will not repeat the qualitative interpretation work presented above. Parameter estimates are available below for completeness.
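The linear extrapolation of natural splines is easy to verify directly with the splines package that ships with R (the toy data here is invented for illustration):

```r
# Natural splines (splines::ns) are linear beyond their boundary knots,
# which tames extrapolation. Fit toy data on [0, 1], then check that
# predictions past the boundary fall on a straight line.
library(splines)
set.seed(1)
x <- seq(0, 1, length.out = 50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.1)
fit <- lm(y ~ ns(x, df = 4))

xNew <- c(2, 3, 4)                # equally spaced points outside the fitted range
pred <- predict(fit, newdata = data.frame(x = xNew))

# Linear extrapolation => successive differences over equal steps are constant
diffs <- diff(pred)
```

This is the property that keeps the spline-based predictions in Analysis 2 from exhibiting the wild swings seen with the polynomial basis.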
Again, convergence looks quite good:
The estimated basic reproductive number and associated variability is virtually unchanged:
Here we see the most prominent difference between the two approaches. Namely, these predictions appear more reasonable. Such data can also be visualized in map form, though the recent surge in predicted cases has the effect of swamping the earlier dynamics:
As the two sets of basis functions give similar answers in the near future, it seems likely that the epidemic will continue to get worse for at least the next few weeks, though we must still be careful projecting too far into the future. This caution is especially important given the increased national and international response seen in recent days, as these simple models don't have any mechanism with which to capture the impact of current and future public health interventions. We may hope that these efforts will be successful in the coming months, and that recent international attention will provide support for the efforts of involved governmental and non-governmental organizations like the WHO and MSF.
In the meantime, the short term predictions largely agree with what the media has been reporting: the epidemic is likely to worsen. That wraps up the analyses for now. This document will continue to be updated as the epidemic progresses, reflecting new data and perhaps additional analysis techniques. As the document is tracked via source control, it will be easy to see how well past predictions held up and how they change in response to new information. Questions and comments can be shared here.