An Outbreak of Epidemiological Hysteria

October 01, 2014  ·  Michael Fumento  ·  Inference: International Review of Science  ·  Vaccines

There have been far fewer cases of, and deaths from, Ebola Virus Disease (hereinafter “Ebola”) during the period of the recent outbreak than from numerous other endemic diseases that primarily afflict Africans, such as malaria, tuberculosis, HIV/AIDS, and childhood diarrhea.1 Yet there is a widespread sense, in the media and among the public, that particularly urgent measures must be taken to combat Ebola. This is owed in large part to estimates of future cases produced by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC). Their representatives have accompanied the presentation of these estimates with powerful rhetoric, as have representatives of other public health organizations.2 Headlines predictably focus on the upper bound of the CDC estimate, rather than providing the range.3 Yet both the WHO and the CDC have arrived at their distressingly high figures by ignoring epidemiological principles successfully applied since the nineteenth century. These indicate that Ebola infections and even cases may have already peaked.

“The Ebola crisis we face is unparalleled in modern times,” said WHO Assistant Director-General Bruce Aylward,4 while WHO Director-General Margaret Chan declared Ebola to be “likely the greatest peacetime challenge that the United Nations and its agencies have ever faced.”5 In fact, tuberculosis alone has killed an average of 3,561 people per day in the past year, whereas the death toll from the recent epidemic of Ebola stood at 3,439—total—on October 1, 2014. According to the WHO’s data, recent death tolls from malaria, chronic diarrhea, and other plagues of that region have also been orders of magnitude higher. These diseases, too, for the most part confine their killing to the same continent.

Only if the estimates are correct or relatively close could Ebola be considered in the same league with those diseases, much less considerably worse. These assessments might generously be considered “beat-ups,” as in “beating up the numbers.”6 The term is owed to Elizabeth Pisani, a former epidemiologist for UNAIDS (the Joint United Nations Program on HIV/AIDS), the WHO, and other agencies. In her book, The Wisdom of Whores: Bureaucrats, Brothels, and the Business of AIDS, she says of drastically inflated predictions, “We did it consciously. I think all of us at that time thought that the beat-ups were more than justified, they were necessary” to get donors and governments to care.7

In the September 26, 2014, issue of its in-house journal, Morbidity and Mortality Weekly Report (MMWR), the CDC wrote, “[e]xtrapolating trends to January 20, 2015, without additional interventions or changes in community behavior (e.g., notable reductions in unsafe burial practices), the model ... estimates that Liberia and Sierra Leone will have approximately 550,000 Ebola cases (1.4 million when corrected for underreporting).”8 The choice of a seemingly odd date, as opposed to the end of the calendar year, will be discussed shortly. The CDC also made a shorter-term estimate of “approximately 8,000 Ebola cases” by September 30, 2014. A WHO analysis published in the September 23 New England Journal of Medicine (NEJM), widely considered the United States’ most prestigious medical journal, predicted more than 20,000 cases in Guinea, Liberia, and Sierra Leone by November 2, 2014.9 Both estimates were entirely of morbidity, with no mortality component.

Even a cursory analysis should make one skeptical. Such an increase in the number of cases would be quite dramatic, given the starting point. On October 3, 2014, the WHO reported that only two days before, there had been 7,178 “probable, confirmed and suspected cases,” with 3,338 deaths, in Guinea, Liberia, Nigeria, Senegal, and Sierra Leone.10 (The Democratic Republic of the Congo has a small, separate epidemic of a different strain that has received only passing remark from the WHO.)11 They allow for a two-day lag in reporting, and more cases may come in for that period, thereby expanding the figure; but many of those probable and suspected cases will prove to be false positives, reducing the figure. (The CDC notes that only 4,087 of these reported cases, or slightly more than half of them, have been confirmed by a laboratory.)12 That means the CDC short-term estimate through September has already been proved false. The epidemic officially began in December 2013, so the WHO is suggesting that morbidity from a nine-month outbreak will approximately triple in one month.13 The CDC is claiming there will be an almost 75-fold increase (using the lower boundary) in little more than three months.
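As a rough check on the implied multiples, using only the figures quoted above (7,178 reported cases on October 1, the WHO’s 20,000-plus by November 2, and the CDC’s lower bound of 550,000 by January 20):

$$\frac{20{,}000}{7{,}178} \approx 2.8 \qquad\qquad \frac{550{,}000}{7{,}178} \approx 77$$

That is, roughly a tripling in about a month, and something like a 75-fold increase in a little more than three months.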

During all major recent epidemics, from HIV to the current Ebola epidemic, individual observers and organizations have warned that a mutation might make the pathogen more contagious.14 But this has never been observed. Nor does either agency’s estimate rest upon the assumption of increased contagiousness. It is also somewhat remarkable that these estimates showing sudden, huge spurts in growth were published after the WHO announced that Ebola had, in fact, been eliminated from one of the countries suffering internal transmission, Nigeria.15

In an online response to the WHO analysis in the NEJM, the French mathematician Marc Artzrouni explained why that agency’s model and that of the CDC are at best meaningless, and at worst designed in such a way as to overstate future cases. “Extrapolating an epidemic on the basis of the early exponential period alone is pointless—simply because there is no way of knowing how long this period will last. This was done 30 years ago with HIV and produced very misleading results.”16

Artzrouni is alluding to Farr’s Law, named after William Farr, the Victorian British epidemiologist who first expounded it. As the Journal of the American Medical Association (JAMA) puts it, “epidemics tend to rise and fall in a roughly symmetrical pattern that can be approximated by a normal bell-shaped curve.”17 At first they go up sharply, as the epidemic grabs the low-hanging fruit. What is called the effective reproduction number, or the number of new infections caused by each current infection, is well above one.18
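One common way of formalizing Farr’s observation runs as follows (the notation here, with c_t for the case count in successive equal reporting periods and K for the constant, is mine rather than the JAMA article’s):

$$K \;=\; \frac{c_{t+1}/c_t}{\,c_t/c_{t-1}\,}$$

Farr’s Law holds that this “ratio of ratios” settles at a roughly constant value below one, so that growth decelerates and the incidence curve traces out the bell shape; unbroken exponential growth corresponds to the special case K = 1, which no real epidemic sustains.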

Here, in fact, is the WHO’s own explanation of this principle: “When R0 [the number of cases resulting from previous cases] is greater than 1, infection may spread in the population, and the rate of spread is higher with increasingly high values of R0.” Then, “[a]fter the early phase of exponential growth in case numbers, once infection has become established, the number of people still at risk declines, so the reproduction number falls from its maximum value of R0 to a smaller, net reproduction number, Rt. When Rt falls below 1, infection cannot be sustained.”19
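To see this mechanism in the simplest possible terms, here is a toy SIR-style simulation. It is a minimal sketch, not anything the WHO or CDC actually ran; the population size, R0, and infectious period below are arbitrary assumptions chosen only for illustration.

```python
# Toy SIR model: illustrates R_t = R0 * S/N falling below 1 as susceptibles
# are depleted, which is when daily incidence peaks. All parameter values
# are assumptions for illustration, not estimates for Ebola.
N = 1_000_000                 # assumed population size
R0 = 2.0                      # assumed basic reproduction number
gamma = 1 / 10                # recovery rate (assumed 10-day infectious period)
beta = R0 * gamma             # implied transmission rate per day

S, I, R = N - 1.0, 1.0, 0.0
for day in range(361):
    Rt = R0 * S / N                      # effective reproduction number
    new_infections = beta * S * I / N    # incidence this day
    recoveries = gamma * I
    if day % 30 == 0:
        print(f"day {day:3d}   R_t = {Rt:4.2f}   new infections/day = {new_infections:10.1f}")
    S -= new_infections
    I += new_infections - recoveries
    R += recoveries
```

Run as written, daily incidence climbs while R_t stays above one, peaks as R_t crosses one, and then decays, which is the bell-shaped pattern Farr described; a curve fitted only to the steep early segment would miss the turn entirely.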

For example, during its early exponential period in the United States, AIDS almost exclusively afflicted intravenous drug users, hemophiliacs, blood transfusion recipients, and men who had anal sex with other men. After that, the reproduction number quickly fell from R0 toward the smaller Rt. The spread of other diseases is also limited by such factors as modes of transmission and natural immunity. Tuskegee University maintains a site that goes into great depth about epidemiological curves and includes an illustration of a typical one.20 The CDC also maintains a site that allows people to create their own epidemic curve by inputting data.21
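For readers curious what such a curve is built from, the sketch below (with entirely made-up onset dates, since none are given here) tallies cases by week of symptom onset, which is all an epidemic curve is.

```python
from collections import Counter
from datetime import date

# Hypothetical symptom-onset dates, purely for illustration.
onsets = [date(2014, 8, d) for d in (1, 3, 3, 5, 8, 9, 9, 10, 12, 15, 15, 16, 20, 21, 28)]

# Tally cases by ISO week and print a crude text histogram: the epidemic curve.
by_week = Counter(d.isocalendar()[1] for d in onsets)
for week in sorted(by_week):
    print(f"week {week}: {'#' * by_week[week]} ({by_week[week]} cases)")
```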

That all epidemics conform to this curve explains why the Plague of Athens (430 BC), the Black Death (1330s), and countless other recorded and unrecorded epidemics did not kill everyone in the affected regions, despite the absence of a CDC, a WHO, or any effective vaccines or treatments. Farr used the curve to predict the cresting and ending of the Cattle Plague of London, based only on early reports.22

Neither the CDC nor the WHO applied Farr’s Law. They merely took figures from the early exponential stage and extrapolated from them. The CDC, using data from August 2014, concluded that “Total cases in [Liberia and Sierra Leone] are doubling approximately every 20 days.”23 They continued: “Extrapolating trends to January 20, 2015, without additional interventions or changes in community behavior” such as reductions in unsafe burial practices, those nations “will have approximately 550,000 Ebola cases.” Curiously, data from Guinea, the first country hit by the epidemic, which accounts for 16 percent of all current cases, was omitted. The CDC provided no explanation for this.
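What such an extrapolation amounts to can be shown in a few lines. The sketch below is emphatically not the CDC’s EbolaResponse model; it simply starts from roughly 8,000 reported cases at the end of September (the short-term figure quoted above) and applies the 20-day doubling time mechanically out to January 20.

```python
from datetime import date

# Naive fixed-doubling-time extrapolation, NOT the CDC's EbolaResponse model.
# Starting count and doubling time are the figures quoted in the text; the real
# model's underreporting correction and per-country fitting are ignored here.
start, target = date(2014, 9, 30), date(2015, 1, 20)
doublings = (target - start).days / 20        # elapsed 20-day doubling periods
projected = 8_000 * 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly {projected:,.0f} cases by {target}")
```

The sketch lands in the same order of magnitude as the CDC’s 550,000; the point is not the exact figure but that the answer is dictated almost entirely by the assumption that the doubling time never lengthens.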

Similarly, the WHO stated in the NEJM, “We estimate that, at the current rate of increase [emphasis added], assuming no changes in control efforts, the cumulative number of confirmed and probable cases by November 2” will exceed 20,000 cases. That rate was calculated over the period from July 21 to August 31.24

The upper bound of 1.4 million comes from an assumption that for every case reported, an additional 1.5 are not. The CDC calculated this “by taking estimates of beds-in-use, computed by using CDC’s EbolaResponse modeling tool, and comparing these estimates to an expert opinion of actual beds-in-use.”25 The number is highly questionable: the use of such terms as “suspected cases” and “probable cases” indicates that a wide net has been cast and that many fish will be discarded after antibody testing.
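The arithmetic behind the upper bound is a simple multiplicative correction: one reported case plus an assumed 1.5 unreported cases gives a factor of 2.5, and

$$550{,}000 \times 2.5 = 1{,}375{,}000 \approx 1.4 \text{ million}.$$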

To illustrate how wrong such a static model can be, consider the Severe Acute Respiratory Syndrome, or SARS, epidemic of 2002–2003, and specifically the curve of its epicenter in Hong Kong.26 In one specific neighborhood, during the upward slope in March 2003, 320 cases were reported in less than three weeks. Using this as the basis for doubling times would have meant 640 cases three weeks later, 1,280 three weeks thereafter (around the end of April), 2,560 three weeks after that, and so on. Had this doubling continued unchecked for a year, the same kind of open-ended extrapolation the CDC applies to its Ebola projections, there would have been more than forty million cases in a city of about seven million people. In fact, “[a]fter the initial phase of exponential growth, the rate of confirmed SARS cases fell to less than 20 per day by April 28.”27

Indeed, the CDC’s formula is already breaking down, as shown by data published before their estimate came out. Graphs of the epidemics in three countries presented in the September 2 issue of PLOS Currents Outbreaks show cases growing exponentially in Liberia, but curving slightly in Sierra Leone. And in Guinea, the country the CDC omitted from its calculations, the curve has virtually flattened. “The model indicates that in Guinea and Sierra Leone, the effective reproduction number might have dropped to around unity [only one new case for each previous case] by the end of May and July 2014, respectively,” declares the abstract.28

So infections in those countries appear to have peaked months ago. As for the epidemic as a whole, it is possible that despite infections peaking elsewhere, the rise in Liberia is enough to keep the overall curve from flattening. “There are issues of incomplete data, of reporting delays, but it’s certainly a possibility that infections as a whole have peaked,” Artzrouni remarked.29

And what of full-blown cases? The calculations in the WHO’s October 1, 2014, situation report show an increase not of 100 percent in 20 days but 39 percent in 21 days.30 This is not exactly comparable to the CDC estimate because of variations in treatment of confirmed, suspected, and probable cases. When the mistakenly attributed cases are subtracted, the increase will be considerably smaller.
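As a back-of-the-envelope conversion (assuming steady exponential growth over the interval), a 39 percent rise in 21 days corresponds to a doubling time of

$$T_d = 21 \times \frac{\ln 2}{\ln 1.39} \approx 44 \text{ days},$$

more than twice the 20-day doubling on which the CDC’s extrapolation rests.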

Not coincidentally, the October 1 situation report is accompanied by a bar chart showing that cases peaked on September 14 and subsequently declined in the following week and the one thereafter. Underneath is a disclaimer: “These numbers are subject to change due to ongoing reclassification, retrospective investigation and availability of laboratory results.”31 What that disclaimer means is that we can expect fewer cases in each of those weeks as testing eliminates the probable and suspected cases and leaves only confirmed ones. Because confirmation takes time, the more recent the bars, the more they can be expected to shrink. We may thus expect that drop-off to be even sharper than it currently appears. Further evidence that this peak and decline is real comes from earlier reports. The one from September 18 shows no decline;32 the September 24 report is the first to show a decline.33 So we are seeing a real pattern.

The WHO’s discussion of effective reproduction numbers in the NEJM was perfectly lucid; it made clear that they understand the principle. They simply failed to apply it in their estimates.

Now recall Artzrouni’s statement: “This was done 30 years ago with HIV and produced very misleading results.” He does not specify by whom, but in epidemiological circles, it is quite well known. The WHO and the CDC.

“In 1997,” Pisani writes of the WHO and some of its early, outlandish projections about HIV, “the best we had to work with was a super-simple model in which you decided when the epidemic started, plotted on a graph any information you had about the percentage of adults infected with HIV, and then drew a curve through all your bits of information, a sort of glorified ‘join-the-dots’ exercise.” She says that they were quite aware that there would be a peak, but had “no idea how high the mountain might be. You could be dealing with Everest, or Mont Blanc, or some local hillock.”34 But the pressure was on, so they wrote and rewrote their reports to generate “interest and cash” until they finally succeeded.35

The US Public Health Service, meaning here essentially the CDC, held a series of meetings in 1986 to generate estimates of current and future HIV infections in the United States. It published its estimate of future cases in Public Health Reports. The published estimate contained only three references. One was the agency’s own plan for preventing and controlling AIDS. The second was a letter to Science. The third was the 1948 Kinsey report.36

Epidemiologist James Chin, who between 1987 and 1992 was the chief of the surveillance, forecasting, and impact assessment unit of the WHO’s Global Program on AIDS, said in 2011 that he and perhaps five others, plus two bottles of bourbon, came up with a range of between one million and 1.5 million infections. “That estimate has been so good,” he added, “that it’s been consistently used and it’s [used] at the present time.”37 Which is to say that after 25 years, infections finally caught up to the estimate.

A separate team within the WHO’s Global Program on AIDS worked on the case forecasts, concluding that while there were about 21,000 cases in mid-1986, by 1991 there would be 270,000. These figures were contingent on the definition of AIDS at the time. They added that the figure was probably highly conservative. Two years later, at another conference, the CDC announced an even more stunning figure of 450,000 cases by 1993.38 The media and other influential figures often rounded this up to “about half a million.” Making the CDC appear conservative was a 1989 report from what was then called the General Accounting Office, predicting 300,000 to 480,000 AIDS cases by 1991.39 Best-selling writers inappropriately deployed the idea of doubling times to predict 64 million US HIV infections by the end of 1990.40

In 1987, the definition of AIDS was widened to the point that evidence of HIV infection was no longer even needed. Even so, the CDC reported not 270,000 but 206,000 cases by the end of 1991, and not 450,000 but 361,000 by the end of 1993.41 Indeed, AIDS diagnoses had peaked by the end of 1992,42 and HIV infections had already peaked when the CDC came up with its first estimate.43 This belies government officials’ subsequent claims that the initial figures only proved too high because of their intervention, a response we should expect again if and when officials admit that the Ebola estimates were far too high.44

It is not that the CDC projections lacked critics, including the late Dr. Alexander Langmuir, founder of the agency’s Epidemic Intelligence Service. Following his death in 1993, the American Journal of Epidemiology ran an issue dedicated to his memory, which included an article about his work on the application of Farr’s Law.45 Both in public and in private conversations with this author, Langmuir criticized the CDC for not applying it.46

The CDC might claim that it was too early in the epidemic, with too few cases, to establish a curve. But math and curves are only part of making epidemic forecasts. Showing what can be done with real epidemiology, John Pickering of the University of Georgia and his colleagues published, in 1985, an extremely elegant model of AIDS in three major cities. By then, evidence could be discerned that the epidemic was peaking. They observed, “If a model is to forecast reliably the incidence of an infectious disease over any extended period of time, then it must be based on the disease’s underlying epidemiology, rather than on mathematical functions that fit the existing incidence data.”47 Some diseases are easy to transmit, for example, and others are not. It was as if they could foresee what the CDC was planning to do—both with AIDS and now with Ebola.

This brings us, finally, to the question: why did the CDC’s estimate use the curious date of January 20, 2015, instead of the end of the calendar year? Note that had they chosen the end of the year, the startling prediction of 550,000 cases would have been a much less startling 275,000. Had they used another 20-day interval, it would have been well over a million. That, perhaps, would have aroused suspicion.
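The arithmetic is worth making explicit. With a 20-day doubling time, December 31 sits exactly one doubling period before January 20, and February 9 (the date 20 days after) one period beyond it, so the same model yields

$$\frac{550{,}000}{2} = 275{,}000 \;\text{ by December 31} \qquad \text{and} \qquad 550{,}000 \times 2 = 1{,}100{,}000 \;\text{ by February 9}.$$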

We might do well to note the dark side of exaggerating select disease outbreaks: resources are finite, and each dollar spent to treat Ebola, or another high-profile disease, is a dollar that could have been spent to treat less glamorous diseases. Few are asking why there has been so much fast-tracked research on an Ebola vaccine when there is no vaccine for malaria, a disease that has plagued humanity throughout its history, and which annually kills two hundred times as many people as have died this year of Ebola.48

Diarrhea kills about 1.5 million children under the age of five every year, accounting for roughly a fifth of child mortality worldwide. Intervention, according to a 2011 study in PLoS Medicine, could bring about a 78 percent reduction in child deaths due to diarrhea by 2015 at a cost of US$0.80 per victim.49 The United States has already committed to spending an estimated US$1.26 billion on Ebola.50 If that money were dedicated to diarrhea control, how many lives could be saved? How many will be lost because it was not?

Update from the author: as of October 8, new data from the WHO reinforced this essay’s suggestion that the epidemic has peaked.