A grave of a COVID-19 victim is seen with a Brazilian flag at the Nossa Senhora Aparecida cemetery in Manaus, Amazonas state, Brazil, on April 29, 2021.

If there’s one thing about COVID-19’s death toll that researchers seem to agree on, it’s that the official count is probably way too low.

But the extent of the undercount is a source of contention.

That may help explain why, when the influential Institute for Health Metrics and Evaluation in Seattle released a new model this month suggesting that the true number of COVID-19 deaths around the world was more than double the figure from the World Health Organization, response from other experts was mixed.

Some researchers who were interviewed for this story said the model seemed solid, but just as many criticized it, suggesting the team glossed over the uncertainties inherent in estimates like these and didn’t share sufficient detail about how their statistical sausage was made.

The IHME model suggests that the extent of underreporting varies from region to region around the world but adds up to nearly 7.3 million as of May 16, well above the official total of about 3.4 million.

The IHME team also made future predictions: By Sept. 1, according to the model, 9.1 million people will have died.

In California, the model's estimate as of May 16 was 120,515 deaths, roughly double the state's reported count of 62,596.

Dr. Mark Ghaly, the state’s Health and Human Services secretary, expressed some doubt about the doubled figures.

“We believe that sure, it’s going to be an undercount because the data needs to be looked at more critically,” Ghaly said. “But the idea that it is 50 percent of the actual — I think many of us don’t believe that will be accurate. But [we] need to spend more time looking at it more closely.”

Dr. Rochelle Walensky, the director of the U.S. Centers for Disease Control and Prevention, said the agency would examine IHME’s revised model and decide whether to count any additional deaths as COVID-specific.

“We will look at this carefully,” she said in a briefing.

Many research groups have estimated the number of deaths that would have been expected in the absence of the pandemic and compared those estimates with the actual number of reported fatalities to come up with a statistic called “excess deaths.”

Not all excess deaths during the pandemic have been caused by a SARS-CoV-2 infection. But in some ways, the figure is a more reliable marker of the pandemic’s true toll, according to the World Health Organization, partly because many places lack the infrastructure and resources to accurately track COVID-19 deaths.
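The arithmetic behind an excess-deaths figure is simple; the hard part is building the baseline. A minimal sketch, using entirely made-up numbers for a hypothetical region:

```python
# Sketch of the excess-deaths calculation with invented numbers.
# "Expected" deaths come from a baseline model of what mortality would
# have looked like without the pandemic (e.g., pre-pandemic trends);
# "observed" deaths come from all-cause mortality reports.

expected_deaths = 52_000   # hypothetical baseline for one region, one month
observed_deaths = 61_500   # hypothetical reported all-cause deaths

excess_deaths = observed_deaths - expected_deaths
print(excess_deaths)  # 9500
```

Because the observed figure counts deaths from all causes, it sidesteps the question of whether individual deaths were correctly attributed to COVID-19.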

There are good reasons why the official tallies may not reflect reality, the researchers said. Among them: Cases may go undetected at times and in places with low coronavirus testing rates. Deaths of older people in the pandemic’s early days could have been attributed to other causes. In some places, officials may not release accurate mortality figures for political reasons.

That’s a problem, because an accurate death toll is essential to understanding an outbreak and predicting how it will spread. And it allows policymakers to more accurately weigh trade-offs between public health, the economy and other priorities as they try to respond.

“It’s really, really important that we have a very clear-eyed view of what the actual burden of this disease is,” said James Scott, a statistician and data scientist at the University of Texas at Austin.

Dr. Timothy Brewer, an infectious disease expert and epidemiologist at UCLA, pointed out that the IHME model used data from a few countries and extrapolated to predict what happened in the rest of the world. “That,” he said, “may or may not be a reasonable assumption.”

“I don’t think we can necessarily assume what happens in the United States or California is the same as what’s going to happen in Gabon, or Ghana, or someplace else,” said Brewer. “I think that’s kind of the biggest challenge I have with this.”

Nicholas Jewell, a biostatistician and epidemiologist at the London School of Hygiene and Tropical Medicine, had similar concerns. Taking information from places that have reliable death records and applying it to places where records are less reliable is tricky business, he said.

“It’s an analysis that deserves a full review,” he said — one “where statisticians can assess the methodology that was used in full detail and replicate the findings if necessary.”

But by and large, Brewer expressed confidence that the model was well thought out.

“I think the approach they took was sophisticated. They did an excellent job pulling together available data,” he said.

Ruth Etzioni, a population health scientist at the Fred Hutchinson Cancer Research Center in Seattle, said that counting only direct COVID-19 deaths underestimates the true toll, given that the pandemic has delayed healthcare and caused other knock-on effects for so many. Regardless, she said the estimates made by the IHME model seem reasonable.

“I think that’s plausible” that the total deaths are about double the official count, Etzioni said. “I think it’s pretty hard to put a specific number on it — but that it is considerably higher than reported is to me incontrovertible.”

For their model, originally released on May 6, the researchers at IHME estimated excess death rates for different places based on weekly or monthly data through May 2 (the model has since been updated) and projected the results out to Sept. 1. They sliced the data into six categories:

Deaths directly caused by SARS-CoV-2 infection.

Deaths resulting from healthcare delayed by the pandemic.

Deaths stemming from mental health disorders, including depression, higher alcohol consumption and higher opioid use.

Deaths avoided because stay-at-home orders reduced injuries from traffic accidents and the like.

Deaths avoided because mask use and social distancing reduced transmission of other potentially deadly viruses, including influenza and measles.

Deaths from chronic conditions such as cardiovascular disease avoided because people who would have succumbed to those conditions died of COVID-19 instead.

The researchers estimated excess deaths for each location where weekly or monthly all-cause mortality statistics were available. From those totals, they removed deaths due to causes unrelated to SARS-CoV-2 infection and added back deaths that the pandemic averted.
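The adjustment described above can be sketched with invented numbers. None of the figures or category sizes below come from IHME; they only illustrate how the six categories relate to the excess-death total:

```python
# Hypothetical decomposition of excess deaths into the six categories the
# article describes. All numbers are invented for illustration.
excess = 10_000                # observed minus expected all-cause deaths

delayed_care = 900             # deaths added by postponed healthcare
mental_health = 400            # deaths added by depression, alcohol, opioids
averted_injuries = 300         # deaths avoided (fewer traffic accidents)
averted_other_viruses = 500    # deaths avoided (less flu, measles, etc.)
averted_chronic = 250          # chronic-disease deaths recorded as COVID-19 deaths instead

# Direct COVID-19 deaths = excess deaths, minus the other pandemic-related
# additions, plus the deaths the pandemic averted.
direct_covid = (excess - delayed_care - mental_health
                + averted_injuries + averted_other_viruses + averted_chronic)
print(direct_covid)  # 9750
```

In this toy example, the direct toll comes out slightly below the excess-death total; depending on the relative sizes of the categories, it could just as well come out above it.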

This information was used to build a model that they applied to all areas, including those for which mortality data were missing. The results varied.

Through May 10, the researchers estimated, the U.S. had seen 913,081 deaths, nearly 60 percent more than the 578,985 deaths gathered from official reports.

India had seen 737,608 deaths, they said — nearly triple the reported 248,307 fatalities. Mexico’s estimated toll of 623,571 was also nearly three times as high as the official count of 219,925.

The Russian Federation's estimated toll of 607,589 was slightly lower than Mexico's, but the discrepancy was far larger: more than five times the official count of 111,909.

The gaps for Egypt and Kazakhstan were among the worst. The IHME model estimated a toll of 175,488 for Egypt (more than 12 times the official count of 13,962) and 84,453 for Kazakhstan (more than 14 times the official count of 5,810).

The results were met with significant skepticism from a variety of researchers outside IHME.

Scott, of the University of Texas, was one of several people who expressed deep concern that the estimated death counts were reported so precisely, without any indication of the mathematical uncertainty around those numbers.

The model the IHME team used to estimate COVID-19 deaths relied on several assumptions — and each assumption injected a little bit of uncertainty into the proceedings, he said. It doesn’t take long for that uncertainty to add up.

That’s why such figures usually come with error bars that show the level of “give-or-take” uncertainty around an estimate. That was not the case for the IHME model: While the projections for deaths in the future do have error bars, the estimates of deaths that already occurred do not.

“You need error bars,” Scott said.

“Error bars are what turn a back-of-the-envelope calculation into something that one can actually judge and engage with as a scientific endeavor,” he added.

Eili Klein, an epidemiologist at Johns Hopkins University, also expressed alarm that the calculations appeared so definitive.

“There has to be some uncertainty in their model that they should be reporting,” Klein said. For the U.S., “are they saying this is 900,000 deaths plus or minus two or three, or is it 900,000 plus or minus 500,000? I don’t know how much uncertainty there is in the model, which doesn’t allow me to judge the accuracy of their estimates.”
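The compounding that Scott and Klein describe can be illustrated with a toy Monte Carlo simulation. Everything here is invented for illustration (aside from the reported U.S. count) and reflects nothing about IHME's actual methodology: a few modestly uncertain adjustment steps, stacked together, produce a wide spread in the final estimate.

```python
import random

random.seed(0)

# Toy example: an estimate built from several multiplicative adjustment
# factors, each uncertain. Individually modest uncertainties compound
# into a wide range for the final figure.
BASE_COUNT = 578_985          # reported U.S. death count from official sources
N_FACTORS = 4                 # number of uncertain adjustment steps (invented)
N_DRAWS = 10_000              # number of simulated estimates

estimates = []
for _ in range(N_DRAWS):
    value = BASE_COUNT
    for _ in range(N_FACTORS):
        # Each step scales the count up by an uncertain amount (0% to 30%).
        value *= random.uniform(1.0, 1.3)
    estimates.append(value)

estimates.sort()
low, high = estimates[250], estimates[9750]   # central 95% of the draws
print(f"95% of simulated estimates fall between {low:,.0f} and {high:,.0f}")
```

The interval between `low` and `high` is, in effect, the error bar the critics are asking for: reporting only the midpoint hides how wide that interval is.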

Ali Mokdad, an IHME public health researcher and one of the senior faculty leads on the COVID-19 modeling effort, said the calculations for past deaths don’t need large error bars because the estimates are bracketed by real-world figures — the total number of deaths in a given location.

Amelia Apfel, a media relations officer for IHME, added that for now, the team does not report the uncertainty around the estimated deaths for the past “in order to simplify the modeling process,” though it’s something they may explore in the coming weeks.

Many independent researchers thought the IHME numbers for the U.S. and elsewhere sounded like they were in the ballpark. Even so, some said the larger issue wasn’t whether the numbers were right, but where they came from.

In some ways, Brewer said, it may come down to a question of trust in a particular group and their work.

“In general, this is a very reputable group that has done reasonable work before,” he said. Based on the description and methods detailed online, “it does sound like they were working hard to get it right. That doesn’t mean they got it right, but it does sound like they were trying to get it right. So I do sort of give them that trust or leeway, as it were.”

Mokdad said criticism was part of the research process and he welcomed the feedback from others.

“That’s normal in science,” he said. “We do something new and innovative and yes, we fix it as we go on, if there are mistakes. I’m not worried about that.”