Via the Columbia Journalism Review: The fight for COVID-19 data, and what the press can do with it.
On Sunday, Richard A. Oppel Jr., Robert Gebeloff, K.K. Rebecca Lai, Will Wright, and Mitch Smith, of the New York Times, published the most comprehensive analysis we’ve yet seen of the racial disparities shaping the spread of COVID-19, the disease caused by the coronavirus. The reporters analyzed 640,000 COVID cases reported through the end of May across nearly 1,000 counties—counties that, taken together, account for just over half of the total population of the US.
Their findings were horrifying: across the map, from rural towns to big cities to the suburbs, Black and Latino people have been three times as likely as white people to contract COVID-19, and nearly twice as likely to be killed by it. In some counties, especially in Arizona, Native Americans have faced a much higher likelihood of infection. Asian people, meanwhile, have been 1.3 times as likely as white people to catch COVID.
We now have these figures only because the Times sued the Centers for Disease Control and Prevention for their release. Eventually, the CDC handed over data on 1.45 million cases, though more than half of the cases lacked adequate accompanying data on race, ethnicity, and/or location—apparently due to discrepancies in the way state and local officials first reported the data to the CDC—and so the Times left them out of its analysis.
In other words, the best information we currently have about a problem of urgent national concern is incomplete, and wouldn’t be public at all were it not for a major newspaper’s legal and reportorial muscle.
That last depressing fact reflects a longer-term problem: since the early days of the pandemic, officials across the US, often citing privacy considerations, have withheld granular data that would illuminate various facets of the virus’s spread. Like the Times, many outlets—including the Raleigh News & Observer and the Charlotte Observer, in North Carolina; the Arizona Republic and four local TV stations, in Arizona; and the Bay Area News Group, in California—have sued local officials for data related to the pandemic, including, prominently, details of outbreaks in nursing homes and prisons.
After the Miami Herald sued the state of Florida for information on COVID deaths in nursing homes and assisted-living facilities, officials pressured the paper’s law firm to drop the case; eventually, the Herald, in concert with different lawyers and other news organizations, won out.
Florida won early plaudits for putting detailed COVID data online, but as the Herald’s Ana Claudia Chacin and Mary Ellen Klas wrote last month, the state’s reporting has been incomplete and inconsistent. In May, Rebekah Jones, a state official responsible for maintaining the online data, alleged that she was fired for refusing to manipulate it; Ron DeSantis, Florida’s governor, accused Jones of “insubordination,” and said that she faces “cyber stalking” charges.
In Georgia, officials wrongly reported declining case rates three times in as many weeks. Various states and the CDC were accused of massaging testing data to make it look as if they had a better handle on the virus’s spread than they actually did. The list goes on.
In the absence of consistent, reliable official statistics, journalists and researchers have worked tirelessly to try to fill the gap; writers at The Atlantic, for example, founded the COVID Tracking Project to build a more unified national picture of viral spread and surveillance. (In March, Emily Sohn profiled the project for CJR.)
Others have gotten creative. Last week, NPR, working with academics at Harvard and elsewhere, calculated how many COVID tests the US, and each individual state, would need to run in order to mitigate current outbreaks, and how many they’d need to run to start suppressing viral transmission altogether—a more ambitious aim under which life could start to return to “some semblance of normalcy.”
According to their figures, the country as a whole would need to run 1 million daily tests to achieve mitigation, and 4.3 million daily tests to achieve suppression. (Yesterday, 518,000 tests were run nationally.) Texas, to pick a state at random, would need to run 117,000 daily tests for mitigation and 431,000 for suppression. As of last week, it was falling far short of both targets.
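To put those figures in perspective, here is a minimal, purely illustrative Python sketch comparing the daily test counts cited above against the mitigation and suppression targets. The numbers are simply those reported in this piece, and the comparison is our back-of-the-envelope arithmetic, not NPR's methodology.

```python
# Purely illustrative: compares the daily test counts cited above against the
# NPR/Harvard-derived mitigation and suppression targets reported in this piece.
# This is a back-of-the-envelope comparison, not NPR's own methodology.

# entity: (current daily tests or None if not cited, mitigation target, suppression target)
testing = {
    "United States": (518_000, 1_000_000, 4_300_000),
    "Texas": (None, 117_000, 431_000),  # current daily figure not cited above
}

for name, (current, mitigation, suppression) in testing.items():
    if current is None:
        print(f"{name}: needs {mitigation:,} tests/day to mitigate, "
              f"{suppression:,} to suppress")
        continue
    print(f"{name}: {current:,} tests/day is "
          f"{current / mitigation:.0%} of the mitigation target and "
          f"{current / suppression:.0%} of the suppression target")
```

Run as-is, this prints that the national figure of 518,000 daily tests is roughly half of the mitigation target and about an eighth of the suppression target.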