
At the start of the pandemic, the federal government had no infrastructure it could use to track all SARS-CoV-2 laboratory results in the United States. It took three months for the Centers for Disease Control and Prevention (CDC) to create a program to collect these results and release a national public testing dataset. When The COVID Tracking Project analyzed this dataset upon its release in May 2020, and again a few months ago, we found that it diverged from state-released COVID-19 data, signalling problems with federal data quality.

But over the past two months, there have been major changes to address some long-standing data problems for nine jurisdictions:

  • The federal government has started receiving more complete testing data from Missouri, Oklahoma, Puerto Rico, the Virgin Islands, Washington, and Wyoming. Previously, testing data for these jurisdictions came from an incomplete dataset submitted directly by labs to the federal government.

  • Gaps between state and federal data for New Hampshire, Rhode Island, and South Carolina were identified and corrected. 

Not only has the federal government started receiving better data in these nine jurisdictions, but the US Department of Health and Human Services (HHS) has also made an important adjustment to how it chooses which data to publish for particular states. Together, these changes have improved the quality of the datasets that policymakers rely on for the COVID-19 response.

However, the recent fixes don't address all the problems with the federal testing data, which is still highly divergent from some states’ data and shows signs of systemic infrastructural problems. We hope that these improvements portend further ones in how federal testing data is collected and presented. 

New improvements

Direct, complete federal data from six jurisdictions

For most jurisdictions, the numbers in the federal government’s testing dataset come directly from state public health departments. Each day, states submit a record of all the tests conducted by labs to a cloud platform run by the Association of Public Health Laboratories (APHL). From there, APHL forwards the data to a system called HHS Protect, from which all the federal agencies that publish testing data draw their datasets. Together, these components make up the bulk of the CDC’s test tracking program, called COVID-19 Electronic Laboratory Reporting, or CELR.

Most jurisdictions were fully onboarded to submit data to CELR between May and October of 2020. But eight US states and territories—Maine, Missouri, Ohio, Oklahoma, Puerto Rico, the Virgin Islands, Washington, and Wyoming—weren’t onboarded to the system during that period, leaving the federal government without detailed testing data for these jurisdictions in CELR. Instead, the federal government usually ended up relying on data submitted directly by laboratories.

That was a problem, since only a portion of laboratories nationwide submit data to the federal government. CELR testing data for five of these jurisdictions—Maine, Missouri, Oklahoma, Puerto Rico, and Washington—was very incomplete, potentially throwing off indicators like test positivity. In our last analysis, we found these jurisdictions were missing between 36% and 79% of testing volume. Ohio, the Virgin Islands, and Wyoming were able to submit more complete aggregate testing counts to the CDC, but county-level data for Ohio and Wyoming had to be drawn from the incomplete laboratory dataset, resulting in test undercounts at the county level.1
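
To illustrate why incomplete laboratory feeds can throw off an indicator like test positivity, here is a minimal sketch with entirely hypothetical numbers, not drawn from any state’s actual data. The assumption for illustration is that the missing tests are disproportionately negative results, so positivity computed from the incomplete feed comes out far higher than the true rate.

```python
def positivity(positive_tests: int, total_tests: int) -> float:
    """Test positivity as a share of all tests reported."""
    return positive_tests / total_tests

# Hypothetical state: 100,000 total tests, 5,000 of them positive.
true_rate = positivity(5_000, 100_000)     # 0.05

# Hypothetical incomplete federal feed: captures only half of the total
# volume but nearly all positives, so computed positivity roughly doubles.
observed_rate = positivity(4_800, 50_000)  # 0.096

print(f"true positivity:     {true_rate:.1%}")      # 5.0%
print(f"observed positivity: {observed_rate:.1%}")  # 9.6%
```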

Over the past two months, six of the eight jurisdictions—all except Ohio and Maine—have been onboarded to submit line-level data to CELR. (Based on correspondence with state health officials, we expect Ohio and Maine to begin submitting regular data to CELR soon.) Those changes allowed the federal government to switch from the laboratory and aggregate datasets to the more complete and detailed state line-level data, resulting in major improvements to the federal testing data.

Federal testing data compared to state testing data before and after changes in sourcing for Missouri, Oklahoma, Puerto Rico, Washington, and Wyoming. The plot for the Virgin Islands shows only federal data because the territory does not publish state testing data in units comparable to the federal data.

Missouri, Oklahoma, and Puerto Rico all show visible changes in their historical time series. Missouri’s data now matches state-provided data much more closely. Oklahoma’s testing data has also greatly improved, though it still has a ways to go to match state reporting.2

In Puerto Rico, it appears HHS chose to enact a soft cutover from the old federal data source to the new one—switching to the new data source without revising the historical data. While this choice creates the false appearance of a rapid increase in the testing rate in recent months, the current data is far more accurate and complete than before.

The impact on data for the Virgin Islands, Washington, and Wyoming is less visible. In the Virgin Islands and Wyoming, this is likely because the aggregate data was already a good estimate of testing volume. Washington’s data shows slightly increased volume in March, suggesting a soft cutover there as well. The effect is less pronounced than in Puerto Rico because the lab-provided data in Washington was more complete.
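
As a rough illustration of what a soft cutover does to a time series, here is a minimal sketch using entirely synthetic daily totals; the dates, values, and cutover date are placeholders, not actual Puerto Rico or Washington data. History from the old, incomplete feed is left untouched, and the new, more complete feed takes over from the cutover date onward, producing a visible step rather than a revised backfill.

```python
import pandas as pd

# Synthetic daily test totals from two feeds covering the same dates.
dates = pd.date_range("2021-02-22", "2021-03-07", freq="D")
old_feed = pd.Series(500, index=dates)   # incomplete lab-submitted feed
new_feed = pd.Series(900, index=dates)   # more complete line-level feed

cutover = pd.Timestamp("2021-03-01")

# Soft cutover: keep old history as-is, use the new feed from the cutover on.
soft_cutover = pd.concat([old_feed[old_feed.index < cutover],
                          new_feed[new_feed.index >= cutover]])
print(soft_cutover)  # daily totals jump from 500 to 900 at the cutover date
```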

Finally, in all six of these jurisdictions, HHS was able to make a major improvement in the Community and State Profile Reports, which focus on the time period of the past two weeks. Since the reports only require fourteen days of historical data, HHS switched its sourcing in the reports to draw exclusively from the new state-provided, line-level data, even for states that hadn’t submitted full historical line-level data to CELR yet. That means the county-level testing indicators, such as test positivity, are calculated using more complete and sound data than before. 
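
For readers unfamiliar with how such an indicator is built, one way to compute a fourteen-day test positivity figure from line-level records looks roughly like the following sketch. The column names and records are hypothetical assumptions for illustration, not the actual CELR or HHS Protect schema or methodology.

```python
import pandas as pd

# Hypothetical line-level records; not the actual CELR/HHS Protect schema.
records = pd.DataFrame({
    "county_fips": ["29189"] * 6,
    "specimen_date": pd.to_datetime(
        ["2021-03-01", "2021-03-03", "2021-03-05",
         "2021-03-08", "2021-03-10", "2021-03-12"]),
    "result": ["positive", "negative", "negative",
               "positive", "negative", "negative"],
})

window_end = pd.Timestamp("2021-03-14")
window_start = window_end - pd.Timedelta(days=13)  # 14 days, inclusive

recent = records[records["specimen_date"].between(window_start, window_end)]
summary = recent.groupby("county_fips")["result"].agg(
    tests="count",
    positives=lambda r: (r == "positive").sum(),
)
summary["positivity"] = summary["positives"] / summary["tests"]
print(summary)
```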

State and federal testing data gaps closed

In our last analysis of the federal testing data, we found that many states exhibited large gaps between state and federal testing numbers—even when we controlled for the varying definitions that states use on their dashboards. Based on our conversations with health officials, these discrepancies tend to signal submission and data cleaning problems with federal testing counts. For example, some states were unable to forward to the federal government test results that had been sent to them in non-electronic formats, while others submitted data containing duplicate records that the federal government could not easily remove.

State-reported vs. federally-reported test count differences for states that define tests the same way as the federal government but have >5% differences in testing counts.

A few months later, there are still large discrepancies between many states’ federally-provided and state-provided testing counts, even when we only look at states that define their testing metrics the same way as the federal government does. However, recent revisions to the federal dataset improved or removed these discrepancies for three states: New Hampshire, Rhode Island, and South Carolina.

Federal testing data compared to state testing data before and after revisions for New Hampshire, Rhode Island, and South Carolina. South Carolina’s data went through multiple revisions; the graph depicts the final one.

In data notes accompanying the February and March Community Profile Reports, HHS explained that it was investigating discrepancies between state and federal test positivity for New Hampshire and South Carolina. Shortly after, we saw major adjustments to the historical and current testing data in these states that narrowed the gaps between state and federal testing data.

Rhode Island saw a similar revision in its federal testing data, bringing it into closer alignment with state data and shrinking the gap between the two sources from 34% to 16%.

It is likely that the remaining difference in Rhode Island represents antigen tests, which the state includes in its state-level counts, but which are not included in the federal testing dataset. While Rhode Island does not provide a breakdown of antigen and PCR testing that we could use to verify this, the volume of missing federal tests is comparable with antigen volume for states that do report that breakdown. 
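
For reference, here is a minimal sketch of one way a gap like Rhode Island’s can be expressed as a percentage, taking the state-reported count as the baseline; the totals below are placeholders, not actual Rhode Island figures.

```python
def percent_difference(state_count: int, federal_count: int) -> float:
    """Gap between sources, expressed relative to the state-reported count."""
    return abs(state_count - federal_count) / state_count

# Placeholder totals, not real reported values.
state_total = 1_000_000
federal_total = 840_000

print(f"gap: {percent_difference(state_total, federal_total):.0%}")  # 16%
```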

What the federal government should do next

We are encouraged to see new attention paid to the two most pressing data quality problems for the COVID-19 testing dataset: incomplete lab-submitted data and gaps between state and federal data caused by submission errors.

That said, the dataset still needs more work before it can capture testing volume in many more states. We hope to see the federal government take the following steps for the testing dataset:

1. Provide clear documentation of CELR data sources and problems on a per-state basis.

Right now, the only federal testing data products with state-by-state documentation of data sourcing are the Community Profile Reports and State Profile Reports. The machine-readable testing dataset has no documentation other than a general note on the composition of CELR data at the national level. That lack of documentation makes it very hard for most data users to understand how to interpret the dataset. For example, the feed cutovers between laboratory-submitted and line-level data in HHS time series are not explained anywhere in the dataset description.

The federal government should publish state-by-state documentation of the data sourcing for each dataset, including sources used for each state across datasets, exact dates of any feed cutovers, and data warnings for incomplete data where appropriate.

In addition, we think it’s likely that in many cases the federal government is already aware of CELR data submission problems. In states where there are known quality problems with federal testing data, the federal government should add warnings so that users can treat the data with appropriate caution.

2. Provide states with technical assistance to submit high-quality testing data. 

Many states still exhibit large gaps between their state-published and federal testing data, signalling problems with the quality of the data they submit to the federal government. In some cases, there are likely easy fixes for submission errors, while other states face large infrastructural barriers to improving their federal testing data. In either case, fixing these problems may fall by the wayside in state health departments stretched thin by case management and vaccine rollouts.

The federal government should allocate more technical assistance to states, especially those with large, unexplained gaps between their state and federal testing counts, to improve their testing data submissions to CELR. Where there are problems that cannot be fixed, the federal government should document them.

3. Remain committed to collecting testing data.

There are decisions on the horizon for the federal government about what data to collect from states and use in public-facing datasets: most pressingly, whether to continue collecting line-level data for negative tests, switch to aggregate data, or stop collecting negative test data altogether.

As we've written before, we believe it’s vital to continue collecting the testing data in some form. Especially if the federal government intends to pursue the aggressive testing strategy it has outlined, we cannot afford to lose sight of testing nationwide. And the federal government is the only entity that stands a chance at aggregating a complete and standardized national testing dataset; no non-governmental organization like ours can possibly do it. 

Additional research by Jennifer Clyde, Rebecca Glassman, Rachel Glickhouse, Julia Kodysh, Dave Luo, Alexis Madrigal, Michal Mart, Theo Michel, Ruirui Sun

1 The Virgin Islands were able to send aggregate testing counts for each of their two districts.

2 According to a state health official, Oklahoma still needs to onboard about half of its laboratories to the electronic laboratory reporting system from which it draws data for submission to CELR. On its dashboard, the state shares results from an aggregate submission application that captures tests from laboratories not yet onboarded to electronic line-level submission, but that application also includes antigen tests. Even so, the new federal data drawn from Oklahoma’s electronic laboratory reporting system is more complete than the old data that came directly from labs.



Kara Schechtman is Data Quality Co-Lead for The COVID Tracking Project.

@karaschechtman
