
Today, The COVID Tracking Project is announcing that we have decided to reconfigure our presentation of State Grades, the letter grade system that has scored each state’s data reporting completeness. Over the past year, states have regularly changed their data reporting processes, and our researchers have learned more about how and why states do or don’t report certain metrics. A single letter grade no longer feels like the right reflection of data quality.

So, to complement our plans to end data collection next month, we’ve decided to revamp our letter grades into a more granular set of annotations. These new assessments can be found at the top of each state’s data page, and we hope they will help users better grasp the context behind each jurisdiction’s data. 

Before we get into the specifics of our new process, some history: In March 2020, as we started to build out a process for collecting COVID-19 data from all 56 US states and territories, it became clear that states would be taking their own approaches to reporting the data. With little federal guidance, state health officials largely had the responsibility and freedom to choose which COVID-19 data points to publish. States were able to define their own reporting processes and set their own reporting schedules. No two state data dashboards looked alike.

Almost immediately, we knew we needed a way to keep track of the differences in each state’s reporting practices, so we built a simple system to assign letter grades. Seven weeks later, in April 2020, that system had grown into a more rigorous and structured process, and we launched a revamped grading system to help public health researchers, journalists, and the public better understand the wide range of reporting practices.

This revamped process became hugely important. As we wrote in a July 2020 update, our weekly checks of reporting practices across all 56 jurisdictions helped us identify significant improvements states had made in how they were presenting COVID-19 data. And behind the scenes, our work grew. Over the course of the year, we met regularly with state health officials to understand the limitations of their data infrastructure. Along the way, we helped them see how their reporting practices differed from those of other states.

Though we’ll never know the extent to which our grading process played a role in improving COVID-19 data reporting practices, we have watched state officials champion top grades in media briefings and even on data dashboards. And when states were grappling with low grades, we largely found officials willing to engage with our team, eager to learn how they could improve.

For our next chapter, as we work toward ending our data collection in March 2021, improving our documentation, and building an archive, we are once again announcing a change to how we evaluate COVID-19 data reporting. We’re no longer assigning each state a single letter grade; instead, we’re compiling detailed, explanatory assessments for each state.

Our new assessments focus on three areas of each state’s data reporting: 

  1. how the state defines and reports key metrics, such as testing data, cases, hospitalizations, and deaths,

  2. how the state reports race and ethnicity data, and

  3. how the state presents information about COVID-19 in long-term-care facilities.

Our assessments are based on the thoroughness of each state’s COVID-19 data reporting, along with the clarity and accessibility of its data descriptions. To reflect the complexity of the data, our new process for assessing state-level reporting practices factors in the dozens of choices health officials make about how to present COVID-19 information. For testing, case, hospitalization, and death data, we’re calling particular attention to the placement of data definitions, and our assessments are designed to track clear, approachable context published in close proximity to core numbers.

Our assessments also consider where and how states provide data about race and ethnicity, and we give extra weight to states that include information for all of the standard race and ethnicity groups included in federal data, like the census. Additionally, for data about COVID-19 in long-term-care facilities, we are tracking what metrics the state reports for residents and staff, as well as whether the state reports data at the facility level. 

Though we’re continuing to track data for American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands, we’ve decided not to provide assessments for these territories. Given the COVID-19 situation in each, we found that missing categories often reflected data that simply did not exist rather than a choice not to report it.

Built on the letter grades that came before them, our new assessments are intended to encourage more contextualized COVID-19 data—both for officials presenting the data and for users interpreting it. It’s important to remember that these assessments do not evaluate how well state health officials are responding to COVID-19; instead, they reflect the completeness of a state’s data and help put the breadth of reporting practices into context.

Ultimately, we want public health researchers to have a comprehensive national picture of the pandemic, which means having the ability to fully compare the situation across states. Our new assessments detail the choices states have made, give data users a sense of the variability in the data collected, and encourage improved data reporting from state health officials. 



Alice Goldfarb leads The COVID Tracking Project’s part in The COVID Racial Data Tracker, and is a Nieman Visiting Fellow.

@afgoldfarb

Sara Simon works on The COVID Tracking Project’s data quality team and is also a contributing writer. She most recently worked as an investigative data reporter at Spotlight PA and as a software engineer at The New York Times.

@sarambsimon
