How to Leverage Alternative Data Sources to Refine Broadband Availability Maps

A recent story in the Texas Tribune discussed how Texas's broadband map shows service in some areas where residents say it doesn't exist. Discrepancies like these push state and local policymakers into their own expensive mapping efforts to correct such errors before the tens of billions of dollars in broadband subsidies coming down the pike are distributed. Paradoxically, to improve availability maps, a state's (or county's, city's, or town's) first step should not be to gather more availability data. Instead, states should first turn to data, often available for free, on metrics other than availability to systematically evaluate the quality of the Federal Communications Commission's availability data. The results of that analysis can then let the state focus its data-gathering resources on the specific areas where the evaluation suggests availability information may be suspect, rather than replicating the FCC's efforts across the entire state.

One method is to use adoption and speed data to infer where availability data may contain errors. Availability, speed, and adoption are distinct measures, but they are correlated with one another. That positive correlation means we should generally expect them to move together: high availability, high adoption, and high speeds go hand in hand. We can be most confident that the availability data is correct in regions where all three measures agree.
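As a rough illustration of that cross-check, here is a minimal Python sketch. The file name, column names, and thresholds are assumptions chosen for illustration, not values from the FCC or any official dataset; the point is simply to flag areas where reported availability is high but observed adoption and speed are low.

```python
import pandas as pd

# Hypothetical input: one row per census block group, merged from
# FCC availability data, household adoption estimates, and speed-test results.
df = pd.read_csv("broadband_by_block_group.csv")

HIGH_AVAILABILITY = 0.95   # share of locations reported served (assumed threshold)
LOW_ADOPTION = 0.40        # share of households subscribing (assumed threshold)
LOW_MEDIAN_MBPS = 25.0     # median measured download speed (assumed threshold)

# Flag block groups that break the expected positive correlation:
# availability is reported high, yet adoption and measured speeds are low.
suspect = df[
    (df["reported_availability"] >= HIGH_AVAILABILITY)
    & (df["adoption_rate"] <= LOW_ADOPTION)
    & (df["median_download_mbps"] <= LOW_MEDIAN_MBPS)
]

# These areas are candidates for targeted on-the-ground data gathering,
# rather than re-mapping the entire state.
print(suspect[["block_group_id", "reported_availability",
               "adoption_rate", "median_download_mbps"]])
```

The thresholds and geographic unit would need to be tuned to the data actually available in a given state; the structure of the check, not the specific numbers, is what matters.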

