Cybercrime Surveys Aren't Telling Us What We Need to Know
Cormac Herley, a principal researcher at Microsoft Research, embarked on a study of the methods used to calculate estimated losses to cybercrime and found them severely wanting.
Most of the statistics come from surveys in which respondents are asked to report whether they've been victims of a crime and how much they lost. "Surveys are hard," Herley says, and his research revealed several reasons why cybercrime surveys are particularly hard. Scientists have pretty good methods for surveying, say, voter intentions. There, the main challenge is getting a representative sample; individual inaccuracies matter, but a few one way or the other largely cancel out. Cybercrime is a different story. A cybercrime survey is trying to measure a quantity, the amount of money lost, so individual responses can make a huge difference, and errors don't cancel: a loss can be exaggerated without limit, but it can't be understated below zero. A voting survey isn't thrown off by much if someone who actually plans to vote Democrat states an intention of voting Republican. But if a respondent who lost $50,000 to cybercrime claims to have lost $500,000, any estimate extrapolated from that answer will be wildly out of whack.
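To make the arithmetic concrete, here is a minimal sketch of how extrapolation amplifies one bad answer. The population and sample figures are hypothetical, chosen only to show the scale of the effect:

```python
# Minimal sketch (all figures hypothetical) of how survey-to-population
# extrapolation amplifies a single mis-reported answer.

POPULATION = 200_000_000               # assumed population the survey represents
SAMPLE_SIZE = 4_000                    # assumed number of survey respondents
MULTIPLIER = POPULATION / SAMPLE_SIZE  # each respondent stands in for 50,000 people

true_loss = 50_000        # what the respondent actually lost
reported_loss = 500_000   # what the respondent claimed

# The single bad answer shifts the extrapolated national total by:
error = (reported_loss - true_loss) * MULTIPLIER
print(f"One exaggerated answer adds ${error:,.0f} to the national estimate")
# -> One exaggerated answer adds $22,500,000,000 to the national estimate
```

Because each respondent stands in for tens of thousands of people, one respondent's $450,000 exaggeration swings the national estimate by $22.5 billion.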
There are other problems. In a voter survey, every registered voter has useful information to report. But only a small fraction of respondents have a cybercrime loss to report, so the estimate rests on a handful of answers. In a 2006 survey conducted by Gartner Research, for example, just 128 out of 4,000 respondents claimed to have been victims. Herley calculates that 59 percent of the reported losses came from the top 1 percent of those victims, which in this case means a single person. He believes that such concerns make the data from most cybercrime surveys impossible to trust.
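A toy simulation makes the concentration problem visible. The numbers below are synthetic, not Gartner's data; the lognormal loss distribution and its parameters are assumptions standing in for the heavy-tailed losses such surveys typically report:

```python
# Toy illustration (synthetic data, not Gartner's) of why a few respondents
# can dominate a loss estimate: draw heavy-tailed losses for 128 victims
# and see what share of the total the single largest report contributes.

import random

random.seed(1)
N_VICTIMS = 128  # respondents reporting a loss, as in the 2006 survey

# Lognormal losses: most are modest, a few are enormous (parameters assumed).
losses = sorted((random.lognormvariate(6, 2.5) for _ in range(N_VICTIMS)),
                reverse=True)

total = sum(losses)
top = losses[: max(1, N_VICTIMS // 100)]  # the "top 1 percent": one person here
share = sum(top) / total

print(f"Largest single report: ${losses[0]:,.0f}")
print(f"Share of total losses from the top 1% of victims: {share:.0%}")
```

When losses are distributed this way, the whole estimate hinges on whether that one dominant respondent answered accurately.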