When it comes to understanding AI’s impact on elections, we’re still working in the dark
Ahead of the 2024 U.S. election, there was widespread fear that generative artificial intelligence (AI) posed an unprecedented threat to democracy. Just six weeks before the election, more than half of Americans said they were “extremely or very concerned” that AI would be used to spread misleading information. Intelligence officials warned that foreign influence campaigns would use these technologies to undermine trust in democracy, and that growing access to AI tools would lead to a deluge of political deepfakes.

This premature, “sky is falling” narrative rested on very little evidence, as we warned at the time. But while it seems clear that the worst predictions about AI didn’t come to pass, it’s similarly hasty to claim that 2024 was the “AI election that wasn’t,” that “we were deepfaked by deepfakes,” or that “political misinformation is not an AI problem,” as some observers have stated.

In reality, too little data is available to draw concrete conclusions. We know this because, for the past several months, our research team has tried to build a comprehensive database tracking the use of AI in political communications. Despite our best efforts, we found this task nearly impossible, in part because of a lack of transparency from online platforms.