Newsrooms try AI to check for bias and error


After months of experimenting with artificial intelligence (AI) to make their work more efficient, some newsrooms are now dipping their toes into more treacherous waters: trying to harness AI to detect bias or inaccuracies in their work. Confidence in the news media is at an all-time low, pressuring news leaders to look for new ways to win back trust. But AI, which has biases of its own and fabricates facts, is an unlikely savior.

The Messenger, a new digital media company, said it plans to partner with a company called Seekr to use AI to ensure its editorial content consistently aligns with journalism standards. Seekr analyzes individual articles using factors like "title exaggeration," "subjectivity," "clickbait" and "personal attack," as well as purported political leaning. The promise is that a neutral AI will somehow arrive at purely objective ratings, but AI is itself trained on human data, and that data is full of its own biases. It took less than a minute to find, for instance, that Seekr gave a "very low" rating to a harmless Messenger story rounding up late-night comedy hosts' shticks about Kevin McCarthy's ouster, citing jokes from Stephen Colbert and Jimmy Kimmel as "subjective" and "personal attacks."

Regardless, experts see some value in using AI to fact-check very large datasets, such as tracking the spread of a falsehood identified by a human across multiple stories and media outlets.
