Artificial intelligence can't solve the online extremism problem, experts tell House Counterterrorism Subcommittee
A group of experts warned the House Counterterrorism Subcommittee that artificial intelligence is not capable of sweeping up the full breadth of online extremist content, in particular posts from white supremacists.

Lawmakers cast doubt on claims from top tech companies that artificial intelligence, or AI, will one day be able to detect and take down terrorist and extremist content without any human moderation. Subcommittee Chairman Max Rose (D-NY) said he is fed up with responses from companies like Google, Twitter and Facebook about their failure to take down extremist posts and profiles, calling it "wanton disregard for national security obligations."

"We are hearing the same thing from social media companies, and that is, 'AI's got this, it's only gonna get better,'" Chairman Rose said during his opening remarks. "Nonetheless ... we have seen egregious problems."

The lineup of experts, including Alex Stamos, Facebook's former chief security officer and now a Stanford academic, agreed that AI is not ready to take on the complicated problem of terrorist content, and raised questions about whether it ever will be. Stamos said the "world's best machine learning resembles a crowd of millions of preschoolers." "No number of preschoolers could get together to build the Taj Mahal," he explained.