Newsrooms grapple with rules for AI

Leading media organizations are issuing guidance on how artificial intelligence may be used in the newsroom even as they strike licensing deals that let AI firms train models on their content. The sudden arrival of publicly and commercially available generative AI tools has forced a new set of ethical choices on media companies trying to protect public trust while still experimenting with the technology and preserving their legal rights. Most news companies allow some use of AI under human editorial supervision, but many of the new guidelines prohibit AI from writing articles, and AI-generated images and video draw extra scrutiny.

The Associated Press became the first major news company to strike a deal with OpenAI allowing the firm to use AP content to train its AI models. Because of that partnership, and the AP's history as an early adopter of automation, its editorial guidance is likely to carry significant weight with other news organizations. However, the AP's commercial agreement with OpenAI may not serve as a blueprint for other media companies weighing how to protect their intellectual property. NPR reported that the New York Times is considering legal action against OpenAI over the unauthorized use of Times stories as training data. The publication updated its terms of service on Aug. 3, 2023, to forbid the use of Times content in "training a machine learning or artificial intelligence (AI) system." As news publishers weigh different AI standards, some level of consistency will be necessary to build broad reader trust.
