AI companies promised to self-regulate one year ago. What’s changed?
On July 21, 2023, seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) made eight voluntary commitments to the White House on developing AI in a safe and trustworthy way. These included promises to improve testing and transparency around AI systems and to share information about potential harms and risks.

One year on, the tech sector has made some welcome progress, with big caveats. Companies are doing more to pursue technical fixes such as red-teaming (in which humans probe AI models for flaws) and watermarking AI-generated content. But it is not clear what the commitments have actually changed, or whether the companies would have implemented these measures anyway.