Melissa Heikkilä

US and UK refuse to sign summit declaration on AI

Vice President JD Vance warned Europe not to adopt “overly precautionary” regulations on artificial intelligence as America and the UK refused to join dozens of other countries in signing a declaration to ensure that the technology was “safe, secure and trustworthy”. The two countries held back from signing the communiqué agreed by about 60 states at the AI Action summit in Paris, dealing a setback to efforts led by French President Emmanuel Macron to build international consensus around the technology.

AI companies promised to self-regulate one year ago. What’s changed?

On July 21, 2023, seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—agreed with the White House to a set of eight voluntary commitments on how to develop AI in a safe and trustworthy way. These included promises to improve the testing of and transparency around AI systems, and to share information on potential harms and risks. On the first anniversary of the voluntary commitments, the tech sector has made some welcome progress, with big caveats. Companies are doing more to pursue technical fixes such as red-teaming (probing AI systems to find flaws and vulnerabilities).