Senator Bennet Urges Leader Schumer to Consider AI Labels, Disclosures, Risk Assessments, and Audits


On August 30, Senator Michael Bennet (D-CO) wrote to Senate Majority Leader Chuck Schumer (D-NY) about the Leader's SAFE Innovation Framework for Artificial Intelligence (AI). Sen. Bennet suggested several critical elements for consideration as the framework is developed:

  • A Values-Based Framework: A robust AI regulatory framework will require AI developers to construct their systems so that they preserve Americans’ privacy, civil rights, and civil liberties; protect against bias and discrimination; ensure safe environments for our children; and secure the integrity of our civic processes.
  • Public Risk Assessments, Mitigation, and Audits: AI systems should undergo regular public risk assessments to examine their safety, reliability, security, explainability, and efficacy. We should couple these assessments with transparency and disclosure obligations to enable effective compliance audits.
  • Content Indicators: AI-generated content should carry a distinct, easily recognizable signifier, such as a watermark, hard-coded indicator, or visual overlay, so that users can readily identify content as AI-generated.
  • AI Disclosure: AI platforms should disclose their AI nature at the beginning of a user’s interaction and periodically throughout, to ensure that users understand what sort of system they are encountering.
  • Data Transparency: Users must understand how AI systems intend to use, store, and transfer their personal data. Users should have a right to know how their data will contribute to any AI system’s training or optimization, and how the data generated by their interactions with AI systems are used. Users should have the right, through “opt-in” procedures, to determine whether AI systems can collect and use their data.
