NTIA Artificial Intelligence Accountability Policy Report

Alongside their transformative potential for good, artificial intelligence (AI) systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision-making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm. Participants in the AI ecosystem—including policymakers, industry, civil society, workers, researchers, and impacted community members—should be empowered to expose problems and potential risks, and to hold responsible entities to account. AI system developers and deployers should have mechanisms in place to prioritize the safety and well-being of people and the environment, and to show that their AI systems work as intended and without causing harm. To achieve real accountability and harness all of AI's benefits, the United States—and the world—needs new and more widely available accountability tools and information, an ecosystem of independent AI system evaluation, and consequences for those who fail to deliver on commitments or manage risks properly.
