Ina Fried

For AI firms, anything "public" is fair game

Leading AI companies have a favorite phrase when it comes to describing where they get the data to train their models: they say it's all "public."

"Extremely concerned": UN official warns Silicon Valley execs of AI dangers

Volker Türk, the UN's high commissioner for human rights, was in Silicon Valley last week to deliver a simple message to tech companies: Your products can do real harm, and it's your job to make sure that they don't. Technologies like artificial intelligence hold enormous potential for addressing a range of societal ills, but without effort and intent, these same technologies can act as powerful weapons of oppression, Türk said. New regulations are often where the tech debate lands, but Türk tells Axios that the firms should already be ensuring their products comply with existing international human rights law.

AI's next fight is over whose values it should hold

There's no such thing as an AI system without values — and that means this newest technology platform must navigate partisan rifts, culture-war chasms and international tensions from the very beginning. Every step in training, tuning and deploying AI models forces their creators to make choices about whose values the system will respect, whose point of view it will present and what limits it will observe. AI systems' points of view begin in the data on which they are trained — and in the efforts their developers may take to mitigate the biases in that data. From there, most systems undergo further tuning that shapes their behavior.

AI's road to reality

A middle road for AI adoption is taking shape, routing around the debate between those who fear humanity could lose control of AI and those who favor a full-speed-ahead plan to seize the technology's benefits.

AI could choke on its own exhaust as it fills the web

The internet is beginning to fill up with more and more content generated by artificial intelligence rather than by human beings, posing weird new dangers both to human society and to the AI programs themselves.

"Nutrition labels" aim to boost trust in AI

As adoption of generative AI grows, providers are hoping that greater transparency about how they do and don't use customers' data will increase those clients' trust in the technology.

How AI will turbocharge misinformation—and what we can do about it

Attention-grabbing warnings of artificial intelligence's existential threats have eclipsed what many experts and researchers say is a much more imminent risk: a near-certain rise in misinformation. The struggle to separate fact from fiction online didn't start with the rise of generative AI — but the red-hot new technology promises to make misinformation more abundant and more compelling.

Tech is building in the ruins again

Every 15 years or so, it seems, the US economy rolls into a ditch — and the tech industry pulls something remarkable out of its labs. Here we are again! Silicon Valley's favorite bank has failed, while its top firms continue to lay off hordes of workers — but, at the same time, industry leaders foresee vast new growth spurred by artificial intelligence (AI).

U.S. to spend $1.5 billion to jumpstart alternatives to Huawei

The federal government plans to invest $1.5 billion to help spur a standards-based alternative for the gear at the heart of modern cellular networks.

Silicon Valley's Rep. Ro Khanna offers a midterm warning

Although the district of Rep. Ro Khanna (D-Calif.) includes a wide swath of the tech industry's hometowns, such as Sunnyvale, Cupertino, Santa Clara and Fremont, he is an advocate for laws that would curb Big Tech's power. The restrictions Khanna favors include expanding privacy protections beyond California's existing law and changing antitrust law to shift the burden of proof in large deals, requiring the acquiring company to prove a deal won't hurt competition. Members of Congress have proposed new bills on privacy, antitrust and children's online safety, but so far none has become law.