AI's next fight is over whose values it should hold
There's no such thing as an AI system without values — and that means this newest technology platform must navigate partisan rifts, culture-war chasms and international tensions from the very beginning. Every step in training, tuning and deploying AI models forces its creators to make choices about whose values the system will respect, whose point of view it will present and what limits it will observe.

An AI system's point of view begins in the data on which it is trained — and in whatever efforts its developers make to mitigate the biases in that data. From there, most systems undergo an "alignment" process, in which developers try to make the AI "safer" by rating its answers as more or less desirable.

Makers routinely talk about aligning AI with human values, but they rarely acknowledge how deeply contested human values are. Right now, in many cases, only the makers of an AI system know exactly what values they're trying to embed — and how successful they are.