AI brings us a new kind of bug
Generative AI is raising the curtain on a new era of software breakdowns rooted in the same creative capabilities that make it powerful. Every novel technology brings bugs, but AI's will be especially thorny and frustrating because they're so different from the ones we're used to.

AT&T's cellular network and Google's Gemini chatbot both went on the fritz recently. In AT&T's breakdown, a "software configuration error" left thousands of customers without wireless service during their morning commute.

Google's bug was very different. Its Gemini image generator created a variety of ahistorical images: When asked to depict Nazi soldiers, it included illustrations of Black people in uniform; when asked to draw a pope, it produced an image of a woman in papal robes. This was a more complex sort of error than AT&T's, at the boundary between engineering and politics, where it looked like a diversity policy had gone haywire. Google paused all AI generation of images of people until it could fix the problem.

In both the AT&T and the Google incidents, systems failed because of what people asked computers to do. AT&T's wireless service crashed—like most computer crashes—when it tried to follow new instructions containing an error or contradiction that caused the system to stop responding.

The Google incident wasn't so simple, because most AI systems don't operate by commands and instructions—they use "weights" (probabilities) to shape output. Developers can put their fingers on the scales—and clearly they did with Gemini. They just didn't get the results they sought.
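To make that distinction concrete, here is a minimal sketch of weighted sampling in Python. It is not Google's system; the candidate names, scores, and bias values are all invented for illustration. The point is that adjusting a model's weights tilts the odds of an outcome rather than commanding it, and an overlarge adjustment can swamp everything else the model learned.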
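    import math
    import random

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidate outputs and raw model scores (toy numbers).
    candidates = ["option_a", "option_b", "option_c"]
    logits = [2.0, 1.0, 0.5]

    # Unbiased sampling: the model's own learned preferences.
    probs = softmax(logits)

    # "Finger on the scales": add a fixed bias to one candidate's score.
    # A small nudge shifts the odds; an overlarge one dominates them,
    # which is the kind of overshoot that produces surprising output.
    bias = {"option_c": 3.0}
    adjusted = [l + bias.get(c, 0.0) for c, l in zip(candidates, logits)]
    adjusted_probs = softmax(adjusted)

    for c, p, ap in zip(candidates, probs, adjusted_probs):
        print(f"{c}: unbiased {p:.2f} -> biased {ap:.2f}")

    # Sampling remains probabilistic either way: there is no single
    # instruction to "fix," only weights that tilt the odds.
    choice = random.choices(candidates, weights=adjusted_probs, k=1)[0]
    print("sampled:", choice)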