Of course technology perpetuates racism. It was designed that way.
Today the United States crumbles under the weight of two pandemics: coronavirus and police brutality. Both wreak physical and psychological violence. Both disproportionately kill and debilitate black and brown people. And both are animated by technology that we design, repurpose, and deploy—whether it’s contact tracing, facial recognition, or social media. We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy. Of course, it’s not a new question at all.
If we don’t want our technology to be used to perpetuate racism, then we must make sure that we don’t conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate.
Charlton McIlwain is a professor of media, culture, and communication at New York University.