Can a computer be racist?
This may seem like a silly question, but the answer certainly isn’t. As you’ve probably heard since you started using calculators in school, computers are only as smart as the people who use them. What this means is that any artificial intelligence (or AI) system can only do what its programmers design it to do and what the data it’s given teaches it to do.
In the same way, AI systems can take on the prejudices and racial biases of their programmers. At best, this reinforces stereotypes about attractiveness, intelligence, and morality. At worst, it can increase the number of racially motivated arrests and murders.
AI Systems and Filters: Racial Discrimination and Beauty
As we mentioned, AI systems can only be as biased as the people who program them. This doesn’t mean that all apps, software, and machines are designed by Neo-Nazis or the KKK. Unfortunately, implicit biases (i.e., biases that normal, everyday people aren’t even aware they have) are more often the cause of racist programming.
Here’s one of the more “innocent” examples. In 2017, FaceApp created a “Hot” filter that claimed to make the person in a photo more attractive. According to its creators, this filter based its edits on global data about perceptions of beauty.
There are numerous problems with such a program, but the one FaceApp found itself facing was an accusation of racial prejudice. Across the board, the “Hot” filter lightened people’s skin tones, narrowed their noses, widened their eyes, and reshaped their cheekbones. In other words, it blurred or outright removed any nonwhite ethnic features.
The implication here is not only that white=pretty but that white=normal. Intentionally or not, this app’s programmers created a space where whiteness is considered the yardstick against which to measure beauty.
Police Brutality and Unwarranted Arrests
Standards of beauty aren’t the only racial biases that pop up in programming. Much more worrisome is how AI systems can target certain racial groups in legal matters.
AI systems run on pattern recognition. When a photo filter’s developers feed in data that repeatedly labels “white” features as the most beautiful, the AI system treats that as a pattern and follows it to the letter. When a policing program is only shown crimes from neighborhoods with a significant racial majority, the AI system begins to do the same.
What this means is that AI systems train themselves to look for crime in Black, Latino, and Muslim populations more than anywhere else. Once the pattern is set, that’s the only thing the AI system will recognize.
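To make that pattern-recognition mechanism concrete, here is a minimal sketch of the feedback loop, assuming a toy, invented dataset; the neighborhood names, arrest counts, and “model” are hypothetical and not drawn from any real policing system:

```python
from collections import Counter

# Hypothetical historical arrest records: they reflect where police were
# sent, not where crime actually happened. Names are invented.
historical_arrests = (
    ["Southside"] * 80    # heavily patrolled, so heavily recorded
    + ["Northside"] * 20  # lightly patrolled, so lightly recorded
)

# A "predictive policing" model in miniature: rank neighborhoods by how
# often they appear in past arrest data.
pattern = Counter(historical_arrests)

def recommend_patrols(n=1):
    """Return the n neighborhoods the model thinks need the most policing."""
    return [name for name, _ in pattern.most_common(n)]

print(recommend_patrols())   # ['Southside'] -- the model mirrors past patrols

# Each new patrol sent to Southside produces more Southside arrests, which
# feed back into the data and reinforce the same recommendation.
pattern.update(recommend_patrols() * 10)
print(pattern.most_common())  # the gap between neighborhoods only widens
```

The “model” never sees actual crime rates, only where arrests were recorded, so every round of recommendations deepens the original skew.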
How Do We Fix This?
As with racism in general, we can only overcome racist AI systems by confronting our societies’ implicit biases. This means that we must be more conscientious about the data we use and the conclusions we draw from it. As civil rights scholars have been shouting for years, justice is a basic human right, and we shouldn’t have to fight to secure it.
We must also listen to the voices of people of color and other discriminated groups. This isn’t just true for white people, either. Even if you belong to an ethnic minority, you may possess implicit biases about other races or even your own race because of systemic prejudices.