Season 2 Episode 05: Racism in AI
The ubiquity of Artificial Intelligence is undeniable. Whether you’re using a search engine, watching Netflix, or scrolling through recommendations on TikTok, A.I. powers the conveniences we’ve come to expect from technology. Although these examples seem benign on the surface, the far-reaching effects of our interactions with this expanding area of tech aren’t always so cut-and-dried, or even beneficial for everyone whose lives it affects. As we look into the black box of the beast, our roundtable of journalists weighs the implications of bias in A.I. and how discrimination through digital redlining has become an acceptable tradeoff for growth and technological supremacy.
So what does A.I. bias look like in the real world? One prominent example is facial recognition software. This tech has been known to misidentify black and brown faces as everything from animals to crime suspects in live tests, which is one of the reasons cities like Boston and Oakland have banned its use. The recent news of the NYPD testing Boston Dynamics’ robotic dog Spot has rightfully raised hackles. With several cameras onboard the robot, one can surmise that its footage will be run through that same facial recognition software.
And that’s just the tip of the iceberg, as A.I. can interpret everything from readable text to video files and physiological data, further widening the opportunity for discrimination to negatively impact BIPOC communities. As with many ethical issues facing us today, diversity is the key to rooting out bias within A.I., from where the data is sourced to the people working within the field. But how do we implement this when many of the prominent voices advocating for ethics in the field are silenced or ousted when they try?