I, Robot: Artificial intelligence, morally gray

What do we want AI to become?

An image generated by AI using the prompt “An Idaho newspaper room ran by Argonauts” | Courtesy of OpenAI

It’s something that was once found only in sci-fi and our imaginations. It used to mean Turing-era programmable computers, clunky at best and often just glorified calculators. It was one of those things we associated with our foggy idea of the future.

I’m talking about artificial intelligence, or AI, of course. And I don’t want to generate large-scale panic over a future robotic dystopia (we’re still a little far from Roko’s Basilisk even being possible), but it’s hard to deny just how big this new technology is becoming. Tech giant Nvidia has even said it considers AI an entirely new industry.

We are at a decisive point for the future of AI. What we decide to do about it now will affect our world forever. AI has the potential to be either good or bad, and it all depends on how we use it. Here’s how. 

AI has some good uses, some bad ones and quite a few that are subjective. Everyday uses of AI include grammar checkers, autocomplete, photo gallery organization and search, speech recognition, recommendation algorithms, search engines, spam filters and targeted advertising. There are also quite a few newer developments, which I’ll sort into the good, the bad and the morally gray.

Let’s start with the good parts of AI. Companies are using artificial intelligence for administrative tasks such as scheduling and billing, and in healthcare it’s becoming commonplace for AI to provide initial evaluations of scans, specialized treatment plans and triage. AI is even used in cybersecurity, though its development has also made a lot of existing protections obsolete. It can also take over some menial work, like staffing call centers or running routine data analysis. Finally, we can use it for inspiration or as a starting point for research.

But that brings us to the edge of what AI can do well and ethically. As I said, a lot of its applications are hit-or-miss or double-edged swords. Take AI text detectors: while it seems helpful to be able to check whether a text was written by a human, the detectors are often wrong and tend to discriminate against ESL writers.

Research is also rarely something the machines can do on their own. While AI can be an excellent resource for outlining or structuring research, ChatGPT and its peers can’t replace the entire process. Just look at Mata v. Avianca, Inc., in which a New York lawyer filed a court document citing at least six cases the AI had made up. In a similar fashion, an airline’s chatbot recently invented an incorrect refund policy, which Canadian courts ruled the airline had to honor. (Funnily enough, the airline then tried to argue the bot was an autonomous entity to get out of paying the refund, a far cry from the idea that AI personhood would first be debated following a murder.)

This leads us to the bad uses of AI. We’ve all heard the stories; maybe we’ve even tried these things ourselves: text and visual art generated from human prompts. There are a lot of problems with this. Morally, the programs we created to take over the things we hate are instead replacing the arts, the things that make us human and make life worth living. Plus, the bots generating these texts and images are usually trained on real people’s work.

This is called scraping: the software consumes incredible amounts of writing, art and photography in an attempt to replicate it. Half the time, it doesn’t work well. We’ve all seen the image generators that can’t grasp the concept of hands, or the articles and essays full of clunky, surface-level language that never really goes anywhere. Mass-produced AI-generated books can also be filled with wrong information, even to the point of being deadly. The deeper problem with scraping, though, is that work is being taken from countless artists without credit or financial compensation.

Of course, AI has plenty of ethical issues with the work it produces, but none of that even touches on its discriminatory tendencies. Facial analysis in many AI systems performs markedly better on male faces than female ones, and on lighter skin tones than darker ones: in one widely cited study, darker-skinned women were misclassified at error rates of up to 35%, while lighter-skinned men were misclassified at rates of at most 1%.

This gap is mainly due to the underrepresentation of these groups in the data used to train AI. Because facial recognition software is often trained primarily on light-skinned, male faces, it regularly misclassifies female and darker-skinned faces. That failure compounds existing discrimination, as flawed machine learning worsens racism and sexism in hiring, law enforcement and even advertising.

That pattern also mirrors the lack of representation of women and people of color in the tech sector itself. In 2014, 57% of executive and managerial positions at 75 top Silicon Valley tech companies were held by white employees, with less than 1% held by Black people, and only 28% of executives and managers were women. By comparison, at non-tech Silicon Valley firms, 49% of employees were women and 59% were people of color.

What happens when we project our systemic racial and gender biases onto this new technology? AI is supposed to represent a massive part of our future; it’s supposed to be neutral, promoting equality and eliminating injustice. However, artificial intelligence has started to head down the path of becoming yet another tool limited to a small but powerful group of people. 

Combine that with a decline in privacy and the automation of creativity, and we’re on track for that depressing, dystopian world where machines make the art and humans are left with only the boring, the menial and the lifeless.

Dakota Steffen can be reached at [email protected]. 
