Elizabeth Callaway: Before we worry about the future of AI, we need to look at its immediate dangers


(Illustration by Christopher Cherrington | The Salt Lake Tribune)

This is part of a series in which Utahns share their insight on AI. Read more here.

As an English professor who studies science fiction, I’m used to horror and hype about AI. Amid discussions of the singularity, AI automating us all out of our jobs and the potential extinction of the human race, I’d encourage us to focus first on the harms AI is already causing.

Among the areas of immediate concern are rampant AI bias, unfair labor practices in AI and the business model behind AI development.

Bias in AI

Many prominent examples have taught us that AI is no less biased than human beings, because the training sets that teach AI how to perform a task often contain biased human decisions or reflect historical bias. Famous examples include Amazon’s AI recruiting tool, which had to be abandoned because it was biased against women; the risk assessment software that ProPublica found assigned higher risk levels to Black probationers than to white counterparts with similar or worse criminal records; and healthcare AI that consistently under-referred Black patients to programs that improve patient care.

Because AI operates at scale, AI bias is not just a replication of human bias. It is an amplification of it. We must require AIs to pass bias tests before they are released. We can’t keep leaving bias detection to investigative journalists to uncover after the fact.

Unfair labor practices in AI

Addressing bias in AI also requires examining its means of production, which brings me to my second major concern: labor.

Many of these AI systems seem like labor-saving tools or even like magic. Take a large language model (LLM) like ChatGPT. It has a simple, minimalist interface; it generates human-sounding text in seconds; and it appears to be an effortless way to write. However, a mountain of work is hidden behind this apparent ease. This is not only the work of software developers but also the labor of everyone who has ever posted writing on the internet. Social media posts, blog entries and comments in online conversations written by human individuals are all scraped and fed into AI for training. These models are then packaged up and sold back to us (or paid for by the ads we see) to add to the bottom line of for-profit corporations.

Even after the LLM is trained, it goes through another layer of human evaluation and writing, work that is often carried out by people in the Global South who face poor working conditions, unstable pay and shady management.

I always tell my students that ChatGPT feels like a way to write without doing all that work only because it’s not our work that goes into it. It’s somebody else’s.

The business of AI development

Finally, and most importantly, we need to focus on the business model behind AI development. AI has enormous potential to support human flourishing, but the tasks AI is turned to will be driven by how its development is funded. If AI is developed through grants and prizes to solve problems like protein folding, it can be a tool for good. If, however, AI is hooked up to the social media business model of surveillance advertising, then we are developing AI specifically trained to influence human behavior (primarily to keep users on a platform longer).

AI that is designed to manipulate human beings, and especially the kind of AI that isn’t tested for bias ahead of time and is developed using exploitative labor practices, is the kind that scares me the most.

Social media recommendation AI, the kind that chooses which videos to recommend in the sidebar of YouTube, organizes an Instagram feed and queues up TikToks, was humanity’s first large-scale encounter with AI. Those systems figured out that some of the best ways to keep us engaged were to recommend incrementally more extreme content, to feed us conspiracy theories and to keep us outraged, ashamed, fearful and divided.

Social media demonstrates how AI is agnostic about the methods it uses to accomplish a set task. It is indifferent to whether its processes help or hurt people, and we can already see the resulting wreckage in mental health, attention spans, polarization, extremism and the erosion of our democracy.

Given this amorality, it pays to be cautious with AI and to incentivize the kind of AI that can benefit humanity rather than the kind that manipulates us to turn a profit.

We have only to look at the harm AI is already causing to realize it is time to do something about it.

Elizabeth Callaway is an assistant professor in the Department of English at the University of Utah, where she teaches classes on contemporary literature, science fiction and responsible AI.
