Living with AI

Erin Kernohan-Berning

11/29/2023 · 4 min read

Matrix movie still

Artificial Intelligence (AI) is the big technology buzzword right now. You’ve likely already used an AI-powered tool at some point in your life, even if it wasn’t marketed as AI. The predictive text on your phone that suggests the next word in the sentence you are typing is one such tool. The sometimes comically wrong auto-generated closed captions on TikTok or YouTube videos are another. Alexa, Siri, Cortana, and other virtual assistants are also AI tools. With the runaway popularity of ChatGPT, more tech companies are introducing tools billed as Artificial Intelligence, with vague promises to revolutionize the workplace. But what is AI? And is it really “intelligent”?

AI as we know it today isn’t remotely intelligent. As Morten Rand-Hendriksen, an instructor with LinkedIn Learning, says, the most important thing you need to understand about AI is that “AI doesn’t know anything.”[1] The term evokes the know-it-all computers of science fiction, but it really refers to a kind of statistical model. AI tools are trained on a large amount of data until they are reasonably good at finding the statistically best way to assemble a response to a request we give them. In the case of text generators like ChatGPT, that data is words.
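To make the “statistical model” idea a little more concrete, here is a minimal sketch in Python of next-word prediction using simple word-pair counts. This is a deliberately tiny stand-in, not how ChatGPT actually works (it uses neural networks trained on enormous amounts of text), but the basic job of picking a statistically likely next word is the same in spirit. The toy training text here is invented purely for illustration.

```python
# A toy next-word predictor: it counts which word follows which in its
# training text, then generates new text from those counts alone.
# (A tiny illustrative stand-in, not ChatGPT's actual design.)
from collections import Counter, defaultdict
import random

# Hypothetical toy "training data"; real models train on billions of words.
training_text = (
    "form the patties season the patties grill the patties "
    "place the patties on the buns serve the burgers"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    follows[current][following] += 1

def next_word(word):
    """Pick a continuation, weighted by how often it followed `word`."""
    options = follows[word]
    if not options:  # dead end: nothing ever followed this word
        return None
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a "recipe" one statistically likely word at a time.
word = "form"
generated = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))  # e.g. "form the patties grill the patties ..."
```

Nothing in this little program represents what a patty or a bun actually is; it only tracks which words have tended to follow which. That is the sense in which a text generator can produce a plausible-looking recipe without knowing what a recipe is.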

If you type a request into ChatGPT and read the answer it gives you, you might disagree that AI doesn’t know anything. For example, ask ChatGPT to write a hamburger recipe and you’ll likely get a usable hamburger recipe. It might even be good. But all ChatGPT is designed to do is assemble a response that is most likely to look like a hamburger recipe based on its vast training data. ChatGPT doesn’t know what a hamburger is. It doesn’t even know what a recipe is. It just knows, statistically speaking, which words go together in what order to give you an answer to your request.

The mistake we humans make is thinking that AIs are smart. Because we use words to convey information and meaning to one another, we assume that anything else using words means what it is saying. But AI can’t mean what it’s saying, because it doesn’t really know what it’s saying. Rather, we read the output of an AI and make up the meaning ourselves. Where this can go wrong is that these systems are built to always produce a response: if there isn’t enough data, they will generate incorrect responses that “sound right” to us just to satisfy the request.

This means pretty much anything generated with AI needs to be double-checked by a human being. New York lawyers Steven Schwartz and Peter LoDuca found this out the hard way earlier this year when they prepared a case using ChatGPT to search for case law, and the AI provided case references that did not exist. The lawyers were both sanctioned, and their firm was fined $5,000. Also this year, popular online foragers raised the alarm that AI-generated books on mushroom foraging were appearing on Amazon, containing dangerous advice on identifying wild edibles. Whoever generated those books was looking to make a quick buck and likely didn’t care whether the AI-generated content was accurate.

But it’s not just AI’s need to come up with an answer, any answer, that should spark caution. There’s also the data these AIs are trained on. In a landmark paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender, Gebru, et al.), researchers examined how AIs are trained. The data used to train many AIs has been scraped from the internet, which is full of the biases present in our society. Those biases are then incorporated into the responses the AI generates for us. When an AI feeds us answers based on biased training data, we further entrench those biases by embracing those answers as true (because we think AIs are smart).

This can especially impact marginalized people where AI tools are used for crucial, life-changing purposes like job or security screening. And because the processes through which these tools deliver results aren’t disclosed, or even fully understood, there is little accountability when they inappropriately reject someone’s job application or deny someone access to a needed service. We have to remember that many of the problems that come with AI are human problems – AI just holds up a mirror to them.

AI does have the potential to make certain tasks easier. AI tools can do handy things like create a summary from a long meeting, analyze a full inbox to make calendar appointments, and make sense of a complicated school schedule. But they need to be used with our human eyes looking for errors – especially errors that may be harmful.

Correction log

[1] Morten actually said “ChatGPT doesn’t know anything,” though the sentiment readily applies to AI as the term is currently used.

Learn more

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 2021. Bender, Gebru, et al. (PDF) Last accessed 2024/01/16.

Can Language Models Be Too Big? 2021. Emily Bender and Margaret Mitchell. (YouTube) Last accessed 2024/01/16.

Cleaning up a baby peacock sullied by a non-information spill. 2023. Emily M. Bender. (Medium) Last accessed 2024/01/16.

The Zeroth Law of AI Implementation. 2023. Morten Rand-Hendriksen. (Website) Last accessed 2024/01/16.

AI is a Loom. 2023. Morten Rand-Hendriksen. (Website) Last accessed 2024/01/16.

ChatGP-why: When, if ever, is synthetic text safe, appropriate, and desirable? 2023. Emily M. Bender. (YouTube) Last accessed 2024/01/16.

A few unpopular opinions about AI. 2023. Jeff Jarvis. (BuzzMachine) Last accessed 2024/01/16.

The Great A.I. Hallucination. 2023. Laura Marsh and Alex Pareene. (The New Republic) Last accessed 2024/01/16.

Here's what ethical AI really means. 2023. Abigail Thorn. (PhilosophyTube) Last accessed 2024/01/16.

We're all Stochastic Parrots. 2023. Goutham Kurra. (Substack) Last accessed 2024/01/16.

Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their books. 2023. Ella Creamer. (The Guardian) Last accessed 2024/01/16.

This new data poisoning tool lets artists fight back against generative AI. 2023. Melissa Heikkilä. (MIT Technology Review) Last accessed 2024/01/16.

New York lawyers sanctioned for using fake ChatGPT cases in legal brief. 2023. Sara Merken. (Reuters) Last accessed 2024/01/16.

‘Life or Death:’ AI-Generated Mushroom Foraging Books Are All Over Amazon. 2023. Samantha Cole. (404 Media) Last accessed 2024/01/16.

How should regulators think about AI? 2023. Emily M. Bender. (YouTube) Last accessed 2024/01/16.

ChatGPT Doesn't Know Anything. Morten Rand-Hendriksen. (TikTok) Last accessed 2024/01/16.