What do LLMs do to our brains? The research has just begun.

Erin Kernohan-Berning

7/9/2025 · 4 min read


The first half of 2025 has seen a glut of headlines (and social media posts quoting those headlines) about how AI might be making us stupid. Citing research from the likes of MIT, the University of Pennsylvania, and Carnegie Mellon University, tech news articles have been sounding the alarm that using a generative AI tool such as ChatGPT will rot our brains. The problem with these articles (easily found on news sites such as Forbes, 404 Media, Gizmodo, TechCrunch, and many other reputable outlets) is that in the effort to rise to the surface of our attention economy with a scary headline, they've missed some of the more interesting conclusions of the research they report on.

There are three pieces of research that have caught the bulk of the attention over the last month or so. Let’s look at what the research said (and didn’t say).

Wharton School, University of Pennsylvania (Shiri Melumad and Jin Ho Yun): Over 4,000 participants were split across a series of experiments that had them research a topic and then provide advice on it, something of general interest like planting a vegetable garden. Some participants were asked to use an LLM such as ChatGPT, while others were asked to use a conventional web search. Participants who used conventional web search tended to develop deeper knowledge than those who used the LLM, provided more persuasive advice, and felt more invested in the topic they had researched.

Microsoft and Carnegie Mellon University (Hao-Ping Lee, et al): 319 knowledge workers were surveyed about how they used LLMs on the job, their confidence in the work the LLMs produced, and whether they felt they applied critical thinking in their work. Those who reported the most confidence in LLMs tended to report less critical thinking, while those who reported the most confidence in themselves reported more critical thinking.

MIT (Nataliya Kosmyna, et al): 54 participants were hooked up to an electroencephalogram (EEG) and asked to write an essay while their brain activity was monitored. Participants either used an LLM for assistance, used conventional web search, or wrote without tools ("brain-only"). The brain-only group showed the highest EEG activity, followed by the web search group, with the LLM group lowest. The LLM group also had poorer recall of their work and a diminished feeling of ownership over it. In a subsequent session, the LLM and brain-only groups swapped conditions and were asked to revisit their previous work. The brain-only group still demonstrated high EEG activity even while using an LLM, whereas the LLM group still demonstrated comparatively low EEG activity even when writing brain-only.

The most important thing to understand about all this research is that at no point did any study conclude that using AI "makes us stupider." In fact, the lead author of the MIT study, Nataliya Kosmyna, has stressed in multiple interviews that words like "stupid," "brain rot," and "damage" should be avoided. Another thing to note is that all of these studies are preprints: early research released without peer review for the purpose of generating further research. Each also clearly states its limitations when you read through it. The purpose of these studies isn't to be perfect or the final answer on whether using AI has a particular effect on us, but to produce work that other researchers can try to replicate or improve upon. As the body of research grows, hopefully, so will the answers to our questions about how using AI may or may not affect us.

I think one of the interesting things these studies may indicate is the value of more manual ways of gathering and synthesizing information. In all three studies, participants took greater ownership of their work when they did it themselves rather than offloading it to a machine. The MIT and University of Pennsylvania studies both noted that participants appeared to retain what they learned better when their own brains made sense of the topic they were researching, rather than generative AI handing them a statistically average account of it. The MIT study's brain-only-to-LLM group introduces another intriguing avenue of inquiry: does doing manual research first, then judiciously using an LLM, help offset some of those less desired effects?

It's important, when we're thinking critically about technology, that we don't selectively choose facts to support a preconceived narrative. There are plenty of valid criticisms of generative AI that need to be grappled with, and those need to be clearly identified, not muddied by hyperbole. By reducing this research to "AI makes us dumb," we miss some of the more useful conclusions the researchers worked hard to demonstrate.

Learn more

Experimental Evidence of the Effects of Large Language Models versus Web Search on Depth of Learning. 2025. Shiri Melumad and Jin Ho Yun. (The Wharton School Research Paper, University of Pennsylvania) Last accessed 2025/07/06.

The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. 2025. Hao-Ping (Hank) Lee, et al. (CHI) Last accessed 2025/07/06.

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. 2025. Nataliya Kosmyna, et al. (MIT) Last accessed 2025/07/06.

AI's great brain-rot experiment. 2025. Scott Rosenberg. (Axios) Last accessed 2025/07/06.

Correction log

Nothing here yet.