
By Shawn Smith
“The potential impact of artificial intelligence (AI) on brain health is a complex and ongoing area of research. While there is currently no definitive evidence that AI is harmful to the brain, some studies have raised concerns about potential risks.”
Thank you, Gemini. (No one will know that with a simple prompt, you gave me this glorious lede. Wink. Wink.)
Okay. It’s best not to let AI do all the work. My editor and, according to recent research, my brain will thank me.
As psychiatrist Darja Djordjevic describes it, AI is a “machine that is always available, endlessly enthusiastic, and seemingly competent in every domain.”
“That creates a powerful feedback loop. You skip the discomfort of starting from scratch and get rewarded in seconds,” Djordjevic continued.
Skipping discomfort and getting “rewarded in seconds” doesn’t sound bad at all. But offloading tasks to AI may cause a kind of cognitive atrophy, dulling one’s ability to think critically and solve problems, as one recent study of 1,000 students suggested.
Two groups of students were given practice math problems: one group used ChatGPT and the other did not. Initially, the ChatGPT users scored 48% better than the non-users. But when later tested without ChatGPT, they scored 17% worse than the students who had no AI assistance at any point in the study.
The study hints that overuse of AI might bypass what psychologists call “mastery experiences,” the process of struggling through and conquering difficult tasks.
“When ChatGPT skips the hurdle for us, mastery never materializes; over time the very capacity fades, confidence erodes, and dependence deepens,” Paul Rust and Nina Visan warned in the Wall Street Journal.
In another study, at MIT, 54 participants were divided into three groups and asked to write several SAT essays while an EEG monitored their brain activity. One group was allowed to use ChatGPT, the second was allowed to use Google Search, and the third had to rely on their own brains alone.
The ChatGPT group showed the lowest brain activity of the three and “consistently underperformed at neural, linguistic, and behavioral levels,” as reported by Time.
The study also suggested that those who rely more heavily on AI platforms like ChatGPT retain less of what they write than those who don’t use AI.
While the study is preliminary and has yet to be peer reviewed, its lead author, Nataliya Kosmyna, felt a sense of urgency to release the findings in light of the growing use of large language models (LLMs) like ChatGPT.
“I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” Kosmyna said. “Developing brains are at the highest risk.”
Another study suggests that AI, used the right way, can supplement one’s work rather than become a crutch. In that study, chemistry students retained more information when they used a modified ChatGPT that acted like a tutor, offering step-by-step hints rather than outright answers.
“The small shift from prompting for product to prompting for process cultivates lasting comprehension,” Rust and Visan wrote.
Others argue that we should take a nuanced approach, as David Ovalle writes in the Washington Post, noting that we have feared new technologies before, from calculators to, more recently, search engines.
He quoted Sam J. Gilbert, professor of cognitive neuroscience at University College London: “It wasn’t that long ago that we were all panicking that Google is making us stupid and now that Google is more part of our everyday lives, it doesn’t feel so scary. ChatGPT is the new target for some of the concerns.”
While we don’t need to fear AI, we should use it wisely.