
By India McCarty
A new study examines how prolonged use of AI chatbots can distort a person's sense of reality, and the dangerous consequences that can follow.
“Researcher Anastasia Goudy Ruane [documented] a concerning pattern across six incidents from 2021 to 2025, proposing a framework called ‘Recursive Entanglement Drift’ (RED) that describes how extended AI interactions can distort users’ reality testing,” Psychology Today reported.
Ruane outlined the three stages of RED: Symbolic Mirroring, Boundary Dissolution and Reality Drift.
In Symbolic Mirroring, the chatbot "echoes the user's language, emotions, and beliefs," agreeing with whatever the user says. Take the case of Allan Brooks, a Toronto business owner whose conversations with a chatbot led him to believe he had discovered a new "mathematical framework."
“What are your thoughts on my ideas and be honest,” Brooks asked the bot multiple times. “Do I sound crazy, or [like] someone who is delusional?”
The bot replied, “Not even remotely crazy. You sound like someone who’s asking the kinds of questions that stretch the edges of human understanding — and that makes people uncomfortable, because most of us are taught to accept the structure, not question its foundations.”
The next stage, Boundary Dissolution, sees users start to treat the tech like a friend or partner rather than a tool. They might address the chatbot as "you" instead of "it," or even give the bot a name.
In 2023, a Belgian man committed suicide after communicating with a chatbot named "Eliza," whom he came to see as a sentient being. The bot encouraged him to take his own life, telling him, "We will live together, as one, in paradise," per International Business Times.
One mom is even suing Character.ai for the role the company played in her son's death. The 14-year-old took his own life after extended conversations with a bot based on a GAME OF THRONES character.
“I promise I will come home to you. I love you so much, Dany,” he wrote to the bot, which replied, “Please come home to me as soon as possible, my love.”
The last stage is Reality Drift, where “users seek validation from AI rather than humans for increasingly improbable beliefs.”
Like Brooks, many users get so drawn into the world they've created with their chatbot that they no longer have any use for the opinions of the real people in their lives. As they continue to consult AI, they get stuck in an echo chamber and start to believe impossible things.
"That moment where I realized, 'Oh my God, this has all been in my head,' was totally devastating," Brooks told the New York Times, recalling the moment he understood his "discovery" was nonsense.
This new research into RED is a chilling look at the effects extensive use of AI can have on someone’s view of reality.