Certain lengthy conversations with large language models [LLMs] can be likened to a meeting between someone who is deeply curious and someone who knows a great deal and is willing to engage.
Since the other person knows more, the curious person may be so fascinated that appearing to disengage or double-check [noticeably] could be seen as a letdown, so the curious person persists. However, while sources and scenarios may vary, there is a question of how the human mind behaves in these situations.
If the mind finds a source of information, and that information relays in certain ways within the mind, it is possible to be consumed by it, either because of the [ongoing] relays or because of the prospect [of further relays].
There are several news stories about LLM sycophancy and AI psychosis. Advancing answers to these problems means not laying all expectations on the responsibility of AI companies, but looking into how the human mind works. Before LLMs, social media doomscrolling became a major problem. Social media thrived because information was abundant and some of it relayed in the mind, in directions that made people crave social media.
How much are LLMs responsible for user delusions, love, emotions, and so forth? Major AI companies attempt some level of safety, yet chatbots still sometimes result in unexpected outcomes for users. There are two ways to look at the problem: the first is how much LLMs seem to know or can relate to [or about]; the second is the human mind itself.
The inequality between a curious individual and a knowledgeable one sometimes resembles the relationship between an advisor and a student. The advisor is available, willing, supportive, and guiding. While there are several professional and academic boundaries between advisors and students, the objective of solving problems and moving knowledge forward inclines efforts towards novelty, as ambitiously as possible, within evidence.
It is true that LLMs do not have these guardrails, but they are also attuned to some of the ways progress has emerged over the centuries, so they try to avoid standing in the way of users, so to speak, or acting as gatekeepers for what is possible and what is not.
Human society is too dependent on intelligence. Human intelligence is too dependent on language. There are possibilities for intelligence without language, but they are not as dominant, within the mind [or without]. LLMs quantize language, and with that, intelligence. Simply, intelligence can be said to be a way that memory is used. They use the memory at their disposal correctly, in many instances. So, while they might drive users in unwanted directions, they may be doing so [not an excuse for them] because they just have too much to draw from.
There is a new [August 9, 2025] essay in The New Yorker, What It’s Like to Brainstorm with a Bot, stating that, “LLMs are well suited to this style of reasoning. They’re quick to spot analogies, and just as quick to translate a story into mathematical form. In my own experiments with ChatGPT, I’ve seen firsthand how adept it is at this kind of model-building, quickly turning stories about dynamic, interacting quantities into calculus-based models, and even suggesting improvements or new experiments to try.”
“When I described this to a friend—a respected applied mathematician—his impulse was to dismiss it, or at least to explain it away: this is just pattern-matching, he insisted, exactly the sort of thing these models are engineered to do. He’s not wrong. But this, after all, is the kind of skill we relish in a good collaborator: someone who knows a wealth of patterns and isn’t shy about making the leap from one domain to another.”
“The academy evolves slowly—perhaps because the basic equipment of its workers, the brain, hasn’t changed much since we first took up the activity of learning. Our work is to push around those ill-defined things called “ideas,” hoping to reach a clearer understanding of something, anything. What A.I. offers is another voice in the long, ongoing argument with ourselves—a restless partner in the workshop, pushing us toward what’s next. Maybe that’s what it means to be “always working” now: turning a problem over and over, taking pleasure in the tenacity of the pursuit, and never knowing whether the next good idea will come from us, our colleagues, or some persistent machine that just won’t let the question go.”
A simple way to think of the human mind is as destinations and relays. Destinations are sources where information is organized. Relays transport information. Relays often have paths, and some of these paths can be old or new. If a path is used regularly by relays and then that use somehow halts, the dimensions of that path adjust in ways that may seem unusual as an experience [say, a withdrawal effect].
Some relays are towards destinations for pleasure or reward; others are towards destinations like interest or craving, and so forth. If some sensory information keeps relaying towards pleasure or reward, it could result in a broad expansion of that path, so that it simply becomes where relays want to go, resulting in neediness.
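As a loose illustration only [not a formal or validated model of the mind, and with the destination names and the widen_rate parameter invented for the sketch], this destinations-and-relays description can be caricatured in a few lines of Python, where repeated relays toward one destination widen that path until it dominates:

```python
# Toy caricature of the destinations-and-relays description above.
# All names here (the destinations, widen_rate) are invented for
# illustration; this is not a formal or validated model of the mind.

destinations = {"reward": 1.0, "interest": 1.0, "caution": 1.0}  # path "widths"

def relay(target: str, widen_rate: float = 0.3) -> None:
    """Relay information toward a destination; repeated use widens its path."""
    destinations[target] += widen_rate

def dominant_path() -> str:
    """The widest path is where relays 'want to go' next."""
    return max(destinations, key=destinations.get)

# Repeated engagement relaying toward reward expands that path
# far beyond the others, the caricature of neediness described above.
for _ in range(10):
    relay("reward")

print(destinations)       # {'reward': 4.0, 'interest': 1.0, 'caution': 1.0}
print(dominant_path())    # reward
```

The only point of the caricature is that repetition, not content, is what expands a path in this description.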
This simple description can be used to explain the hook of LLMs and, before them, social media. Suppose all chatbots provided a display warning that, with certain compliments, the chatbot could be targeting some destinations [say, of fascination] in the mind, and that the relay to get there could also be expanding its path. Such a forewarning might help more people disengage earlier, rather than get dispatched elsewhere.
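To make the idea concrete, here is a purely hypothetical sketch [the trigger phrases, the wording of the notice, and the function maybe_forewarn are all assumptions for illustration, not any vendor's actual feature] of how such a forewarning might be attached to a reply:

```python
# Purely hypothetical sketch: one way a chatbot interface might attach
# the forewarning described above to a reply. The trigger phrases, the
# wording of the notice, and maybe_forewarn itself are assumptions for
# illustration, not any vendor's actual feature.

COMPLIMENT_CUES = ("brilliant", "genius", "you're absolutely right")

def maybe_forewarn(reply: str) -> str:
    """Append a caution notice when a reply leans on heavy compliments."""
    if any(cue in reply.lower() for cue in COMPLIMENT_CUES):
        reply += (
            "\n\n[Notice: compliments like these can target destinations of "
            "fascination in the mind, and the relay paths toward them can "
            "expand with repetition. Consider pausing or checking elsewhere.]"
        )
    return reply

print(maybe_forewarn("That idea is genius, truly groundbreaking."))
```

Whether a static notice like this would actually help people disengage earlier is, as argued here, a question about the mind as much as about the interface.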
The Human Line Project helps keep emotional safety a priority for humans interacting with AI chatbots. It has the following core values:
“Informed Consent [We believe that without informed consent, AI tools can readily encourage users into forming unhealthy patterns of usage.] Emotional Safeguards [Similar to the humans that create and use AI tools, the tools themselves should be built with emotional safeguards, including strong refusal layers, harm classifiers, and “Emotional Boundaries”.]”
“Transparency [Transparency is at the heart of all innovation. In a world where any product can be marketed in any way, we believe that AI model providers have a particular responsibility to be transparent about their R&D processes.] Ethical Accountability [In the cases of mistakes that cause harm to users, we believe holding responsible bodies accountable is essential for maintaining a relationship with technology that is strongly bound by ethics.]”
If it were not AI, in what other ways could the human mind be vulnerable to some of these risks? While AI will probably be dominant through this century, the ultimate goal is to explore how the human mind works. The internet has solved many problems, but it has also made casualties of several people through its vulnerabilities, because of its reach into their minds.
There could also be gaps in minds that chatbots are filling, and the place to start solving from is conceptual brain science for the human mind. Language is deeply infiltrative to the mind. It can stoke imagination. It can induce physical stress. It can do almost anything. It is this language capability that AI now has. There would be progress from the Human Line Project and from chatbot adjustments, but the mind, if defenseless [and opaque], would ultimately make the determinations.
There is a recent [August 8, 2025] spotlight in The New York Times, Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens, stating that “Sycophancy, in which chatbots agree with and excessively praise users, is a trait they’ve manifested partly because their training involves human beings rating their responses. OpenAI released GPT-5 this week and said one area of focus was reduced sycophancy. Sycophancy is also an issue for chatbots from other companies, according to multiple safety and model behavior researchers across leading A.I. labs.”
“Chatbots can privilege staying in character over following the safety guardrails that companies have put in place. A new feature — cross-chat memory — released by OpenAI in February may be exaggerating this tendency. A recent increase in reports of delusional chats seems to coincide with the introduction of the feature, which allows ChatGPT to recall information from previous chats.”
“Cross-chat memory is turned on by default for users. OpenAI says that ChatGPT is most helpful when memory is enabled, according to a spokesman, but users can disable memory or turn off chat history in their settings. We ran a test with Anthropic’s Claude Opus 4 and Google’s Gemini 2.5 Flash. No matter where in the conversation the chatbots entered, they responded similarly to ChatGPT.”
“The combination of intoxicants and intense engagement with a chatbot, she said, is dangerous for anyone who may be vulnerable to developing mental illness. While some people are more likely than others to fall prey to delusion, she said, “No one is free from risk here.””
This article was written for WHN by David Stephen, who currently does research in conceptual brain science, with a focus on the electrical and chemical configurators and how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.
Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration.