Is AI quietly rewiring our brains? What experts say about the hidden mental risks


AI is everywhere, but is it changing our brains? How delegating core tasks like memory and focus could compromise our autonomy - and even our mental health


Artificial intelligence symbolised by a 3D-rendered digital human head © Getty Images
Nuria Safont, Wellness Writer
November 20, 2025

In just a few years, artificial intelligence - known as AI - has gone from an abstract concept reserved almost exclusively for science fiction films to a tool we use constantly. It helps us write texts, search for answers, translate languages, plan trips and create our fitness regimes, among what seems like an endless number of other uses. But this technological revolution poses a question: could AI be changing, or even atrophying, the way our brain functions?

As we delegate more and more of our cognitive tasks to algorithms, it's logical to wonder whether we're letting once-essential skills go unexercised: remembering, deducing, paying attention, writing, calculating or even conversing face-to-face.

And although the aim isn't to demonise AI, experts warn that its thoughtless use could have more significant consequences than we imagine, especially in developing brains or in emotionally vulnerable people.

Could this new tool have a truly devastating impact on our brain? The short answer is no - at least not for adults, and not in the strictest sense. But it can modify the brain's functioning and, with it, alter fundamental processes such as memory, attention and decision-making.

Woman using her phone while cooking vegetables at home © Getty Images
We're using AI more and more, from writing texts to finding recipes

How AI alters brain function: Memory, attention and decision-making

Professor Ignacio Morgado, Emeritus Professor of Psychobiology at the Institute of Neurosciences of the University of Barcelona, explains: "Artificial intelligence makes the brain work in a different way. Instead of directly storing information, it stores it in 'files' that contain much more information than the brain itself can handle."

In other words, when we constantly consult an AI for recipes, writing help or solutions to problems, it can make our brain act like an orchestra conductor who just pushes a button instead of actually interpreting the music. Although this frees up some mental space, it could eventually make us dependent and less able to think for ourselves. 

The impact of AI on the developing adolescent brain

This effect becomes especially relevant in the case of teenagers. "The adolescent brain is still immature; it's easier to deceive, influence and steer in inappropriate directions, such as violence," warns Morgado. "Reason, which is protective against these dangers, doesn't fully mature until at least 20 years of age, though earlier in girls than in boys." This immaturity is why the inappropriate use of AI can alter adolescent development.

Teens on mobile phones © Getty Images
The adolescent brain is still immature; it's easier to deceive, influence and steer in inappropriate directions

Still, the problem isn't just early access but the way teens are using AI. Minors may turn to these tools to do homework, solve social problems or seek emotional answers they aren't yet prepared to interpret. "The fact that children use AI from a very young age can indeed be harmful to their development," the expert emphasises. 

Can AI accelerate cognitive decline in older adults?

In older adults, the impact appears to be different. "Cognitive decline can indeed accelerate from not exercising the brain, but not from using AI," Morgado points out. In fact, artificial intelligence could have some therapeutic value if used in a directed way to stimulate the mind, play, learn or train cognitive abilities.

However, there are two things that always apply: common sense and information. "We need to be well-informed about the tool we are using. Understanding, for example, that we're not talking to a person or a professional, but to a programmed machine with limitations," Morgado reminds us. 

Essentially, if we know how to use artificial intelligence, it can be beneficial. If we abuse it or don't use it properly, it could end up affecting our brain. One thing, though, is clear: "[AI] is here to stay and will condition our lives in many ways; we need to try to ensure that the effects are positive."

Woman forgetting dates on a calendar © Getty Images
Using AI can modify the brain's functioning and alter fundamental processes such as memory, attention and decision-making

Mental health chatbots: When do AI assistants become dangerous?

One of the areas where the use of artificial intelligence has expanded most is mental health. Chatbots designed to converse with people who feel lonely, anxious or depressed are increasingly common. And although AI can be useful as a complement to treatment, psychotherapy experts are concerned about unsupervised use.

According to a review of studies carried out by Barcelona's Itersia Psychotherapy Centre, chatbots' 24/7 availability and ability to simulate empathy mean that many users, especially young people or emotionally fragile individuals, replace psychological consultations with conversations with virtual assistants.

"Swapping a therapist for a chatbot can lead to anything from missing a clinical crisis to emotional dependence without real support," warns psychologist Elisabet Sánchez. “These systems can complement, but not replace, human clinical supervision in mental health interventions."

Young girl using a chatbot on her smartphone © Getty Images
Kids and teens may turn to AI to do homework, solve social problems or seek emotional answers they aren't yet prepared to interpret

The situation is especially worrying in rural areas or contexts where access to psychologists is limited. "Patients with emotional or mental health problems have swapped what used to be dubbed 'Dr Google' consultations for 'Dr ChatGPT' or similar tools, where they get a real-time response at any time of the day," explains Sánchez. "Society's need for immediacy has meant a shift from a real psychological consultation to conversational assistants."

A study published in the journal JMIR Mental Health concludes that although AI models can be useful in psychoeducation and basic emotional management, they are not capable of detecting warning signs, making accurate diagnoses or genuinely empathising with patients. Their "diagnostic accuracy, cultural competence, and ability to engage users emotionally remain limited," the report states. 

AI models are not capable of detecting mental health warning signs, making accurate psychological diagnoses or genuinely empathising with patients

5 critical risks of using AI for emotional support and therapy

Mental health experts at Itersia have compiled a list of the main dangers associated with the indiscriminate use of AI and chatbots by emotionally vulnerable people:

  • Failure to detect crises: AI assistants are not prepared to identify or intervene in risk situations such as suicidal thoughts, violence or psychotic episodes.
  • Illusion of emotional connection: Conversational language can create the false sensation of talking to someone who genuinely understands, fostering an artificial relationship without professional support.
  • Incorrect advice: They can offer erroneous, non-evidence-based or even dangerous answers.
  • Lack of clinical context: They don't properly "understand" the personal, cultural or emotional nuances that a human therapist can interpret.
  • Absence of ethical oversight: They are not subject to ethical codes or clinical supervision, which can lead to irresponsible use.
A therapist listens compassionately as a client shares her problems © Getty Images
Chatbots are not interchangeable with real-life therapists and psychiatrists who can make a skilled diagnosis

A recent study of patients with social anxiety found that those with the most severe symptoms were precisely the ones who most trusted chatbots, seeing them as a safe refuge from human interaction. This, however, can reinforce social avoidance and reduce the likelihood of seeking real professional help.

More Health & Fitness
See more