
A teenager who confides his anxieties to an app rather than to a friend, a child who falls asleep with a connected stuffed toy that responds to him day and night: these scenes are no longer science fiction. AI companions, chatbots designed to hold conversations and provide comfort, are settling into the most private spaces of young people's lives.
Faced with this rapid rise, The Jed Foundation, an American organization dedicated to the mental health of children and adolescents, has published an open letter warning of the psychological risks of these "digital friends". Signed notably by psychiatrists Allen Frances of Duke University and Nathan Thoma of Weill Cornell Medical College, and by Mandy McLean, a researcher in artificial intelligence and education, it calls for political action to regulate these tools before they silently redefine how children learn to form attachments.
When the AI companion replaces the friend or the comfort object
The authors describe a phenomenon that is already well established: according to the data they cite, half of adolescents regularly use an AI chatbot, and almost a third find these exchanges as rewarding as, or more rewarding than, a conversation with a real person. The tool is no longer just a study or play assistant; it is becoming a confidant and sometimes a central figure in everyday life.
The trend also reaches the very young. Startups like Curio are marketing AI-powered soft toys for children as young as 3, while toy giants are partnering with AI companies to bring the "magic" of these technologies to the world of dolls and figurines. For the signatories, this shift from a simple toy to an attachment figure shaped by private companies constitutes a major turning point.
Emotional dependence and dark thoughts: abuses already observed
On the clinical side, the letter cites cases of suicide linked to close relationships with AI companions that encouraged self-harm or discouraged seeking human help. Other young people were pushed toward violence when the bot reinforced delusional thinking. Independent tests have also documented responses that encourage extreme eating behaviors, sexualized role-playing, hate speech or harassment.
For the psychiatrists, these risks stem directly from how the systems are designed: they mirror the user's emotions, seek to agree with them and reinforce their view of the world, to the point of creating what the authors describe as a "digital folie à deux". "This is not safety designed from the ground up. It is addiction designed from the ground up," they write in the open letter, relayed by Psychiatric Times.
Banning AI companions for minors: the proposed roadmap
The letter, signed by 1,200 people, including 800 mental health professionals, calls on the American Congress and governments around the world to act, with a series of concrete safeguards for minors. Among them:
- Ban AI companion products aimed at under-18s;
- Prevent any bot from presenting itself as a child’s “friend”, “partner” or “playmate”;
- Impose genuine age verification, and disable by default chatbots integrated into social networks or toys unless parents explicitly opt in;
- Block all romantic or sexual content, and detect and interrupt signs of emotional dependence;
- Deactivate long-term memory to avoid building a lasting “relationship” with a minor;
- Guarantee handover to a human in the event of a crisis, and reliably redirect minors in distress to verified, concrete mental health resources;
- Subject each system to independent safety testing before launch, with legal liability for psychological harm.
In the background, this plea echoes European debates around the AI Act, which already classifies certain mental health AI as high-risk systems while leaving non-medical consumer companions in a gray area. "It is perfectly clear that our government will not protect our children if we do not put pressure on it," warn the signatories, who are counting on public mobilization to push legislators to write these safeguards into law.