Your words say more than you think: AI learns to spot invisible signals of psychiatric disorders

Every word, every silence can reveal a psychological fragility. American researchers are testing AI to catch these signals before it is too late.

What we say is never trivial. Behind every word, every silence, fragile signals of our mental state sometimes hide. An American team is exploring how artificial intelligence could help psychologists catch them before it is too late.

What your words secretly reveal

For Josh Oltmanns, a psychology professor at Washington University in St. Louis, words are not mere tools. "Our thoughts, feelings and behaviors are reflected in language," he explains.

The choice of a word, a hesitant sentence, a delivery that speeds up or slows down… so many details that tell a far more intimate story than we imagine. Oltmanns insists: "We can tell a lot about a person by the way they speak."

A voice that flattens can betray depression; overly rushed speech can point to anxiety. And these are only two examples among hundreds: "Speech samples include hundreds of different acoustic parameters that could be meaningful."
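Concretely, such acoustic parameters are quantities computed from the raw audio waveform. As a minimal illustrative sketch (not drawn from the study; the feature choice and the synthetic signal are assumptions for demonstration), here are two classic ones: RMS energy, a proxy for loudness, and zero-crossing rate, a rough proxy for pitch:

```python
import numpy as np

def acoustic_features(signal, sample_rate):
    """Compute two simple acoustic parameters from a mono waveform.

    Real speech-analysis systems extract hundreds of such features;
    this sketch shows only RMS energy (perceived loudness) and
    zero-crossing rate (a crude pitch/noisiness proxy).
    """
    # Root-mean-square amplitude of the signal
    rms_energy = float(np.sqrt(np.mean(signal ** 2)))
    # Zero-crossing rate: sign changes per second of audio
    crossings = np.count_nonzero(np.diff(np.sign(signal)))
    zcr = crossings / (len(signal) / sample_rate)
    return {"rms_energy": rms_energy, "zero_crossing_rate": zcr}

# Synthetic one-second 220 Hz tone standing in for a speech sample
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
features = acoustic_features(tone, sr)
```

A 220 Hz tone crosses zero about 440 times per second, so the zero-crossing rate recovers the signal's frequency content even from this trivial statistic; richer pipelines layer pitch contours, pauses, and spectral features on top of the same idea.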

When AI becomes the psychologist's tireless ear

Until now, everything rested on the psychologist's sensitivity. But even the most experienced can let a detail slip. "Psychologists are human beings, and human beings are fallible, so even a good clinician may not always notice important verbal signals," Oltmanns admits.

This is where the machine enters the scene: "But a properly trained computer model will identify these signals." Not to replace humans, but to support them. "The computer program could help validate their observations or alert them to something they have missed," he continues.

A study published in 2025 in Advances in Methods and Practices in Psychological Science illustrates this potential. From a simple interview, the AI can cross-reference thousands of clues and suggest leads that a practitioner alone would not have noticed. In a private practice as in the emergency room, this support could make the difference.

Immense promise, but very real dangers

But if the idea is appealing, it is also worrying. "AI is often trained on information from the Internet, which means that it can be biased," Oltmanns warns. The danger? Cultural or linguistic differences wrongly interpreted as psychological disorders. That is why he is betting on inclusive research: "We are particularly interested in studying language patterns among white and Black participants in order to ensure that AI models treat each group fairly."

And while laboratories are still experimenting, commercial companies are already moving ahead. "Companies already sell AI psychological assessment tools to hospitals and clinicians, but I don't really know how well they work or to what extent they have been evaluated," the researcher worries.

His conclusion is unequivocal: "This type of technology could represent a huge advance for psychology, but it must be done with care. We have to be smart." Because behind each algorithm there are human lives, and a simple conviction: "We have a lot of ideas and a lot of work to do."