“ChatGPT-5 does not think, it calculates”: Dr. Kierzek warns of the risks of entrusting medicine to AI

"Chatgpt-5 does not think, he calculates": Dr. Kierzek alerts on the risks of medicine entrusted to AI
While ChatGPT-5 impresses with its power and responsiveness, many voices are asking: can we already do without a human doctor? Dr. Gérald Kierzek, Medical Director of True Medical, warns of the risks of blind trust in artificial intelligence… without denying its advantages.

The new version, GPT-5, is claimed to have medical expertise superior to that of doctors. What do you think?

Dr. Gérald Kierzek: Yes, AI has a data-processing capacity far greater than that of a human doctor: knowledge, exhaustiveness, speed…
Among the obvious advantages are instant access to millions of studies, clinical cases and protocols, and the ability to cross-reference rare symptoms to suggest unlikely diagnoses – as has been reported in some cases with ChatGPT. Artificial intelligence has no cognitive biases, it does not tire, and it does not overlook certain leads.

But it should be remembered that AI does not “think”: it calculates probabilities. It does not replace clinical judgment. A doctor integrates into his analysis the patient's history, social context, body language… so many elements that AI does not perceive. And then, who is responsible in the event of an error? There is a real ethical and legal issue at stake.

In short: yes, AI is superior at data processing, but inferior in overall clinical practice. It is a complement, not a substitute.

However, some Internet users claim that ChatGPT enabled them to obtain a diagnosis their doctors had missed. Isn't it, in that sense, superior?

Yes and no. A doctor cannot memorize the two million medical publications released each year, whereas GPT-5 can access them instantly. AI also identifies correlations invisible to a human, for example by combining rare symptoms with family history.

But medicine is not limited to data. Intuition, empathy and adapting to the patient are essential. AI can suggest unlikely diseases, but this creates a risk of overdiagnosis, an anxiety-inducing “Dr. House” effect. And there is no physical examination: ChatGPT cannot palpate a tumor or listen to the lungs.

It is a great tool to support – or even stay a step ahead of – the doctor, but it is always up to the doctor to validate or invalidate the AI's hypotheses.

Does the fluency of ChatGPT-5's responses, especially by voice, risk strengthening patients' confidence in AI to the detriment of doctors?

There is a real “friendly doctor” effect. An AI that speaks with confidence and fluency inspires trust, even when it is wrong. Form takes precedence over substance.

And as doctors are overloaded – with an average of 15 minutes per consultation in France – patients could turn to AI by default.

This is where the risks appear: dangerous self-medication (“ChatGPT told me to take such-and-such a medication, so I did, without medical advice”) and disengagement from health systems in a context of budget cuts (“why see a doctor, if AI answers better?”).

Its use will clearly have to be regulated. For example, by stating prominently that ChatGPT is a diagnostic aid, not a medical opinion, and by integrating AI into professional tools, such as the shared medical record, or into the training of caregivers.

And in the field of mental health: can an empathetic, always-available chatbot replace a psychologist?

That is another debate, but it raises the same problems. On the plus side, ChatGPT offers unconditional listening, 24 hours a day, with empathetic responses of the “I understand your distress” kind. It is a good tool for putting your emotions into words.

But we must not forget that there is no therapeutic framework. A therapist adjusts his approach – cognitive therapy, psychoanalysis, etc. – AI does not. And in the event of a crisis? What does ChatGPT do when faced with a suicidal patient? There is also the risk of bias in the responses, with no medical oversight.

It can be a good complement between two sessions, but not a replacement. AI does not make a diagnosis, does not build a human relationship, and does not follow a patient over time.

In an op-ed in the New York Times, a therapist wrote that ChatGPT is “sometimes therapeutic, but is not a therapist”. Do you agree?

Absolutely. “Therapeutic” means it can bring occasional relief, like talking to a kind friend. But it is not a therapist. There is no diagnosis, no structured follow-up, no transference – that specific relationship between the therapist and the patient.

ChatGPT is a life buoy at sea. A therapist is a professional rescuer who knows how to swim out to you and bring you back.

AI is a super-assistant – including helping the doctor sweep through every hypothesis – but it is not a doctor. In mental health, it is useful for immediate support, but dangerous on its own.

The priority is a regulated integration into the health system. AI should not be seen as a cost-cutting tool – it does not go on strike and does not need ten years of study – but remember that patients need humanity and closeness above all.