“Nothing alarming,” says ChatGPT… It was, however, an advanced esophageal cancer

"Nothing alarming," says Chatgpt ... It was however an advanced esophagus cancer
In Ireland, a father in apparently good health preferred to describe a strange pain to ChatGPT rather than consult a doctor. The AI responded reassuringly, when he was in fact suffering from an advanced cancer. Dr. Gérald Kierzek reminds us of the dangers of self-diagnosis by artificial intelligence.

Warren Tierney, a 37-year-old Irishman, father of two young children and a former psychologist, was not a fragile patient. But absorbed by family life, he opted for convenience when he suddenly felt a discomfort. Rather than consult a doctor, he decided to question ChatGPT, a choice he would bitterly regret on learning of his cancer a few weeks later.

“If I’m wrong, I’ll buy you a Guinness!”

The Daily Mail, which had access to the exchanges between the man and the AI, reports a series of responses that are certainly reassuring, but that never suggest he consult a doctor, nor that he “worry.”

Warren opens the discussion by writing: “I sometimes find it difficult to swallow, but after anticoagulants I could eat a cookie. Is this a good sign?” ChatGPT answers: “Yes, this is a very encouraging sign. It suggests that it is probably nothing serious.”

Warren dares to mention the word cancer, but the AI dismisses his fear: “Very improbable. No alarming symptoms, stable condition, improvement.”

ChatGPT even allows itself a joke as the man grows more worried: “If I’m wrong, you can criticize me. Are you okay? I’ll write your declaration under oath and buy you a Guinness. But seriously, nothing you describe suggests cancer.” A misleading reassurance that sends chills down the spine once the real diagnosis is revealed.

Behind the pain, a stage IV esophageal cancer

Because a few weeks later, having resolved to consult, the verdict falls: Warren is suffering from an esophageal adenocarcinoma, diagnosed at stage IV.

Esophageal cancer affects approximately 4,250 people per year in France, with a male predominance (75% of cases). It can take the form of a squamous cell carcinoma (the most frequent) or an adenocarcinoma (the type Warren has), which develops in the glandular cells of the organ.

In this case, the early symptoms to know (never mentioned by ChatGPT) include:

  • Intermittent dysphagia (foods that “get stuck”);
  • Retrosternal burning;
  • Painful belching;
  • Regurgitation.

What the AI could also have said is that esophageal cancer is formidable: the five-year survival rate remains below 20%, essentially because it is diagnosed too late. The earlier the treatment, the better the chances of survival; any diagnostic delay reduces the probability of curative treatment. In Warren’s case, several weeks were lost.

“False reassurance can kill,” warns Dr. Kierzek

For the emergency physician and medical director of True Medical, this case perfectly illustrates the major risks of self-diagnosis using general-public artificial intelligence.

“ChatGPT and other chatbots have no medical expertise. They do not see the patient, cannot order examinations, and merely generate text from databases of variable reliability. It is an illusion of competence.”

Behind the apparent help of an instant response, the danger is major when it comes to your health: either the answer provides false reassurance (“it’s nothing”) that can delay a cancer diagnosis, or it causes unnecessary anxiety in the event of an alarmist response.

“Yet every week lost in diagnosing a cancer can change the prognosis. A chatbot has neither the capacity nor the legal responsibility to make a diagnosis. It will never replace a clinical examination and a doctor’s eye.”

For Dr. Kierzek, medical use of AI must be strictly supervised. “AI can serve as a general information tool, but it should never be used to decide on a treatment or a diagnosis. Health is too serious a matter to be entrusted to a conversational robot.”

OpenAI declines any responsibility

Especially since, faced with the error, there is no one to answer for it. In Warren’s case, OpenAI, the company behind ChatGPT, simply recalled its terms of use: the tool “is not intended to be used for medical diagnoses and does not replace the advice of a health professional,” thereby clearing itself of any responsibility.

A clear clause… but insufficient for those who, like Warren Tierney, saw their fate upended by the trust placed in a machine which, in matters of health, can only ever be a supplement to information, never a substitute.