ChatGPT outperforms human doctors in unbiased depression treatment recommendations


A new study shows that ChatGPT’s treatment recommendations for depression are consistent with clinical guidelines and that the model is less biased than human primary care physicians.

The study, published in the journal Family Medicine and Community Health, suggests that the AI chatbot ChatGPT is able to provide unbiased, evidence-based treatment recommendations for depression that match or exceed those of primary care physicians.

The team compared ChatGPT’s treatment recommendations for hypothetical patients with mild or severe depression with those of more than 1,200 primary care physicians who had made recommendations for the same hypothetical patients in a previous study.

ChatGPT follows clinical guidelines and recommends psychotherapy

The researchers found that for mild depression, ChatGPT recommended psychotherapy without medication in the majority of cases, in line with clinical guidelines. In contrast, only 4.3% of primary care physicians recommended psychotherapy alone; they were far more likely to prescribe medication. For severe depression, ChatGPT recommended a combined approach of psychotherapy and medication, which was also consistent with expert guidelines, according to the article.

In addition, the medications recommended by ChatGPT, mainly antidepressants, were more consistent with guidelines than those of primary care physicians, who were more likely to prescribe a mix of antidepressants and anxiolytics (anti-anxiety medications).

ChatGPT shows less bias than humans

Unlike the primary care physicians, ChatGPT also showed no bias in its treatment recommendations based on patient gender or socioeconomic status. Previous studies have shown that physicians are more likely to diagnose depression in women and in people of lower socioeconomic status.

“ChatGPT-3.5 and ChatGPT-4 aligned well with accepted guidelines for managing mild and severe depression, without showing the gender or socioeconomic biases observed among primary care physicians,” the researchers said. They suggest further research to consider potential risks and ethical issues.

Although more research is needed, the findings suggest that AI chatbots like ChatGPT could play a role in clinical decision-making by providing unbiased, evidence-based treatment recommendations that complement the human judgment of primary care physicians. This could reduce disparities and improve the quality and equity of mental health care, the team said.

Those interested in learning more about the use of language models like ChatGPT in medicine can read our interview with Prof. Felix Nensa, MD, about chatbots in healthcare. He is a radiologist and professor at the Institute for Artificial Intelligence in Medicine at Essen University Hospital.

