Science Magazine study report on AI Diagnostics

Editor’s summary

Computational tools for medical decision support have advanced steadily, though mainly as resources for narrow applications. Machine learning tools for the autonomous interpretation of clinical cases have likewise been gradually improving.

Brodeur et al. pitted a large language model, the OpenAI o1 series, directly against hundreds of physicians at different levels of training and experience. The comparisons spanned a variety of clinical cases, from published patient vignettes to evaluations of brand-new emergency room patients, and covered clinical tasks including both diagnosis and planning of clinical management (see the Perspective by Hopkins and Cornelisse).

Across a variety of scenarios and applications, the large language model outperformed both human physicians and older models, suggesting its potential utility for clinical care. —Yevgeniya Nusinovich

Abstract

More than 65 years ago, complex clinical diagnostic reasoning cases were introduced as the gold standard for evaluating expert medical computing systems, a standard that has held ever since.
In this study, we report the results of a physician evaluation of a large language model (LLM) on challenging clinical cases across five experiments with a baseline of hundreds of physicians.
We then report a real-world study comparing human expert and artificial intelligence (AI) second opinions in randomly selected patients in the emergency room of a major tertiary academic medical center.
In all experiments, the LLM outperformed physician baselines and showed continued improvement over prior generations of AI clinical decision support. Our findings suggest that LLMs have eclipsed most benchmarks of clinical reasoning, motivating the urgent need for prospective trials.