AI beats non-specialist doctors in detecting eye problems

In a University of Cambridge study, a large language model outperformed junior doctors when assessing a series of patient scenarios


New research published in PLOS Digital Health has found that the artificial intelligence (AI) model, GPT-4, can outperform trainee non-specialist doctors in assessing a range of patient eye care scenarios.

Scientists from the University of Cambridge tested GPT-4, a large language model, against doctors with different levels of experience across 87 different scenarios.

The researchers found that GPT-4 performed better than non-specialist junior doctors, and at a similar level to both trainee and expert ophthalmologists.

Lead author Dr Arun Thirunavukarasu, of the University of Cambridge, said that, with further development, large language models could be used to provide guidance on eye-related issues.

“We could realistically deploy AI in triaging patients with eye issues to decide which cases are emergencies that need to be seen by a specialist immediately,” he said.

The case scenarios used to test the model covered a range of eye problems, including extreme light sensitivity, decreased vision, and lesions, and were taken from a textbook used to examine trainee ophthalmologists.

Because the textbook is not freely available on the internet, it is unlikely that its contents were included in GPT-4's training data.