An algorithm by Google has taught itself how to identify diabetic retinopathy and macular oedema with high accuracy using fundus photographs.
The software’s reading of a photograph now agrees with ophthalmologists’ diagnoses more than 90% of the time.
Rather than being told by experts how to detect suspicious lesions, the program is given a large data set of labelled fundus photographs and learns the distinguishing features itself, a technique known as “deep learning.”
Trained on 128,175 anonymised retinal images, the algorithm derived its own criteria for distinguishing a healthy retina from one showing signs of disease.
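The supervised-learning idea described above can be illustrated with a minimal sketch. This toy example (purely hypothetical, nothing like Google’s actual network) fits a logistic classifier to labelled examples of a made-up one-dimensional “lesion score,” showing how a model infers its own decision threshold from labelled data rather than hand-written rules:

```python
import math
import random

random.seed(0)

# Hypothetical 1-D "lesion score" per image: healthy images cluster low,
# diseased images cluster high. Labels: 0 = healthy, 1 = diseased.
data = [(random.gauss(0.3, 0.1), 0) for _ in range(200)] + \
       [(random.gauss(0.7, 0.1), 1) for _ in range(200)]

# Logistic regression trained by gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict(x):
    """Classify a score as diseased (True) or healthy (False)."""
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

The learned threshold falls where the two clusters overlap; no rule was ever written by hand. A real fundus-image model replaces the single score with a deep convolutional network over raw pixels, but the training loop is conceptually the same.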
The algorithm’s capability was then put to the test on two sets of retinal images, one of 9,963 images from 4,997 patients and the other of 1,748 images from 874 patients.
Judged against the diagnoses of multiple ophthalmologists, the algorithm had a false negative rate of 2.5–4% and a false positive rate of 6–7% across the two image sets. The authors of the JAMA paper emphasised the accuracy of the algorithm.
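Those error rates translate directly into the sensitivity and specificity figures the paper highlights. A quick sketch of the arithmetic, using the ranges quoted above:

```python
# Sensitivity = 1 - false negative rate; specificity = 1 - false positive rate.
# The rates below are the ranges quoted above, expressed as fractions.
fn_rate = (0.025, 0.04)   # false negatives: diseased eyes the algorithm missed
fp_rate = (0.06, 0.07)    # false positives: healthy eyes incorrectly flagged

# Reversing keeps each tuple ordered low-to-high after subtraction.
sensitivity = tuple(1 - r for r in reversed(fn_rate))  # roughly 96-97.5%
specificity = tuple(1 - r for r in reversed(fp_rate))  # roughly 93-94%

print(f"sensitivity: {sensitivity[0]:.1%}-{sensitivity[1]:.1%}")
print(f"specificity: {specificity[0]:.1%}-{specificity[1]:.1%}")
```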
The researchers wrote: “[An] automated system for the detection of diabetic retinopathy offers several advantages, including consistency of interpretation – because a machine will make the same prediction on a specific image every time – high sensitivity and specificity, and near-instantaneous reporting of results.”
Moorfields Eye Hospital is also participating in Google’s wider automated diagnosis work, for ocular conditions such as age-related macular degeneration.
Image credit: Alex Grichenko