How do I…

Engage with AI as an optometrist?

Dr Peter Hampson, clinical director at the AOP, provides an introduction to the often complicated subject of artificial intelligence (AI)


John McCarthy, a renowned computer scientist from Stanford University, defined artificial intelligence (AI) as: “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

As a healthcare professional, you might be wondering where you can go to learn about AI in this area specifically. As you might imagine, there are plenty of online resources. Those provided by American multinational IBM are a good place to start. They cover basic definitions and offer some free courses and training materials, including how AI can work in healthcare and why healthcare organisations are choosing to utilise it.

AI in optometry practice

The short answer to whether any optometrists are already using AI software in practice is yes, but an answer that short risks being misleading.

There is a natural instinct to think AI is new, but it isn't. In fact, many optometrists have been using basic forms of AI for many years. Take, for example, visual fields testing and the algorithms commonly built in to speed up the testing process. Perhaps surprisingly, this is an example of AI.

We are now starting to see the first commercially available products that can help to analyse optical coherence tomography (OCT) data and spot defects. You might have this or similar software in your practice already.

Ethical concerns

Are there ethical concerns when it comes to AI? The short answer is yes, although the full answer is more complex. The ability to do anything about them won't necessarily be in the control of AOP members: the ethical concerns around AI are part of much wider issues that society is grappling with.

Some of the risks in this area include:

  • Bias in the AI software, arising from the training data used. Is your patient accurately represented in the data the AI was trained on? If not, the answer it gives may not be correct
  • Access for all parts of society, regardless of ability to pay. A number of emerging AI applications involve the assessment of OCT scans, but what if the patient can’t afford the scan? This might mean they cannot benefit from the earlier diagnosis that OCT, in combination with AI, may bring
  • The risk of not understanding the technology, or its limitations. Specsavers now refers to AI internally as ‘supported decision making’. But when do you pay attention to the advice the AI offers, and when do you ignore it? What does ‘being a professional’ mean if you start to defer decision making to technology? This is another question without a straightforward answer
  • Regulation. The Government has recently published an AI regulation white paper, which appears to favour a light-touch approach. This is linked to the previous point: if you do not understand what the AI technology is doing and it ultimately gets it wrong, who should be responsible? The technology provider, or the professional using it?

These are the questions that as a profession and as a society we must address if we are to make use of AI and not simply find ourselves being used by it. Of course, this means a longer and more in-depth conversation than one that is likely to take place between optometrists at a practice level.