
Five considerations on using AI tools

A session during Independents Day explored how practitioners can “cut through the hype” to harness artificial intelligence in practice

A crowd gathers in an exhibition space (Bruce Foster Photography)

With the theme Back to the future, it seemed only right that Independents Day (29–30 June) would feature a presentation on artificial intelligence (AI) – a technology that was once the realm of science fiction but is now integrated into many of our day-to-day lives.

Daniel Hardiman-McCartney, clinical adviser for the College of Optometrists, took to the stage to discuss responsible use of AI, methodologies for experimentation, and the regulatory framework in the UK.

The College of Optometrists has recently published an interim position statement on AI, created jointly with sector bodies including the AOP, FODO, and ABDO. The paper covers procurement of AI tools, use of large language models, and considerations for professional development. 

OT reports on key takeaways from the session.

1 AI literacy is a responsibility

Hardiman-McCartney emphasised the importance of AI literacy, commenting: “We all have a responsibility as business owners and clinicians to become literate with AI.”

Practitioners already have the skills to distinguish between tools that are truly helpful for patient outcomes and products that sound “exciting” but do not deliver, he said.

Hardiman-McCartney gave the example of how a practice may review the ingredients and research basis of two different eye drops when considering which to stock.

He said: “Look under the bonnet of AI systems – how well it performs and how much of a benefit it could be for practice.”

2 Regulation and responsibility

When procuring an AI-based tool for use in practice, it is important to ensure the product is registered with the UK Medicines and Healthcare products Regulatory Agency (MHRA) as a medical device. This is referred to as Artificial Intelligence as a Medical Device, or AIaMD.

Hardiman-McCartney recommended asking questions of suppliers, such as whether the technology is registered with the MHRA, and how any potential issues could be reported.

Clinicians often have questions around clinical accountability in consultations using AI tools. This is also a concern for patients – who is responsible for decision making?

Providing a “rule of thumb,” Hardiman-McCartney explained that, currently, for decisions made using AI products in the consulting room: “The clinician would be ultimately responsible for AI as a medical device in the consulting room in the UK. So, even more reason why we need to be AI-literate, understand the evidence, and critically appraise AI products – cutting through the hype.”


3 Large language models

Looking at large language models (LLMs), such as ChatGPT or Google’s Bard, Hardiman-McCartney suggested that these can be a good way for practitioners to test out the technology and see how it works for themselves.

However, he emphasised that these are not registered as medical devices or intended for medical use, and that patient-identifiable or sensitive information should not be used in these open-access systems.

Users must be aware of the limitations of the technology, with Hardiman-McCartney noting that these models can present “confident, convincing results that sometimes are completely off the mark,” adding that practitioners need to use their critical appraisal skills when using these tools.

Prompt engineering is one way in which users of AI can improve the specificity and effectiveness of the results that LLMs produce. This refers to the process of carefully crafting the instructions given to a model to achieve a desired outcome.

Hardiman-McCartney described the TALENTS model, which suggests users create their prompts for AI around these points: Task, Aim, Listeners, Extent, Nature, Takeaway, and Sources.
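As an illustration only – this wording is ours, not an example given in the session – a prompt structured around the TALENTS points might read: “Task: draft a patient information sheet on dry eye. Aim: encourage safe self-care. Listeners: adults newly diagnosed in practice. Extent: about 250 words. Nature: plain English, with a reassuring tone. Takeaway: clear advice on when to return for review. Sources: base the content on published patient guidance, and flag anything you cannot verify.” In line with the caution above, no patient-identifiable details would be included.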

4 Considering sustainability

A key concern surrounding the use of AI is its impact on the environment, and whether the use of these tools is sustainable.

Identifying three kinds of sustainability – environmental, financial, and workforce impact – Hardiman-McCartney outlined some of the considerations for the use of this technology.

AI consumes a vast amount of energy. Hardiman-McCartney pointed out that many practices will have spent years decarbonising their business – and embracing AI will mean more energy is used.

Financially, AI is relatively cheap at the moment, but it will become a bigger running cost for businesses over time, so there is a question of how it will be funded. This will require financial modelling, he suggested.

Workforce sustainability is another consideration. While certain tasks can be delegated to AI to save practitioners time, Hardiman-McCartney asked: “What about optometrists of the future in 10 years’ time, who may not have practice looking at a fundus image, looking for drusen or diabetic retinopathy?”

The workforce of the future could look quite different, Hardiman-McCartney suggested, adding: “We need to be thinking about that and being AI-literate, and playing with AI now will help empower us to know what that looks like.”

5 The benefits

Hardiman-McCartney outlined some of the potential that AI could have for the profession, from AI agents to oculomics and ambient voice technology.

Ambient voice technology can be used to transcribe consultations, creating notes and summaries, and is already used in some NHS settings, he suggested. However, he noted recent reports warning GP surgeries about the need to ensure tools are GDPR compliant.

Maintaining trust is key, Hardiman-McCartney emphasised, adding that this makes it particularly important to understand regulatory compliance.

Reflecting on the potential uses of AI, he suggested that the profession should not let AI “sweep us along in its current,” but understand and use it in a clinically safe way.

“I hope you start embracing AI, playing with it in a safe way, but also take the time to educate yourself, become AI-literate in terms of the regulation and medical devices side of things, so you can use it in the practice while maintaining the trust of our patients,” he said.