
Our response to the Government consultation on the pro-innovation approach to AI regulation white paper

Our response to the consultation, June 2023


This is our response to the Government consultation on the pro-innovation approach to AI regulation white paper. The consultation ran from March to June 2023.

As a healthcare profession which is already utilising AI in a variety of ways to diagnose patients and improve system efficiency, we welcome this white paper, which we hope will set a framework to harness the benefits of AI while providing effective and appropriate regulation. We agree that AI will have an important role in helping the UK to remain at the forefront of technological advancements and deliver better health outcomes for patients.

We welcome the proposed key principles set out in this paper and the intent behind them of enabling and broadening public trust in the use of AI, specifically in our case in the delivery of eye care services. We agree that unless there is public confidence and trust in the deployment of AI-based technologies, patients will miss out on their benefits. Further work also needs to be carried out by Government and regulators to improve the definition of the principles, to explain how they will be implemented in regulation and practice, and to navigate the trade-offs between different concepts of fairness and other principles.

We welcomed the 2022 MHRA (Medicines and Healthcare products Regulatory Agency) roadmap clarifying in guidance the requirements for AI and software used in medical devices, and subsequent complementary guidance.

We have provided a thematic response linked to several key principles or themes set out in the sections of the consultation document:

  1. Transparency
  2. Regulation, and the role of regulatory bodies
  3. Patient confidence, and safety/tools for trustworthy AI
  4. Monitoring and evaluation
  5. Application and delivery

Section 1 - Transparency

Do you agree that requiring organisations to make it clear when they are using AI would adequately ensure transparency? What other transparency measures would be appropriate, if any?

The 2022 Centre for Data Ethics and Innovation report into public perception of AI and its use clearly showed there were varying degrees of public understanding and confidence in AI and its various uses. The report highlighted two key things:

  • Participants’ views on governance around transparency and accountability are tied directly to the perceived risk of AI’s use in any given context. In contrast, fairness (together with the data AI should be able to use) is seen as a trade-off between relevance/utility of a data source and individual privacy
  • Low familiarity with more complex AI applications makes it difficult for participants, and also in this case patients, to specify what governance they expect. What they do want is the same principles of transparency, fairness, accountability, and privacy to apply, along with context-specific limitations

This shows a need for transparency and effective governance as part of any regulatory process that is both pro-innovation and, more importantly, safe and measurable. For this reason, we agree that requiring organisations to make it clear when they are using AI would be a step in the right direction towards transparency. But we feel this needs further clarity. Simply stating that AI is being used is insufficient. The requirement should be expanded to include detail of when and how AI is being used and the scope of its use. True transparency should include information on training data sets, evaluation of biases, performance metrics, and funding sources. Information provided about these aspects also needs to be clear, meaningful, and accurate.

In our view this principle needs to be further defined. For example, any funding or revenue streams from companies who may benefit from the technology, or who have a vested interest in the outcomes, should be disclosed to make clear any potential conflicts of interest. If, for instance, a drugs company chooses to partner with the manufacturer of an AI chatbot in the pharmacy industry, this potential conflict should be clearly identified, both to avoid the promotion of a more expensive ‘branded’ drug over a generic or biosimilar alternative and, where this occurs, to ensure that patients are aware and can make an informed choice.

The CDEI report also found that the public finds it difficult spontaneously to identify AI in their day-to-day lives, but is aware that the latest technology is at work behind the scenes. For healthcare applications, information about the scope of usage, the possibility for intervention, the training data sets and their origin, and diagnostic accuracy is critical.

The issue of low-prevalence disease, and the impact this has on positive predictive value, needs to be appropriately handled and explained. For example, a simple “99%” accuracy figure is meaningless without greater context about how it was reached, the data and reasoning used, and the limitations, scope and biases involved. When considering data and findings for low-prevalence disease, it should be borne in mind that the potential for false positives is significant. This is a concept that current AI cannot understand, but one that may significantly impact on healthcare provision. The principle in the white paper states that “transparency should be proportionate to the risk”. However, the Government approach appears to avoid ownership and places the burden of interpretation and implementation on regulators. It is our view that this may lead to variable approaches between regulators, which may negatively affect patient outcomes.
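To illustrate this point, the short worked calculation below uses assumed, purely illustrative figures (a test with 99% sensitivity and 99% specificity applied to a condition with a prevalence of 1 in 1,000); it is a sketch of the underlying arithmetic rather than data from any real system:

```latex
% Positive predictive value (PPV) at low prevalence.
% Assumed illustrative figures: sensitivity = specificity = 0.99, prevalence = 0.001 (1 in 1,000).
\[
\mathrm{PPV}
  = \frac{\text{sensitivity} \times \text{prevalence}}
         {\text{sensitivity} \times \text{prevalence}
          + (1 - \text{specificity}) \times (1 - \text{prevalence})}
  = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.01 \times 0.999}
  \approx 0.09
\]
```

On these assumed figures, only around 9% of positive results would be true positives: a test described as “99% accurate” would still produce roughly ten false positives for every true case at this prevalence, which is why headline accuracy figures require the context described above.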

Section 2 - Regulation, and the role of regulatory bodies

The MHRA has publicly acknowledged a lack of clarity on how best to meet medical device (and software) requirements for products utilising artificial intelligence, to ensure these products achieve the appropriate level of safety and meet their intended purpose. The objectives set out in the AI as a Medical Device Change Programme and the AI as software guidance published in June 2023 establish the need for regulation of AI as a medical device (AIaMD) and software as a medical device (SaMD). We agree these are important processes which need to be woven into a clear and transparent structure of regulation, engagement, safety, deployment and patient redress. The key principles are:

  • Utilise existing regulatory frameworks to ensure AIaMD placed on the market is supported by robust assurance that it is safe and effective
  • Develop supplementary guidance to better ensure AIaMD placed on the market is supported by robust assurance with respect to safety and effectiveness
  • Outline technical methods to test AIaMD to ensure the device is safe and effective

While it is somewhat reassuring that this is on the MHRA's agenda, we are concerned that the MHRA has, in our experience, been slow to react to emerging challenges, and given the fast-paced nature of AI development we question whether it is the appropriate mechanism to enforce and implement these regulatory principles. Sufficiently clear and meaningful information should be available to allow biases linked to AI systems to be identified and evaluated, and the steps taken to mitigate their impact to be understood.

We therefore believe that the current routes to contestability or redress for AI-related harms are not adequate. Focussing on healthcare, until a decision is taken about the limits of accountability for AI, redress will remain ambiguous and difficult to implement. Worldwide, two broad trends in AI regulation have emerged: the USA has so far leaned towards a fault-based (negligence) approach, while the EU has leaned towards a strict liability approach. It remains unclear which the UK Government is proposing, and this was one of the initial criticisms of the UK approach to AI regulation. We favour a strict liability approach for high-risk areas such as healthcare.

One well-known challenge with AI is that as the technology develops, the ability to interrogate the “black box” diminishes due to the complexity of the algorithms used. This could lead to the deskilling of healthcare professionals, alongside a lack of technical ability within regulators or the courts. Without effective national policy to define systems for accountability, contestability and redress, there is a risk that AI failures and harms fall into a legal and regulatory gap that may ultimately disadvantage patients. Defining a system for accountability will be good for patient safety, and it will also give technology providers a clearly understood framework within which to work and innovate. Ambiguity about accountability and redress will be bad for patients and will harm innovation too.

There is consensus that the use of AI in healthcare is high risk and should be treated with caution. It is our view that if, as suggested, a proportionate approach to risk is taken then, by definition, areas such as healthcare should fall under a strict liability approach so that the risk sits with the AI developer. It is essential that Government provides a clearer policy about its approach in this area.

In relation to improvements to appropriate routes to contestability or redress for AI-related harms, our view, as stated above, is that this starts with clear regulation around responsibility, which in turn starts with clear national guidance and direction. Further, it is our view that AI outcomes, not simply harms, should be open to challenge at both an individual and a collective level. This may be via a form of administrative redress akin to an ombudsman, consumer champion or consumer group. Introducing a mechanism that removes cost burdens from those affected may reduce the need for expensive, and in many cases prohibitive, legal action to obtain redress. Effective systems for contestability and redress are essential to ensure public trust in AI, to allow systemic issues to be identified early, and to foster a transparent learning culture within the development of AI.

Regardless, any process of redress needs to be firmly enshrined in legislation to ensure regulators are clear on their roles and responsibilities. We do have some concerns about where these regulatory powers would sit, given the complexity of the regulatory landscape in the UK, especially in healthcare, and the associated risk that poor regulatory function could pose to patients and the public.

With regards to implementation, we feel the terminology in the consultation document, ‘correct and appropriate implementation’, is too vague to be effective. It is also ultimately risky in the context of a fast-paced technology such as AI, as it leaves too much open to interpretation in regulatory terms. In our view it is the role of government to ensure a clear and effective approach to regulation, to avoid a wide variance of interpretation by regulators, which could lead to piecemeal, suboptimal and confused regulation.

We think that the public will benefit from an approach which proactively ensures their safety and fosters innovation without taking unnecessary risks of harm. As such we advocate for more tightly defined guidance with a wide scope. For example, systems such as GPT-3 are general technologies; they are not formally designated as AI in the healthcare space. Narrow approaches to regulated function may leave gaps as new and innovative uses are discovered for technologies such as GPT-3. However, if the area of use is regulated rather than the technology itself, that would, in our view, provide a safer option. Therefore, rather than regulating systems marked as ‘for healthcare’, which may lead to loopholes, we would suggest that when a system is deployed within healthcare it should be treated and regulated as if that had been its original use. This should be the case regardless of regulatory or sector boundaries or the original intent.

We agree that the introduction of a statutory duty on regulators to have due regard to the principles could clarify and strengthen regulators’ mandates to implement the five principles of AI regulation contained within the framework. However, we are unconvinced that this goes far enough. While we appreciate the limitations of parliamentary time, given the risks associated with this rapidly emerging technology we believe a statutory footing should be a priority, especially for high-risk areas such as healthcare.

With regards to the intent to create new functions to support the AI pro-innovation regulatory framework, we agree that a central monitoring and evaluation framework, in conjunction with appropriate data gathering, is important. It is only by taking a wide view of the interconnected parts that true impact can be measured. We support the feedback received to date that the current regulatory framework is patchy; given the current review of the regulatory landscape in healthcare, a lack of central coordination will, if left unaddressed, not only create a growing barrier to innovation but also significantly increase risk. However, it is our opinion that as currently defined this function is too vague and greater detail will be required.

Relating to regulator capabilities, we disagree that regulators are best placed to apply the principles and believe that government is best placed to provide oversight and deliver central functions. Our regulator, the General Optical Council (GOC), regulates individuals and some businesses against a set of standards that it develops and revises in consultation with the public and the eye care sector. Quite reasonably, it does not currently have the expertise to regulate AI usage in optometry. Given the rapid pace of change regarding this technology many, if not all, individual regulators are unlikely to have the in-house skills appropriately to regulate AI.

In our response to the Professional Standards Authority (PSA) draft strategic plan for 2023-26, we commented on the future scope of regulatory powers in a changing healthcare environment, for example in the fields of remote diagnostics and testing and online sale and supply. Regulators such as the GOC are unable to regulate outside UK jurisdiction, and have even indicated a light-touch approach to illegal practice within the UK. We are therefore concerned that an area as complex and specialist as AI may prove difficult for the GOC to regulate effectively.

Neither do we think the MHRA should be the sole regulatory body for AI, given its recent significant reduction in establishment and the disbandment of its medical devices division. If there were a significant increase in its capacity, we would take a different position. One option could be the creation of a dedicated AI regulator, but this could be onerous and difficult to manage given the different applications of AI across industry. Another option could be to implement a ‘trusted partner’ model, which we describe further in section four below.

We advocate that for high-risk areas such as healthcare, a strict liability approach is needed, that places the liability on the developer of the AI. This would reduce or even remove the pressure from non-specialist regulators, but a suitable place for this function to sit would need to be identified or created. As we have said, our experience suggests that the MHRA may not be the correct place to deliver this function, or at least not in its current structure and budget.

We feel there is a disparity in the proposal in striking the right balance between supporting AI innovation, addressing the unknown, identifying and prioritising risk, and future-proofing the AI regulation framework. We feel the proposal is too light-touch and in stark contrast to recent media statements by the Prime Minister, which have been far clearer in articulating the wider risks. These risks are of particular concern regarding healthcare:

  • Rishi Sunak: Guardrails needed to regulate growth of AI | The Independent
  • AI does not understand traditional borders, needs regulation: Rishi Sunak (msn.com)
  • UK not too small to be centre of AI regulation, says Rishi Sunak | Artificial intelligence (AI) | The Guardian
  • Prime Minister calls for UK to act as global leader in AI regulations amidst rising fears (bmmagazine.co.uk)
  • Rishi Sunak races to tighten rules for AI amid fears of existential risk | Artificial intelligence (AI) | The Guardian
  • Rishi Sunak draws up plans to police AI after meeting with the boss of Google | Daily Mail Online
  • Sunak and Google CEO discuss ‘striking right balance’ on AI regulation | The Independent
  • The UK will lead on limiting the dangers of artificial intelligence, says PM Rishi Sunak (ibtimes.co.uk)

Section 3 - Patient confidence, and safety

Are there additional activities that would help individuals and consumers confidently use AI technologies?

The consultation document was clear that the public had concerns about the risks in the use of AI, particularly the inability to know/understand when a ‘line had been overstepped’. Three specific risks were highlighted by stakeholders in the report:

  1. Invasion of privacy
  2. AI’s role in influencing public opinion
  3. Negative health outcomes

In our view, confidence in AI originates from transparency and reassurance that the regulations around the technology are robust and fit for purpose. These factors become more crucial as the impact of the decisions the AI takes increases. Transparency requirements should be formalised, as we have set out above; this could take the form of a regulated statement of conformity. We also suggest that a process for studying emerging risks should be clearly designed and articulated, so that where changes to regulation are required they can be proactive rather than reactive.

In terms of additional activities to support patient/public/consumer confidence, there needs to be up-to-date and relatable guidance from organisations such as NHS England, the NHS Confederation, and healthcare regulators that is co-designed with experts in AI, and more specifically healthcare AI. This needs to be collaborative and adaptable for individual healthcare professions. We would expect NHS England to lead the way in developing the assurances and evidence that will help to instil patient confidence and safeguard against patient risk. However, given the recent devolution of accountability within NHS England, the inception of Integrated Care Systems and the associated budgetary cuts, we would welcome assurance as to the role and capacity of NHS bodies in the AI approval and regulatory space, particularly given the aim to accelerate deployment into clinical settings. Based on the current direction of travel, we have concerns as to whether NHS England has sufficient resource and capability.

Section 4 - Monitoring and evaluation

Do you know of any existing organisations who should deliver one or more of our proposed central functions? What, if anything, is missing from the central functions? Do you agree with our overall approach to monitoring and evaluation? What is the best way to measure the impact from the framework?

The current proposal for the central risk function appears to suggest that its role will be one of monitoring, advice, and suggestion, with some, but perhaps not all, decisions sitting with individual regulators. A common thread through this response is that the regulatory system in the UK is complex and varied, both in terms of remit and quality of performance. In optometry we have seen that the GOC approach is sometimes less robust than we would like in terms of quality, consistency, and accuracy, all of which present a risk to patients and the public.

One logical organisation to deliver these functions is the Information Commissioner’s Office. However, given the scale and pace of change in AI, we favour the approach suggested by the Alan Turing Institute: a centralised, independent and authoritative body in the form of an AI and Regulation Common Capacity Hub (ARCCH). This proposal sees the creation of a politically independent institution to act as a trusted partner for regulatory bodies. It would contain a multidisciplinary knowledge base as well as national and international experience. It is suggested that the hub would:

  • Convene, facilitate, and incentivise regulatory collaborations around key AI issues
  • Cultivate state-of-the-art knowledge on the use of AI by regulated entities
  • Conduct risk mapping, regulatory gap analysis, and horizon scanning
  • Provide thought leadership on regulatory solutions and innovations
  • Develop proofs of concept and build shared AI tools for regulators
  • Supply training and skills development
  • Build up and facilitate sharing of human and technical resources across the regulatory landscape
  • Act as an interface for regulators to interact with relevant stakeholders including industry and civil society

Understandably this list is not exhaustive. However, as part of the healthcare sector, in our view it is important also to add a specific focus on the risk of harm to patients. We recommend that a specific principle on patient safety, impact and transparency is included. We would also formally embed interaction with this process as a requirement for regulators.

Regarding the specific questions about sandboxes, it is our view that a multi-sector, multi-regulator sandbox approach is needed to avoid differing interpretations and gaps.

To maximise the benefit of any AI sandbox, one approach could be to permit the testing of higher-risk applications only within the sandbox environment. This would enable the free use of higher-risk technology in a safe and controlled environment and would mitigate risks for developers during the transition to a live product, in line with a strict liability basis. However, it would be important that participation in the sandbox did not absolve developers from the strict liability requirements if a product is brought to market. Instead, it would mitigate risk during testing while still allowing appropriate safeguards.

With regards to the measurement of impact and output from the proposed framework, we are not sure at this stage what the qualitative and quantitative impact measures would be. For healthcare AI we would expect such impact measures to cover risk identification, risk reduction, and risk prevention; a measure of transparency, engagement, and consultation; data on patient confidence and approval; and data on the impact of the AI sandbox approach linked to acceleration and adoption.

Section 5 - Application and delivery

We have already highlighted our concerns above about the complexity, scale, and capability across the different regulators (in healthcare). In our recent submission to the PSA, we flagged that there was an urgent need for healthcare regulation better to address the emerging and potentially increasing risks of harm to the public linked to inappropriately regulated AI. Crucially, where AI regulation ultimately sits will determine where responsibility for ‘errors’ sits but is also intrinsic to the future regulation of professions.

Furthermore, based on our experiences, it is possible that regulators, for example the GMC and the GOC, will take different approaches to the interpretation and application of regulation, potentially creating a situation where a technology is permitted for one profession and restricted for another, including professions within broadly the same field, for example ophthalmologists versus orthoptists or optometrists. More widely, there could be similar issues between medical practitioners and radiographers, the former being regulated by the GMC and the latter by the HCPC.

With regards to the implementation of our principles through existing legal frameworks, we are unsure about the statement at para 82 of the consultation document that “It is not yet clear how responsibility and liability for demonstrating compliance with the AI regulatory principles will be or should ideally be, allocated to existing supply chain actors within the AI life cycle.” Without a clear direction or understanding of who will be responsible and liable, there is a significant risk in this area. The proposals to address this lack detail, the timescales are unclear, and without further assurance it is not possible to endorse this proposal. As we have said, and broadly in line with the aim of a risk-based approach, we think that healthcare is a high-risk area and should follow a strict liability approach.

The most likely challenge for regulators in trying to determine legal responsibility for AI outcomes is the risk of AI hallucinations. These have been widely reported and are seen in solutions such as ChatGPT; they occur when the AI presents a plausible and confident answer that is incorrect. The challenge for regulators as this technology advances links to our previous point about the difficulty of interrogating or challenging the “black box”. In many instances these technologies will outstrip more junior grades of employee in terms of knowledge, and will exceed the ability of most existing regulators to appraise them.

Beyond that is the risk of novel applications of such models in ways that we have not yet considered, and that may be difficult to anticipate. As technologies are often built upon a foundation model, it is important that these models are fully transparent, so that training data, data governance, risk, potential bias, and even energy usage are clear.

We agree that measuring compute provides a potential tool, but it is unclear whether there is currently a good understanding of the baseline. As such, it may be difficult to determine whether significant increases demonstrate increased usage and reliance, or simply a system maturing. Therefore, while we agree it is a potential measure, we think more work is needed.

In our view a risk- and context-based approach to foundation AI models is sensible. This should include a specific focus on emerging risks and follow the principle that, as foundation models can be used for a wide range of use cases, including healthcare, they should be carefully monitored. It should be made clear to producers of foundation models that if their models are used within healthcare, they will be subject to the same regulation as other healthcare AI products.

Like other forms of AI, these models are at risk of bias within the training data, whether accidental or deliberate. They also carry the risk of hallucination that we have discussed earlier. Finally, given that foundation models may be built upon and refined by others, it can be challenging to work out whether responsibility for errors lies with the original model or with the subsequent enhancements.

Overall, we think a strict liability model, supported by a “safe space” lower-risk sandbox for testing provides the best balance.

To maximise the benefit of sandboxes to AI innovators, we feel one approach could be to permit the testing of higher-risk applications within the sandbox environment. This would permit the free use of higher-risk technology in a safe and controlled environment and would mitigate risks for developers. However, it would be important that participation in the sandbox did not absolve developers from the strict liability requirements once a product is brought to market. In terms of those industries and sectors that would benefit most from an AI sandbox, we believe it is those professions or industry sectors where there is the greatest risk of public or patient harm, negative impact, or unintended consequence, such as the health, life sciences, food and agriculture, and defence sectors.