Artificial Intelligence (AI) has enormous potential to transform healthcare, but is uncertainty around applicable regulatory frameworks hampering technological advances and creating a lack of trust?
AI in Healthcare
On 31 December 2019, BlueDot, a Toronto-based digital-health company that uses an AI-driven algorithm to track and anticipate the spread of infectious diseases, alerted its clients to the detection of unusual pneumonia cases in Wuhan, China. Nine days later, the World Health Organisation released its first public warnings about a new coronavirus. On 11 February 2020, the disease would be given its official name, Covid-19.
AI promises to transform the way we live generally, but its impact in the healthcare sector may be more far-reaching than in any other field. In many ways, AI and healthcare are uniquely suited to one another, given that pattern recognition is a primary facet of healthcare.
AI is already widely applied across the healthcare spectrum in clinical care, administration and pharma, and current applications only scratch the surface of the positive disruptive impact that AI can have in healthcare. The most widely cited examples of AI in current healthcare settings relate to automated analysis of medical images, patient intake and communication (including chatbots), and remote monitoring of vital signs for early detection. Meanwhile, the pinnacle of AI’s potential is the promise of precision medicine, which aspires to create treatments individualised to a patient’s particular genetic, behavioural and environmental context based on thousands of personal data points.
Given the pace at which AI solutions are being developed, the healthcare regulatory environment is still playing catch-up with the technology. This has led many companies offering AI medical devices or solutions to present their AI as decision-supporting (i.e. assisting a doctor) rather than decision-making. While this may make regulatory approval easier, it risks holding back the progression of innovative AI devices and solutions.
Regardless of technological advancement, the uptake of these solutions will depend on the trust that can be placed in them, and a key driver of that trust is a robust regulatory framework governing AI in healthcare.
Regulation of AI in Europe Generally
The European Commission is carrying out an impact assessment initiative aimed at addressing the ethical and legal issues raised by AI, with a view to considering appropriate harmonised frameworks across the single market while ensuring inclusive societal outcomes (Inception impact assessment document Ares (2020) 3896535). A key objective of the impact assessment is to create a harmonised framework for AI and to reduce the compliance costs that derive from legal fragmentation across divergent Member State approaches.
Key challenges associated with regulation of AI generally (which are referenced in the inception impact assessment) include the following:
- AI’s technological capabilities could result in potential breaches of fundamental rights from sources of risk that did not exist before (e.g. remote biometric surveillance).
- Bias and discriminatory outcomes may result from decisions taken or supported by AI systems, and these outcomes may remain completely unperceived.
- AI may generate safety risks for users and third parties that are not expressly covered by current product safety legislation.
- The characteristics of AI technologies may make it difficult for persons who have suffered harm to obtain compensation, as it may be difficult or impossible to trace an outcome back to a particular human action or omission.
The inception impact assessment considers various policy options that could be undertaken at European level, including the following:
- Option 0 / Baseline: This option would involve no EU policy change, on the basis that current EU legislation on the protection of fundamental rights, consumer protection, and product safety and liability would remain relevant and applicable to a large number of emerging AI applications. The inherent risks in this approach are increased fragmentation due to interventions at Member State level and a lack of clarity regarding AI-specific risks.
- Option 1 / EU “soft law”: This option would promote industry initiatives for AI based on a “soft law” approach that would build on existing ethical codes/guidelines and consist of monitoring and reporting on voluntary compliance with such initiatives.
- Option 2 / An EU voluntary labelling scheme: This option would establish a voluntary labelling scheme enabling customers to identify AI applications that comply with certain requirements for trustworthy AI, with the label functioning as an indication to the market that the labelled AI application is trustworthy.
- Option 3 / EU legislative instrument: This option would involve an EU legislative instrument establishing mandatory requirements for all or certain types of AI in relation to training data, record-keeping on datasets and algorithms, accuracy and human oversight. Such a legislative instrument could be (i) limited to a specific category of AI; (ii) limited to “high-risk” applications; or (iii) applicable to all AI applications.
- Option 4 / Combination Approach: This option would envisage a combination of any of the above options.
Regulation of AI in Healthcare / Medtech
In September 2020, MedTech Europe published a response to the inception impact assessment in the context of the medtech sector and specifically highlighted the following for consideration against the above-outlined options:
- In most healthcare cases, AI is a tool used in the development of other healthcare products rather than a separate entity in its own right. Whether AI is embedded in a medical device or constitutes standalone medical device software, it is covered by the existing medtech sectoral regulations (i.e. the Medical Device Regulation (“MDR”) and the In-Vitro Diagnostic Regulation (“IVDR”)), which trigger strict legal requirements, including post-market surveillance and vigilance obligations.
- Many requirements that the European Commission has identified as relevant for AI (such as the safety and performance of AI) are already included in the MDR/IVDR and would therefore be taken into account in the risk/benefit and security assessments during the development and design of a medical device or medical device software.
- Consideration of existing sectoral legislation is essential in order to avoid the legal uncertainty that conflicting legislation would create.
- Medical device software must conform to GDPR requirements, and these requirements are equally applicable to AI-based medical device software.
- Medical technologies, including medical device software and AI with an intended medical purpose, are subject to change management requirements under the MDR/IVDR.
Given that the medtech sector is already heavily regulated, MedTech Europe have cautioned against generally applicable AI-specific legislation, in view of the risk that it could conflict with existing legislation (including the MDR/IVDR in the healthcare/medtech context). At the same time, MedTech Europe acknowledge that, within the existing medtech regulatory regimes, additional guidance may need to issue over time to provide interpretation and direction on novel AI approaches.
What does the future hold?
The period for public consultation on the impact assessment has now ended, with the European Commission due to issue feedback in early 2021.
While MedTech Europe make a strong case for regulation of AI healthcare solutions to remain within current medtech regulatory frameworks, it is unclear whether such regimes are fully equipped to deal with AI-specific issues such as the opacity of algorithmic decision-making. That said, a sectoral or risk-based approach to regulation would appear sensible in the already heavily regulated healthcare/medtech sector, given that the industry is currently expending significant resources on compliance with the MDR/IVDR and that many of the concerns around the regulation of AI are already catered for within the current regulatory regime.
The regulatory approach that European regulators will ultimately propose remains to be seen. However, it is clear that certainty of approach is needed sooner rather than later in order to avoid patient and practitioner mistrust, or a failure to realise the full transformative potential of AI in healthcare. If the correct balance can be struck, and given that an AI algorithm was one of the first indicators of Covid-19, it is difficult to see how AI will not be a critical component in predicting or preventing future pandemics.