AI Medical Device Software under EU MDR & IVDR
Revolutionary yet controversial, Artificial Intelligence (AI) or Machine Learning was already present in CE-marked medical device software well before the EU AI Act was published. As horizontal legislation, the AI Act intersects with the EU MDR and IVDR for certain medical devices.
This article explores the implications of the EU AI Act for medical device software, while also considering the areas of uncertainty that remain until the regulation becomes fully applicable.
Key takeaways
- There is no simple answer to what types of medical device software (MDSW) algorithms fall under the AI Act. You need to understand the ambiguity in the legal definition of "AI system" and, to a lesser extent, of "General-purpose AI model" (GPAIM).
- MDSW manufacturers should start by determining how the AI Act impacts their software, based on the qualification as AI system or GPAIM, the type of AI system or GPAIM, and the identification of any prohibited AI practices that would need to be eliminated.
- While awaiting MDCG's planned guidance document on the intersection between the EU MDR/IVDR and the AI Act, manufacturers can only apply common sense in extrapolating the existing requirements for MDSW.
Content
What is the EU AI Act?
The EU Artificial Intelligence (AI) Act, implemented in Regulation (EU) 2024/1689 (AI Act), is part of the EU's digital strategy to frame the development and use of safe and lawful AI that respects EU fundamental rights.
The AI Act follows the principles of the EU's New Legislative Framework, and will require CE-marking of high-risk "AI systems" and certain obligations for "General-purpose AI models", with exemptions for military, defense or national security use as well as for scientific research and development purposes. In addition, AI systems released under free and open-source licenses are out of the scope of the AI Act provided they do not qualify as "high-risk", involve prohibited AI practices (per Art. 5) or interact directly with natural persons (per Art. 50).
Because it applies horizontally to all sectors, the AI Act overlaps with Regulation (EU) 2017/745 on medical devices (EU MDR) and Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR), which logically generates some duplication and even inconsistent requirements and terminology.
The definitions of "AI system" and "General-purpose AI model" are essential to understand the implications for medical device software (MDSW).
- "AI system" is defined as:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
[AI Act, Art. 3(1)]
- "General-purpose AI model" (GPAIM) is defined as:
"an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."
[AI Act, Art. 3(63)]
What types of MDSW algorithms fall under the AI Act?
The simple answer would be: any MDSW algorithms that qualify as AI systems or GPAIM.
Now, given the multi-layered and partially imprecise legal definitions of AI system and GPAIM, there is no simple answer.
Recital (12) in the AI Act indicates that the AI system definition:
"should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches".
Such key characteristics include:
- Designed to operate with varying levels of autonomy, which Recital (12) describes as having some degree of independence of actions from human involvement and of capability to operate without human intervention. That "some degree" would require an official position on the minimum level of autonomy that characterizes an AI system, and in particular on how autonomy differs from rules "to automatically execute operations" (i.e. automation), which should be excluded from the definition of AI systems according to Recital (12).
- Capability to infer, which refers, according to Recital (12), to the process of obtaining outputs (such as predictions, content, recommendations, or decisions), which can influence physical and virtual environments, as well as to the capability to derive models and algorithms from inputs or data. The same Recital further indicates that techniques that enable inference include approaches to machine learning as well as logic- and knowledge-based approaches, and that the capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modeling. These descriptions are still too generic and would warrant official interpretation (including examples) to distinguish AI models from inferential statistical methods.
- Potential adaptiveness, which refers to self-learning capabilities, allowing the system to change while in use. Now, according to the definition in Art. 3(1), adaptiveness is not a defining characteristic since an AI system "may" or may not exhibit adaptiveness.
In other words, an algorithm intended to calculate the risk of a given clinical condition from a series of health parameters (i.e. inferential model) that is developed and validated based on a large set of scientific publications but is itself a static, predetermined model (i.e. no adaptiveness) could correspond to an AI system if it were viewed as designed with "some degree" of autonomy.
As to GPAIM, Recital (97) in the AI Act also intends to define the concept based on its key functional characteristics, in particular:
- Significant generality and
- Capability to competently perform a wide range of distinct tasks.
Again, the level of description of these two characteristics is insufficient for those not familiar with AI terminology. Recital (97) merely explains that GPAIM are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. And Recital (98) indicates that models with at least a billion parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks. The concept is in fact clearer in the EU Parliament's At a Glance paper on General Purpose Artificial Intelligence: general-purpose AI is a term commonly used to refer to AGI (Artificial General Intelligence) technologies with generative capabilities, trained on a broad set of unlabeled data, that can be used for different tasks with minimal fine-tuning. Another similar term is "foundation model". With this information, it becomes clear that the number of model parameters and the amount of training data are more decisive than the breadth of the intended application.
In other words, a large generative AI model designed to address an extremely narrow medical diagnosis purpose could still qualify as GPAIM.
The table below summarizes the key concepts, as described in the Recitals of the AI Act, that need to be well understood by MDSW manufacturers.
How does the AI Act impact medical device software (MDSW)?
The impact of the AI Act on MDSW depends on the algorithm qualification as AI system or GPAIM, and then on the type of AI system or GPAIM, as well as on the use of any AI practices that are banned under AI Act Art. 5.
MDSW as AI Systems
MDSW that qualifies as a "high-risk" AI system shall be subject to dual CE marking, under the AI Act as well as under the EU MDR or IVDR, but the certification process is meant to be common, i.e. it can be conducted simultaneously by the same Notified Body.
According to Art. 6(1) of the AI Act, an AI system is considered to be "high-risk" when it is either a safety component of a product or a product itself, where such product is required to undergo third-party conformity assessment under any of the EU harmonized legislation listed in Annex I of the AI Act. Because Annex I of the AI Act includes both the EU MDR and the IVDR, any AI-enabled MDSW and any AI-enabled "safety component" of an MDSW that requires Notified Body involvement for certification falls under the definition of "high-risk" AI system.
In determining whether an AI-enabled algorithm within MDSW qualifies as a high-risk AI system, the definition of "safety component" is important. It means:
"a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property." [AI Act, Art. 3(14)]
The closest concept in the EU MDR or IVDR is âparts and componentsâ that significantly change the performance or safety characteristics or the intended purpose of the device, per EU MDR Art. 23(2) and IVDR Art. 20(2).
In consideration of Art. 6(1) of the AI Act, it can therefore be concluded that:
- AI-enabled MDSW classified above Class I (medical device) or Class A (IVD), and
- AI-enabled algorithms with safety/performance impact within an MDSW classified above Class I (medical device) or Class A (IVD),
would be viewed as "high-risk AI systems".
But that's not all.
Per Art. 6(2) of the AI Act, AI systems listed in Annex III are also considered to be "high-risk". The following cases may be relevant for MDSW, particularly for Class I MDSW that do not fulfill the criteria in Art. 6(1):
- AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services. [Annex III, section 5(a)]
This could be the case for algorithms evaluating the clinical eligibility of costly or scarce surgical treatment (e.g. transplantation).
- AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems. [Annex III, section 5(d)]
Last, MDSW that corresponds to an AI system but does not qualify as "high-risk" does not need CE marking under the AI Act, but is not completely free of requirements. For more details, see the section What requirements apply to MDSW under the AI Act?
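For orientation only, the qualification logic above can be sketched in a few lines of code. The following Python snippet is a minimal illustration of this article's reading of Art. 6(1) and 6(2); all names are hypothetical, and it is no substitute for a documented regulatory assessment.

```python
# Minimal illustrative sketch of AI Act Art. 6(1)/6(2) as applied to MDSW,
# per the interpretation in this article. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class MdswAlgorithm:
    qualifies_as_ai_system: bool       # per the AI Act Art. 3(1) definition
    is_safety_component: bool          # the MDSW itself, or a safety component (Art. 3(14))
    mdsw_requires_notified_body: bool  # above Class I (MDR) / Class A (IVDR)
    listed_in_annex_iii: bool          # e.g. Annex III, sections 5(a) or 5(d)

def is_high_risk_ai_system(algo: MdswAlgorithm) -> bool:
    """Simplified reading of AI Act Art. 6(1) and 6(2) for MDSW."""
    if not algo.qualifies_as_ai_system:
        return False
    # Art. 6(1): the AI system is the MDSW itself or a safety component of it,
    # and the MDSW is subject to Notified Body conformity assessment
    if algo.is_safety_component and algo.mdsw_requires_notified_body:
        return True
    # Art. 6(2): Annex III use cases are high-risk even for Class I MDSW
    return algo.listed_in_annex_iii

# Example: a Class IIa AI-enabled MDSW (Notified Body required)
print(is_high_risk_ai_system(MdswAlgorithm(True, True, True, False)))  # True
```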
General-purpose AI Models (GPAIM) in MDSW
When determining whether an MDSW algorithm corresponds to a General-purpose AI model (GPAIM), it is important to correctly interpret the criteria of "significant generality" and "competently performing a wide range of distinct tasks" in the definition of GPAIM.
GPAIM refers to AI technologies with generative capabilities, trained on a broad set of unlabeled data, that can be used for different tasks with minimal fine-tuning. Typical GPAIM include large generative AI models for image recognition, large language models, or video and audio generation tools. For MDSW, GPAIM that support natural language processing (e.g. GPT-4), image recognition and/or data analysis (e.g. ResNet, U-Net) are particularly useful and already integrated as foundation models in numerous MDSW. The MDSW manufacturer must be able to identify such GPAIM in the case of third-party software, e.g. a chatbot.
GPAIM are in turn classified based on their "systemic risk", according to the following criteria in AI Act Art. 51:
- High impact capabilities, corresponding to GPAIM trained using a cumulative computing power of more than 10²⁵ FLOPs (floating-point operations), or
- Capabilities of equivalent impact to those with high impact capabilities, as decided by the EU Commission, based on the criteria in Annex XIII of the AI Act.
In brief, GPAIM are considered to have "systemic risk" when they are very capable and widely used (i.e. currently, at least 10'000 registered business users in the Union market) and could, for example, affect many users if they propagate harmful biases across many applications.
The EU Commission might adopt delegated acts to amend the thresholds that determine the classification of GPAIM as having âsystemic riskâ.
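As an illustration of the threshold arithmetic in Art. 51, the following minimal Python sketch (with hypothetical names) captures the presumption of systemic risk; keep in mind that the Commission may amend the threshold, as noted above.

```python
# Minimal arithmetic sketch of the AI Act Art. 51 presumption of
# "systemic risk"; variable and function names are hypothetical.
# The threshold may be amended by Commission delegated acts.

SYSTEMIC_RISK_FLOPS = 10**25  # cumulative training compute, AI Act Art. 51(2)

def presumed_systemic_risk(training_flops: float,
                           designated_by_commission: bool = False) -> bool:
    """Art. 51(1)(a): compute threshold; Art. 51(1)(b): Commission decision."""
    return training_flops > SYSTEMIC_RISK_FLOPS or designated_by_commission

# Example: a GPAIM trained with ~3e25 FLOPs exceeds the threshold
print(presumed_systemic_risk(3e25))  # True
```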
As described in AI Act Recital (97), GPAIM may be placed on the market through libraries, application programming interfaces (APIs), as direct download, or other means. The GPAIM may then be further modified (e.g. addition of a user interface) or fine-tuned into new models by the AI system provider. Since a GPAIM is an essential component of AI-enabled MDSW, the corresponding GPAIM provider would become a critical supplier of the MDSW manufacturer under the EU MDR or IVDR, which raises some interesting regulatory questions about the feasibility of such a relationship.
Prohibited AI practices
AI-enabled MDSW cannot incorporate any of the prohibited practices listed in Art. 5 of the AI Act, which are those considered as a potential threat to the EU's fundamental rights and values, as they are associated with significant potential for manipulation, exploitation, and social control.
The prohibited AI practices that the MDSW manufacturer must exclude are:
- Subliminal techniques or purposefully manipulative/deceptive techniques that impair the ability to make an informed decision, thus causing significant harm.
- Techniques that exploit the vulnerability of a person or specific group of persons (e.g. specific age, disability, social/economic status) so as to distort their behavior, thus causing significant harm.
- Establishing a social score based on social behavior or personal characteristics, where such score leads to detrimental or unfavorable treatment of the person or group of persons in unrelated social contexts or in unjustified or disproportionate manner.
- Profiling techniques to predict the risk of a person committing a criminal offense (unless the person is already involved in a criminal activity).
- Untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
- Inference of emotions in the workplace and educational institutions, except for medical or safety reasons.
- Biometric categorization systems to infer a person's race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (unless lawfully acquired for law enforcement purposes).
- "Real-time" remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, with some exceptions and conditions.
What requirements apply to MDSW under the AI Act?
The requirements for AI-enabled MDSW under the AI Act depend on the classification of the specific AI algorithms as either:
- "high-risk" AI system,
- "non-high-risk" AI system,
- GPAIM with "systemic risk", or
- GPAIM without "systemic risk".
Providers of "high-risk" AI systems must comply with the following requirements, in consideration of the intended purpose and the generally acknowledged state of the art on AI and AI-related technologies:
- Risk management system, per Art. 9
- Data governance, per Art. 10
- Technical documentation, per Art. 11
- Record keeping, per Art. 12
- Transparency and provision of information to deployers, per Art. 13
- Human oversight, per Art. 14
- Accuracy, robustness and cybersecurity, per Art. 15
- Quality Management System (QMS), per Art. 17
- Document retention, per Art. 18 (CE-marking documentation) and Art. 19 (automatically generated logs)
- Corrective actions and duty of information, per Art. 20
- Cooperation with competent authorities, per Art. 21
- Assessment of impact on fundamental rights, per Art. 27
- Conformity assessment, per Art. 43. The conformity assessment route is based on:
- Internal controls (i.e. self-declaration) for MDSW that is not subject to Notified Body oversight under the EU MDR or IVDR and corresponds to high-risk AI systems in Annex III of the AI Act, or
- Notified Body certification, for MDSW already subject to Notified Body oversight under the EU MDR or IVDR. The Notified Body shall conduct the certification according to the EU MDR or IVDR, incorporating additional requirements from the AI Act, namely: points 4.3, 4.4, 4.5 and fifth paragraph of point 4.6 of Annex VII as well as Section 2, Chapter III.
- For high-risk AI systems in Annex III (i.e. if they correspond to MDSW not subject to Notified Body oversight): registration in the EU database described in Chapter VIII, per Art. 49(1)
- PMS monitoring, per Art. 72
- Reporting of serious incidents, per Art. 73. For MDSW, this would concern only the infringement of EU legislation on the protection of fundamental rights, as described in Art. 3(49)(c).
Many of the requirements are expected to be fulfilled together with those for MDSW CE marking under the EU MDR or IVDR, but AI Act-specific requirements (e.g. impact on fundamental rights, human oversight) might overcomplicate the common conformity assessment.
AI-enabled MDSW manufacturers will have to run a gap assessment of the additional elements needed and determine whether it makes sense to merge the cumulative evidence of conformity in a single technical documentation set.
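As a purely illustrative aid, such a gap assessment could start from a simple mapping of AI Act requirements to the closest existing MDR/IVDR evidence. The mapping below is an assumption for illustration, not an official crosswalk; every name and pairing would need to be verified against the manufacturer's own documentation.

```python
# Hypothetical starting point for a gap assessment: each AI Act requirement
# mapped to the closest existing MDR/IVDR evidence; None flags AI Act-specific
# gaps. Illustrative assumption only, not an official crosswalk.

gap_assessment = {
    "Risk management (AI Act Art. 9)": "Risk management file (ISO 14971)",
    "Data governance (Art. 10)": None,  # training/validation/test data: largely new
    "Technical documentation (Art. 11)": "MDR/IVDR technical documentation (Annexes II-III)",
    "Transparency to deployers (Art. 13)": "Instructions for use (IFU)",
    "Human oversight (Art. 14)": None,  # AI Act-specific
    "Accuracy, robustness, cybersecurity (Art. 15)": "MDCG 2019-16 cybersecurity documentation",
    "QMS (Art. 17)": "ISO 13485 quality management system",
    "Fundamental rights impact (Art. 27)": None,  # AI Act-specific
}

gaps = [req for req, evidence in gap_assessment.items() if evidence is None]
print("AI Act-specific gaps to close:", gaps)
```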
For providers of AI systems that do not correspond to "high-risk", significantly fewer requirements apply:
- Ensure that the AI system is trustworthy by developing their own voluntary codes of conduct or adhering to codes of conduct adopted by other representative associations.
- Transparency obligations for AI systems intended to interact with natural persons, per Art. 50.
- Registration in the EU database described in Chapter VIII, per Art. 49(2).
Providers of GPAIM with systemic risk must fulfill the following obligations:
- Compile and maintain a Technical Documentation according to Annex XI, to be kept available to the EU AI Office and national competent authorities, per Art. 53(1)(a).
- Disclose documentation to providers of AI systems that integrate the GPAIM, per Art. 53(1)(b).
- Establish a policy to comply with Union law on copyright and related rights, per Art. 53(1)(c).
- Compile and make publicly available a sufficiently detailed summary on the content used for training of the GPAIM per Art. 53(1)(d). The EU AI Office shall provide a template for this summary.
- Perform model evaluation, incl. adversarial testing, per Art. 55(1)(a).
- Assess and mitigate possible systemic risks, per Art. 55(1)(b).
- Keep track of, document and report serious incidents and possible corrective measures to the AI Office and national competent authorities, per Art. 55(1)(c).
- Ensure adequate cybersecurity protection, per Art. 55(1)(d).
And providers of GPAIM that do not have "systemic risk" are still subject to the following requirements:
- Compile and maintain a Technical Documentation according to Annex XI, to be kept available to the EU AI Office and national competent authorities, per Art. 53(1)(a).
- Disclose documentation to providers of AI systems that integrate the GPAIM, per Art. 53(1)(b).
- Establish a policy to comply with Union law on copyright and related rights, per Art. 53(1)(c).
- Compile and make publicly available a sufficiently detailed summary on the content used for training of the GPAIM, per Art. 53(1)(d). The EU AI Office shall provide a template for this summary.
For all GPAIM providers, adherence to the codes of practice described in Art. 56 allows them to demonstrate conformity with the above requirements until a harmonized standard is published.
How shall MDSW manufacturers determine the impact of the AI Act?
MDSW manufacturers should:
- Assess the applicability of the AI Act to their MDSW, considering the definitions of AI systems and GPAIM, as well as the exclusion of scope for military, defense and national security purposes. The exclusion of scope for research-only products makes no sense for MDSW, as the software would not qualify as a medical device either.
The determination of applicability might not always be obvious when SOUP (Software of Unknown Provenance) is integrated in the MDSW, and the manufacturer does not have full details about the algorithms behind the SOUP.
- Conduct and document an assessment to exclude use of any of the prohibited practices described in AI Act Art. 5.
- If the MDSW falls under the AI Act, determine the appropriate classification, i.e. whether any AI system qualifies as "high-risk", per Art. 6(1) or Art. 6(2), and whether any GPAIM in the MDSW qualifies as involving "systemic risk", per Art. 51(1).
The flowchart below illustrates the impact determination process.
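For readers who prefer code to flowcharts, the following Python sketch condenses the same decision flow. The field names are hypothetical, and the simplifications reflect this article's interpretation of the AI Act; it is not a substitute for a documented regulatory assessment.

```python
# Simplified sketch of the impact determination flow illustrated by the
# flowchart, using hypothetical field names.

def ai_act_impact(mdsw: dict) -> list[str]:
    # Step 1: applicability (AI system / GPAIM qualification, scope exclusions)
    if not (mdsw["is_ai_system"] or mdsw["contains_gpaim"]):
        return ["AI Act not applicable"]
    outcomes = []
    # Step 2: prohibited practices (Art. 5) must be excluded in any case
    if mdsw["uses_prohibited_practice"]:
        outcomes.append("eliminate prohibited practice")
    # Step 3a: AI system classification per Art. 6(1)/6(2)
    if mdsw["is_ai_system"]:
        if mdsw["high_risk_per_art_6"]:
            outcomes.append("high-risk AI system: CE marking under AI Act and MDR/IVDR")
        else:
            outcomes.append("non-high-risk AI system: Art. 50 transparency, codes of conduct")
    # Step 3b: GPAIM classification per Art. 51
    if mdsw["contains_gpaim"]:
        if mdsw["gpaim_has_systemic_risk"]:
            outcomes.append("GPAIM with systemic risk: Art. 53 and Art. 55 obligations")
        else:
            outcomes.append("GPAIM without systemic risk: Art. 53 obligations")
    return outcomes

# Example: Class IIa MDSW embedding a non-systemic-risk GPAIM chatbot
print(ai_act_impact({
    "is_ai_system": True, "contains_gpaim": True,
    "uses_prohibited_practice": False,
    "high_risk_per_art_6": True, "gpaim_has_systemic_risk": False,
}))
```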
When does a manufacturer of AI-enabled MDSW need to take action?
The AI Act will apply from 2 August 2026, with the following exceptions:
- The general provisions (Chapter I) and the prohibition of AI systems posing unacceptable risks (Chapter II) will apply from 2 February 2025.
- Aspects relative to Notifying authorities and Notified Bodies for high-risk AI systems (Chapter III, section 4), the obligations for GPAIM (Chapter V), governance aspects (Chapter VII), confidentiality (Art. 78), and penalties (Chapter XII), except for fines for GPAIM providers (Art. 101), will apply from 2 August 2025.
- The obligations relative to high-risk AI systems newly placed on the market, except for those that concern Notifying authorities and Notified Bodies (Chapter III, section 4) will become applicable from 2 August 2027.
According to AI Act Art. 111, a few exceptions and additional periods apply to AI systems and GPAIM already placed on the market:
- For high-risk AI systems placed on the market or put into service before 2 August 2026, the AI Act will only apply to concerned operators in case of significant design changes after that date. In the absence of an official description of significant design changes under the AI Act, MDCG 2020-3 for medical devices (non-IVDs) and MDCG 2022-6 for IVDs could be followed as guiding principles. However, until AI-specific guidance is published, this exemption could result in old AI-enabled MDSW never being subject to CE-marking under the AI Act, because no significant change would ever be brought to the AI algorithm.
- If the high-risk AI system is intended to be used by public authorities, providers and deployers shall comply with the AI Act by 2 August 2030, i.e. irrespective of whether or not significant changes are made to the AI system design.
- Providers of GPAIM that have been placed on the market before 2 August 2025 shall comply with the obligations of the AI Act by 2 August 2027.
In brief, the AI Act will apply progressively, with deadlines scattered before and after the actual Date of Application (DoA), as shown in the below infographic.
In addition to the impact assessment described in the previous chapter (see How shall MDSW manufacturers determine the impact of the AI Act?), and in particular the identification of prohibited AI practices that must be discontinued by 2 February 2025, manufacturers of AI-enabled MDSW should already start preparing for at least the following aspects:
AI literacy
Per Art. 4, providers (basically, legal manufacturers) and deployers (i.e. AI users within a professional activity) of AI-enabled medical devices or IVDs are required to ensure a sufficient level of "AI literacy" for their own staff as well as for any third parties who operate and use the AI systems on their behalf.
Art. 3(56) of the AI Act defines "AI literacy" as the skills, knowledge and understanding that allow making an informed deployment of AI systems, as well as gaining awareness about the opportunities and risks of AI and the possible harm it can cause. Although it does not provide specific criteria, fulfilling the definition might apply at different levels within the organization, depending on the staff's involvement in AI deployment and in the assessment of opportunities and risks.
As such, AI-enabled MDSW manufacturers must take measures by 2 February 2025 to determine which job positions need which level of AI knowledge and skills, and to hire or train the concerned individuals accordingly.
Notified Body identification for new or significantly modified "high-risk" AI systems
August 2027 might seem to leave ample time to prepare, but manufacturers of AI-enabled MDSW that corresponds to "high-risk" AI systems should immediately find out whether their Notified Body is already preparing or planning to be designated under the AI Act. If that is not the case, a change of Notified Body would need to be envisaged. Moreover, the availability of sufficient Notified Bodies with dual designation under the EU MDR/IVDR and the AI Act might become a bottleneck in the certification process, and AI-enabled MDSW manufacturers could face the need to involve separate Notified Bodies.
Team-NB already indicated in their Position Paper on AI designation that they foresaw challenges with their capacity.
What do regulators expect for AI-enabled MDSW until the AI Act becomes applicable?
For the time being, there is no official position under the EU MDR or IVDR. However, the Medical Device Coordination Group (MDCG) is planning to issue an FAQ document on the interplay between the EU MDR/IVDR and the EU AI Act by the end of 2024.
In the meantime, Notified Bodies are running webinars and publishing position papers mostly recommending common sense extrapolation of the existing guidance for MDSW, e.g. MDCG 2020-1 on clinical/performance evaluation of MDSW and MDCG 2019-16 on cybersecurity. They also piggyback on US FDA requirements, which are being developed around the following guiding principles:
- Good Machine Learning Practice for Medical Device Development: Guiding Principles (October 2021)
- Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles (October 2023)
- Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles (June 2024)
In brief, navigating the current regulatory uncertainty requires becoming familiar with and adapting to various best practices and recommendations from professional organizations and academic experts, a sizable and increasing pool of international standards relevant to AI but not specific to AI-enabled MDSW, and EU initiatives for AI-enabled MDSW (e.g. the EU network of real-world testing facilities TEF-Health, or the EU Parliament's paper on Artificial intelligence in healthcare).
The AI Act itself sets forth some generic basis for AI systems (e.g. on training, validation and testing data sets, on accuracy, robustness and cybersecurity, and on testing high-risk AI systems in real world conditions) that could already be incorporated by manufacturers in their development process for AI-enabled MDSW.
How Decomplix can help
Decomplix provides expert regulatory and clinical assessment of your situation, the qualification and classification of your MDSW both under the EU AI Act and the EU MDR or IVDR, as well as a complete roadmap to obtaining a CE mark for your AI-enabled MDSW.
You can learn more about our services here.