AI Medical Device Software under EU MDR & IVDR

Revolutionary yet controversial, Artificial Intelligence (AI) or Machine Learning was already present in CE-marked medical device software well before the EU AI Act was published. As horizontal legislation, the AI Act intersects with the EU MDR and IVDR for certain medical devices.
This article explores the implications of the EU AI Act for medical device software, while also considering the areas of uncertainty that remain until the regulation becomes fully applicable.

Replaces the version of 30.08.2025

Key takeaways

  • There is no simple answer to what types of medical device software (MDSW) algorithms fall under the AI Act. You need to understand the ambiguity in the legal definition of “AI system” and, to a lesser extent, of “General-purpose AI model” (GPAIM), which are not fully clarified by the official guidelines published by the EU Commission.
  • MDSW manufacturers should start by determining how the AI Act impacts their software, based on the qualification as AI system or GPAIM, the type of AI System or GPAIM, and the identification of any prohibited AI practices that would need to be eliminated.
  • Despite guidance document MDCG 2025-6 on the intersection between the EU MDR/IVDR and the AI Act, and various Team-NB positions, manufacturers must still apply substantial common sense in extrapolating the existing requirements for MDSW.


What is the EU AI Act?

The EU Artificial Intelligence (AI) Act, Regulation (EU) 2024/1689 (AI Act), is part of the EU’s digital strategy to frame the development and use of safe and lawful AI that respects EU fundamental rights.

The AI Act follows the principles of the EU’s New Legislative Framework. It will require CE-marking of high-risk “AI systems” and impose certain obligations on providers of “General-purpose AI models”, with exemptions for military, defense or national security use as well as for scientific research and development purposes. In addition, AI systems released under free and open-source licenses are out of the scope of the AI Act provided they do not qualify as “high-risk”, involve prohibited AI practices (per Art. 5) or interact directly with natural persons (per Art. 50).

Because it applies horizontally to all sectors, the AI Act overlaps with Regulation (EU) 2017/745 on medical devices (EU MDR) and Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR), which logically generates some duplication and even inconsistent requirements and terminology.

The definitions of “AI system” and “General-purpose AI model” are essential to understand the implications for medical device software (MDSW).

  • “AI system” is defined as:
    “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
    [AI Act, Art. 3(1)]
  • “General-purpose AI model” (GPAIM) is defined as:
    “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”
    [AI Act, Art. 3(63)]

The Medical Device Coordination Group (MDCG) has finally published a long-awaited guidance document on the interplay between the AI Act and the EU MDR/IVDR (MDCG 2025-6). The document refers to AI systems used for medical purposes as Medical Device Artificial Intelligence (MDAI), and also covers “Annex XVI products” and accessories to medical devices and IVDs. It is recommended to become familiar with this new term.

NOTE: In this article, we use the term AI-enabled MDSW generically, encompassing MDAI as well as software that would not qualify as an AI system but would still be AI-enabled. This is due to the unclear definition of AI systems under the AI Act.

What types of MDSW algorithms fall under the AI Act?

The simple answer would be: any MDSW algorithms that qualify as AI systems or GPAIM.

Now, given the multi-layered and partially imprecise legal definitions of AI system and GPAIM, there is no simple answer. 

The EU Commission committed to issuing clarifying guidelines to help with the interpretation of these legal definitions. The first set, on the definition of AI systems, included various examples but in fact generated further confusion.

Recital (12) in the AI Act indicates that the AI system definition:

“should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches”.

Such key characteristics include:

  • Designed to operate with varying levels of autonomy, which Recital (12) describes as having some degree of independence of actions from human involvement and of capability to operate without human intervention.
    The EU Commission guidelines on AI system definitions indicate that human involvement and intervention can be either direct, e.g. through manual controls, or indirect, e.g. through automated systems-based controls which allow humans to delegate or supervise system operations. They also conclude that any system with “some reasonable degree of independence” fulfills the definition of an AI system. Yet, the guidelines fail to provide examples that allow telling what is reasonable.
    In brief, this criterion needs further clarification on the minimum level of autonomy that characterizes an AI system, particularly on how autonomy differs from rules “to automatically execute operations” (i.e. automation), which should be excluded from the definition of AI systems according to Recital (12).
  • Capability to infer, which, according to Recital (12), refers both to the process of obtaining outputs (such as predictions, content, recommendations, or decisions) that can influence physical or virtual environments (considered by the EU Commission guidelines to concern rather the AI system’s use phase), and to the capability to derive models and algorithms from inputs or data (considered by the guidelines to concern rather the AI system’s building phase). The same Recital further indicates that techniques that enable inference include approaches to machine learning as well as “logic- and knowledge-based approaches”, and that the capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modeling.
    The EU Commission guidelines surmise that the capability to “infer how to obtain outputs” that is part of the legal definition should be understood as referring to the building phase of the AI system. This helps. However, the interpretation of “logic- and knowledge-based approaches” in Recital (12) is where confusion starts. The confusion grows considerably with some of the examples and explanations on machine learning systems that would fall outside the scope of the AI system definition. The most puzzling ones concern systems that make predictions with the objective of improving computational performance.
  • Potential adaptiveness, which refers to self-learning capabilities, allowing the system to change while in use. Now, according to the definition in Article 3(1), adaptiveness is not a defining characteristic, since an AI system “may” or may not exhibit adaptiveness.

It is therefore clear that the gray area in the qualification of software as an AI system has widened with the EU Commission’s guidelines on AI system definitions and requires careful determination.

As to GPAIM, Recital (97) in the AI Act also intends to define the concept based on its key functional characteristics, in particular:

  • Significant generality and 
  • Capability to competently perform a wide range of distinct tasks.

Again, the level of description of these two characteristics is insufficient for those not familiar with AI terminology.

Recital (97) merely explains that GPAIM are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. And Recital (98) indicates that models with at least a billion parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinct tasks. General-purpose AI is a term commonly used to refer to AGI (Artificial General Intelligence) technologies with generative capabilities trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning. Another similar term is “foundation model”.

The GPAIM concept has been clarified in the second guidelines published by the EU Commission under the AI Act, relative to GPAIM definitions and related requirements. These guidelines are effective from 2 August 2025, which is also the date of application of the obligations for GPAIM providers under Chapter V of the AI Act.

In brief, an AI model is considered a GPAIM if its training compute is greater than 10^23 FLOP (the level of compute typically required to train a model with a billion parameters on a large amount of data) and it can generate language (text or audio), images from text, or video from text. But:

  • If a model meets the indicative criterion but exceptionally does not display significant generality or perform a wide range of tasks, it is not a GPAIM. Various examples are provided.
  • Conversely, if it does not meet the criterion but exceptionally displays significant generality and performs a wide range of tasks, it is a GPAIM. No examples are provided, which leaves room for interpretation of what counts as exceptional. The indicative criterion itself is sketched below.
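
For illustration, here is a minimal Python sketch of the indicative criterion, assuming a simple threshold-plus-modality check; the function name and modality labels are illustrative, and the exceptions in the two bullets above still require case-by-case judgment.

```python
# Minimal sketch of the indicative GPAIM criterion from the EU Commission
# guidelines: training compute above 10^23 FLOP plus a generative modality.
# Illustrative only; the exceptions described above override this check
# in both directions.

GENERATIVE_MODALITIES = {"language-text", "language-audio",
                         "text-to-image", "text-to-video"}

def meets_indicative_gpaim_criterion(training_compute_flop: float,
                                     output_modalities: set[str]) -> bool:
    return (training_compute_flop > 1e23
            and bool(output_modalities & GENERATIVE_MODALITIES))

# Example: a text-generating model trained with ~5e23 FLOP
print(meets_indicative_gpaim_criterion(5e23, {"language-text"}))  # True
```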

The table below summarizes the key concepts, as described in the Recitals of the AI Act, that need to be understood by MDSW manufacturers.

How does the AI Act impact medical device software (MDSW)?

The impact of the AI Act on MDSW depends on the algorithm qualification as AI system or GPAIM, and then on the type of AI system or GPAIM, as well as on the use of any AI practices that are banned under AI Act Art. 5.

MDSW as AI Systems (MDAI)

MDAI that qualifies as a “high-risk” AI system shall be subject to dual CE marking, under the AI Act as well as under the EU MDR or IVDR, but the certification process is meant to be common, i.e. it can be conducted simultaneously by the same Notified Body.

According to Art. 6(1) of the AI Act, an AI system is considered to be “high-risk” when it is either a safety component of a product or a product itself, where such product is required to undergo third-party conformity assessment under any of the EU harmonized legislation listed in Annex I of the AI Act. Because Annex I of the AI Act includes both the EU MDR and the IVDR, any AI-enabled MDSW and any AI-enabled “safety component” of an MDSW that requires Notified Body involvement for certification falls under the definition of “high risk” AI system.

Note that MDAI that is manufactured and used within a healthcare institution (so-called “in-house” devices) does not require Notified Body involvement, per EU MDR/IVDR Art. 5(5), and consequently would not qualify as a high-risk AI system. This seems surprising since an “in-house” device could be similar to a Class III device or a Class D IVD, subject to significant Notified Body scrutiny under the EU MDR or IVDR. Guidance document MDCG 2025-6 on the interplay between the AI Act and the EU MDR/IVDR confirms this anomaly but indicates that specific guidelines will be issued on MDAI that are manufactured and used within healthcare institutions.

In determining whether an AI-enabled algorithm within MDSW qualifies as a high-risk AI system, the definition of “safety component” is also important. It means:

“a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.” [AI Act, Art. 3(14)]

The closest concept in the EU MDR or IVDR is “parts and components” that significantly change the performance or safety characteristics or the intended purpose of the device, per EU MDR Art. 23(2) and IVDR Art. 20(2).

In consideration of Art. 6(1) of the AI Act, it can therefore be concluded that:

  • AI-enabled MDSW that classifies above Class I medical device or Class A IVD, and
  • AI-enabled algorithm with safety/performance impact within a MDSW that classifies above Class I medical device or Class A IVD, 

would be viewed as a “high-risk AI system”, as illustrated in the sketch below.
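
Under simplifying assumptions, this logic can be expressed as a short Python sketch; the class labels and function names are illustrative, and the check glosses over Class I special cases (sterile, measuring or reusable surgical devices, which also require Notified Body involvement) and the “in-house” exemption discussed below.

```python
# Illustrative Art. 6(1) check: an AI system that is an MDSW (or a safety
# component of one) requiring Notified Body involvement under the EU MDR/IVDR
# qualifies as a high-risk AI system. Simplified: ignores Class I special
# cases and the "in-house" exemption of EU MDR/IVDR Art. 5(5).

NO_NOTIFIED_BODY_CLASSES = {"MDR Class I", "IVDR Class A"}

def is_high_risk_per_art_6_1(device_class: str,
                             is_ai_system_or_safety_component: bool) -> bool:
    requires_notified_body = device_class not in NO_NOTIFIED_BODY_CLASSES
    return is_ai_system_or_safety_component and requires_notified_body

print(is_high_risk_per_art_6_1("MDR Class IIa", True))  # True
print(is_high_risk_per_art_6_1("MDR Class I", True))    # False: check Annex III (Art. 6(2))
```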

But that’s not all.

Per Art. 6(2) of the AI Act, AI systems listed in Annex III are also considered to be “high-risk”. The following cases may be relevant for MDSW, particularly for Class I MDSW that do not fulfill the criteria in Art. 6(1):

  • AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services. [Annex III, section 5(a)]
    This could be the case for algorithms evaluating the clinical eligibility of costly or scarce surgical treatment (e.g. transplantation).
  • AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems. [Annex III, section 5(d)]

Last, MDSW that corresponds to an AI system but does not qualify as “high-risk” does not need CE marking under the AI Act, but it is not completely free of requirements. For more details, see the section What requirements apply to MDSW under the AI Act?

General-purpose AI Models (GPAIM) in MDSW

When determining whether a MDSW algorithm corresponds to a General-purpose AI Model (GPAIM), it is important to correctly interpret the criteria of “significant generality” and “competently performing a wide range of distinct tasks” in the definition of GPAIM.

Interestingly, MDCG 2025-6 does not refer at all to medical devices that are GPAIM. This is certainly due to the unlikelihood of such a scenario. However, MDSW can and does frequently incorporate GPAIMs, so it is important to understand the related concepts.

GPAIM refers to AI technologies with generative capabilities trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning. Typical GPAIM include large generative AI models such as image recognition models, large language models, or video and audio generation tools. For MDSW, GPAIM that support natural language processing (e.g. GPT-4), image recognition and/or data analysis (e.g. ResNet, U-Net) are particularly useful and already integrated as foundations in numerous MDSW. The MDSW manufacturer must be able to identify such GPAIM in the case of third-party software, e.g. a chatbot.

GPAIM are in turn classified based on their “systemic risks”, according to the following criteria in AI Act Art. 51:

  • High impact capabilities, corresponding to GPAIM trained using a cumulative computing power (“training compute”) of more than 10^25 FLOPs (Floating Point Operations), or
  • Capabilities of equivalent impact to those with high impact capabilities, as decided by the EU Commission, based on the criteria in Annex XIII of the AI Act.

The cumulative training compute includes all computation contributing to model capabilities, such as pre-training, synthetic data generation (including discarded data), and fine-tuning. The Annex to the Guidelines on GPAIM from the EU Commission illustrates how to estimate/calculate training compute, either through hardware-based or architecture-based approaches.
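
As an illustration of the architecture-based approach, the following Python sketch uses the widely cited approximation of roughly 6 FLOP per parameter per training token for dense transformers; the parameter and token counts are made-up examples, and this is not the EU Commission’s official calculation method.

```python
# Back-of-the-envelope, architecture-based estimate of training compute for a
# dense transformer: FLOP ~ 6 x parameters x training tokens (about 2 FLOP per
# parameter per token for the forward pass and 4 for the backward pass).
# Rough illustration only, not the EU Commission's official method.

def estimate_training_compute_flop(n_parameters: float,
                                   n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

compute = estimate_training_compute_flop(n_parameters=8e9, n_training_tokens=15e12)
print(f"estimated training compute: {compute:.1e} FLOP")       # ~7.2e+23 FLOP
print("meets GPAIM indicative level (1e23):", compute > 1e23)  # True
print("presumed systemic risk (1e25):", compute > 1e25)        # False
```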

In brief, GPAIM are considered to have “systemic risk” when they are very capable and widely used (i.e. currently, at least 10’000 registered business users in the Union market) and could, for example, affect many users if they propagate harmful biases across many applications.

The EU Commission might adopt delegated acts to amend the thresholds that determine the classification of GPAIM as having “systemic risk”.

As described in AI Act Recital (97), GPAIM may be placed on the market through libraries, application programming interfaces (APIs), as direct download, or other means. The GPAIM may then be further modified (e.g. addition of a user interface) or fine-tuned into new models by the AI system provider. GPAIM being an essential component of AI-enabled MDSW, the corresponding GPAIM providers would become critical suppliers of the MDSW manufacturer under the EU MDR or IVDR, which raises some interesting regulatory questions about the feasibility of such a relationship.

MDAI manufacturers who modify a GPAIM in a manner that leads to a significant change in the model’s generality, capabilities, or systemic risk, become GPAIM providers. The EU Commission Guidelines on GPAIM clarify that this is the case when the training compute used for the modification is greater than a third of the training compute of the original model.
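
Expressed as code, this one-third rule is a simple comparison; the following sketch and its figures are purely illustrative.

```python
# Sketch of the downstream-modifier rule: a modification using more than one
# third of the original model's training compute makes the modifier a GPAIM
# provider in its own right. Figures are illustrative.

def becomes_gpaim_provider(original_training_compute_flop: float,
                           modification_compute_flop: float) -> bool:
    return modification_compute_flop > original_training_compute_flop / 3.0

print(becomes_gpaim_provider(3e24, 5e23))   # False: modest fine-tune
print(becomes_gpaim_provider(3e24, 1.5e24)) # True: provider obligations apply
```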


Prohibited AI practices

AI-enabled MDSW cannot incorporate any of the prohibited practices listed in Art. 5 of the AI Act, which are those considered as a potential threat to the EU’s fundamental rights and values, as they are associated with significant potential for manipulation, exploitation, and social control.

The prohibited AI practices that the MDSW manufacturer must exclude are:

  • Subliminal techniques or purposefully manipulative/deceptive techniques that impair the ability to make an informed decision, thus causing significant harm. 
  • Techniques that exploit the vulnerability of a person or specific group of persons (e.g. specific age, disability, social/economic status) so as to distort their behavior, thus causing significant harm.
  • Establishing a social score based on social behavior or personal characteristics, where such score leads to detrimental or unfavorable treatment of the person or group of persons in unrelated social contexts or in unjustified or disproportionate manner.
  • Profiling techniques to predict the risk of a person committing a criminal offense (unless the person is already involved in a criminal activity).
  • Untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
  • Inference of emotions in the workplace and educational institutions, except for medical or safety reasons.
  • Biometric categorization systems to infer a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (unless lawfully acquired for law enforcement purposes).
  • ‘Real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, with some exceptions and conditions.

The prohibited practices have applied since 2 February 2025 and have been enforced since 2 August 2025. For more details, the EU Commission has published extensive Guidelines on prohibited artificial intelligence (AI) practices.

What requirements apply to MDSW under the AI Act?

The requirements for AI-enabled MDSW under the AI Act depend on the classification of the specific AI algorithms as either:

  • “high-risk” AI system (considered high-risk MDAI in MDCG 2025-6), 
  • “non-high risk” AI system (considered MDAI in MDCG 2025-6), 
  • GPAIM with “systemic risk”, or 
  • GPAIM without “systemic risk”.

Providers of “High-risk” AI systems must comply with the following requirements in consideration of the intended purpose and the generally acknowledged state of the art on AI and AI-related technologies:

  • Risk management system [Art. 9]
  • Data governance [Art. 10]
  • Technical documentation [Art. 11]
  • Record keeping [Art. 12]
  • Transparency and provision of information to deployers [Art. 13]
  • Human oversight [Art. 14]
  • Accuracy, robustness and cybersecurity [Art. 15]
  • Quality Management System (QMS) [Art. 17]
  • Document retention [Art. 18 (CE-marking documentation) and Art. 19 (automatically generated logs)]
  • Corrective actions and duty of information [Art. 20]
  • Cooperation with competent authorities [Art. 21]
  • Assessment of impact on fundamental rights [Art. 27]
  • Conformity assessment [Art. 43]. The conformity assessment route is based on:
    • Internal controls (i.e. self-declaration) for high-risk MDAI that is not subject to Notified Body oversight under the EU MDR or IVDR and corresponds to high-risk AI systems in Annex III of the AI Act, or
    • Notified Body certification, for high-risk MDAI already subject to Notified Body oversight under the EU MDR or IVDR. The Notified Body shall conduct the certification according to the EU MDR or IVDR, incorporating additional requirements from the AI Act, namely points 4.3, 4.4, 4.5 and the fifth paragraph of point 4.6 of Annex VII, as well as Section 2 of Chapter III.
  • For high-risk MDAI in Annex III (i.e. not subject to Notified Body oversight): registration in the EU database described in Chapter VIII [Art. 49(1)]
  • PMS monitoring [Art. 72]
  • Reporting of serious incidents [Art. 73]. For MDAI, this would concern only the infringement of EU legislation on the protection of fundamental rights, as described in Art. 3(49)(c).

Per AI Act Article 8(2) and MDCG 2025-6, MDAI manufacturers may choose to integrate the necessary processes, information and documentation they provide with regard to their MDAI into the technical documentation and QMS procedures they already have in place under the EU MDR/IVDR, for a single conformity assessment. However, AI Act-specific requirements (e.g. impact on fundamental rights, human oversight, and various aspects of data governance) will require supplementary efforts.

MDAI manufacturers will have to run a gap assessment of the additional elements needed and determine whether it makes sense to merge the cumulative evidence of conformity in a single technical documentation set.


Many of the requirements are expected to be fulfilled together with those for MDSW CE marking under the EU MDR or IVDR.

For providers of AI systems that do not correspond to “high-risk”, significantly fewer requirements apply:

  • Ensure that the AI system is trustworthy by developing their own voluntary codes of conduct or adhering to codes of conduct adopted by other representative associations.
  • Transparency obligations for AI systems intended to interact with natural persons. [Art. 50] 
  • Registration in the EU database described in Chapter VIII. [Art. 49(2)]

Providers of GPAIM with systemic risk must fulfill the following obligations: 

  • Compile and maintain a Technical Documentation according to Annex XI, to be kept available to the EU AI Office and national competent authorities. [Art. 53(1)(a)]
  • Disclose documentation to providers of AI systems that integrate the GPAIM. [Art. 53(1)(b)]
  • Establish a policy to comply with Union law on copyright and related rights. [Art. 53(1)(c)]
  • Compile and make publicly available a sufficiently detailed summary on the content used for training of the GPAIM [Art. 53(1)(d)]. The EU AI Office shall provide a template for this summary.
  • Perform model evaluation, incl. adversarial testing. [Art. 55(1)(a)]
  • Assess and mitigate possible systemic risks. [Art. 55(1)(b)]
  • Keep track of, document, and report serious incidents and possible corrective measures to the AI Office and national competent authorities. [Art. 55(1)(c)]
  • Ensure adequate cybersecurity protection. [Art. 55(1)(d)]

And providers of GPAIM that do not have “systemic risks” are still subject to the following requirements:

  • Compile and maintain a Technical Documentation according to Annex XI, to be kept available to the EU AI Office and national competent authorities. [Art. 53(1)(a)]
  • Disclose documentation to providers of AI systems that integrate the GPAIM. [Art. 53(1)(b)]
  • Establish a policy to comply with Union law on copyright and related rights. [Art. 53(1)(c)]
  • Compile and make publicly available a sufficiently detailed summary on the content used for training of the GPAIM. The EU AI Office has published a template for this summary.

For “legacy” GPAIM, i.e. those placed on the market before 2 August 2025, providers have until 2 August 2027 to comply. In such cases, retraining or unlearning is not required if not possible, or when the information is unavailable or disproportionately burdensome to obtain, provided this is disclosed in the technical documentation.

The EU Commission has clarified that the lifecycle of a GPAIM begins at the start of the large pre-training run, and any subsequent development (e.g., distillation, quantization, merging model weights) by or on behalf of the provider is considered part of the same model’s lifecycle, not a new model.

For all GPAIM providers, adherence to the codes of practice described in Art. 56 allows them to demonstrate conformity with the above requirements until a harmonized standard is published.

How shall MDSW manufacturers determine the impact of the AI Act?

MDSW manufacturers should:

  1. Assess the applicability of the AI Act to their MDSW, considering the definitions of AI systems and GPAIM, as well as the exclusion of scope for military, defense and national security purposes. The exclusion of scope for research-only products makes no sense for MDSW, as the software would not qualify as a medical device either.
    The determination of applicability might not always be obvious when SOUP (Software of Unknown Provenance) is integrated in the MDSW, and the manufacturer does not have full details about the algorithms behind the SOUP.
  2. Conduct and document an assessment to exclude use of any of the prohibited practices described in AI Act Art. 5.

  3. If the MDSW falls under the AI Act, determine the appropriate classification, i.e. whether any AI system qualifies as “high-risk”, per Art. 6(1) or Art. 6(2), and whether any GPAIM in the MDSW qualifies as involving “systemic risk”, per Art. 51(1).

The flowchart below illustrates the impact determination process.

Decision flowchart AI Act applicability
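
For readers who prefer code to flowcharts, here is a hypothetical end-to-end sketch of the same determination process; all inputs are placeholders for the assessments described in steps 1-3 above, each of which would require careful analysis in practice.

```python
# Hypothetical sketch of the impact determination flow. Simplified: a single
# MDSW could contain both an AI system and a GPAIM, in which case both
# branches would need to be assessed.

from enum import Enum, auto

class AIActImpact(Enum):
    OUT_OF_SCOPE = auto()
    HIGH_RISK_AI_SYSTEM = auto()         # per Art. 6(1) or Art. 6(2)
    NON_HIGH_RISK_AI_SYSTEM = auto()
    GPAIM_WITH_SYSTEMIC_RISK = auto()    # per Art. 51(1)
    GPAIM_WITHOUT_SYSTEMIC_RISK = auto()

def determine_impact(is_ai_system: bool, is_gpaim: bool,
                     uses_prohibited_practice: bool,
                     high_risk: bool, systemic_risk: bool) -> AIActImpact:
    if uses_prohibited_practice:
        raise ValueError("Prohibited practice per Art. 5 must be eliminated")
    if is_gpaim:
        return (AIActImpact.GPAIM_WITH_SYSTEMIC_RISK if systemic_risk
                else AIActImpact.GPAIM_WITHOUT_SYSTEMIC_RISK)
    if is_ai_system:
        return (AIActImpact.HIGH_RISK_AI_SYSTEM if high_risk
                else AIActImpact.NON_HIGH_RISK_AI_SYSTEM)
    return AIActImpact.OUT_OF_SCOPE

# Example: an AI-enabled Class IIa MDSW with no prohibited practices
print(determine_impact(is_ai_system=True, is_gpaim=False,
                       uses_prohibited_practice=False,
                       high_risk=True, systemic_risk=False))
# -> AIActImpact.HIGH_RISK_AI_SYSTEM
```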

When does a manufacturer of AI-enabled MDSW need to take action?

The AI Act will apply from 2 August 2026, with the following exceptions:

  • The general provisions (Chapter I) and the prohibition of AI systems posing unacceptable risks (Chapter II) have applied since 2 February 2025.
  • Aspects relative to Notifying authorities and Notified Bodies for high-risk AI systems (Chapter III, section 4), the obligations for GPAIM (Chapter V), governance aspects (Chapter VII), confidentiality (Art. 78), and penalties (Chapter XII), except for fines for GPAIM providers (Art. 101), have applied since 2 August 2025.
  • The obligations relative to high-risk AI systems newly placed on the market, except for those that concern Notifying authorities and Notified Bodies (Chapter III, section 4), will become applicable from 2 August 2027.

According to AI Act Art. 111, a few exceptions and additional periods apply to AI systems and GPAIM already placed on the market:

  • For AI systems placed on the market or put into service before 2 August 2026, the AI Act will only apply to the concerned operators in case of significant design changes after that date. Now, because AI Act Art. 6 (which includes the definition of “high-risk AI system”) will only apply from 2 August 2027, the cut-off date for high-risk MDAI is 2 August 2027.
    While awaiting EU Commission’s guidelines on significant design changes under the AI Act, MDCG 2020-3 for medical devices (non-IVDs) and MDCG 2022-6 for IVDs could be followed as guiding principles. Note that this exemption could result in old AI-enabled MDSW never being subject to CE-marking under the AI Act because no significant change would have been brought to the AI algorithm. 
  • If the high-risk AI system is intended to be used by public authorities, providers and deployers shall comply with the AI Act by 2 August 2030, i.e. irrespective of whether or not significant changes are made to the AI system design.
  • Providers of GPAIM that have been placed on the market before 2 August 2025 shall comply with the obligations of the AI Act by 2 August 2027.

In brief, the AI Act will apply progressively, with deadlines scattered before and after the actual Date of Application (DoA), as shown in the infographic below.


In addition to the impact assessment described in the previous chapter (see How shall MDSW manufacturers determine the impact of the AI Act?), with particular focus on the identification of prohibited AI practices that must have been discontinued by 2 February 2025, manufacturers of AI-enabled MDSW should already start preparing for at least the following aspects:

AI literacy

Per Art. 4, providers (basically, legal manufacturers) and deployers (i.e. AI users within a professional activity) of AI-enabled medical devices or IVDs are required to ensure a sufficient level of “AI literacy” for their own staff as well as for any third parties who operate and use the AI systems on their behalf.

Art. 3(56) of the AI Act defines “AI literacy” as the skills, knowledge and understanding that allow an informed deployment of AI systems, as well as awareness about the opportunities and risks of AI and the possible harm it can cause. Although the AI Act does not provide specific criteria, the required level of AI literacy may differ within the organization depending on staff involvement in AI deployment and in the assessment of opportunities and risks.

As such, MDAI manufacturers must have taken measures by 2 February 2025 to determine which job positions need which level of AI knowledge/skills and to hire or train the concerned individuals effectively. The national market surveillance authorities will start supervising and enforcing the rules as of 2 August 2026.

The EU Commission has made available a webpage with Q&As on AI literacy, but these are not very directive, since there is no legal requirement imposing a specific approach to AI literacy acquisition or training certification.

Somewhat related is AI Act Art. 26, applicable to deployers of high-risk AI systems, who must ensure that the staff dealing with the AI systems in practice is sufficiently trained to handle the system and ensure human oversight. This article, however, will only apply from 2 August 2026.

Notified Body identification for new or significantly modified “high-risk” AI systems

August 2027 might seem to leave ample time to prepare for certification under the AI Act, but manufacturers of high-risk MDAI should immediately set out to find out whether their Notified Body is already preparing or planning to be designated under the AI Act. If that is not the case, a change of Notified Body would need to be envisaged.

Now, the availability of sufficient Notified Bodies with dual designation under the EU MDR/IVDR and AI Act might become a bottleneck in the certification process, and high-risk MDAI manufacturers could face the need to involve separate Notified Bodies.

Team-NB already indicated in their Position Paper on AI designation that they foresaw challenges with their capacity.

Adapting QMS processes in the light of MDCG 2025-6 on MDAI

Some of the processes required of providers of high-risk AI systems under the AI Act overlap with those required of MDSW manufacturers under the EU MDR/IVDR and would need only minimal tweaks to the Quality Management System (QMS) to expand its scope to also cover the AI Act and consider any relevant harmonised standards that may be published in the future. This is typically the case for risk management, labelling (in particular, Instructions for Use), technical documentation, post-market surveillance and Vigilance reporting.

For example, risk management principles are the same but the types of risks to be considered under the AI Act must include fundamental rights, in addition to AI-specific risks such as biases, robustness, and cybersecurity (also at the data set level). Now, in an interesting regulatory turn, human oversight (a specific requirement under the AI Act) in fact becomes a useful risk mitigating factor for high-risk MDAI, as recognized by MDCG 2025-6.

Conversely, additional requirements such as AI literacy, data governance, AI log record-keeping (for traceability of performance in the post-deployment phase), transparency, explainability, human oversight, and pre-determined changes (for high-risk AI systems that continue to learn in the post-deployment phase) must be newly considered in the QMS of high-risk MDAI manufacturers. 

Data governance is clearly the most critical topic for AI systems since poor training, validation or test data sets cannot yield an appropriate AI model. High-risk MDAI manufacturers shall be able to demonstrate that data sets are relevant for the intended purpose, sufficiently representative, and as free of errors and complete as possible. The scarcity of high-quality data sources is currently a challenge to ensure compliance, even for Notified Bodies who will have to verify data sets, as clearly described in Team-NB’s position paper on the AI Act (v.2). MDCG 2025-6 indicates that data governance compliance can be demonstrated through third-party certification, and this may end up becoming mandatory. Other than the upcoming harmonised standards on this topic, the EU Commission will issue guidelines on data and data governance for high-risk AI systems.

Last, clinical/performance studies shall remain subject to EU MDR/IVDR requirements. Even if they would correspond to “real-world testing” for high-risk MDAI, MDCG 2025-6 clarifies that such testing is permitted if conducted under the requirements of the EU MDR or IVDR.

Decomplix’ recommendations for high-risk MDAI until the AI Act becomes applicable

Navigating the current regulatory uncertainty on the AI Act requires becoming familiar with and adapting to EU Commission’s confusing or insufficient guidelines, various best practices and recommendations from professional organizations and academic experts, a sizable and increasing pool of international standards relevant to AI but not specific to AI-enabled MDSW, and EU initiatives for AI-enabled MDSW (e.g. EU network of real-world testing facilities TEF-Health).

Unfortunately, the long-awaited MDCG 2025-6 on the interplay between the EU MDR/IVDR and the EU AI Act, does not really help high-risk MDAI manufacturers at the present critical time, when preparation for AI Act certification by a Notified Body needs to be correctly planned to avoid failing to CE-mark a high-risk MDAI under the AI Act. 

Moreover, there is no indication as to the date from which MDCG 2025-6 shall apply. Since the certification process under the AI Act is expected to take some time, the sooner manufacturers comply with MDCG 2025-6, the better. Also, in a Team-NB position paper on MDCG guidance documents published in 2022, the roll-out of such guidance documents in the Notified Body’s QMS was proposed to happen within 12 months of the publication date; this would mean June 2026 for MDCG 2025-6.

Despite its shortcomings, MDCG 2025-6 remains the cornerstone for now. See our summary under: Adapting QMS processes in the light of MDCG 2025-6 on MDAI.

Decomplix recommends starting by reading and understanding chapter III, section 2 of the AI Act. It sets forth the generic basis for high-risk AI systems (e.g. on data governance, accuracy, robustness, cybersecurity, human oversight and transparency) that should already be incorporated by high-risk MDAI manufacturers in their development process. 

In addition, common sense should be applied in extrapolating the existing guidance for MDSW, e.g. MDCG 2020-1 on clinical/performance evaluation of MDSW and MDCG 2019-16 on cybersecurity, based on the guiding principles of MDCG 2025-6.

At this stage, it is unclear how much the practices and expectations of individual Notified Bodies will align with Team-NB’s questionnaire on AI in medical devices, which details the expectations under the EU MDR/IVDR but not under the AI Act. Still, we also recommend that our customers turn this questionnaire into a regulatory input checklist, identify the elements that make sense, and prioritize the most important aspects for which their high-risk MDAI would need evidence.

In any case, high-risk MDAI manufacturers should be taking action already.

How Decomplix can help

Decomplix provides expert regulatory and clinical assessment of your situation, the qualification and classification of your MDSW both under the EU AI Act and the EU MDR or IVDR, as well as a complete roadmap to obtaining a CE mark for your AI-enabled MDSW.

You can learn more about our services here.


This article has been updated with information from guidance document MDCG 2025-6, the different EU Commission’s guidelines on prohibited AI practices, AI system definitions, and GPAIM obligations, as well as in consideration of Team-NB’s questionnaire on AI for medical devices and the updated version of Team-NB’s Position Paper on the AI Act.
