Now that a deal has been reached for the European Union’s landmark artificial intelligence legislation, the AI Act, attention will naturally turn to the murky road to implementation. Standards developed by industry-led organizations will be a key component of putting the act into practice, guiding companies through assessing and mitigating risks from their AI products. However, AI standards remain incomplete and immature relative to those in comparable industries. This threatens to make compliance expensive and enforcement inconsistent, jeopardizing the EU’s hope that the legal certainty provided by the act will promote innovation. Worryingly, the task of developing AI standards to the same level of quality as those for comparable industries faces fundamental challenges.

At the core of the AI Act are safety requirements that companies must meet before placing an AI product on the EU market. As in other EU legislation, requirements are merely outlined at a high level, leaving significant room for standards to fill in the blanks. Mature standards would make expectations clear and strike a balance between protecting EU citizens from faulty AI and keeping compliance inexpensive for companies. The EU must avoid the mistakes made with the General Data Protection Regulation (GDPR), where a lack of legal clarity and the associated compliance costs disproportionately burdened small and medium-sized enterprises.

To strike this balance, standard setters will need to contend with the diverse and evolving landscape of technologies, applications, and possible harms covered by the act. In practice, it is difficult to develop precise standards for a technology as novel and complex as AI, as well as to address risks of discrimination, invasion of privacy, and other nonphysical harms. This article draws from standards in more established sectors to elucidate what AI standards should look like—their content, structure, level of detail—to highlight what’s missing today and why the gap will be difficult to fill. We break down the problem into three parts, offering recommendations for each: standards for risk assessment, standards for risk mitigation, and the unique challenges presented by general purpose AI (GPAI) systems like OpenAI’s GPT-4.

Background: The AI Act and Standards

The requirements in the AI Act cover AI systems intended to be deployed in “high-risk” contexts, such as in medical devices or the education sector. These requirements (called “essential requirements” in EU parlance) cover all aspects of the product life cycle, including documentation, data governance, human oversight, accuracy, and robustness, with the aim of safeguarding “health, safety, and fundamental rights.” Deliberately open-ended, the rules allow for a spectrum of interpretations, leaving it to the provider to judge the level of risk and take appropriate measures to mitigate it. For example, the act requires without further specification that AI systems be designed such that they can be “effectively overseen by natural persons” and that data have “appropriate statistical properties.”


The act instructs providers to put in place a risk management system to grapple with these judgements. A risk management system is an organizational practice that places assessing and mitigating risks at the heart of the entire design and development process; such a system will serve as a backbone for the act’s other requirements. The risk management system will help AI providers determine, for instance, what an “effective” level of human oversight is for their product or what statistical properties are “appropriate” for their data given the risks.

Guidance on implementing a risk management system and meeting the act’s other requirements should (and in many cases will) be provided through standards, primarily developed by the European standard development organizations, the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), in conjunction with their international counterparts, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).1 The standards will translate the act’s essential requirements into actionable steps. They could, for example, outline which statistical properties of data are worth measuring, in which contexts. Although these standards are not mandatory, providers that follow standards adopted by CEN and CENELEC will benefit from a “presumption of conformity,” meaning that they will be assumed to be in compliance with the act’s relevant essential requirements.

In response to the proliferation of AI, a number of AI-specific standards have already been developed, and an even greater number are under development. In addition, many sector-specific standards are technology-agnostic and may be applicable when AI is used (for example, some medical device standards are general enough to cover cases where AI is used). However, despite the growing number of AI-relevant standards, significant gaps remain, as explored in the rest of this article.

The First Challenge: Risk Assessment

How Risk Assessment Works in Established Sectors

To understand which measures should be put in place to mitigate risks and how strong those measures need to be, AI providers will need to first conduct a risk assessment. Under the AI Act, this is explicitly required as part of the risk management system that providers must implement.


Implementing a risk management system requires product manufacturers to establish and document responsibilities, risk management steps, and the methods used in assessing and evaluating risks. This includes establishing risk severity levels (table 1) and risk likelihood levels (table 2), which are then plotted on a risk assessment matrix (figure 1). This matrix, typically in a 3 × 3 or 5 × 5 grid format, visually cross-references the potential impact and likelihood of risks to aid in prioritization.

Table 1. Example Qualitative Severity Levels of Health and Safety Harms

Severity Level | Description
Significant | Results in serious injury, permanent impairment, or death.
Moderate | Results in minor injury or impairment requiring medical or surgical intervention.
Negligible | No injury or temporary discomfort.

Table 2. Example Qualitative Likelihood Levels

Likelihood Level | Description
Very likely | Likely to happen frequently or always.
Possible | Can happen, but not frequently; likely to occur a few times during the lifetime of the system.
Unlikely | Unlikely to happen; rare or remote.
Note: Manufacturers may also use quantitative probability levels.

Assessing risks starts with the identification of hazards that stem from the use, or misuse, of the product, taking into consideration all types of harms that may be involved (such as physical harms, breaches of privacy, or discrimination). Then assessors determine likelihood and severity levels for each hazardous situation, using guidance and categories from a risk management standard. Arranging this information in a matrix produces a risk level (shown in figure 1).2

This matrix serves as a tool to evaluate which risks are tolerable and which require elimination or further mitigation, usually according to a risk acceptability policy, also shown in figure 1. In many regulated sectors, manufacturers are required to assess the risks for each product they develop and to implement appropriate actions or mitigations so that the risks are either eliminated or reduced to an acceptable level. They must also monitor the product after commercialization, update the risk management file as needed, and take additional action when new risks emerge or the original estimates prove to have understated the risk.
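To make these mechanics concrete, the sketch below implements a qualitative 3 × 3 risk matrix and an acceptability policy of the kind just described. The severity and likelihood categories mirror tables 1 and 2, but the individual matrix cells, the acceptability rules, and the example hazards are illustrative assumptions rather than values drawn from any particular standard.

```python
# Illustrative 3x3 qualitative risk matrix with an acceptability policy.
# Categories mirror tables 1 and 2; the cell values, acceptability rules,
# and example hazards are assumptions for demonstration, not taken from
# any specific standard.

SEVERITY = ["negligible", "moderate", "significant"]
LIKELIHOOD = ["unlikely", "possible", "very likely"]

# Risk level looked up by (likelihood, severity), as in a figure-1-style grid.
RISK_MATRIX = {
    ("unlikely", "negligible"): "low",
    ("unlikely", "moderate"): "low",
    ("unlikely", "significant"): "medium",
    ("possible", "negligible"): "low",
    ("possible", "moderate"): "medium",
    ("possible", "significant"): "high",
    ("very likely", "negligible"): "medium",
    ("very likely", "moderate"): "high",
    ("very likely", "significant"): "high",
}

# Example acceptability policy: what each risk level obliges the provider to do.
ACCEPTABILITY = {
    "low": "acceptable; document and monitor",
    "medium": "reduce as far as possible; justify residual risk against benefits",
    "high": "not acceptable; eliminate or mitigate before placing on the market",
}

def assess(hazard: str, likelihood: str, severity: str) -> str:
    """Return the risk level and required action for one hazardous situation."""
    assert likelihood in LIKELIHOOD and severity in SEVERITY
    level = RISK_MATRIX[(likelihood, severity)]
    return f"{hazard}: {level} risk -> {ACCEPTABILITY[level]}"

if __name__ == "__main__":
    print(assess("biased shortlisting of job applicants", "possible", "significant"))
    print(assess("temporary service outage", "very likely", "negligible"))
```

In practice, a risk management standard would also require the rationale behind each cell and each acceptability threshold to be documented and revisited as post-market monitoring data come in.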

If residual risks remain after initial efforts at mitigation, manufacturers must demonstrate that mitigations have been implemented according to the “state of the art” and that the benefits of the system outweigh the risk. In sectors such as medical devices and pharmaceuticals, guidelines and standards help direct risk-benefit analysis. Frameworks by the U.S. Food and Drug Administration or the European Medicines Agency provide clear criteria for assessing the safety and efficacy of medical devices and drugs. These criteria are based on clinical literature, long-established practices, and quantifiable outcomes.

Gaps in Risk Assessment for AI

Traditionally, product safety legislation and standards have focused on health and physical safety. Some sectors do attempt to address a broader set of ethical considerations, but this has proven challenging and typically continues to be driven by an underlying concern for physical safety. For instance, although the regulatory standards for medical devices encourage manufacturers to include diverse population groups in clinical investigations, they do so because disparities have physical impacts on patients. Unfortunately, these regulatory frameworks lack specificity and actionable requirements; limited patient diversity and scarce publicly available data continue to make it difficult for clinicians and patients to determine how safe and effective medical devices are for specific demographic groups.

The AI Act extends risk management beyond health and safety to consider impacts on fundamental rights, such as the protection of personal data, respect for private life, and nondiscrimination. This expanded approach is more relevant in some contexts than others, but uses such as assessing job applications, consumer creditworthiness, and welfare eligibility are cases where fundamental rights considerations are especially important.

AI developers now need to consider the potential impacts of their systems on a wide range of factors, such as privacy, discrimination, and working conditions. Determining which fundamental rights to consider (the Charter of Fundamental Rights of the EU includes fifty of them), how to assess the severity of violations of those rights, and what constitutes an acceptable risk-benefit trade-off are all complex issues that are typically the purview of courts and tend to vary across legal systems within the EU.

How to approach risk assessment for fundamental rights is a key ambiguity in the act’s requirements, one that AI developers, auditors, and market surveillance authorities will be left to grapple with unless further guidance is provided. Existing risk management standards, whether generic risk management (ISO 31000) or AI-specific—such as AI risk management (ISO/IEC 23894) and the U.S. National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework 1.0—require consequences of risks to be documented and risk acceptability criteria to be determined, but they do not provide meaningful guidance on how to do so.

Barriers to Closing the Gap

Standard setters at CEN and CENELEC are reluctant to address this gap by setting acceptability metrics or thresholds for risks related to fundamental rights because they feel they have neither the expertise nor the authority to interpret legislation. This cautious approach is echoed by consumer and civil organizations such as European Consumer Voice in Standardisation (ANEC) and European Digital Rights (EDRi), which argue that “decisions, with potentially severe implications on people’s rights and freedoms, need to remain within the remit of the democratic process, supported by experts in areas such as equality and non-discrimination as well as in the socio-technical impacts of technology and data.”

This leaves the question of who would develop such guidance. Guidance from the European Commission itself may carry the most legitimacy (see box 1), and indeed the AI Act proposal that the European Parliament adopted in June 2023 allows for the commission to create “common specifications” (a term for technical standards developed by the commission) to “address specific fundamental rights concerns.”

Box 1: Recommendation

The European Commission should consider developing guidelines or adopting common specifications for conducting assessments of risks to “fundamental rights.”

The commission could outline levels of severity of infringement for different rights and factor in the likelihood of risks to determine risk levels. It could also work with representatives from civil society to set risk acceptability thresholds. All of these activities may need to be customized to different rights and use cases. For example, the commission could decide that for systems that perform job evaluations, a consistent 20 percent disparity in scores between groups defined by protected characteristics (such as gender, race, and age) would require further investigation, but that for a system determining access to medical services, the threshold might be much lower.
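To illustrate how such a threshold might operate in practice, the sketch below flags groups whose average evaluation scores fall more than a configurable percentage below the best-scoring group. The 20 percent figure, the group labels, and the use of mean scores are assumptions for demonstration only; an actual common specification would need to define the metric, the protected characteristics, and the threshold for each use case.

```python
# Hypothetical disparity check of the kind a common specification might describe.
# The 20 percent threshold, group labels, and use of mean scores are illustrative
# assumptions, not requirements from the AI Act or any standard.

from statistics import mean

def disparity_flags(scores_by_group: dict[str, list[float]],
                    threshold: float = 0.20) -> dict[str, bool]:
    """Flag groups whose mean score falls more than `threshold` below the
    best-scoring group's mean, signalling a need for further investigation."""
    group_means = {group: mean(scores) for group, scores in scores_by_group.items()}
    best = max(group_means.values())
    return {group: (best - m) / best > threshold for group, m in group_means.items()}

if __name__ == "__main__":
    job_eval_scores = {
        "group_a": [0.82, 0.78, 0.85, 0.80],
        "group_b": [0.58, 0.61, 0.55, 0.60],
    }
    print(disparity_flags(job_eval_scores))  # group_b is flagged for investigation
```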

The Second Challenge: Risk Mitigations

How Safety and Risk Mitigation Work in Established Sectors

Established sectors focused on safety, like machinery or medical devices, follow a three-step approach to risk reduction and maintaining functional safety. First, products are made inherently safe from the design stage. For example, rounded edges on surgical instruments eliminate sharp points that could puncture tissue. Second, protective measures or guards are put in place. For instance, complex passwords and data encryption safeguard sensitive information from unauthorized access or decryption. Finally, end users are provided with clear safety information. This can be seen in the form of detailed user manuals for electronic appliances or warning labels on medical devices. This comprehensive approach ensures safety is integrated into every aspect of a product's life cycle.

The international standard for electronic functional safety (IEC 61508, which covers software), for example, assigns one of four “safety integrity levels” based on the risk posed by failure of the system or its functions. A table then describes which actions should be taken at each safety integrity level, and the level determines the performance required to achieve and maintain safety (see figure 2). The automotive and medical device industries have adapted this principle into similar approaches: automotive safety integrity levels (ASILs) in ISO 26262 and medical device software safety classification in IEC 62304, respectively.
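The pattern these standards follow can be summarized in a few lines: a risk level maps to an integrity level, which in turn selects the rigor of the measures required. The mappings and technique lists below are illustrative assumptions, not the normative tables of IEC 61508 or its sectoral adaptations.

```python
# Minimal sketch of the IEC 61508-style pattern: a risk level determines a
# safety integrity level (SIL), which selects the required rigor of measures.
# The concrete mappings and technique lists are illustrative assumptions,
# not the normative content of IEC 61508, ISO 26262, or IEC 62304.

RISK_TO_SIL = {"low": 1, "medium": 2, "high": 3, "very high": 4}

REQUIRED_TECHNIQUES = {
    1: ["structured design", "functional testing"],
    2: ["structured design", "functional testing", "code review"],
    3: ["structured design", "functional testing", "code review",
        "static analysis", "fault injection testing"],
    4: ["structured design", "functional testing", "code review",
        "static analysis", "fault injection testing", "formal verification"],
}

def measures_for(risk_level: str) -> list[str]:
    """Look up the techniques a given risk level would oblige the developer to apply."""
    return REQUIRED_TECHNIQUES[RISK_TO_SIL[risk_level]]

if __name__ == "__main__":
    print(measures_for("high"))
```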

When a standard requires that a particular measure be taken, such as a certain type of testing or development practice, details of its implementation need to be precise—or, in the language of CEN and CENELEC, “objectively verifiable.” A look at common specifications the European Commission has developed in the past may help demonstrate the level of detail the commission is looking for. One such specification lays out in detail how to evaluate medical tests that diagnose conditions like HIV or SARS-CoV-2. It specifies the number of experiments to be run, whether laypeople or experts should administer the test, the metrics that must be recorded, and the thresholds that tests must exceed to be accepted. Very little ambiguity is left, which helps provide legal certainty to those complying.
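A hypothetical sketch of that style of requirement is shown below: compute the specified metrics from a clinical performance study and accept the test only if both clear their thresholds. The threshold values and sample sizes are placeholders, not figures from the actual common specification.

```python
# Hypothetical evaluation in the spirit of the common specifications described
# above: compute the required metrics from a study and compare them against
# acceptance thresholds. The thresholds and sample sizes are illustrative
# assumptions, not those of any actual specification.

def sensitivity(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    return true_neg / (true_neg + false_pos)

def accept(true_pos: int, false_neg: int, true_neg: int, false_pos: int,
           min_sensitivity: float = 0.98, min_specificity: float = 0.99) -> bool:
    """Accept the test only if both metrics clear their thresholds."""
    return (sensitivity(true_pos, false_neg) >= min_sensitivity
            and specificity(true_neg, false_pos) >= min_specificity)

if __name__ == "__main__":
    # Example study: 400 known-positive and 600 known-negative specimens.
    print(accept(true_pos=395, false_neg=5, true_neg=597, false_pos=3))
```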

Gaps in AI Risk Mitigations

Safety and test requirements for AI are poorly covered by existing standards, both those that predate AI’s rise to prominence and newer, AI-focused standards. In general, AI represents a distinct challenge in terms of safety and evaluation mechanisms, and requirements in AI-specific standards remain more subjective than those in other fields, where standards focus on measuring and testing physical properties such as voltage, current, and electromagnetic fields. Some standards organizations have suggested that the EU not use its traditional product safety framework (as it has for the AI Act) if it is going to include requirements that involve “subjective testing” and that “insist on such a high level of legal certainty.”

Many existing standards could be applied to AI systems, but they generally fail to account for the unique difficulties of evaluating the quality and trustworthiness of AI. If AI were used as part of a medical device or machinery, medical device standards and machinery standards would apply, but these standards have little to say about specific measures for evaluating the use of AI. Similarly, software standards could be another place to look for guidance. They cover AI systems by default, given that AI systems are built on software. However, the standard on functional safety for software, IEC 61508-3:2010, offers little insight. In fact, it recommends against the use of AI in safety-critical systems, because AI system functions and components are difficult to verify at a granular level compared to traditional software.

Those standards that are specific to AI currently fall short of providing the kinds of concrete requirements that exist in other fields. A study by the European Commission’s independent research body, the Joint Research Centre, assessed eight promising AI standards and determined that all had middling or poor “maturity and level of detail” and that there would be challenges to assessing compliance with almost all of them. Similarly, a report by the EU Agency for Fundamental Rights points out that discussions about AI frequently emphasize the need for “high quality data” but often fail to provide specific definitions or guidelines about what this actually means. This ambiguity, the report states, is because there is “currently no standardised way of describing datasets agreed upon in the field of AI.”

The transparency standard for autonomous systems created by the Institute of Electrical and Electronics Engineers (IEEE), IEEE 7001-2021,3 is an example of a standard that, despite giving an encouraging amount of detail, remains ambiguous in important respects. The original draft of the AI Act requires “design and development solutions that ensure transparency . . . to enable users to understand the system’s output and use it appropriately.” The IEEE standard outlines five specific and measurable tiers of transparency for different stakeholders (users of these systems, auditors, and the broader public) that can be assessed objectively. These transparency levels, however, are not tied to a risk assessment, so there is little guidance as to which tier of transparency is appropriate for various domain-related risks. In addition, the standard’s requirements, such as the one for “a user-initiated functionality that produces a brief and immediate explanation of the system’s most recent activity,” give some direction, but what constitutes an effective and appropriate explanation remains a matter of debate within the AI research community. Ambiguities in how to implement the standard effectively therefore remain.

Barriers to Improving Safety Standards

Moving toward more concrete standards is difficult for several reasons. For one, AI is not a single type of technology; it encompasses a range of distinct development methodologies. For instance, AI often relies on learning-based approaches, which differ from the traditional life cycle used in non-AI based software. Specific safety and quality measures that apply to some AI systems may not apply to others. In addition, systems developed through machine learning, the approach driving most modern advances, are especially complex, and some are largely inscrutable, making their inner workings opaque and difficult to assess.

Current functional safety standards for traditional non-AI-based software, such as IEC 61508, require systems to be robust and predictable through testing and mitigations against systematic failures (which are reproducible errors with a specific cause, such as design flaws and code errors). Like other human-designed complex machinery such as engines, traditional software can be broken down into simple components whose intended behavior can be checked in isolation and whose safety in combination can be validated. The code can be objectively verified from the lowest- to the highest-level system behavior and checked against documented requirements and acceptance criteria.

However, most machine learning systems are more like the human body than an engine. Although they are built on software, machine learning systems are not hand-designed, and their internal operations are extraordinarily complex, which has led many to call them “black boxes.” Their behavior cannot be understood through principled reasoning but rather must be assessed through intensive testing. Performance metrics, while valuable for task-specific applications, offer limited insight as they often fail to capture the nuanced and context-specific nature of AI decisionmaking. A recent case study also found that the lack of standardized evaluation metrics is a crucial challenge when implementing AI auditing procedures.

Relying on testing also makes it difficult to anticipate how AI systems will work in new situations. For instance, despite companies like Waymo and Cruise logging over a million miles of evaluations and rigorous safety testing with their autonomous cars, unacceptable accidents still occur, endangering the legal status of driverless cars. Rigorous methods for verifying safety that have worked for conventional software, which rely on the software’s logical structure and legible form, are still evolving in machine learning—and are very difficult to develop. This is why many AI standards focus on governance processes, transparency, and human control and remain vague with regard to technical specifications.

Nonetheless, outlining precise technical specifications may be necessary if conformity to standards is to be “objectively verifiable” (see box 2). ISO and IEC have announced work on ISO/IEC TS 22440, which would provide requirements for “the use of AI technology within a safety-related function.” However, this would be a technical specification, a precursor to full standards that is used when significant uncertainty remains.

Box 2: Recommendations

The European Standardisation Organizations should precisely delineate the scope of AI technical standards, specifying the relevant AI technologies, learning methods, and use cases.

For instance, AI accuracy, bias, and testing standards for natural language processing (NLP) applications such as chatbots or translation services may not be applicable or necessary in other domains, like image recognition for autonomous vehicles. The organizations should develop a comprehensive mapping that correlates standards with various AI types and their respective domain use cases. This will enable developers and auditors to discern standards applicable to their projects.
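The mapping itself could be as simple as a lookup from AI technique and use case to the standards deemed applicable, as sketched below. Every entry is a placeholder assumption intended to show the shape of such a mapping, not an actual determination of which standards apply to which systems.

```python
# Illustrative shape of the mapping recommended above: which standards apply to
# which combination of AI technique and domain use case. Every entry here is a
# placeholder assumption, not an actual determination of applicability.

APPLICABLE_STANDARDS = {
    ("natural language processing", "recruitment chatbot"): [
        "ISO/IEC 23894 (AI risk management)",
        "ISO/IEC 42001 (AI management system)",
    ],
    ("computer vision", "autonomous vehicle perception"): [
        "ISO/IEC 23894 (AI risk management)",
        "ISO 26262 (road vehicle functional safety)",
    ],
}

def standards_for(ai_type: str, use_case: str) -> list[str]:
    """Return the standards mapped to a given AI technique and use case."""
    return APPLICABLE_STANDARDS.get(
        (ai_type, use_case), ["no mapping yet; needs expert review"]
    )

if __name__ == "__main__":
    print(standards_for("natural language processing", "recruitment chatbot"))
```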

Domain-specific standards should adapt generic AI technical standards to the domain’s particular needs.

First, standard setters should identify which existing domain-specific standards already address the requirements of the AI Act without the need for new AI standards. Second, domain-specific standards should identify which risk mitigations from generic AI standards are applicable to the domain. For example, explainability and decision interpretability are crucial in some domains, such as when AI is used to make employment decisions. However, in safety-critical and pressing tasks such as emergency healthcare, pausing to understand an explanation is not always possible, and a broader range of risk controls is necessary.

The EU should invest in AI talent for market surveillance authorities and notified bodies.

Market surveillance authorities in each member state are regulators responsible for ensuring products on the market conform with the AI Act’s requirements; notified bodies are organizations empowered by the EU to run third-party assessments of products according to the act. If clear, precise standards are difficult to develop, then the ecosystem will need to rely more heavily on knowledgeable regulators and auditors who can assess systems, ask the right questions, and give advice. Since notified bodies of many different sectors will need to assess AI systems, knowledgeable experts will need to be available throughout European economies.

The Third Challenge: General-Purpose AI Models

GPAI models designed to perform a wide variety of tasks—like OpenAI’s GPT-4—are subject to their own set of requirements under the AI Act. These models represent an especially challenging technology for product safety standards.

Firstly, they often have either no intended purpose or a very general one. GPT-4 was created just to predict the most likely next word in a text, and ChatGPT is an adaptation of GPT-4 (and its predecessor GPT-3.5) designed to be easier to have a conversation with. Despite their simple goals, both technologies can ultimately be used for a wide range of tasks. Outlining desired behavior and possible risks is difficult, because such an analysis would have to anticipate potential uses.

This matters because conventional risk management standards assume a specific intended use—and reasonably foreseeable misuse—to assess risk and inform risk mitigation. When the intended use is narrow, like providing medical advice for specific clinical conditions, the system can be evaluated against that use. The same system would have to meet different requirements if used to run a job interview. Because discussions of risk from GPAI are often not grounded in a particular scenario, the focus has been on generic risks that are deemed relevant in most applications. These might include the ability to generate hate speech and otherwise enable bad actors or to memorize and reveal personal data. Without more concrete scenarios, however, it will remain difficult to turn these concerns into reliable and verifiable risk management practices. Significant amounts of work will be left to downstream developers.

Secondly, technical challenges already present in assessing and mitigating the risks of AI in general are intensified in these models. The datasets they are trained on are huge and varied, including books, code, websites, images, audio, and video, making them hard to scrutinize. Because these models accept and produce such a rich set of inputs and outputs, they are liable to produce very different outputs for seemingly similar inputs, and they are prone to confidently stating inaccurate information, a behavior known as “hallucinating.” Such inconsistencies undermine the predictability and robustness that safety standards for critical systems demand. Although efforts to effectively mitigate the risks of GPAI models are underway, some believe that fixing the underlying technical challenges will be hard.

Because they are more complex than other kinds of AI systems, GPAI models are even harder to evaluate within existing sectoral regulation. In the EU, many AI-based medical devices have been granted CE marking, indicating conformity with European regulations, despite the absence of standards covering AI in this context. This was possible because the majority of these devices were narrow applications trained via targeted methods on well-defined, closed datasets. Regulators could be more confident in these systems’ safety because they understood the context, datasets, inputs, and outputs well enough to make judgements despite the lack of AI-specific standards. From 2015 to March 2020, there were 240 such CE-marked medical devices in Europe, over half of which were for radiology. But with growing excitement about GPAI models in domain-specific areas such as healthcare, regulators will need to develop new rules for their use, because these models challenge traditional regulatory frameworks designed for more static and predictable technologies. If new standards for AI systems take time to develop, sectoral standards will be an important fallback, but they may not apply as well to GPAI.

Thirdly, the AI Act requires the implementation of “state of the art” design and development techniques to mitigate risks. However, AI standards do not directly address GPAI models, leading to ambiguity regarding what constitutes “state of the art” mitigations. The UK’s recent AI Safety Summit collected GPAI model safety policies from leading companies, revealing that there is currently no consensus on the right approach (see box 3). The European Commission has clarified that “state of the art” does not refer to the latest experimental research or methods with “insufficient technological maturity” but rather to well-established and proven techniques. This raises an important question: are there “state of the art” approaches that adequately mitigate risk at this stage?

Box 3: Recommendations

Standards should clarify to what extent existing standards apply to GPAI.

The current lack of clear guidance in existing standards as applied to GPAI models leads to uncertainties, potentially allowing the use of unsafe AI technologies in critical and high-risk domains. It is crucial for standards organizations like CEN, CENELEC, ISO, and IEC to define explicit requirements in their standards about how safety and robustness measures apply (or do not apply) to rapidly evolving AI technologies, including GPAI models. They could explore producing technical reports as a quicker option than standards, which usually take about three years to be developed and often more than five years to be updated.

Standards should clarify which assumptions are made about an AI provider’s level of access to the system and guide the relationship between upstream and downstream developers.

Present functional safety standards, including those addressing AI technologies, typically encompass the entire design and development life cycle. These standards assume that developers have full control over data, algorithm design, and evaluation processes. However, developers are increasingly adopting existing GPAI models as a base for their applications, finding themselves “downstream” of the original developers. The extent to which these downstream developers can effectively mitigate risks, such as bias and reliability issues, through fine-tuning and other techniques remains uncertain. Even so, it would be beneficial for standards to do the following.

  1. Develop a standardized set of transparency requirements for GPAI models, ensuring that developers have clear and comprehensive information about the capabilities and limitations of these models. This information should include details about training data, potential biases, and performance thresholds and metrics (a minimal sketch of such a record follows this list).
  2. Develop comprehensive guidelines and criteria for downstream developers that assist them in effectively evaluating the capabilities of GPAI models. This evaluation should enable developers to align GPAI capabilities with existing AI safety and robustness standards, thereby facilitating informed decisions and mitigation actions regarding the appropriateness of using such models in high-risk and domain-specific areas. This recommendation finds a parallel in the realm of medical device software development. In this context, the IEC 62304 standard mandates that manufacturers rigorously assess the safety of any off-the-shelf or open-source software integrated into medical device software. The approach adopted is risk-based, emphasizing the importance of evaluating existing documentation provided by the developer. Additionally, it requires manufacturers to implement specific controls, alongside detailed risk analysis and testing when adequate documentation is not made available by a third-party software provider.
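As a minimal sketch of the transparency record envisaged in item 1, the structure below shows the kind of fields an upstream GPAI provider could be required to populate for downstream developers. The field names and example values are assumptions, not an agreed schema from any standard.

```python
# Minimal sketch of a standardized transparency record that an upstream GPAI
# provider could hand to downstream developers, per item 1 above. The field
# names and example values are assumptions, not an agreed schema.

from dataclasses import dataclass, field

@dataclass
class GPAITransparencyRecord:
    model_name: str
    training_data_summary: str              # sources, collection period, curation steps
    known_limitations: list[str]            # e.g., languages or domains with weak coverage
    bias_evaluations: dict[str, float]      # evaluation name -> measured disparity
    performance_metrics: dict[str, float]   # benchmark name -> score
    intended_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)

if __name__ == "__main__":
    record = GPAITransparencyRecord(
        model_name="example-gpai-model",
        training_data_summary="Web text and licensed corpora; deduplicated and filtered.",
        known_limitations=["limited coverage of low-resource languages"],
        bias_evaluations={"occupation-gender association test": 0.12},
        performance_metrics={"general knowledge benchmark": 0.71},
        intended_uses=["text summarization", "drafting assistance"],
        prohibited_uses=["fully automated credit decisions"],
    )
    print(record.model_name, record.performance_metrics)
```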

The European Commission should encourage CEN and CENELEC to engage with the Frontier Model Forum and the AI Alliance.

The Frontier Model Forum is composed of a few leading companies developing GPAI models and aims to produce safety standards for these systems. The AI Alliance has a broader membership but also aims to promote the responsible development of AI through benchmarks and standards. The knowledge accumulated within these organizations will be critical to producing viable standards.

Conclusion

The EU’s AI Act marks a critical step toward regulating the fast-evolving field of artificial intelligence, setting a precedent for global digital regulation. However, its success hinges on the effective translation of its high-level safety requirements into precise, actionable standards by CEN and CENELEC. Comparisons with standards providing guidance for risk assessment and mitigation in established safety-critical sectors reveal the need for standards that are not only detailed and legally certain but also flexible enough to accommodate the unique characteristics of AI technologies.

The current ambiguity concerning acceptable risk thresholds and specific technical mitigation measures leaves both developers and regulators in limbo. Taking inspiration from best practices in other regulated sectors, the European Commission should work with CEN and CENELEC to proactively develop detailed guidelines and common specifications, particularly regarding fundamental rights violations, to provide much-needed clarity and legal certainty.

GPAI systems are uniquely challenging and defy traditional risk assessment and mitigation approaches due to their broad applicability and unpredictable nature. The development of standards must therefore be an iterative, inclusive process that draws on a wide range of expertise and keeps pace with technological advancements.

The recommendations outlined in this article—from developing guidelines for fundamental rights risk assessments to enhancing transparency in GPAI model adoption—are aimed at bridging the gap between the AI Act's aspirations and its practical implementation. The involvement of the European Commission in guiding these efforts, coupled with collaboration between ISO, IEC, CEN, CENELEC, and organizations like the Frontier Model Forum and the AI Alliance, will be crucial. Only through a concerted, cooperative effort can the EU ensure that its pioneering legislation effectively safeguards consumers and fosters innovation, setting a benchmark for AI regulation worldwide.

Notes

1 The European Standardisation Organizations CEN and CENELEC have partnerships with the international organizations ISO and IEC. These partnerships require CEN and CENELEC to mirror ISO and IEC standards as much as possible, so work done at the international level will be important.

2 The definition of risk in the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework draws out the same basic dimensions: “risk refers to the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.”

3 Because the standard was developed by IEEE, not by CEN and CENELEC or ISO and IEC, it will not be directly applicable to the AI Act. IEEE 7001-2021 could nonetheless inform efforts by CEN and CENELEC. It is freely accessible through the IEEE GET Program.