The United States’ lack of a flagship legislative AI initiative akin to the EU’s AI Act often leads observers either to mistakenly suspect that the United States has taken no substantive action on AI or to point to individual fragments such as the recent Blueprint for an AI Bill of Rights or the AI Risk Management Framework as emblematic of the broader U.S. strategy. In fact, painting a complete portrait of the U.S. approach to AI requires understanding how U.S. strategy has been structured and resourced by congressional legislation; how it has been guided by political direction from Joe Biden’s and Donald Trump’s presidential administrations; and how it has (and hasn’t) been carried out by federal agencies.

Structure and Resources: Congress

In addition to defining substantive rules, Congress structures, resources, and grants authorities to the federal agencies that are responsible for enforcing them. Although Congress has thus far avoided new regulation of how AI systems are used in the private sector, it has poured resources into AI R&D and into increasing the federal government’s capacity to use and manage AI within its existing authorities.

The significant AI-related legislation passed by Congress in recent years (see table 1) has focused on bipartisan priorities. Congress has largely stayed away from creating new laws intended to shape industry use of AI, and proposed legislation that would regulate private sector AI use (like the Algorithmic Accountability Act) has struggled to gain traction. Instead, Congress has rallied around calls to “ensure continued United States leadership in artificial intelligence research and development” (National AI Initiative Act of 2020), and legislation has focused primarily on encouraging and guiding the government’s own use of AI, such as efforts to “facilitate the adoption of artificial intelligence technologies in the Federal Government” (AI in Government Act of 2020). Bipartisan legislation has rooted ethical concerns in existing law (“civil rights”) or agreeable high-level values (“trustworthy” systems, “responsible” use) without making potentially divisive prescriptions, acknowledging AI-relevant ethical principles (such as “bias,” “privacy,” and “explainability”) without specifying how they should be applied in the context of AI.

Ultimately, this leaves federal agencies, led by the White House, with both constraints and freedom. Without laws that grant them new authorities, agencies must rely on reinterpretations of their existing authorities to regulate industry use of AI. But because Congress has remained less prescriptive about how AI-relevant ethical principles should be applied, agencies retain some freedom in deciding how to manage this regulation and their own use of AI. To enable agencies, legislators have aimed to build the government’s AI capacity by encouraging coordination (such as via the Select Committee on AI), training (such as outlined in the AI Training for the Acquisition Workforce Act of 2022), and guidance (such as through memos that the Office of Management and Budget [OMB] was directed to develop). Legislation has also funded AI R&D more broadly, with some emphasis on addressing potential ethical concerns.

The AI Risk Management Framework (RMF) is emblematic of Congress’s light-touch approach to AI regulation. While it was created by the National Institute of Standards and Technology (NIST), a federal agency, it was specifically mandated by Congress and reflects Congress’s broader approach. The RMF is a thorough collection of risk management practices that can be applied to AI applications; it serves to collect expertise and offer guidance without being prescriptive. NIST is relentlessly explicit about the RMF’s voluntary nature, even encouraging those applying it to use only parts of the framework and to adapt it to their needs. And while the RMF describes ethical issues, and the trade-offs between them, in more detail than legislation has, it too carefully avoids drawing lines that could ruffle partisan feathers.

Taken together, Congress’s efforts, including the RMF, tend to improve the government’s regulatory capacity by giving agencies the tools and knowledge they need, but they avoid creating specific requirements about how AI should be handled. This sets the stage for what could become more binding regulation in the future, giving the government the tools required to identify and mitigate problems, although it is far from clear whether this will come to pass. For example, NIST’s 2014 Cybersecurity Framework and related standards (which collectively fill a role similar to the RMF’s) were referenced in subsequent executive orders, legislation, and the Biden administration’s recent National Cybersecurity Strategy.

Guidance: Biden and Trump White Houses

The White House leads federal agencies in interpreting and enforcing laws passed by Congress. In doing so, it possesses numerous tools that allow it to guide regulatory trade-offs and prioritize certain issues. White House guidance changes between administrations, but many key AI-related priorities have survived political transitions.

Given Congress’s focus on capacity building for AI, the White House is left with a significant amount of discretion in setting priorities. The Biden administration’s flagship AI policy document, the Blueprint for an AI Bill of Rights, aims to do just this. By taking stronger positions than congressional action has thus far, the blueprint has led to some partisan tensions. However, these tensions may not run as deep as they seem: a look at Trump-era executive orders shows that the two administrations have focused on similar issues, albeit with different emphases.

The blueprint, which was published in October 2022, provides an overview of the Biden administration’s approach to AI. Like other White House strategy documents, it is intended to coordinate the efforts of a diverse set of federal agencies around a core set of priorities. With a focus on the administration’s political priorities like civil rights and equity, the blueprint enumerates standards that individual users should be able to expect from algorithmic systems, such as efficacy, privacy, and protection from discrimination. But the White House office that developed the document, the Office of Science and Technology Policy, is an advisory group without direct regulatory power. A legal disclaimer at the beginning of the document states that the blueprint is a strictly voluntary instrument and does not constitute official U.S. policy.

As a result, the blueprint was met with some disappointment from groups who saw it as another example of “toothless” U.S. AI policy. However, the nonbinding nature of the blueprint is in line with the White House’s tendency to guide, rather than directly dictate, the execution of tech policy. Because it cannot pass legislation as Congress can, the White House instead seeks to lead the federal agencies that create binding regulation to implement laws passed by Congress. It possesses several powerful political and procedural levers that allow it to supervise agencies: it oversees the clearance process for regulation, nominates the top-level appointees who set the direction of agencies, and can direct considerable political attention to topics of interest. Executive orders allow the White House to provide legally binding direction to agencies; a recent Biden administration executive order on equity, for example, contained provisions related to data and algorithms that were inspired by the Blueprint for an AI Bill of Rights. But for the most part, White House AI policy actions serve as top-down guidance that, like the blueprint, is intended to orient and prioritize.

Nonetheless, the blueprint caused some controversy by stepping away from the agreeable language of congressionally mandated action like the RMF. The differences reflect the differing motivations behind the documents. Where, for example, the RMF is a flexible tool aimed at helping users identify and mitigate the AI-related risks that matter to them, the blueprint seeks to provide more explicit guidance on how certain value-laden design choices should (or, by calling out past mistakes, shouldn’t) be made. Because it moves beyond the limits of the RMF—making judgments and recommendations rather than simply pointing out issues and possible solutions—the blueprint was inevitably divisive. Unsurprisingly, a group of congressional Republicans rebuked the blueprint, pointing out that its approach differs from Congress’s thus far. But interpretations of law and values are ultimately necessary for implementation, and the blueprint is a move in that direction. If precise interpretations are not specified in legislation, then it will be up to the executive branch to make them, led by the White House—with, of course, checks from the legislative and judicial branches.

Despite the partisan tension the blueprint caused, and despite the two administrations’ different emphases and rhetoric, the Biden administration’s AI policy work is informed by Trump-era efforts. At first glance, the precautionary approach advocated by the Biden White House’s top-down guidance may appear to stand in sharp contrast to the Trump administration’s advocacy for a more hands-off, free market–oriented political philosophy and the perceived geopolitical imperative of “leading” in AI. A Trump administration executive order on “maintaining American leadership in AI” and a more detailed memo developed by the Trump White House’s OMB established specific methodologies for agencies considering new AI rules. Focused on supporting “free markets, federalism, and good regulatory practices,” the memo left the door open to “narrowly tailored and evidence-based” AI-related rules—but it pressed agencies to consider nonbinding approaches and avoid actions that would “needlessly hamper AI innovation and growth.”

But beneath the different rhetorical and normative emphases, the Biden and Trump administrations’ AI-related guidance overlaps considerably in substance. And indeed, the Biden administration has neither enforced nor repealed Trump administration executive orders on AI, suggesting differences in prioritization rather than outright disagreement. Both administrations highlight algorithmic unfairness as a problem, for instance, but while the Biden administration’s blueprint suggests “proactive and continuous measures” to mitigate it, Trump administration regulatory guidance gives primacy to “innovation,” while brightly noting that the impact of AI systems could also be positive, since they “have the potential of reducing present-day discrimination.”

Sketching themes that would later feature much more prominently in the Biden administration’s blueprint, the Trump administration’s OMB memo directed agencies to consider potential risks to civil rights and privacy, even as it admonished “precautionary” approaches to regulation. Another Trump administration executive order, on government use of AI, encouraged agencies to adopt “trustworthy” AI tools and required a federal government–wide inventory of the use of AI systems.

Execution: Federal Agencies

Federal agencies, typically led by presidential appointees, are responsible for enforcing the laws Congress passes according to the authorities Congress grants them. In practice, agencies’ public responses to calls for regulation or for scrutiny of their internal AI use have been limited, and the extent of action varies greatly from agency to agency. In addition, some agencies have been largely omitted from current requirements.

There has undeniably been significant, though not comprehensive, movement across a dozen agencies—movement that the White House highlighted alongside the Blueprint for an AI Bill of Rights. The Equal Employment Opportunity Commission (EEOC) serves as a shining example through its exploration of the impact of AI on employment decisions, with the goal of specifying how employers can continue to comply with equal opportunity laws. It is important to note that without new legislation from Congress, agencies are limited to actions that operationalize existing legislation. These actions can be as significant as creating new rules based on existing authorities or as simple as an agency announcing new AI-relevant interpretations to the world, such as the Federal Trade Commission’s (FTC’s) reminder to “keep your AI claims in check.”

Despite some shining examples, the broader picture of trickle-down influence in AI from Congress or the White House can be murky. Guidance on the use of AI in government, for example, was supposed to flow through OMB down to agencies as part of the AI in Government Act of 2020. Unfortunately, OMB is over a year late in issuing this guidance, leaving agencies to navigate uncertain waters; compliance with other requirements has also been spotty. It is also not obvious how much the blueprint will galvanize new initiatives; indeed, many initiatives (like the EEOC’s) preceded the blueprint or would have occurred regardless of its existence. Top-down strategy perhaps gives direction, support, and political cover to motivated agencies, but agencies that deviate from administration guidance or congressional requirements, for example because of a lack of staffing or prioritization, face little pressure at the moment. These issues are not unique to AI policy, and they should not be hastily interpreted as a sign of unusual resistance from federal agencies. Nonetheless, acknowledging the stuttering nature of agency compliance, especially on issues not perceived as priorities, is an important part of understanding the state of AI policy in the United States.

Meanwhile, law enforcement and national security agencies have been largely left to set their own procedures for the use of AI. Domestic law enforcement was explicitly exempted from the blueprint’s precautionary guidelines, and congressional mandates for government use of AI mostly exempted security-related agencies. The Department of Defense has independently developed its own set of responsible AI principles, and in some ways its implementation plan and bureaucratic innovations can serve as examples for other domestic efforts.

Conclusion

Those looking for a single description of U.S. AI policy won’t find an easy analogue to the EU’s proposed AI Act. But that doesn’t mean that no action has been taken. Indeed, substantial investment has been directed toward AI R&D and government AI capacity. And while more attention has certainly been given to nonbinding, soft-law approaches to governance, sectoral regulators and state and local governments continue to impose meaningful binding rules.

Nonetheless, the path forward for U.S. AI policy remains hazy. The faltering track record for implementation of existing legislation and policy highlights the need for additional attention and resources. The information-gathering called for by some of these policies will also play an important role in informing new legislation by identifying gaps in agencies’ authorities.

With divided control of Congress and limited appetite in either party for major horizontal regulation, a comprehensive approach to legislating AI does not currently appear likely. Bipartisan agreement might yet be forged on narrowly tailored legislation focused on topics like privacy, platform transparency, or protecting children online, and Congress’s continued focus on geopolitical competition may lead to further investments in American AI R&D. Congress could also tighten its grip to enforce agency compliance with existing law; its reaffirmation of requirements from the AI in Government Act of 2020 in the Advancing American AI Act of 2022 at least shows that Congress has not abandoned those requirements.

New ideas and momentum might flow from recommendations made by recently created national AI advisory and coordinating bodies, but these efforts currently lack the resources that past influential AI advisory groups have possessed. These bodies could also support the implementation of existing policy efforts, which would lay the groundwork for a more unified and intelligent approach to U.S. AI policy.

Federal agencies, charged with shaping both their own use of AI and the use of AI within their jurisdictions, are where onlookers should look for immediate, concrete action. Nonetheless, shifting political winds, such as changing administrations, can make developments difficult to follow or predict. Agencies can slow-walk compliance to avoid redundant work when it is not clear the next administration will follow up. New administrations can also reshape regulatory interpretations; Trump-era staff at the Consumer Financial Protection Bureau, for instance, took a hands-off approach to regulating creditors’ use of algorithms, while the Biden appointees who replaced them have presided over a tightening of these rules. Ultimately, in the absence of new legislation, the United States is in effect experimenting with how far adaptations to current regulatory frameworks can go.

Table 1: A Selection of Key U.S. AI Policy Actions

| Actor | Action | Function |
|---|---|---|
| Trump administration | Executive Order 13859: “Maintaining American Leadership in AI” (2019) | Describes principles and strategic objectives meant to guide AI-related agency actions toward increasing U.S. competitiveness in AI. Requests that OMB develop guidance for agencies considering regulating AI applications. Establishes AI as a key priority in R&D investment, agency data sharing, and workforce development. |
| | Executive Order 13960: “Promoting the Use of Trustworthy AI in the Federal Government” (2020) | Describes principles for government use of AI and requests that OMB develop more detailed guidance. Requires an inventory of AI systems used by agencies and the creation of interagency forums to develop best practices. Requests that fellowship programs prioritize bringing AI-related talent into government. |
| | OMB Memo M-21-06: “Guidance for Regulation of AI Applications” (2020) | Written in response to Executive Order 13859. Provides more elaborate guidance and considerations for agencies considering regulating AI applications. Advocates a focus on voluntary measures. Requests that agencies report information about AI use cases within their regulatory authority to OMB. |
| Biden administration | Blueprint for an AI Bill of Rights (2022) | Lays out nonbinding ethics- and civil rights–based principles for government and industry use of AI and describes example agency actions taken in support of these principles. |
| | Executive Order 14091: “Further Advancing Racial Equity and Support for Underserved Communities . . .” (2023) | Encourages a government-wide focus on equity, including reiterating efforts to enable data-driven assessments of equity in agency actions and directing agencies to “protect[] the public from algorithmic discrimination.” |
| Congress | AI in Government Act of 2020 | Creates an AI Center of Excellence to facilitate government AI adoption. Instructs OMB to create guidance informing government AI adoption and policy development. |
| | National AI Initiative Act of 2020 | Directs billions of dollars to the Department of Energy, Department of Commerce, and National Science Foundation to support AI R&D. Mandates that NIST develop the RMF. Establishes AI-related coordination and advisory bodies in government. |
| | Advancing American AI Act (2022) | Defines principles for government use of AI; encodes into law requirements similar to those of Executive Order 13960, including an inventory of agency AI use and the development of coordinated guidance from OMB. |
| | AI Training for the Acquisition Workforce Act (2022) | Requires an AI training course for government acquisition employees. |
| Federal agencies (select examples) | EEOC: AI and Algorithmic Fairness Initiative (2021) | Issues guidance and gathers information on the use of AI in employment decisions. Collects best practices. |
| | Health and Human Services: “AI at HHS” (2021) | Executes a cross-cutting strategy for agency-wide responsible use of AI. Ensures compliance with AI-related federal mandates. |
| | NIST: AI Risk Management Framework (2023) | Voluntary framework intended to help any organization deploying AI assess risk and identify points of intervention. Mandated by the National AI Initiative Act of 2020. |
| | National AI Research Resource Task Force (report published 2023) | Task force mandated by the National AI Initiative Act of 2020 to investigate the creation of a National AI Research Resource, which would provide researchers with computational resources, data, and support, among other tools. |
| State and local governments | Legislation on topics such as digital privacy and AI use cases | Establishes locally binding requirements for AI-related issues. |

Correction: The description of Congress’s responsibilities has been clarified to change its work of “writing regulations” to “defining substantive rules.”