Buying things—formally known as public procurement—may be one of the government’s most effective AI regulatory tools.

Ben Polsky
Ben Polsky is a consultant with Carnegie California.

Leila Doty
Leila Doty is a privacy and AI analyst at the city of San José.

Each year, the federal government purchases nearly $700 billion in goods and services from outside contractors, lending it tremendous influence over suppliers. Procurement is a major driver of economic activity in the United States and an often overlooked vehicle for achieving important policy goals at every level of government. As a recent Carnegie California paper highlights, regulatory frameworks at the national, state, and local levels underscore procurement's significant potential for establishing guidelines that steer AI toward the public good. Amid a rapidly evolving AI landscape, governments in California are transforming this traditionally mundane function into a strategic lever for greater AI vendor transparency and accountability.

HOW PROCUREMENT WORKS

The procurement process can take many forms, but it generally begins when a government agency identifies the need for a good or service. Governments often procure AI solutions to enhance service delivery (such as chatbots for citizen inquiries) or ensure public safety (such as traffic management). Depending on the procurement scenario, the agency may issue a request for proposals (RFP) and seek responses from companies until a closing date, at which time it may enter into a contract with the winning bidder. In other circumstances, the agency may not issue an RFP and instead pursue a “sole-source” procurement request, in which only the identified supplier can provide the desired service or product.

Companies that seek to win an agency’s contract must meet the basic quality standards required by law, in addition to the context-specific safety and performance requirements set out in the RFP or sole-source procurement proposal. Regardless of the type of procurement, as agencies begin to embed responsible AI requirements into the process, the need for greater vendor accountability and for more refined vendor requirements has surfaced.

VENDOR ACCOUNTABILITY

The public procurement process allows agencies to set standards and requirements for potential vendors. During procurement, governments can outline requirements that promote responsible AI, including defining standards around human oversight, algorithmic bias, and auditability of AI systems.

In establishing procurement requirements for AI vendors, agencies have encountered a common challenge: the difficulty of enforcing vendor accountability. When a vendor is accountable for its AI system, it is transparent about the system’s capabilities and limitations, takes responsibility for its outputs, and works to remedy errors. Particularly when it comes to transparency around AI systems, agencies have had varying success in what vendors are willing to disclose. In one agency’s experience purchasing AI-powered translation software, a vendor claimed that the translation accuracy metric was proprietary and could not be shared. This caused tension between the agency and the potential vendor, because the agency needed to assess the AI system’s performance against competing bids in a competitive procurement.

WHAT DOES ACCOUNTABILITY LOOK LIKE?

Subnational AI policy making occurs in the context of frameworks and regulations being developed at numerous other levels, including national and international. National governments are leading on questions of catastrophic risk and national security, whereas subnational governments tend to lead on issues of everyday AI interactions.

Within that context, subnational governments are pioneering innovative processes for responsible AI procurement in the public sector. Take the GovAI Coalition: shepherded by the city of San José, the coalition is a partnership among more than 150 local, county, and state agencies to create new frameworks and practices and to pilot municipal use cases for responsible AI. (Disclosure: One of the authors works for San José on AI issues.) This month, the GovAI Coalition released a suite of policy templates that any public agency can use to build its AI governance program. Two of these policy documents are critical to reforming the AI procurement space: the AI FactSheet and the standard contractual clauses for AI systems.

The AI FactSheet, intended to be completed by the vendor during the procurement process, captures technical details about a given AI system. These details include “AI nutrition facts,” such as training and testing data, performance metrics, algorithmic bias, inputs, and outputs. This information tells an agency how accurate a system is, what kind of data it collects and produces, and who might be disproportionately harmed by its use. Understanding the capabilities, strengths, and weaknesses of an AI system via the nutrition facts allows an agency to determine whether the system is technically capable and contextually appropriate for its use case.

Agencies have struggled to obtain this information from vendors in the past. If adopted across local jurisdictions, the AI FactSheet will promote greater transparency of AI systems in a standardized manner that benefits both agencies and vendors: agencies will have the technical information they need, and vendors can save time by using a single document for many customers.
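
To make these “nutrition facts” concrete, the details described above can be thought of as a simple structured record. The Python sketch below is purely illustrative: the field names and example values are assumptions for the sake of explanation, not the GovAI Coalition’s actual template.

from dataclasses import dataclass

@dataclass
class AIFactSheet:
    """Hypothetical sketch of the kinds of fields an AI factsheet might
    capture; this is not the GovAI Coalition's actual template."""
    system_name: str
    intended_use: str
    training_data: str                      # description and provenance of training data
    testing_data: str                       # how the system was evaluated
    performance_metrics: dict[str, float]   # e.g., accuracy by language or task
    known_bias_findings: list[str]          # documented algorithmic bias and mitigations
    inputs: list[str]                       # data the system collects or ingests
    outputs: list[str]                      # data the system produces

# A made-up entry for an AI translation tool, echoing the translation-software
# scenario described earlier; every value here is invented for illustration.
example = AIFactSheet(
    system_name="Example translation assistant",
    intended_use="Translating resident service requests for city staff",
    training_data="Vendor-described multilingual text corpus, provenance disclosed",
    testing_data="Held-out evaluation set covering the agency's most-requested languages",
    performance_metrics={"accuracy_spanish": 0.93, "accuracy_vietnamese": 0.88},
    known_bias_findings=["Lower accuracy for low-resource languages; mitigation documented"],
    inputs=["Resident-submitted text"],
    outputs=["Translated text", "Confidence score"],
)

The point of a standardized record like this is comparability: if every vendor completes the same fields, an agency can weigh competing bids on the same terms, avoiding the kind of standoff described above over an undisclosed accuracy metric.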

Voluntary commitments are important but not sufficient. To formalize the AI FactSheet and foster greater accountability from industry at large, the GovAI Coalition produced standard contractual clauses that outline a set of standards and requirements. The document functions as a legal addendum to final purchasing agreements for procurements that involve an AI system. For example, under the addendum, a vendor must complete and maintain an accurate version of the AI FactSheet for its AI system, provide the agency with an explanation of how the system generates outputs, and demonstrate that bias is effectively managed. When contracting with a member of the GovAI Coalition, a vendor must adhere to these and the other standards set out in the standard contractual clauses. This formalization of responsible AI standards by public agencies would encourage developers to build AI systems with greater accountability and allow agencies to make informed decisions about which use cases are appropriate for AI systems.

PUBLIC BUYING AS A MEANS TO ACCOUNTABILITY

With a $310.8 billion budget, the state of California recognizes its procurement activities as an immensely influential demand-side policy lever. California’s executive order on AI directs several state agencies and centers to reform public sector procurement so that agencies consider the uses, risks, and training needed to improve AI purchasing. The same is true of geographically strategic coalitions of local governments, which can scale quickly and share resources. The sheer value of public procurement means these subnational entities can leverage their buying power to move emerging markets in ways that benefit the public good.

Two market conditions loom over public procurement of AI. First, firms are reluctant to sell different products to different jurisdictions: tech companies do not want AI systems for California that differ substantially from those used in New York or Texas, let alone different standards for Los Angeles and San Francisco. Second, state and local agencies constitute a significant customer base for AI companies. These conditions give all parties an incentive to adopt a common set of procurement standards across jurisdictions. Moreover, firms that rely heavily on state and local agencies as customers will feel pressure to ensure their products and services meet the specifications those agencies set. Public entities should use this leverage to set procurement standards that demand vendor transparency and accountability.

WHAT’S NEXT

California has emerged as an epicenter of subnational innovation in AI governance. The California state legislature has roughly thirty pending bills focused on AI regulation in the state, including a recent bill introduced by Senator Scott Wiener that calls for mandated predeployment safety testing. While many of the bills are still in their infancy, the policy proposals embedded in them are being translated into practical action by state agencies, cities, and coalitions. In the absence of clear national guidelines, the standards that California imposes, through the executive order and through state and local groups such as the GovAI Coalition, have the potential to become de facto national or even global standards because of the state’s large purchasing power. Although still in its early stages, the GovAI Coalition’s work on the AI FactSheet and the standard contractual clauses for AI systems constitutes a promising set of procurement measures that will promote responsible AI. Taken together, these California-led procurement reforms can be a powerful vehicle for vendor accountability, shifting the burden away from public agencies and toward the companies developing and selling AI systems.