President Biden signed a sweeping artificial intelligence executive order Monday, wielding the force of agencies across the federal government and invoking broad emergency powers to harness the potential and tackle the risks of what he called the “most consequential technology of our time.”
“One thing is clear: To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said during a White House address ahead of the signing Monday, calling the order the “most significant action any government anywhere in the world has ever taken on AI safety, security and trust.”
The order arrives as policymakers and regulators globally consider new measures to oversee and bolster the technology’s deployment, but also as attempts to pass comprehensive AI legislation in Congress remain in their infancy, limiting federal government leaders to enforcing existing protections and following executive action.
The order tackles a broad array of issues, placing new safety obligations on AI developers and calling on a slew of federal agencies to mitigate the technology’s risks while evaluating their own use of the tools, according to a summary provided by the White House.
The order requires that companies building the most advanced AI systems perform safety tests, a practice called “red teaming,” and notify the government of the results before rolling out their products. The order uses the Defense Production Act — a 1950 law that has been leveraged in recent crises including the coronavirus pandemic and the baby formula shortage — to require that companies share red-teaming results with the government.
Biden said the powers are typically reserved for “the most urgent moments” such as times of war and that he planned to use the “same authority to make companies prove that their most powerful systems are safe before allowing them to be used.”
The order harnesses federal purchasing power, directing the government to use risk management practices when using AI that has the potential to impact people’s rights or safety, according to a draft of the order viewed by The Washington Post. Agencies will be required to continuously monitor and evaluate deployed AI, according to the draft.
The order also directs the government to develop standards for companies to label AI-generated content, often referred to as watermarking, and calls on various agencies to grapple with how the technology could disrupt sectors including education, health services and defense.
The order comes amid a flurry of efforts to craft new laws, conduct consumer protection probes and collaborate with international regulators to curb the risks of AI. The action will have broad implications for almost every agency within the federal government, along with a host of Silicon Valley companies racing to build advanced AI systems.
Implementing the order marks a significant test for the Biden administration, which has struggled to live up to promises of crafting guardrails for powerful Silicon Valley companies. Biden and Vice President Harris have pledged since they were on the campaign trail to address competition in tech and the harms of social media, signaling an intention to take a tougher line against the tech industry than the Obama administration did.
But there are limits to how much the Biden administration can achieve without an act of Congress. Besides nominating key enforcers with a history of antagonism toward Silicon Valley, the White House has taken scant action on tech issues. Congress, meanwhile, hasn’t passed any major tech legislation, despite years of attempts to craft rules around privacy, online safety and emerging technologies.
In a sign of these restrictions, the order urges Congress to “pass bipartisan data privacy legislation to protect all Americans, especially kids,” according to the White House summary — a move that serves as a tacit acknowledgment of Biden’s constraints.
“I can see the frustration in this [executive order] that a lot of this should be done by Congress but they’re not doing anything,” said Ryan Calo, a law professor specializing in technology and AI at the University of Washington.
It’s unclear how deeply the order will affect the private sector, given its focus on federal agencies and “narrow circumstances” pertaining to national security matters, Calo added.
A senior Biden administration official, who briefed reporters on the condition of anonymity ahead of the order’s unveiling, said that because the order sets a “very high threshold” for which models are covered, the safety testing requirements probably “will not catch any system currently on the market.”
“This is primarily a forward-looking action for the next generation of models,” the official said.
“This executive order represents bold action, but we still need Congress to act,” Biden said Monday.
Senate Majority Leader Charles E. Schumer (D-N.Y.), who attended the signing, and White House Office of Science and Technology Policy Director Arati Prabhakar both said at a Washington Post Live event last week that Congress has a role to play in crafting AI legislation too.
“There’s probably a limit to what you can do by executive order,” Schumer said. “They are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”
Schumer is leading a bipartisan group of lawmakers focused on crafting AI legislation, but they are likely months away from unveiling a proposal. He is expected to host a pair of AI Insight Forums this week, part of a series that has gathered top industry executives, civil society leaders and prominent AI researchers for discussions about the need for federal AI guardrails as well as greater funding for research. Biden said he plans to meet with Schumer and other lawmakers to discuss AI legislation at the White House on Tuesday.
Rep. Zoe Lofgren (Calif.), the top Democrat on the House Committee on Science, Space and Technology, said that Congress will also need to “adequately fund our federal science agencies to be able to do the important research and standards development described in this executive order.”
The executive order directs multiple government agencies to ease barriers to high-skilled immigration, amid a global battle for AI talent. Silicon Valley executives for years have pressured Washington to take steps to improve the process for high-skilled immigrants, but experts say they hope Congress will follow the Biden administration’s lead and consider new immigration laws amid its debate over AI.
“This is perhaps the most significant action that will supercharge American competitiveness,” said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists.
The Biden administration is acting as other governments around the world plow ahead with efforts to regulate advanced AI systems. The European Union is expected to reach a deal by the end of this year on its AI Act, a wide-ranging package that aims to protect consumers from potentially dangerous applications of AI. Meanwhile, China has new regulations for generative AI systems, which attempt to boost the growth of the country’s generative AI tools while retaining a grip on what information the systems make available to the public.
On the same day of the executive order signing, the G-7 — which includes the United States, France, Germany, Italy, Japan, Britain and Canada, as well as the European Union — announced voluntary guidance for companies, called the International Code of Conduct for Organizations Developing Advanced AI Systems. The guidelines call on companies to conduct regular assessments of the risks of their models, and to devote attention to systems that could pose a threat to democratic values or society, such as by enabling the creation of biological or nuclear weapons.
The European Commission described the code as a “living document” that will be updated to respond to developments in the technology.
This flurry of activity has caused some lawmakers in Washington to worry that the United States has fallen behind other countries in setting new regulations for the technology.
The executive order comes just days before Harris is expected to promote the United States’ vision for AI regulation at Britain’s AI Summit, a two-day event that will gather leaders from around the world to talk about how to respond to the most risky applications of the technology. The executive order signals that the Biden administration is taking a different approach than the United Kingdom, which to date has signaled a light-touch posture toward AI companies and is focusing its summit on long-term threats of AI, including the possibility that the technology overpowers humans.
“We intend that the actions we are taking domestically will serve as a model for international action,” Harris said ahead of the signing Monday.
Reggie Babin, a senior counsel focused on AI regulation at Akin Gump Strauss Hauer & Feld, said the executive order sends a “signal to the world” about U.S. priorities for reining in AI.
Until now, “a lot of people have seen the Americans as, I don’t want to say absent, but certainly not playing a central role in terms of laying out a clear vision of enforceable policy in the way that our status as a global leader might suggest that we should,” said Babin, who previously served as chief counsel to Schumer.
The Biden administration first announced it was working on the executive action in July, when it secured voluntary commitments from companies including OpenAI and Google to test their advanced models before releasing them to the public and to share data about the safety of their systems.