Six Facts to Know: President Biden's Executive Order on AI

Artificial Intelligence (AI) has become the focal point of every tech policy conversation. Whether the discussion turns to benefits or risks, the central question is always: “What role should the government play in the development and regulation of AI?”

On October 30, the Biden administration announced an executive order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. The EO does several things. It requires developers of AI systems that might pose risks to US national security, the economy, public health, or safety to share critical data with the federal government. It pushes the federal government to encourage high-skilled legal immigration through the H-1B visa process to keep the US competitive in AI development. It also requires the federal government to issue guidance on using AI in ways that protect equity and civil rights. And it introduces a “Biden AI Doctrine” in an attempt to set a global standard governing AI use and development.

Here is what to know about the Biden administration’s executive order on AI.

1. The executive order provides broad, high-level guidance across key issue areas.

The EO tackles several issue areas, and its key provisions will shape the way government and the private sector interact. Notably, the EO will:

  • Create new safety standards for AI by requiring developers of AI systems that pose risks to US national security, the economy, public health, or safety to share the results of safety tests with the US government before those systems are released to the public. The executive order also directs the Commerce Department to create guidance for AI watermarking and creates a cybersecurity program that can leverage AI tools to identify flaws in software critical to the national interest.

  • Protect consumers and workers by establishing rules around consumer privacy and funding research to understand the new technology’s implications for health, the American workforce, and the labor market.

  • Advance equity and civil rights by providing guidance to landlords and federal contractors who might use AI algorithms in decision-making. The EO also calls on the federal government to create best practices for the appropriate role of AI in the justice system.

  • Promote competition and innovation by boosting AI research nationally through grant programs, supporting small AI developers, and modernizing visa processes to retain skilled AI experts from abroad.

  • Establish international partnerships to advance unified global standards.

  • Issue federal guidance on how the government will leverage AI technology, including accelerating the government’s hiring of experts in the field.

2. The private sector will continue to shape future regulation—as it did on this executive order.

The EO was built out of conversations with the major tech players. Back in September, the Biden administration and eight tech companies announced commitments to mitigate the challenges and risks presented by AI. The federal government has been seen as lagging behind the fast-moving private sector on regulation, legislation, and enforcement, a classic dynamic with cutting-edge technology. The private sector has played a key role in rulemaking because it sets the pace and tone, forcing Congress and the President to be reactive. We expect this trend to continue.

3. The government's attempt to balance safety with tech's instincts to disrupt may yield mixed outcomes.

One of the critical concerns for the federal government is safety in how AI is leveraged. Most Americans lean toward concern rather than excitement about the use of AI in their everyday lives, especially around privacy and the potential for misuse. The EO requires organizations to share safety test information with the federal government before releasing AI systems to the public. The federal government will also prioritize the National Institute of Standards and Technology’s (NIST) development of standards for AI “red-teaming,” the practice of stress-testing a system’s defenses to surface potential problems. It will also develop standards for watermarking AI-generated content.

While Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have committed to promoting the safe, secure, and transparent development and use of AI technology, the administration has been vague on exactly how it will use “safety test information” to protect consumers and on what the standards will look like. Will the federal government mandate a minimum safety standard before a private company can push a product forward? Will the federal government then act as a barrier to content it does not like? In practice, government review tends to slow innovation, the exact opposite of the EO’s intent. With this new attempt to balance safety with innovation, it is very possible the tech mentality of “move fast and break things” could slow down.

4. The executive order tries to address the future of the American workforce.

Biden’s executive action addresses the evolving workforce in two significant areas: legal immigration and labor. To fill roles and stay competitive, companies recruit overseas talent and have actively lobbied the federal government to ensure they have access to a global workforce. Through a proposed rule on H-1B visas, the administration hopes to make it easier to recruit and retain global talent. While this is a win for the corporations and organizations leveraging that talent, and for American competitiveness overall, it is also sure to drive part of the immigration conversation as we head into an election year in which immigration reform and illegal immigration will both be leveraged by political candidates.

Biden’s executive action also seeks to address the worry that AI will impact the workforce by displacing jobs and creating redundancies. Understanding and mitigating workforce impacts from AI are paramount for this administration and will feature prominently in many of the regulations that will be formulated by agencies as a result of the EO. The administration will want to push these protections as a major storyline over the next year, which will have notable implications for companies that are exploring how to leverage AI in their own environments. 

5. Congress and regulatory bodies still have work to do.

The administration specifically calls on Congress to pass bipartisan legislation to mitigate the risks posed by AI, especially to children and the vulnerable. This presents an opportunity for the private sector (which has been actively lobbying and, in many cases, teaching Congress about these tools and innovations) to help formulate standards that might eventually become national and global standards for consumer protection. Congress has shown an increased interest in this field, with Senate Majority Leader Schumer hosting his third AI Insight Forum on November 1.

6. The Biden AI Doctrine hopes to set a global standard amid the international debate.

The United States joins a handful of major nations that have put forward a comprehensive set of principles and foreign policy priorities for AI development and governance, and the field is growing crowded. Currently, the global drive for AI regulation is decentralized among Beijing, Brussels, London, and Washington. Among the G7’s new AI code of conduct, China’s Global AI Governance Initiative, the EU’s draft AI regulations, and the UK’s AI Safety Summit, Biden’s AI Doctrine stands out with a distinct approach. The EO is a clear signal of American interests and values in the emerging debates over ethics, safety, and standards. It both asserts American leadership in the emerging international conversation and lays down markers that the UN and other multilateral discussions will need to consider and react to. The closer the world moves toward shared global ideas and standards for AI and tech regulation, the less global companies will need to navigate a web of confusing and often conflicting regulatory frameworks.