
Biden's Sweeping AI Executive Order: What You Need to Know


U.S. President Joe Biden has made big waves in AI policy by signing a sweeping executive order to guide the development and oversight of artificial intelligence in the U.S.

The order introduces new consumer protections around AI, requires AI companies to meet safety standards, and tasks federal agencies with regulating AI risks.

The order has many components, detailed in the fact sheet released by the White House. A few jumped out to us as particularly significant:

  • Companies developing powerful AI systems are required to share their safety test results with the U.S. government.
  • The Department of Commerce will be in charge of developing guidance for content authentication and watermarking to clearly label AI-generated content.
  • Best practices will be developed for fair use of AI in the criminal justice system.
  • Best practices will be developed to mitigate the harm AI causes to workers through job displacement.
  • The federal government is accelerating its own hiring of AI professionals.

Why It Matters

“It is a really big deal,” says Roetzer. While an executive order isn’t law, it does have significant meaning and can lead to real action and enforcement.

Connecting the Dots

On Episode 70 of The Marketing AI Show, Marketing AI Institute founder and CEO Paul Roetzer talked me through what to pay attention to in the executive order.

  1. It contains lots of promising ideas across a wide range of issues. It includes action on safety, privacy, studies on AI labor impact, and more.
  2. But it’s light on specifics. There isn’t much yet in it that tells us when any of this will happen or how it will be enforced, says Roetzer.
  3. And it’s all we’re going to get for a while. We have to be realistic about the current state of dysfunction in the U.S. Congress, says Roetzer. It’s unlikely Congress will pass any major AI legislation soon. He doesn’t see anything significant happening until at least 2025, after the presidential election.
  4. Consumer safety is logically a big focus. There are stipulations about putting stronger consumer protections around AI in place, which makes sense, says Roetzer. “It’s the thing they can control. Because they can’t create laws through this but can force government agencies to enact existing laws more aggressively.”
  5. Everything in it will take time. It’s going to take at least through 2024 to even get some of the items here off the ground, figure out how to govern some of these technologies, and get studies underway. “This is going to be a prolonged implementation,” says Roetzer.

What to Do About It

There’s not much to do about an executive order unless you’re planning to run for office. But the range of commentary around the executive order, and around AI regulation as a whole, is instructive for any business leader trying to develop more robust thinking about AI.

It’s important to understand who you’re listening to on core AI issues and why they say what they say, says Roetzer. There are a handful of broad groups to keep in mind as having significant voices in conversations about regulations:

  • Doomers. “These are the people that are winning most of the mainstream media headlines right now about AI and the existential risk of humanity,” says Roetzer. This is a loud minority of AI researchers who seriously believe AI could threaten humanity at large, and that we should generally slow down development of the technology until it can be built safely.
  • Accelerationists. “These are the people that believe that rapidly advancing technology leads to growth and abundance—period.” They accept that AI can have downsides, but believe that the faster you move and the more technology you create, the better off everyone is.
  • Big Tech. This is Google, Microsoft, OpenAI, and all the other leaders building the most powerful models right now. “They are the ones pushing for the regulation,” says Roetzer. But their motives aren’t all altruistic. They have significant financial motivation to maintain the lead they established in AI, and influencing regulations is a way to do that.
  • Rationalists. “This is the group of people who are focused on near-term challenges that are presented by the current models,” says Roetzer. Rationalists can be very pro-AI and pro-technological progress, but they often try to balance that with human interests and ethical considerations. They don’t just assume everything will work out the faster technology progresses. They’re less concerned with existential risks, though, focusing more on practical near-term risks like disinformation, synthetic media, and AI’s impact on jobs and education.

These are the lenses through which people view regulation, and they shape where each group is coming from when it tries to influence policy.

Of all the groups mentioned, says Roetzer, “I think you have to accept that the executive order is most likely largely influenced by Big Tech.”
