<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">


The Biggest AI Dangers You Should Know



At one point during his recent Congressional hearing, OpenAI CEO Sam Altman said: “My worst fear is we cause significant harm to the world.”

Lawmakers and the other two experts in the hearing—IBM executive Christina Montgomery and Gary Marcus, a leading AI expert, academic, and entrepreneur—were in agreement.

During the hearing, they cited a common set of AI safety issues they’re losing sleep over, including:

  • Election misinformation, or generative AI’s ability to create fake text, images, video, and audio at scale, and to emotionally manipulate the people consuming that content, in order to influence the outcome of elections, including the upcoming 2024 U.S. presidential election.
  • Job disruption, or the possibility that AI will cause significant, rapid unemployment.
  • Copyright and licensing, or the fear that AI models are being trained on material that is legally owned by other parties and being used without their consent.
  • Generally harmful or dangerous content, or the possibility that generative AI systems create outputs that harm human users. This can happen in a variety of ways, such as hallucination, where generative AI makes up information and misleads users, or a lack of alignment, where a model is not well trained enough and gives users information they can use to harm themselves or others.
  • Overall fears about the pace and scale of AI innovation, as well as our ability to control it. Experts and lawmakers fear that, without proper guardrails, AI development could move so fast that we release potentially harmful technology that can’t be adequately controlled and/or, in more extreme scenarios, actually create machines far smarter than us and beyond our control (often broadly called “AGI”).

Which of these risks do we, and lawmakers, need to take seriously?

In Episode 48 of the Marketing AI Show, I spoke to Marketing AI Institute founder and CEO Paul Roetzer to find out. Here are the key takeaways:

  1. Congressional focus on near-term issues is welcome. There’s plenty of attention being paid to doomsday headlines about possible superhuman AGI. And it’s important to have top minds in AI thinking about existential threats. But there are a host of very near-term problems AI can cause that we need to focus on, says Roetzer.
  2. Job loss and election interference are the most immediate dangers. Both AI-driven job loss (especially among knowledge workers) and major AI-powered election interference are likely to be the biggest problems in the next 12 months, says Roetzer. He emphasizes these problems are here today. “There are no advancements in the technology needed for all of these things to happen.”
  3. It’s unrealistic to think companies will police themselves. Companies like OpenAI take some solid steps to ensure AI safety, like spending months working on alignment and red teaming (or trying to find flaws in systems) before they release products. But there’s a problem, says Roetzer. “They’re not incentivized to prevent this technology from getting into the world.” The handful of major companies driving AI innovation right now are financially rewarded when they release technology fast, even if it causes problems down the line. “The ethical concerns seem to be becoming secondary within some of these tech companies,” he says.
  4. And politicians have mixed motivations when it comes to regulation. On one hand, lawmakers are showing serious interest in having conversations about AI safety, which is great. But, on the other, they also have a clear interest in using the technology to win elections and enhance U.S. economic competitiveness. So they have conflicting motivations when it comes to sensibly regulating the technology.

The bottom line: There are very real near-term dangers we will experience due to AI, but no easy answers to deal with these dangers or concrete regulations to prevent them.

Don’t get left behind…

You can get ahead of AI-driven disruption—and fast—with Piloting AI for Marketers, a series of 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders looking to increase productivity and performance with artificial intelligence.

The course series contains 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a Professional Certificate upon completion.

After taking Piloting AI for Marketers, you’ll:

  1. Understand how to advance your career and transform your business with AI.
  2. Have 100+ use cases for AI in marketing—and learn how to identify and prioritize your own use cases.
  3. Discover 70+ AI vendors across different marketing categories that you can begin piloting today.

Learn More About Piloting AI for Marketers
