

3 Horrifying Examples of AI and Machine Learning Gone Wrong



This content is republished with permission from Pandata, a Marketing AI Institute partner. 

When your company deploys AI, things can go wrong. Very, very wrong.

How wrong?

You could be on the hook for nightmares like:

  • Machines that discriminate against customers or employees.
  • Machines that violate consumer rights.
  • Or machines that make bad, costly decisions, but don’t tell you why those decisions were made.

The problem is, it’s not always easy for business leaders to understand how AI can go wrong, because they’re often not trained data scientists. They don’t know what they don’t know, so they can’t take steps to prevent AI and machine learning tools from causing damage to people and brands.

Not to mention, many of the ways AI goes wrong are unintended consequences of how the technology is implemented, not malice or wrongdoing.

We’re here to help.

At Pandata, our data scientists understand the unintended consequences that can result when you build, adopt, and deploy AI. We help our clients make AI and machine learning both approachable and ethical.

One great way to do that is to show you horrifying examples of how AI can go wrong—precisely so we can all make better decisions about how to make AI go right.

That’s because when businesses get AI right, everyone wins. Your customers get a better experience. You make more money and become more efficient. Your offerings improve over time. But when businesses get AI wrong, the opposite happens, and they pay the price.

1. AI That’s Racist

There are plenty of examples of AI accidentally learning to use racist language. The most notable is Microsoft’s chatbot Tay.

Tay was trained on conversations happening on Twitter, so that she could automatically post and converse on the platform. She started posting messages that were innocent enough. But things took a turn when Tay began to learn from the wrong types of Twitter conversations.

Tay’s creators didn’t anticipate the bot would learn from every conversation happening on Twitter, including the ones containing bigoted language about certain races. 

But that’s what she did.

In short order, Tay started posting racist content. Microsoft quickly shut down the experiment, but Tay became a cautionary tale in the process.

2. AI That’s Biased Against Women

AI can do real damage behind closed doors, too. That’s what happened when Amazon started using an AI-powered recruiting tool to vet new job candidates.

On paper, how the tool worked made perfect sense. It scanned the resumes submitted to Amazon over the previous 10 years. Then it tried to find patterns that would help identify the very best candidates at scale.

The only problem was that the resumes were heavily skewed towards men, which tracks with gender imbalances in the technology industry.

Because the data contained uncorrected bias, the AI system drew the wrong conclusion about job candidates: that the best hires were more often men than women.

As a result, the system began to penalize resumes from women. Amazon promptly shut down the system, but not before it became national news.

3. AI That Pretends to Be CEOs

Not all horrifying examples of AI gone wrong are mistakes. Certain AI technologies are now so cheap and powerful that any criminal with malicious intent can use them to wreak havoc on companies.

“Deepfake” is the term for super-realistic AI-generated video or audio of a person. It’s not real, but it looks and/or sounds a lot like a real individual. And you can make it say whatever you want. Deepfakes have been used to impersonate political figures and celebrities in malicious or humorous ways.

In at least one scenario, they’ve also been used to defraud a company.

Scammers used an audio deepfake system to trick an executive at an energy company into wiring hundreds of thousands of dollars to their account. They did it by using this AI technology to impersonate the CEO of the company. The deepfake CEO ordered his subordinate to make the transfer—and he did.

Interested in more AI horror stories? These examples came from a fantastic curated list of “awful AI” on GitHub.

Want to learn how to make AI go right?

Sign up for Marketing AI Institute's free live online course Intro to AI for Marketers.

In it, Marketing AI Institute founder and CEO Paul Roetzer will teach you exactly how to understand and get started with AI. During the class, you’ll learn:

  • What AI is, and why it matters to marketers.
  • How to identify AI use cases.
  • How to find and evaluate AI technology vendors.
  • How to classify AI applications within the five levels of the Marketer-to-Machine Scale™.
  • What business outcomes AI can help you achieve.
  • How to measure the value of AI tools on your company’s efficiency and performance.
  • How to prepare your team for piloting and scaling AI.

In just 30 minutes, you can lay the foundation to transform your career and business for the better using AI.

Space is limited! Click below to reserve your spot for the next class.
