What Is Bias in AI—and How Do You Prevent It?


Can artificial intelligence be biased?

You bet.

Despite AI's obvious benefits, it can end up harming consumers, brands, and industries through bias.

What do we mean when we say AI can be biased? What is bias in AI?

Put simply, bias in AI is when an AI system produces systematically skewed or unfair outputs that its creators neither expected nor intended.

Bias happens for two big reasons:

  • Human blind spots. Humans inject their own biases, consciously or unconsciously, into the data used to train AI or into the design of the AI system itself. These can include direct or indirect discrimination based on age, gender, sex, race, or other characteristics.
  • Incomplete data. Data used to train AI can also create bias when it's incomplete. AI is only as good as its training data, so if that data lacks adequate diversity or comprehensiveness, the gaps cause issues (a toy example is sketched just after this list).
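
To make the incomplete-data problem concrete, here is a minimal, hypothetical sketch: a toy model trained almost entirely on one group quietly learns that group's pattern and gets things wrong for the group it rarely saw. The groups, features, and numbers are invented for illustration, not taken from any real system.

```python
# Hypothetical illustration only: groups, features, and boundaries are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, boundary):
    """Simulate one group: a single feature with a group-specific decision boundary."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > boundary).astype(int)
    return x, y

# Training data: 1,000 examples from group A, only 20 from group B.
xa, ya = make_group(1000, boundary=0.0)
xb, yb = make_group(20, boundary=1.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# The model effectively learns group A's pattern and misses group B's.
xa_test, ya_test = make_group(5000, boundary=0.0)
xb_test, yb_test = make_group(5000, boundary=1.0)
print("accuracy for group A:", accuracy_score(ya_test, model.predict(xa_test)))
print("accuracy for group B:", accuracy_score(yb_test, model.predict(xb_test)))
```

Nothing in that code is malicious. The skew comes entirely from the gap in the training data, which is exactly why it's so easy to miss.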

Why can bias in AI be harmful? 

First is the obvious harm: AI can promote blatant bigotry.

An example of this is Tay, a chatbot Microsoft created in 2016 to post to Twitter.

The bot was designed to learn how to tweet like a person from conversations on Twitter. Unfortunately, much of the language used to train it was profane and bigoted.

As a result, the bot tweeted seriously inappropriate language, and Microsoft shut the experiment down quickly.

AI can't invent bigotry on its own. It learns it from humans who display it in the datasets used for training.

If Microsoft hadn't trained its bot on a dataset that included bigoted language, it wouldn't have been bigoted. It was a mistake, not a malicious action. The company didn't anticipate the consequences of using all of Twitter as a dataset.

Yet, it still harmed people and the company's image. The result was the same as if the company had intentionally programmed the bot to be biased.

That's why bias in AI is so dangerous.

Second is the more common, but less obvious, harm: AI can become unintentionally biased due to incomplete data.

An example of this is the Apple Card.

In 2019, Apple released a credit card product. The AI behind it automatically assigned each applicant a credit line based on many characteristics, like spending, credit score, and earnings.

However, Apple took a massive amount of flak when it turned out that its AI gave women smaller credit lines than men, even when controlling for other factors.

It happened because the AI system was using incomplete data, which didn't account for a range of gender-related differences in income and pay.

As a result, it concluded that women deserved less credit than men, even when their financials were otherwise equal.
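
One way to surface that kind of problem before launch is a matched-pair check: score two applications that are identical on every financial field and differ only in a single sensitive (or proxy) attribute, then compare the outputs. The sketch below is hypothetical; dummy_model and the field names are placeholders for whatever scoring system you actually use, not a description of Apple's.

```python
def matched_pair_gap(score, application, field, value_a, value_b):
    """Difference in model output when only one attribute changes."""
    return score({**application, field: value_a}) - score({**application, field: value_b})

# A deliberately biased stand-in model, just to show what the check would catch.
def dummy_model(app):
    base = 0.2 * app["income"] + 50 * app["credit_score"]
    return base * (0.8 if app["inferred_gender"] == "F" else 1.0)

applicant = {"income": 90_000, "credit_score": 760, "inferred_gender": None}
gap = matched_pair_gap(dummy_model, applicant, "inferred_gender", "F", "M")
print(f"Credit-line gap for otherwise identical applicants: {gap:,.0f}")
```

A meaningful gap on otherwise identical applications is exactly the kind of result that should stop a launch.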

So, how do you address bias in AI?

Once you've built a product or system, it's usually too late.

You need to address bias at every step of the process that leads to the adoption of AI in products and operations.

The examples of Microsoft and Apple prove this. Both companies are adept at AI. Both have world-class engineering talent. Yet both were still caught off guard by bias in AI, and by the time they discovered it, it was too late.

The technology was sound. But the bias considerations were not.

That's because fixing bias in AI isn't just a technology problem.

Sure, you need to be completely confident your data is comprehensive, accurate, and clean.

But you also need to have people and processes in place across every business function, not just engineering, to assess bias risks. It's a holistic effort, and it takes time.
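
Part of that effort can be a simple, repeatable outcome audit that people outside engineering can read and question: compare the system's decisions across groups and flag large gaps for human review. Here is a minimal sketch under assumed group names and an arbitrary threshold; it's an illustration of the idea, not a complete fairness methodology.

```python
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    """records: dicts with a group label and a 0/1 outcome (e.g. approved or not)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def flag_gap(rates, threshold=0.05):
    """Return the largest gap between groups and whether it needs human review."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > threshold

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = outcome_rates_by_group(decisions, "group", "approved")
print(rates, flag_gap(rates))
```

The specific metric matters less than the process: someone owns running the check, and a flagged gap triggers review before the system ships.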

An excellent place to start is to draft an AI ethics policy for your organization, whether you build AI technology or use it in your work.

An AI ethics policy is a formal document, posted publicly, that outlines your company's position on AI.

It provides specifics on how your company will and won't use AI.

And it details what steps you take (or will take) to make sure ethical issues and bias don't affect AI you build or use.
