

Responsible AI: Ethics, Innovation, and Lessons Learned from Big Tech



Companies large and small need to adopt AI without losing sight of the ethical implications of these solutions, or pay the price.

That’s the takeaway from Karen Hao, Senior AI Editor at MIT Technology Review, in her keynote at the Marketing AI Conference (MAICON) 2021.

In the talk, Hao offers practical advice to help companies design and develop AI responsibly and avoid unintended consequences…

PS - Have you heard about the world’s leading marketing AI conference? Click here to see the incredible programming planned for MAICON 2022.

 

Responsible Artificial Intelligence 101

AI can have unintended consequences, despite your best intentions, says Hao.

For instance, seemingly benign AI systems that generally work well may accidentally cause harm by infringing on privacy or miscategorizing people's identities.

Take the example of AI in healthcare…

An AI system might be able to detect cancer in a medical scan, but might do so by leveraging patient information that should stay private.

Or, the system might accidentally have been trained only on data from white patients, not Black patients, causing it to produce discriminatory results.
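To make the risk concrete, here's a minimal sketch in Python of the kind of demographic audit that can surface this imbalance before a model is ever trained. The `audit_demographics` helper and the toy records are hypothetical illustrations, not any real medical dataset or library:

```python
from collections import Counter

def audit_demographics(records, group_key="race"):
    """Report how often each demographic group appears in a training set.

    `records` is a list of dicts; `group_key` names the demographic field.
    Both are hypothetical stand-ins for whatever schema your data uses.
    """
    counts = Counter(r.get(group_key, "unknown") for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} records ({n / total:.1%})")
    return counts

# A toy training set heavily skewed toward one group -- exactly the kind
# of imbalance that can make a medical model discriminatory.
training_data = (
    [{"race": "white", "scan": "..."}] * 95
    + [{"race": "black", "scan": "..."}] * 5
)
audit_demographics(training_data)
# white: 95 records (95.0%)
# black: 5 records (5.0%)   <- a red flag to fix before training
```

Catching a 95/5 skew at the data stage is far cheaper than discovering discriminatory predictions in production.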

These consequences often stem not from intentional malice, but from engineering oversights in the AI systems themselves.

To avoid these types of consequences, says Hao, business leaders should look at the mistakes other major AI implementers like Google and Facebook have made as guides on what not to do.

Google, Facebook, and the Perils of AI

AI is good at processing massive amounts of information and surfacing the bits most relevant to the user. That’s why Google uses it to power almost every aspect of its search capabilities. At a fundamental level, it’s not wrong that Google uses AI.

But just because it’s not wrong, doesn’t mean it can’t go wrong.

What if Google’s AI systems are not retrieving accurate information? Or what if they’re retrieving only a subset of information, rather than the whole picture?

For example, Hao cites studies that showed Google’s search algorithm ended up associating negative terms with Black women, but not white women.

But be careful…

Just because Google’s AI didn’t go wrong on purpose doesn’t mean Google’s leaders can’t choose a very, very wrong response to the issue.

Hao cites the example of Google’s Ethical AI Team, which was charged with thinking about and conducting studies on problems related to AI. The moment this team started criticizing some aspects of Google’s language models, which are highly profitable pieces of Google’s advertising machine, management fired the leaders of the team.

The incident damaged the company, and called into question its commitment to responsible AI.

A similar scenario occurred at Facebook, which has become notorious for its irresponsible use of AI.

Facebook integrated AI into everything on its platform: populating feed content, tagging people in photos, powering its Messenger product, and much more. There’s probably not a single feature on Facebook that doesn’t have some sort of AI, says Hao, who has reported extensively on the company.

On paper, the goal of Facebook’s AI systems is simple: increase the amount of engagement from users, so they spend more time on the platform and connect with more people, which in turn makes Facebook’s ad business more profitable.

In practice, however, these AI systems caused many users to see (and spread) misinformation, discriminatory messages, and inflammatory content, all because these types of content were engaging.
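To see why, consider a minimal sketch of what ranking purely on predicted engagement does. The posts and scores below are made up for illustration; this is not Facebook's actual system:

```python
# Toy posts with made-up engagement predictions (hypothetical data).
posts = [
    {"text": "Local charity drive this weekend", "predicted_engagement": 0.12},
    {"text": "You won't BELIEVE what they're hiding!", "predicted_engagement": 0.87},
    {"text": "City council meeting notes", "predicted_engagement": 0.05},
]

# Rank purely by predicted engagement -- the objective "on paper".
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f"{post['predicted_engagement']:.2f}  {post['text']}")
# 0.87  You won't BELIEVE what they're hiding!
# 0.12  Local charity drive this weekend
# 0.05  City council meeting notes
```

Nothing in that objective penalizes misinformation or outrage bait, so the most inflammatory post rises straight to the top of the feed.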

In 2018, Facebook started a responsible AI team to fix the problem. The team determined the company’s technology was amplifying misinformation and polarizing users.

Yet, Facebook didn’t empower the team to do anything about the issue, because reducing engagement with content meant the platform wouldn’t be as popular or profitable.

As a result, the issue wasn’t fixed. And Facebook faced even more problems down the line after a whistleblower shed light on severe issues related to how the company’s algorithms work.

What Should Business Leaders Do?

So, what should you do about it?

Hao offers a few quick tips to help make sure your AI is responsible and to head off major issues for your company.

  1. Get started with AI now if you haven’t already. Starting now gives you a huge advantage: you can see how AI has clearly gone wrong in the last few years, which helps you avoid similar mistakes.
  2. Reward employees for asking tough questions. Praise or promote employees who ask hard questions about your proposed or current use of AI.
  3. Prioritize people over profit. You shouldn’t be afraid to modify problematic projects, or terminate them entirely, even if it costs money in the short term. AI must benefit humans if you want it to benefit your company, preserve the trust of your stakeholders, and avoid PR scandals.


 
