
4 Min Read

Ethical and Trustworthy AI: Lessons from the Front Lines [Video]



At the Marketing AI Institute, we’ve covered ethical AI in webinars, blog posts, MAICON sessions, AI Academy for Marketers lessons, and more. But are the right questions being asked? Is the necessary time and care really being put into the implementation of AI-powered technologies? We’ve talked and read about ethics in academic work, blog posts, and other content, but on the practical side, what we really need are examples of ethical and trustworthy AI implementation.

In her MAICON 2022 keynote, Gemma Galdón Clavell, Founder and CEO of Eticas Research & Consulting, shared lessons learned and practical guidance on how to incorporate good practices into AI development and avoid financial, legal, and reputational risks.

Her central message: rather than thinking about ethics only after AI is implemented, ethics needs to be top of mind as the technology is created and developed.

This keynote spurred some great conversations about AI, innovation, ethics, and the future.

Watch Gemma Galdón Clavell’s MAICON 2022 Keynote

 

Watch this full session plus all MAICON main stage keynotes, sessions, and panels with the MAICON 2022 On-Demand Bundle

Galdón Clavell opened her MAICON 2022 keynote as her walk-on song, “Every Breath You Take” by The Police, faded out. She talked about the memories the song brings back, and about how it hasn’t aged well: everywhere you go, every breath you take, everything you do, I’ll be watching you. Galdón Clavell drew an analogy to the work we’re doing today: “Some of the things we’re doing with data won’t age well, and some of the things we are doing with data will be seen as very creepy in a few years.”

Galdón Clavell shared three trends that are changing how we work with technology, the consequences of what we’re doing, and the reasons why things are not working the way they should.

Trend #1: An increase in people's distrust of how their data is used.
“We hate to love technology. We love it, but we hate how dependent we are on it. We know they do creepy things with our data, but we don’t know the alternatives. And we distrust those that manage our technology.”

Trend #2: Data is becoming a liability.
“We come from years when collecting data was everything. We were told there'll be money in data in the future. So let's just get as much as we can. We don't know what we want from it. But if we're going to have an interaction with a client, let's just ask them for their date of birth, their color preferences, their partner history, anything just in case this is valuable.

“Basically, everything is sensitive data. So I encourage you, and I advise you, to see data as a toxic chemical or as something that brings a lot of power, but that also has a lot of risks. Something that can unleash enormous possibilities but that can also harm you if it is mismanaged. 

“So the times of collecting everything and seeing what happens are gone. And the future belongs to those who understand that data is becoming a liability.”

Trend #3: There are painful cases of really badly implemented AI.
“Right now, there is more badly implemented AI than good AI. One of the good things about auditing systems is that you actually look at what people are working on. You're actually opening up the black box of what they're doing. And you realize that it's a mess. If you're entrusting your decision-making to an AI system, and the AI system is implemented using the wrong data, it comes up with bad decisions. And so the system you incorporated to make better decisions is actually harming you.”

Poorly implemented AI leads to:

  • Worse—and biased—decision-making
  • Organizational dissatisfaction
  • Reputational crisis

From her work with clients and partner organizations, Galdón Clavell sees five things that have contributed to where we are today:

  1. There is constant confusion between AI, technology, and data…and science fiction. It is really important to separate science fiction from the actual possibilities of the technology that we currently have. So a good understanding of what technology is and does is paramount. 
  2. We cherry-pick AI developments. We showcase the projects that work because they play to what AI does best: taking a stable stream of existing data that is easy to digitize and label. Then we make the jump and assume we can do the same with human behavior and human intentions, and that we can substitute everything with a computer prompt, when that is not the case.
  3. There is a misunderstanding of what AI is and what it does. We keep insisting that AI do what it is worst at. AI is great at going from a hundred million to a hundred: when you have so much data that people would have to spend many hours identifying trends and labeling it, AI can help, distilling a hundred million data points into a handful of patterns you can work with. But going from a hundred to one unique human? That becomes too complex.
  4. We don’t understand the environmental impact of AI. If we are taking cars off the road because of how much they pollute, at some point we’ll have to start discussing whether the environmental costs of some data management and processing are acceptable. We may find that in a few years some data processing systems are outlawed because of their environmental cost.
  5. We are living through a paradigm shift. What we’ve been doing with data in the past will not be possible anymore. We are getting better at protecting people, so business models built on the old approach are no longer sustainable. The organizations that start to incorporate accessibility and trust into their data systems, that understand what data can and cannot do, and that treat people not as cheap sources of personal data but as assets will pave the future and be the most successful.

We need to decide: Are we part of the past, or do we want to join Galdón Clavell and the many leaders who care about this issue and want to shape a future built on powerful, ethical, and trustworthy AI?

Become a next-gen marketer by checking out the resources at the Marketing AI Institute. Read our blog posts, take our Intro to AI for Marketers class, attend webinars, join our community, download our free reports, guides, and templates, read Marketing Artificial Intelligence, explore AI Academy for Marketers and the Piloting AI Bundle, and join us at our annual Marketing AI Conference (MAICON).

Get access to all MAICON main stage keynotes, sessions, and panels with the MAICON 2022 On-Demand Bundle
