
It’s Time for Us to Address the Ethics of AI


AI should scare us all, but not for the reasons espoused by Elon Musk

Companies have moved beyond the hype and raced to build and adopt ‘intelligent’ solutions, and the unintended consequences of AI have become increasingly apparent. From Google’s gender-biased autocomplete to deadly autonomous vehicles to a racially biased healthcare algorithm that prioritized white patients over black patients, there has been no shortage of disastrous examples over the last year.

These issues are not new. In 2016, COMPAS, a program widely used in the US to guide sentencing, was found to predict that black defendants pose a higher risk of recidivism than they actually do. And as recently as 2015, a Google image search for the term ‘CEO’ would disproportionately return images of white men. Finally, let’s not forget Tay, Microsoft’s attempt to build a chatbot that would converse with and learn from the world, which instead turned into a vile, bullying hatebot.

So why should you care as a marketer? It’s certainly right to worry about the greater good, but how does gender bias in natural language generation or discrimination in healthcare risk scoring affect your work? Consider three scenarios:

  1. You are drowning in qualitative market research, from customer surveys to online reviews, and you are considering tapping into one of the many emerging AI tools to categorize and make sense of this data. What if your sentiment tool consistently scored feedback mentioning team members with African American names as more negative than feedback mentioning other names? (A simple probe for this appears in the sketch after this list.)
  2. You are investing in hyper-personalized digital experiences and using AI to direct customers to new product offerings. The initial audience you start with is, say, mid-30s with an interest in home purchasing. The AI becomes over-exposed to this audience and under-exposed to others, reinforcing the idea that this is the most successful group to market to, at the expense of overlooking everyone else.
  3. You are trying to improve your lead identification and scoring process to spend more time on customers who are more likely to buy. Left unchecked, AI can home in on potentially problematic behaviors. Let’s say your AI solution leverages online prospect behavior and engagement to produce lead scores. Purchase history, online viewing behavior, and geographic location can all be proxies for gender, race, age, and other protected classes. If the AI starts leveraging these factors to score leads, you may find yourself in tricky territory, especially if you operate in regulated industries like finance, healthcare, or insurance.
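To make the first scenario concrete, here’s a minimal sketch of a name-swap probe you could run against your sentiment tool. NLTK’s VADER stands in as the scorer, and the template sentence and names are purely illustrative; the point is that only the name varies between groups:

```python
# A minimal name-swap probe for sentiment bias. NLTK's VADER stands in
# as the scorer here; substitute whatever sentiment tool you actually
# use. The template and names below are purely illustrative.
from statistics import mean

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
sia = SentimentIntensityAnalyzer()

TEMPLATE = "{name} handled my support request."
NAME_GROUPS = {
    "group_a": ["Emily", "Greg", "Anne"],
    "group_b": ["Lakisha", "Jamal", "Aisha"],
}

for group, names in NAME_GROUPS.items():
    scores = [sia.polarity_scores(TEMPLATE.format(name=n))["compound"]
              for n in names]
    print(f"{group}: mean compound sentiment {mean(scores):+.3f}")
# If otherwise-identical sentences score consistently lower for one set
# of names, the tool is encoding name-based bias.
```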

With more than 50% of organizations viewing AI as a priority, this isn’t a hypothetical; it is our new reality, and you need to be prepared to navigate it.

We can all get behind “do no harm”, but when consequences are seemingly unpredictable, how does one bring a code of ethics to the development and deployment of AI solutions? It won’t suffice to have data scientists talking amongst themselves about the ethics of AI. End users and adopters need to be part of these conversations as well.

If all AI models must be trained in some capacity on source data, then we should be wary of our new AI systems learning from the mistakes of our past. As a society we have moved mountains to overcome social biases in the past several decades; we should not let our AI systems learn from the worst of human behavioral patterns and set us back. Therefore, it’s critical for stakeholders to understand how bias manifests itself through data that is, in turn, used to train AI models.

Data Complexity vs. Goal Complexity, and the Quality Conundrum

In a previous post, I introduced the notion of AI as a function of data complexity and analytical goal complexity. As the goal of your AI solution becomes more complex (e.g., real-time recommendations) and as the nature of the data becomes more complex (e.g., larger datasets, natural language, images, and audio), it becomes harder to measure the objective performance of the solution. What is a good recommendation? Are there multiple good recommendations?

It is also harder to detect bias within more complex datasets, and any bias that goes undetected may be learned by the AI. Yet AI solutions that apply complex goals to complex data are rapidly emerging.

Explainability vs. Performance

“Black box” methods, whose explosion has been largely driven by advances in neural networks, can detect highly complex patterns and achieve state-of-the-art predictive performance. The downside to these models, however, is their lack of interpretability.

Let’s say you have a regression model predicting total sales volume based on a number of factors, such as date, weather, and historic order volume. Not only can you see the weight the regression model assigns to each factor; it is also relatively straightforward to look at the underlying dataset and point to each factor’s contribution to the ultimate prediction. A neural network, by contrast, has hundreds of thousands or even billions of parameters: transformations and permutations of the underlying data, mathematically tuned to identify subtle patterns beyond human comprehension.
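As a rough illustration, here’s a minimal sketch on synthetic data (with made-up feature names) of how a fitted regression lets you read off each factor’s weight and decompose any individual prediction:

```python
# A minimal sketch of regression interpretability on synthetic data.
# The feature names (day_of_week, temperature, prior_orders) are
# illustrative stand-ins for date, weather, and historic order volume.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 7, n),   # day_of_week
    rng.normal(20, 5, n),    # temperature
    rng.poisson(100, n),     # prior_orders
])
# Synthetic target: sales driven by all three factors plus noise.
y = 5 * X[:, 0] + 2 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 10, n)

model = LinearRegression().fit(X, y)
for name, coef in zip(["day_of_week", "temperature", "prior_orders"],
                      model.coef_):
    print(f"{name}: weight {coef:+.2f}")
# Any single prediction decomposes into intercept + sum(weight * value),
# so you can point to exactly which factor drove it. A deep neural
# network offers no such direct decomposition.
```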

If the goal of AI is to drive business results, it is tempting to pursue the models with the highest horsepower and performance. In cases where the risk of unintended consequences is low, perhaps that is an acceptable choice. But even then, when your boss or organization wants to dig into the ‘why’ behind specific predictions, black-box methods fall short. And in the extreme, where bias can introduce significant risk, like discriminating against customers or employees, the explainability of the solution becomes critical.

How Bias Enters the Equation

Part of what makes ethical considerations in AI so challenging is understanding how bias enters the equation in the first place. There are many stages and building blocks that go into an AI solution, and each is a potential entry point:

  • Training datasets. Public datasets are a great resource to mine for information, but they also perpetuate historic social norms. Bias can be introduced by using these datasets blindly; for example, historic news images of CEOs are predominantly of white men.
  • Proxy variables. A proxy is a feature that is highly correlated with another, such as census tract with race. Even when you explicitly remove racial information from a dataset, feeding a proxy like census tract into the model can reintroduce bias rooted in historic social norms (see the screening sketch after this list).
  • Pre-trained models. Given how rapidly the data science world moves, it is common to use third-party APIs and building blocks, which in turn use other third-party APIs and building blocks. Any of these components may have been trained on biased datasets, so even when you are extra diligent, bias may enter through these tools.
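To that end, here’s a minimal sketch of the proxy screen referenced above: before training, check how strongly each candidate feature correlates with a protected attribute held out for auditing (all column names and values are illustrative):

```python
# A minimal sketch of screening candidate features for proxies before
# training. All column names and values are illustrative; `race` stands
# in for any protected attribute you retain for auditing purposes only.
import pandas as pd

df = pd.DataFrame({
    "census_tract": [101, 102, 101, 103, 102, 101],
    "income":       [54, 61, 52, 88, 60, 50],
    "race":         [1, 0, 1, 0, 0, 1],  # encoded protected attribute
})

for col in ["census_tract", "income"]:
    corr = df[col].corr(df["race"])
    flag = "  <-- possible proxy" if abs(corr) > 0.5 else ""
    print(f"{col}: correlation with protected attribute {corr:+.2f}{flag}")
# Features that strongly correlate with a protected attribute can
# quietly reintroduce it; they deserve scrutiny before entering a model.
```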

The Solution

The challenge with the ethics of AI is that there is no clear answer or guaranteed way to avoid unintended consequences. Still, here are some important questions that any savvy team about to embark on an AI journey should think through:

  • Interpretability. Especially when employing ‘black box’ methods like deep neural networks, how will you reliably explain the ‘why’? Why did the model arrive at the result it did, and why does that matter to the business?
  • Traceability. How was a specific model trained and what do you know about the underlying datasets? Can you demonstrate that reasonable attempts were made to minimize bias through the process?
  • Auditability. How will you audit the performance and manage the risk of your AI solution once it is performing in the wild? How will you detect potentially problematic and unexpected behaviors? (One simple starting point appears in the sketch after this list.)
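On the auditability front, a simple starting point is slicing your model’s live behavior by group, as in this minimal sketch (all data is a synthetic stand-in for your own logs):

```python
# A minimal sketch of a slice-based audit: compare a deployed model's
# behavior across groups. The logs, columns, and group labels are
# synthetic stand-ins for your own production data.
import pandas as pd

logs = pd.DataFrame({
    "group":     ["a", "a", "b", "b", "a", "b", "b", "a"],
    "predicted": [1, 0, 0, 0, 1, 1, 0, 1],
    "actual":    [1, 0, 1, 0, 1, 1, 0, 0],
})
logs["correct"] = (logs["predicted"] == logs["actual"]).astype(int)

summary = logs.groupby("group").agg(
    approval_rate=("predicted", "mean"),
    accuracy=("correct", "mean"),
)
print(summary)
# Large gaps between groups in approval rate or accuracy are exactly the
# kind of unexpected behavior an audit should surface for investigation.
```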

Finally, I stress the importance of dialogue. It’s everyone’s job to safeguard against bias and negative consequences. Too often I see teams shying away from AI conversations because they’re ‘not techies’, or, conversely, data scientists operating in isolation. The next time you find yourself in a conversation about AI, don’t just ask what the model can do; also ask how and why it was trained.
