At the Marketing AI Institute, we read dozens of articles on artificial intelligence every week to uncover the most valuable ones for our subscribers, and we curate them for you here. We call it 3 Links in 3 Minutes. Enjoy!
The topic of ethics in artificial intelligence is not a new one. However, as AI quickly becomes more integrated into people’s lives, the need for guidelines has risen from a minor consideration to a serious concern. We’ve got all the recent ethics-related AI news you need here.
Privacy group calls on US government to adopt universal AI guidelines to protect safety, security and civil liberties (TechCrunch)
Last week, a set of 12 universal guidelines for the use of artificial intelligence was revealed at a meeting in Brussels. Designed to “inform and improve the design and use of AI,” the guidelines aim to maximize the benefits of AI while reducing its risks and protecting human rights.
Will they be adopted? EPIC, an American privacy group, sure hopes so. Marc Rotenberg, EPIC’s president and executive director, argues:
“By investing in AI systems that strive to meet the [universal] principles, the National Science Foundation can promote the development of systems that are accurate, transparent, and accountable from the outset.”
The guidelines already have support from more than 200 experts and 50 organizations. However, the U.S. government’s request for information on the subject is now closed, and it could be months before a decision is made.
Should a self-driving car kill the baby or the grandma? Depends on where you’re from. (MIT Technology Review)
Curious about your neighbor’s view on AI ethics? This report has some insight.
Last week, MIT Media Lab shared findings from its Moral Machine experiment. Since 2014, this project has been crowdsourcing people's opinions on how self-driving cars should 'react' in a deadly situation—which lives should be spared, and which sacrificed? For example, in an impending accident, should the car prioritize saving more lives over fewer, younger lives over older, or women over men? The result: over 40 million decisions from millions of participants in 233 countries and territories, making it one of the largest studies of its kind.
One of the main takeaways from the study is that countries’ preferences differ widely, but they also correlate highly with culture and economics. Here’s how:
- The sheer number of people in harm’s way wasn’t always the dominant factor in choosing which group should be spared.
- Countries with more individualistic cultures, such as the UK and the U.S., are more likely to spare the young and to spare the greater number of lives.
- Participants from collectivist cultures like China and Japan are less likely to spare the young over the old. The researchers hypothesized this could be because of their greater emphasis on respecting the elderly.
- Participants from poorer countries with weaker institutions are more tolerant of jaywalkers.
- Participants from countries with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status.
- Countries in close proximity to one another also showed similar moral preferences, with three dominant clusters in the West, East, and South.
“AI must respect human values” – Tim Cook, EU privacy speech (Computerworld)
Tim Cook spoke last week at the European Data Protection Conference in Brussels, themed “Debating Ethics: Dignity and Respect in Data-Driven Life.” Here are some of the most tweet-worthy quotes from his talk.
"Now, more than ever — as leaders of governments, as decision-makers in business, and as citizens — we must ask ourselves a fundamental question: What kind of world do we want to live in?”
"Fortunately, this year, you’ve shown the world that good policy and political will can come together to protect the rights of everyone. We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe, but around the world.”
"They may say to you, ‘our companies will never achieve technology’s true potential if they are constrained with privacy regulation.’ But this notion isn’t just wrong, it is destructive.”
"It’s time to face facts. We will never achieve technology’s true potential without the full faith and confidence of the people who use it.”
"At its core, this technology [AI] promises to learn from people individually to benefit us all. Yet advancing AI by collecting huge personal profiles is laziness, not efficiency. For Artificial Intelligence to be truly smart, it must respect human values, including privacy.”
"We can achieve both great Artificial Intelligence and great privacy standards. It’s not only a possibility, it is a responsibility.”
"In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence.”
More AI Ethics
Looking for more on this topic? We encourage you to subscribe to our newsletter for the latest on AI news, trends, and expert insights, as well as check out these ethics-related articles:
- Establishing an AI code of ethics will be harder than people think (MIT Technology Review)
- A starter guide to ethical marketing (Phrasee)
- Big data is reshaping humanity, says Yuval Noah Harari (Economist)
- SAP creates AI ethics guidelines and forms an advisory panel (Packt)
- Does AI Ethics Need to be More Inclusive? (Forbes)
- Ethics + Data Science (Medium - DJ Patil)
- Introducing Stanford's Human-Centered AI Initiative—A common goal for the brightest minds from Stanford and beyond: putting humanity at the center of AI. (Stanford)