How To Create a Digital Clone of Your Voice in Seconds Using AI

Written by Ashley Sams | Jun 6, 2018 2:54:00 PM

At the Marketing AI Institute, we read dozens of articles on artificial intelligence every week to uncover the most valuable ones for our subscribers (become a subscriber today), and we curate them for you here. We call it 3 Links in 3 Minutes. Enjoy!

Great, Now You Can Use AI to Simulate Anyone’s Voice Saying Anything You Want

Lyrebird uses artificial intelligence and natural language processing to create a shockingly accurate digital clone of your voice, which you can then make say anything.

Bloomberg Businessweek journalist Ashlee Vance traveled to Montreal, Canada, to meet the four young computer scientists behind the new AI startup.

He found that in order for it to work, you first have to record yourself talking for a few moments to create a baseline. Vance records himself reading a handful of sentences supplied by Lyrebird, and the system then takes about one minute to create his digital voice using natural language generation.

From there, you can type in anything you want your digital voice to say—even sentences that contain words you didn’t speak aloud.
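For the technically curious, the workflow is easy to picture in code. The sketch below is entirely hypothetical: `VoiceCloneClient` and its methods are invented stand-ins for illustration, not Lyrebird’s actual API. It only mirrors the three steps Vance walks through (record a baseline, build the voice, synthesize new speech).

```python
# Hypothetical sketch of the voice-cloning workflow described above.
# VoiceCloneClient and all of its methods are invented for illustration;
# they are NOT Lyrebird's real API.

class VoiceCloneClient:
    """Stand-in client mirroring the three-step flow."""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self.voice_id = None

    def upload_baseline(self, recordings: list[str]) -> None:
        # Step 1: submit a few recorded sentences to establish a baseline.
        print(f"Uploaded {len(recordings)} baseline recordings")

    def build_voice(self) -> str:
        # Step 2: the service takes about a minute to model the voice.
        self.voice_id = "voice-123"  # placeholder ID
        return self.voice_id

    def synthesize(self, text: str) -> bytes:
        # Step 3: generate speech for arbitrary text, including words
        # that never appeared in the baseline recordings.
        print(f"Synthesizing with {self.voice_id}: {text!r}")
        return b"<audio bytes>"


client = VoiceCloneClient(api_key="demo-key")
client.upload_baseline(["sentence_01.wav", "sentence_02.wav", "sentence_03.wav"])
client.build_voice()
audio = client.synthesize("Hi Mom, it's me. Well, sort of.")
```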

Vance tests the accuracy of the AI by calling his mother. When he explains that she had been talking to a machine, she exclaims, “I thought I was talking to you, that’s crazy!”

Vance and the Lyrebird team touch briefly on the dangers of this technology, like using it to fake the voice of a public figure such as Donald Trump.

Lyrebird co-founder Jose Sotelo explains that while the technology could be used for harm, the team wants it used for good. He adds that there is no real way to stop people from misusing it, but they want to show the world that this technology exists so that people are aware of what is possible.

Meet Norman, the World’s First AI Psychopath

According to Fox News, a team of MIT scientists was interested in what would happen if they trained an AI using only the dark corners of the web. The result? Norman, the world’s first psychopath AI.

Norman was trained on images pulled from a Reddit forum devoted largely to people dying in horrible circumstances. Now, Norman sees only death and destruction.

When shown an image of birds in a tree, Norman sees them being electrocuted. When shown an image of people looking out a window, Norman sees one of them jumping.

The scientists trained a similar AI using images of cats, birds, and people. The results were much more positive, demonstrating the harsh reality of biases in machine learning. Professor Iyad Rahwan, part of the three-person team at MIT that developed Norman, explains:

“Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
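Rahwan’s point is easy to demonstrate: run the exact same code on two different training sets and watch it “perceive” the same scene differently. Here is a minimal Python sketch using a toy retrieval-based captioner; the captions are invented examples, not MIT’s actual training data.

```python
# Toy demonstration that training data, not the algorithm, drives behavior.
# The captioner code is identical in both calls; only the corpus differs.
# All captions below are invented examples, not MIT's actual dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

GRIM = [  # stands in for the dark-corners-of-the-web corpus
    "a man falls to his death from a window",
    "birds are electrocuted on a power line",
    "a person is swept away in a flood",
]
BENIGN = [  # stands in for the cats, birds, and people corpus
    "a man looks out of an open window",
    "birds are perched on a power line",
    "a person wades in a shallow river",
]

def caption(scene: str, corpus: list[str]) -> str:
    """Return the training caption most similar to the scene description."""
    vec = TfidfVectorizer().fit(corpus + [scene])
    sims = cosine_similarity(vec.transform([scene]), vec.transform(corpus))
    return corpus[sims.argmax()]

scene = "birds sitting in a tree near power lines"
print("Norman-style:", caption(scene, GRIM))    # sees electrocution
print("Control:     ", caption(scene, BENIGN))  # sees perched birds
```

Both calls run identical code; only the data changes, and with it the machine’s entire worldview.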

How to Reduce Biases in AI-Powered Chatbots

AI-powered chatbots are being used more frequently across many industries. However, as VentureBeat points out, they learn the way a child does: by example. That makes AI solutions susceptible to social biases.

VentureBeat offers some recommendations for minimizing bias in AI applications.

First, be thoughtful about your data strategy. Make sure your training data covers diverse demographics so the chatbot performs accurately when introduced to the public (a sketch of a simple data audit follows this list).

Second, encourage a representative set of users. Design your AI with inclusivity in mind so you can tune its content, user experience, marketing, and core capabilities to the full range of people the chatbot will serve.

Lastly, build a diverse development team. A well-rounded, diverse team is less likely to introduce biases in the first place.
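To make the first recommendation concrete, here is a minimal sketch of what a training-data audit might look like. The `dialect` field, the sample utterances, and the 25% threshold are all illustrative assumptions, not prescriptions from VentureBeat.

```python
# Minimal sketch of a demographic audit for chatbot training data.
# The field name, sample records, and threshold are illustrative assumptions.
from collections import Counter

# Each record pairs a training utterance with a speaker demographic
# (invented toy data).
training_data = [
    {"text": "What's my balance?", "dialect": "US-South"},
    {"text": "Can I get a refund?", "dialect": "US-Midwest"},
    {"text": "Where's my order at?", "dialect": "AAVE"},
    {"text": "I want to close my account.", "dialect": "US-Midwest"},
    {"text": "How do I reset my password?", "dialect": "US-Midwest"},
]

def audit(records, field: str, min_share: float = 0.25):
    """Flag demographic groups below a minimum share of the data."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{group:12s} {n:3d}  ({share:.0%}){flag}")

audit(training_data, "dialect")
```

Running the audit on this toy set flags US-South and AAVE speakers as underrepresented, exactly the kind of gap you would want to close before the chatbot meets the public.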

Photo by freestocks.org on Unsplash.