Geoffrey Hinton, a pioneer of deep learning and a VP and engineering fellow at Google, is leaving the company after 10 years due to new fears he has about the technology he helped develop.
Hinton, who has been dubbed a “godfather of AI,” says he wants to speak openly about his concerns, and that part of him now regrets his life’s work.
Hinton told MIT Technology Review:
“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future. How do we survive that?”
He worries that extremely powerful AI will be misused by bad actors, especially in elections and war scenarios, to cause harm to humans. He’s also concerned that once AI is able to string together different tasks and actions (as we’re seeing with AutoGPT), intelligent machines could take harmful actions on their own.
This isn’t necessarily an attack on Google specifically. Hinton said he has plenty of good things to say about the company. But he wants “to talk about AI safety issues without having to worry about how it interacts with Google’s business.”
On Episode 46 of the Marketing AI Show, Marketing AI Institute founder and CEO Paul Roetzer broke down what you need to know about this important development...
- Hinton’s concerns should be taken seriously. Despite having an extreme view on the risks posed by increasingly advanced AI, Hinton is a key player in AI research. He has a legitimate perspective on the field that it’s important to pay attention to. Even if you don’t agree with his overall premise, he highlights a major issue in AI. “A greater need, a greater focus, on ethics and safety is critical,” says Roetzer.
- But he’s not the first—or the only one—to raise these concerns. Researchers like Margaret Mitchell and Timnit Gebru also raised safety concerns at Google in the past, says Roetzer. Unfortunately, their concerns weren’t heard by the company at the time. They were both fired from Google.
- And not every AI researcher shares those concerns. Plenty of other AI leaders disagree with Hinton. Some share his concerns about safety but don’t go so far as to believe AI could become an existential threat. Others, like Yann LeCun, strongly disagree with Hinton that increasingly advanced AI will be a threat to humanity.
- Yet Hinton is not calling for a stop to AI development. Hinton has said publicly that he believes AI has “so much potential benefit” that it should continue to be developed—safely. “He just wants to put more time and energy into ensuring safety,” says Roetzer.
The bottom line: Hinton is an important voice to pay attention to when it comes to AI safety—and one more voice in a growing chorus of researchers raising concerns.
Don’t get left behind…
You can get ahead of AI-driven disruption—and fast—with our Piloting AI for Marketers course series: 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders to increase productivity and performance with artificial intelligence.
The course series contains 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a Professional Certificate upon completion.
After taking Piloting AI for Marketers, you’ll:
- Understand how to advance your career and transform your business with AI.
- Have 100+ use cases for AI in marketing—and learn how to identify and prioritize your own use cases.
- Discover 70+ AI vendors across different marketing categories that you can begin piloting today.
As Chief Content Officer, Mike Kaput uses content marketing, marketing strategy, and marketing technology to grow and scale traffic, leads, and revenue for Marketing AI Institute. Mike is the co-author of Marketing Artificial Intelligence: AI, Marketing and the Future of Business (Matt Holt Books, 2022). See Mike's full bio.