

What Elon Musk, Nick Bostrom and Demis Hassabis Think About Better-Than-Human Artificial Intelligence



When people like entrepreneur Elon Musk (@elonmusk), academic and AI commentator Nick Bostrom and Demis Hassabis (@demishassabis), head of Google’s DeepMind AI lab, get in a room together, you pay attention.

Musk, Bostrom, Hassabis and six other top minds in the field of AI gathered on-stage at the Beneficial AI 2017 Conference on January 7 to discuss the potential and peril of cognitive machines. The conference was hosted by the Future of Life Institute, an organization created by top academics and businesspeople (including Jaan Tallinn, a cofounder of Skype) to safeguard life in a world where technology develops at a breakneck pace.

The conference covered topics like how law will be affected by AI and the possibility of human-level artificial intelligence. As part of those discussions, a panel was conducted on the topic of “superintelligence,” or an intelligence “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills,” according to Bostrom, who studies the subject.

Superintelligence is the next stage of AI evolution after what’s called artificial general intelligence, or AGI. AGI is artificial intelligence that is, across the board, as intelligent as or more intelligent than a human being. Right now, neither AGI nor superintelligence exists. All we have is artificial narrow intelligence (ANI): systems that are better than humans in one or several areas, but not in every area.

Learn more about key artificial intelligence definitions here.

Even though we don’t have AGI or superintelligence, these are serious discussion topics among AI experts, some of whom read this blog. While it can become a deep and philosophical conversation—an AI thousands or millions of times smarter than people—it’s a real concern for some, including members of the Beneficial AI 2017 Conference panel.

Given the topic’s value—and its current and future implications for real-world AI applications used by marketers, executives and entrepreneurs—we wanted to see what we could learn from the panel and have extracted some of the most valuable takeaways below. You can watch the full video of the panel here.

Superintelligence: Science or Fiction?

The Beneficial AI 2017 Conference panel on superintelligence included the following top names in AI and computer science:

  • Elon Musk (@elonmusk) — A founder of PayPal, SpaceX and Tesla. Also associated with OpenAI, a nonprofit that aims to develop artificial intelligence responsibly.
  • Nick Bostrom — Author of Superintelligence: Paths, Dangers, Strategies. Professor at Oxford University.
  • Demis Hassabis (@demishassabis) — Cofounder of DeepMind, an AI company acquired by Google, where Hassabis now works.
  • Jaan Tallinn — Physicist and cofounder of Skype. One of the founders of the Future of Life Institute.
  • Ray Kurzweil — A Director of Engineering at Google working on machine learning and AI, after a long career building technology companies and publishing books on intelligent machines (including The Singularity Is Near).
  • Sam Harris (@SamHarrisOrg) — Author and commentator with degrees in philosophy and neuroscience.
  • David Chalmers — Philosopher and co-director of the Center for Mind, Brain and Consciousness at New York University.
  • Bart Selman — Computer science professor at Cornell University.
  • Stuart Russell — Computer science professor at University of California, Berkeley.

 

Together, the panelists discussed questions about the likelihood of superintelligence. Is it even possible given the laws of physics? If it’s possible, when is it coming? What happens if or when it does?

When surveyed about the likelihood of superintelligent AI, all respondents, tellingly, deemed it a possibility. This alone is worth considering. The viewpoint rests on several assumptions:

  1. That what we call human intelligence is an engineering problem, not a philosophical one. In other words, given sufficient time and processing power, intelligence comparable to that of our species could develop outside of humankind.
  2. That there is no physical limit to the development of available processing power. Given the right technology and a long-enough timeline, processing power will continue to grow.
  3. If 1 and 2 are true, then a superintelligent machine can come about given enough time. Once it reaches a certain intelligence threshold (at or somewhat above human-level intelligence), it has the potential to quickly improve itself—thus becoming exponentially smarter in a short span of time.
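
To see why the third assumption leads to talk of machines becoming exponentially smarter in a short span of time, here is a minimal toy sketch in Python. The numbers—a 10 percent capability gain per self-improvement cycle, 50 cycles, “1.0 = human-level”—are made up for illustration and were not proposed by the panel; the point is only that steady, compounding self-improvement produces exponential growth.

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# Assumes a system at human-level capability that improves itself by a
# fixed fraction each cycle; real systems need not behave this way.

capability = 1.0        # 1.0 = human-level, on an arbitrary scale
gain_per_cycle = 0.10   # hypothetical 10% self-improvement per cycle
cycles = 50             # hypothetical number of improvement cycles

for cycle in range(1, cycles + 1):
    capability *= 1 + gain_per_cycle
    if cycle % 10 == 0:
        print(f"cycle {cycle:2d}: {capability:6.1f}x human-level")

# With these made-up numbers, capability passes 100x human-level within
# 50 cycles; if each cycle took hours rather than years, the "days, not
# years" scenario discussed on the panel would follow directly.
```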

 

The result is potentially a superintelligence with vast power to reshape our entire planet—or, as some theorize, to put humanity in existential danger. Whatever the outlook, these nine minds on-stage firmly believe there is no reason superintelligence shouldn’t be possible.

This is why initiatives like OpenAI and the Beneficial AI 2017 Conference exist: if superintelligence is a question of when, not if, preparations to handle its development must begin now.

Preparing for Superintelligence

As noted on the panel, just because superintelligence is possible does not mean it will happen. As with preparing for a catastrophic asteroid strike on Earth, the risk may be incredibly small, but the consequences are so dire that the possibility merits serious thought. Panelists were asked “Will it actually happen?” and given the choices of “yes,” “no” and “it’s complicated.”

Every panelist answered “yes” without hesitation, except a clearly joking Musk and Bostrom, who said “probably.” Harris added one caveat: the only way he could see it not happening is if a catastrophic or world-changing event prevented the rise of superintelligence.

On the question of when superintelligence might occur after achieving human-level AI, the panel disagreed.

Several panelists thought a number of years would pass between human-level AI and superintelligence. Others thought the leap could occur within a matter of days of AI reaching human-comparable capabilities.

Selman, however, made the important point that AI is not one technology, but a suite of related and connected disciplines: “I think we’ll go beyond human-level capabilities in a number of areas, but not in all at the same time,” he noted. “It will be an uneven process.”

The debate over superintelligence highlights two important truths for marketers, executives and entrepreneurs:

  1. AI’s capacity for self-learning is a massive accelerator. With the right systems teaching themselves how to improve, exponential progress is possible. Firms that develop the right systems first gain a powerful, and compounding, first-mover advantage.
  2. AI is not one technology, but a suite of technologies. There are various types of AI technology, from basic forms such as natural language generation, to advanced solutions built on neural networks. These technologies develop at different paces. A deep understanding of what’s possible in each area is necessary to make smart strategic investments in AI.

 

Then again, you might not have to worry about your business if Musk is right:

“I think if [artificial intelligence] reaches a threshold where it’s as smart as the smartest most inventive human, then I mean it could be only a matter of days before it’s smarter than the sum of humanity.”

There’s plenty more where that came from. Watch the whole video for even more insight into the possibilities and perils of superintelligence.

 
