
Llama 3 Is Here (And It's Incredible)


Llama 3 from Meta is here. And it's incredible.

Llama 3 is the latest version of Meta's open source foundation model.

And the company says it beats other top open source models like Google's Gemma and Mistral 7B.

It also appears to beat Gemini Pro 1.5 and Claude 3 Sonnet on some major benchmarks.

Today, you can access two versions of Llama 3.

The first is an 8-billion parameter version called Llama 3 8B. The second is a 70-billion parameter version called Llama 3 70B.

Meta is also training a whopping 405-billion parameter model that's coming soon.

(Though it's unclear right now if that one will be open source.)

Just as exciting:

Llama 3 is now baked right into Meta AI, the company's AI assistant. And Meta AI is now live across Facebook, Instagram, WhatsApp, and Messenger. Or, you can try it out directly at Meta.ai.

What does Llama 3 mean for you and your business?

What does it mean for the overall AI landscape?

I got the answers on Episode 93 of The Artificial Intelligence Show from Marketing AI Institute founder/CEO Paul Roetzer.

The real reasons behind Meta's commitment to open source

The first question to understand is:

Why is Meta releasing such a powerful model for free?

After all, OpenAI and Google charge for the most powerful versions of their models.

To understand why, you have to understand Meta's AI history, says Roetzer.

Back in 2013, Meta CEO Mark Zuckerberg started trying to build an AI lab.

Around this time, the deep learning revolution was kicking off. Deep learning is the process of giving machines human-like abilities. These include the ability to see, speak, hear, write, and understand. In fact, it's the field that made today's generative AI applications possible.

Zuck, like other big tech leaders, saw deep learning as the next frontier. And big tech is locked in a never-ending race toward the next transformative technology. (A race fully described in the excellent AI book Genius Makers.)

To get there first, he tried to buy a cutting-edge AI lab called DeepMind. He failed. Instead, Google bought DeepMind in 2014. (Google DeepMind is now the company's AI division.)

Part of the reason the acquisition failed?

Zuck and DeepMind co-founder Demis Hassabis didn't have chemistry. Or, the companies they ran didn't. Zuck and Meta (then Facebook) had a growth-obsessed corporate culture. Hassabis and DeepMind focused on pure research to explore new frontiers.

Not to mention, Zuck didn't share Hassabis' ethical concerns around the rise of AI, says Roetzer. Zuck even refused to promise that DeepMind's technology would be overseen by an ethics board if the acquisition went through.

This left Zuck with a chicken-and-egg problem.

He couldn't attract top AI researchers because he didn't have a research lab. And he didn't have a research lab because he couldn't attract top AI researchers.

So, Zuck turned to recruiting one of the top AI researchers available: Yann LeCun.

LeCun is seen as one of the godfathers of modern AI for his decades of cutting-edge research. And he initially refused Zuckerberg's overtures. Until Zuck made him an offer he couldn't refuse.

"This is really important," says Roetzer. "He told LeCun that interactions on the social network would eventually be driven by technologies powerful enough to perform tasks on their own."

In other words, he dangled the promise of AI agents eventually being the direction of Meta's AI work.

In the short term, AI would do things like identify faces in photos and translate languages. Longer term, intelligent agents would patrol and take actions on Facebook's platforms.

So, Zuck told LeCun that almost any AI research in the digital domain was on the table. (An attractive proposition for any researcher.)

He also agreed to honor LeCun's commitment to openness. Free exchange of research and information was the norm among academics like LeCun. It wasn't the norm at big internet companies.

Zuck made the case that Facebook/Meta was the exception. They had embraced open source when they created React, a JavaScript library. So, Zuck and LeCun agreed to pursue openness in AI research together.

Facebook AI Research (FAIR), the company's AI lab, was born. 

And, today, LeCun is Chief AI Scientist at Meta.

Open source is Zuckerberg's secret weapon against OpenAI, Google, and everyone else

In part, open source resulted from Zuck's commitment to LeCun.

But it's also the key weapon that Zuck is wielding against competitors.

“This is a direct attack on OpenAI, Google, Anthropic, everybody," says Roetzer.

Many of Zuck's AI competitors charge for their AI products. His approach?

“We’ll give it away for free because everybody else is trying to charge for it," says Roetzer. "They can basically undercut everybody and not only build on top of it, but build it right into their networks.”

Zuck's moves are also scorched-earth tactics, ones that seriously threaten competitors' business models and investments.

"Zuckerberg’s a killer. Whether you like the guy or not, he has no issues doing what has to be done to win," says Roetzer.

But is open source good for society?

The result is a high-powered open source model that anyone can build on top of.

It's very likely that Llama 3 right now is at least on par with GPT-3.5. And the 405B model may well reach parity with GPT-4.

Today, its outputs, speed, and image generation capabilities are impressive. (It can even generate images in real time as you type a prompt.)

"You can start to see the potential of this when it's baked into all of their apps," says Roetzer.

This is good for Meta. And, in one sense, very good for consumers. We all get powerful AI at basically no cost.

But open source AI also raises some concerns.

Open source advocates say it's better that everybody has access to the technology. That way, a handful of tech companies can't control it.

But open source also puts powerful AI that can be misused into everyone's hands. In the process, it gives bad actors powerful tools that can be used at scale without oversight.

So is open source good or bad for society?

It seems like tech leaders increasingly dodge questions around this, says Roetzer.

“I’m always kind of shocked at how poorly they all are prepared to answer this question," he says.

They seem to come back to the idea that centralizing AI could be worse than open sourcing it. 

“I’m very much in the middle here. But I don’t ever hear a good answer to the concern that you’re open sourcing a really powerful thing that can be used for disinformation and persuasion. And we can’t assume everyone’s going to be a good actor in this.”
