
37 Min Read

[The Marketing AI Show Episode 46]: Geoff Hinton Leaves Google, Google and OpenAI Have “No Moat,” and the Most Exciting Things About the Future of AI



Paul Roetzer flies solo on Episode 46 of the Marketing AI Show while Mike is away speaking out of the country. Paul covers the week's big topics and the latest news, and offers a glimpse into the future.

Listen or watch below, and keep scrolling for show notes and the transcript.

This episode is brought to you by MAICON, our 4th annual Marketing AI Conference, taking place July 26-28, 2023 in Cleveland, OH. Current discounts end this Friday, May 12, so register early!

Listen Now

Watch the Video

Timestamps

00:02:42 — Hinton leaves Google

00:11:14 — “No moats”

00:24:21 — The future of AI and what excites Paul

00:35:51 — Code Interpreter

00:37:04 — White House AI meeting

00:40:09 — FTC + AI

00:41:28 — Hollywood Writers Strike + AI

00:44:11 — Box AI

00:44:39 — Inflection AI

00:48:26 — Slack GPT

Summary

Hinton departs Google

Geoffrey Hinton, a pioneer of deep learning and a VP and engineering fellow at Google, has left the company after 10 years due to new fears he has about the technology he helped develop.

Hinton says he wants to speak openly about his concerns, and that part of him now regrets his life’s work. He told MIT Technology Review: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future. How do we survive that?”

He worries that extremely powerful AI will be misused by bad actors, especially in elections and war scenarios, to cause harm to humans. He’s also concerned that once AI is able to string together different tasks and actions (like we’re seeing with AutoGPT), intelligent machines could take harmful actions on their own.

This isn’t necessarily an attack on Google specifically. Hinton said that he has plenty of good things to say about the company. But he wants “to talk about AI safety issues without having to worry about how it interacts with Google’s business.”

“No Moats”

“We have no moat, and neither does OpenAI,” claims a leaked Google memo revealing that the company is concerned about losing the AI competition to open-source technology. The memo, written by a senior software engineer, states that while Google and OpenAI have been focused on each other, open-source projects have been solving major AI problems faster and more efficiently.

The memo’s author says that Google's large AI models are no longer seen as an advantage, with open-source models being faster, more customizable, and more private. What do these new developments and rapid shifts mean?

The exciting future of AI

We talk about a lot of heavy AI topics on this podcast, and it's easy to get concerned about the future or overwhelmed. But Paul recently published a LinkedIn post that's getting a lot of attention because it talks about what excites him most about AI.

Paul wrote, “Someone recently asked me what excited me most about AI. I struggled to find an answer. I realized I spend so much time thinking about AI risks and fears (and answering questions about risks and fears), that I forget to appreciate all the potential for AI to do good. So, I wanted to highlight some things that give me hope for the future…” We won’t spoil it in this blog post, so tune in to the podcast to hear Paul’s thoughts.

Listen to this week’s episode on your favorite podcast player and be sure to explore the links below for more thoughts and perspectives on these important topics.

Links referenced in the show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I think of large language models as the CRM of the future corporation. So as critical as CRMs are to your business today, assume large language models, or LLMs, will have a similar level of importance moving forward.

[00:00:14] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:34] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:44] Paul Roetzer: Welcome to episode 46 of the Marketing AI Show. I'm your host, Paul Roetzer. I'm usually joined by my co-host Mike Kaput, but Mike is off on a speaking gig, and the location he is in does not have reliable wifi. So you are stuck with me riding solo this week. So thanks for joining us again. It's always fun to be back.

[00:01:10] Paul Roetzer: We'll kind of roll with this and see how I do, basically interviewing myself. I thought last minute about creating a synthetic version of myself, but I decided against that play. So we'll do this the traditional way. This episode is brought to you by the Marketing AI Conference. MAICON returns to Cleveland July 26th to the 28th this year.

[00:01:32] Paul Roetzer: This is our fourth annual conference. We just launched the preliminary agenda two weeks ago, I think, so you can go check that out. About 70 or 80% of the sessions are up there, and there are still a lot of announcements to be made, so stay tuned. You can go to MAICON.ai, that's M-A-I-C-O-N dot AI, to learn more.

[00:01:55] Paul Roetzer: Pricing does go up on May 12th. It goes up kind of once a month, so check that out and try to take advantage of that pricing. If you are interested in being in person with us in Cleveland, we would love to see you there. So we have a fun week this week for Mike to be, you know, not with us. It is

[00:02:13] Paul Roetzer: Jam packed. I feel like I, I was traveling a lot last week myself, so I, I kind of lost track of the days, but it's kind of insane how much happened between last Monday, May 1st when we recorded last week's episode and the day May 8th when I'm recording this week's episode. So if you're new to the podcast, we pick three big topics each week and kind of go deep on those.

[00:02:38] Paul Roetzer: And then we have a collection of rapid fire items. So today, I'm going to get started with the news of Geoff Hinton leaving Google. If you joined us for episode 45, we did touch on this as a rapid fire item because it had just happened last Monday morning. But as the week went on, and I saw Geoff Hinton on a number of news shows and read a bunch of articles about it, it just kept becoming a bigger and bigger story.

[00:03:06] Paul Roetzer: So let's recap what happened here. Geoff Hinton, who's a pioneer in deep learning and formerly a VP and engineering fellow at Google, left the company after 10 years, as he said, due to fears he has about the technology he helped develop. He wants to speak more openly about his concerns, and part of it is that he is now regretting, in some ways, his life's work.

[00:03:33] Paul Roetzer: So he did an interview with MIT Technology Review, where he said: I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now, and they'll be much more intelligent than us in the future. How do we survive that?

[00:03:52] Paul Roetzer: So he's worried that extremely powerful AI can be misused by bad actors, especially in elections and war scenarios, to cause harm to humans. Certainly I would not debate that; we've talked about that concept, especially with the elections, on this show a number of times before. He's also concerned that once AI is able to string together different tasks and actions, like we're starting to see with the idea behind AutoGPT and this concept of AI agents being able to take actions on our behalf, these intelligent machines could take harmful actions on their own.

[00:04:27] Paul Roetzer: He was very clear in different interviews to say it was not an attack on Google at all. He actually still thinks very highly of Google and has lots of good things to say, but he wants, quote, to talk about AI safety issues without having to worry about how it interacts with Google's business.

[00:04:44] Paul Roetzer: I. So, like I said, as the week went on, this one kind of kept building steam. I got a text about it from my dad asking about it. I was at three different conferences last week and I was asked at least a dozen times my thoughts on this topic. And then the one where I knew we'd sort of like really hit the mainstream was I saw a tweet where Snoop Dogg was commenting on this at the, he was, he was appearing at the Milkin Institute Global Conference.

[00:05:13] Paul Roetzer: And I will not quote Snoop Dogg exactly. You can go listen to the video yourself and hear the more colorful language that Snoop Dogg used. But to summarize, he said: I heard the old dude that created AI saying this is not safe. His AI has got their own minds, and these people are going to start doing their own stuff.

[00:05:38] Paul Roetzer: So again, if you haven't heard it, you can mix in your own Snoop Dogg language into what he actually said there. But it was funny, because it's like a 37-second clip, and I feel like Snoop Dogg actually very accurately captured the moment we find ourselves in, where no one knows what in the world to believe.

[00:05:57] Paul Roetzer: And we're hearing these crazy headlines. So it was good for a laugh; I've watched it like 10 times. We'll link to that tweet in the show notes and you can hear it for yourself. So now, to unpack this: Snoop Dogg is not correct in that Hinton created AI, but Hinton certainly has played a major role in setting the stage for the advancements we're seeing today in deep learning.

[00:06:23] Paul Roetzer: And in fact, Hinton coined the term deep learning to kind of rebrand neural nets, which have been around for decades and, you know, had a very technical perception about them. So he coined deep learning, I think, around 2008 or 2009. But the MIT article explains a little bit better what Hinton is known for.

[00:06:46] Paul Roetzer: So it says, he's Bence known, best known for an algorithm called back propagation, which he first proposed with two colleagues in the 1980s. The technique which allows artificial neural networks to learn today underpins nearly all machine learning models. In a nutshell, back propagation is a way to adjust the connections between artificial neurons.

[00:07:08] Paul Roetzer: over and over until a neural network produces the desired output. Hinton believed that backpropagation mimicked how biological brains learn, and he has been looking for even better approximations since, but he has never improved on it, according to MIT. And now he's worried that this early work, and the advancements that have been made since, are accelerating way faster than he expected.
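To make that "adjust the connections over and over" loop concrete, here is a minimal sketch of backpropagation using nothing beyond NumPy. It's an illustrative toy (a tiny two-layer network learning XOR), not Hinton's original formulation, and the network size and learning rate are arbitrary choices for the demo.

```python
# A toy demonstration of backpropagation: a tiny 2-layer network
# learns XOR by repeatedly propagating its output error backward
# and nudging every weight against its gradient.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1)    # hidden activations
    out = sigmoid(h @ W2)  # network output

    # Backward pass: propagate the output error back through
    # the network to get a gradient for every connection.
    err = out - y                       # how wrong each output is
    d_out = err * out * (1 - out)       # sigmoid derivative at output
    d_h = (d_out @ W2.T) * h * (1 - h)  # error attributed to hidden layer

    # Nudge every weight slightly in the direction that reduces error.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(out.round(3))  # after training, typically close to [0, 1, 1, 0]
```

Every modern deep learning framework automates exactly this gradient bookkeeping, just at a vastly larger scale.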

[00:07:32] Paul Roetzer: Things he thought were going to be decades away appear to be at our doorstep. So in an MSNBC interview, he said, quote: I was working on making computer models that got more and more intelligent in an attempt to understand what went on in the brain. And very recently, I realized that the computer models we're now making may actually be a better form of intelligence than what's going on biologically.

[00:07:59] Paul Roetzer: Now, I've followed Hinton's work for well over a decade. I went back and listened to that excerpt like five times. That's a really hard thing to process, this idea that it's possible that these synthetic versions of intelligence we're building are actually better forms of intelligence. It's a really weird thing to think about. But

[00:08:23] Paul Roetzer: his fear is that as they get smarter than us, it could lead to the downfall of humanity. So this is certainly taking an extreme position on the negative impacts here, but I think the main thing I've taken away from it is the key talking point he seems to use: he wants an equal amount of resources dedicated to trying to ensure that this is done safely.

[00:08:50] Paul Roetzer: That is kind of the main premise, and it seems to be the basis for him leaving Google and hitting the news circuit: he just wants more resources, in brain power and maybe AI power, dedicated to building safe AI. Now, I think it is important to add a little context here: Geoff Hinton is not the first person to call attention to these dangers.

[00:09:13] Paul Roetzer: There are two women in particular, Dr. Timnit Gebru and Dr. Margaret Mitchell, who led the ethical AI team at Google and were relieved of their duties, rather unceremoniously, in late 2020 and early 2021. And they have been highlighting these dangers for years, along with many other ethicists and people in the industry.

[00:09:34] Paul Roetzer: So again, I think it's important just to recognize their work and their early efforts in this area and make sure those efforts aren't forgotten. And as we look forward, despite these headlines and fears, I think it's good that people are paying more attention to the risks. Now, I do feel like what Hinton is talking about is probably extreme for most people, and not the thing that you probably need to be worrying about.

[00:10:01] Paul Roetzer: The downfall of humanity is kind of an abstract thing, and there's no real clear explanation of how exactly they think that's going to occur, other than: well, if they're smarter than us, then obviously they don't need us, and it would lead to the downfall of humanity. But there is no actual, logical step-by-step path to get there.

[00:10:18] Paul Roetzer: But I do think that a greater focus on ethics and safety is critical. So just to wrap this section up: Hinton himself did tweet, because he was getting called out by people like Yann LeCun and some others in the AI research space who don't agree with him and don't share these

[00:10:35] Paul Roetzer: Dramatic concerns. And so he did say there is so much potential benefit that I think we should continue to develop it, but also put comparable resource into making sure it's safe. So he has said on multiple occasions and in this tweet, he's not calling for a stop. He did not sign the Future of Life Institute letter that we've talked about previously on the podcast.

[00:10:56] Paul Roetzer: He just wants to put more time and energy into ensuring safety. And so if that's what he spends the latter part of his career doing, talking about these issues and hopefully working on these issues, then hopefully it's a win for the research community and for humanity. The second thing we'll talk about today, which really got a lot of traction last week, is this leaked Google "no moat" memo.

[00:11:24] Paul Roetzer: So if you haven't heard about this, we will put the link in the show notes. I would suggest going to read it yourself. But there's a leaked memo from a senior software engineer at Google, and it says: we have no moat, and neither does OpenAI. A moat being, to put it simply, a defensible position in the market, in the industry.

[00:11:48] Paul Roetzer: The leak memo reveals that the company is concerned about losing the AI competition to open source technology. So on this show, we've talked previously about, you have open models and you have kind of the closed models. The open models are put out into the world for people to build on and experiment with, and tend to have more, or, you know, fewer restrictions.

[00:12:08] Paul Roetzer: To be able to use them where the closed models are the ones that live within, you know, OpenAI and, cohere and Anthropic and, you know, to a degree Google where, they don't share everything about them and how they work. So it's much harder to customize them and train them. So the memo, states that while Google and OpenAI have been focused on each other, so again, keep in mind this is coming from a single point of view of a, of a senior engineer at Google.

[00:12:37] Paul Roetzer: So, while Google and OpenAI have been focused on each other, open source projects have been solving major AI problems faster and more efficiently. The memo's author says that Google's large AI models are no longer seen as an advantage with open source models being faster, more customizable, and more private.

[00:12:56] Paul Roetzer: The memo suggests that Google should consider joining the open source movement and owning the platform similar to how they dominate with Chrome and Android. Also, comes a missed reports of rapid, rapid shifts within Google as the company pivots to compete with AI developments from OpenAI, Microsoft, and others.

[00:13:15] Paul Roetzer: One shift that we have talked about previously is in December, when Google declared a code red to refocus on AI within the organization. Another happened in February, when Google's head of AI, Jeff Dean, made a policy shift where they were no longer going to release papers, or at least they were going to release far fewer papers to the public, until they had productized the work themselves.

[00:13:39] Paul Roetzer: So, Again, as an example there, we've talked previously about the intention is all you need paper that came from the Google Brain team in 2017 that created the transformer architecture, which became the basis for G P T. In other words, OpenAI doesn't exist in the form it does today, building what they've built without the Google Brain Transformer paper.

[00:14:00] Paul Roetzer: So historically in AI research, a lot of it was open; everybody was sharing everything. Google has published more papers probably than anybody. OpenAI started off publishing everything. Meta still publishes everything they do, or primarily everything they do. So a lot of the top AI researchers in the world

[00:14:21] Paul Roetzer: Wanted to work for research labs where their life's work was able to be published and shared versus productized and kept private. And so there's just been a major shift in the last few months where DeepMind has pulled back a major, a AI research lab that's within Google. Google is now pulling back open.

[00:14:40] Paul Roetzer: AI has pulled back. So because of the level of competition, you're actually starting to see this pull back in the release of, of this information. So, My overall take on this memo is it's, it is a fascinating read. But again, keep in mind it is a single senior software engineer at Google, and it does not mean that his points are, shared.

[00:15:03] Paul Roetzer: His points opinions are shared within Google, nor that the decision makers within Google put much weight behind this paper. But it is out into the world. And it does actually make some pretty interesting points that are worth considering. It's a, it's a really good read. So I'm going to, I'll just go through some of the things that jumped out to me as I was reading this.

[00:15:24] Paul Roetzer: And again, there'll be a link in the show notes. So it says, and I'm just going to quote here from excerpts: we've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be? But the uncomfortable truth is,

[00:15:43] Paul Roetzer: We are not positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch. I'm talking of course about open source. Plainly put, they're lapping us things we consider major open problems are solved and in people's hands today. While our models still have a slight edge in terms of quality, the gap is closing.

[00:16:08] Paul Roetzer: Astonishing, astonishingly quickly. Open source models are faster, more customizable, more private, and pound for found more capable. Goes on to say we have no secret sauce. Our best hope is to learn from and collaborate with others, what others are doing outside of Google. We should pri prioritize enabling, integrations.

[00:16:31] Paul Roetzer: People will not pay for a restricted model when free. Unrestricted alternatives are comparable in quality. We should consider where our value add really is. Also when I say giant models are slowing us down, so again, these language models are, are massive, that are being trained. They take a lot of compute, a lot of money, a lot of time, and then they're trained and then they're kind of fixed on that training data.

[00:16:54] Paul Roetzer: What he's making the argument for here is that, in some cases, smaller open source models are going to have way greater value in the market, and these big models just aren't going to have the value we assumed they were going to have, and aren't going to dominate the industry.

[00:17:11] Paul Roetzer: We should make, cause I'm going to say we should make small variants, more than, an afterthought now that we know what is possible with these smaller models at the beginning of March. So this is, this is, gets into like the real gist and kind of the impetus behind this memo, it seems, he says at the beginning of March, the open source community got their hands on their first really capable foundation model as Metas LlaMA was leaked to the public.

[00:17:39] Paul Roetzer: It had no instruction or conversation tuning, and no reinforcement learning from human feedback, which is the ongoing training of the model by humans telling it, you know, whether what it's doing is good or bad. That's a simple way of thinking about it. Nonetheless, the community immediately understood the significance of what they had been given.

[00:17:56] Paul Roetzer: So now, to backtrack slightly here: we've talked about the LLaMA moment previously on this show. On February 24th, 2023, Meta released LLaMA. From the blog post where they announced it, I'll just read a couple of pieces to give you the context. It says: even with all the recent advancements in these large language models, full research access to them remains limited because of the resources that are required to train and run these big models.

[00:18:27] Paul Roetzer: This restricted access has limited researcher's ability to understand how and why these models work. Hindering progress on efforts to improve the robustness and mitigate known issues such as bias, toxicity, and the potential for generating misinformation. So basically, meta decided to put lLlaMA out to the research community and in a kind of a controlled release where you had to apply to get access to it.

[00:18:52] Paul Roetzer: And they said they did this, toan integrity and prevent misuse. They're releasing the model under a non-commercial license focused on research use cases. So the whole premise was that this thing was going to be put out to the research community to give them access to understand how these models work and to work on, making sure they were used responsibly.

[00:19:13] Paul Roetzer: The problem came in that within a week, the full LlaMA model. Was leaked, so everything and anybody could use it, not just the research community. So that started, changing things. What was called kind of this LlaMA moment. Heard this like a cambri and explosion of low budget models fine tuned on an expensive base model.

[00:19:35] Paul Roetzer: Dr. Jim fan from Nvidia, who we've quoted before, tweeted that out. So basically what happened is, A lot of people, developers that didn't have access to these models, at least at this level, got access to the LLA model through a leak a week after it came out. And so since that moment, all this major advancement, innovation has been occurring on the back of an open model.

[00:19:59] Paul Roetzer: And so that's what this Google researcher is basically saying is, all of our concerns are for nothing now because these models are already out in the wild. So we keep holding back on the release of our powerful models. We're not preventing anything because the world already has access to them. It's kind of the gist here.

[00:20:18] Paul Roetzer: So he goes on with some key takeaways of his overall argument about why Google maybe has a losing strategy right now, and why OpenAI may as well. He says retraining models from scratch is a hard path, and that's basically what they're doing with their large models. Large models aren't more capable in the long run if they can't iterate faster than small models.

[00:20:40] Paul Roetzer: Data quality scales better than data size, that these open models don't need Google as much as Google needs them. And that these individuals using these things aren't constrained the same way as these big companies are. So basically what he's saying is like, we have to find a way to own this open ecosystem.

[00:21:00] Paul Roetzer: And he even says, paradoxically, the one clear winner in all of this is meta. Because the leaked model was theirs, they have efficiently garnered an entire planet's worth of free labor. And so basically it's just going on to say that Google has to find another way that this path of building these massive language models that are closed to everybody is not going to win.

[00:21:20] Paul Roetzer: So again, I'll wrap up here with some perspective: it is just a single person's point of view, a well-articulated point of view no doubt, and it certainly seems to make some very valid arguments that are worth consideration. But it doesn't mean that it's right or wrong per se, or that this person is accurately predicting the future.

[00:21:44] Paul Roetzer: So, There was a couple of, quick tweets I'll mention. One is from, Logan Gilpatrick. He's actually a developer advocate at OpenAI, and he tweeted, if you don't think OpenAI or Google have a mote, you either have no idea what a mote is. Have no understanding of ai. Or have not spent enough time building open sources.

[00:22:06] Paul Roetzer: Powerful that, but that doesn't mean companies don't have moats. And then one other one that caught my attention is from Frazier Kelton, who's a venture capitalist at Spark. Also former head of product at OpenAI. So, you know, factor in whatever bias may come from that. But he said, we're heading toward a world where a small number of players have a defensible oligopoly on the most capable, most general models.

[00:22:30] Paul Roetzer: So think about OpenAI, anthro, cohere, Google, Amazon. These are the people building these foundation models. Product teams will use these models when required. Adjacent to this will be a vibrant, open ecosystem of smaller models. Tailored and customized to specific, specific product needs. So basically what they're saying is we're going to have both.

[00:22:49] Paul Roetzer: Open source is going to be critical. It's going to be a key part of where this all goes. So if you're thinking about this as a marketer, as a business leader, as an entrepreneur, and you're thinking about what do large language models mean in your organization, you're going to look at open source options and you're going to go look at the closed options.

[00:23:04] Paul Roetzer: That's basically where we're at, and no one knows exactly how this is going to play out. But all this technical jargon simplifies to this: I think of large language models as the CRM of the future corporation. So as critical as CRMs are to your business today, assume large language models, or LLMs, will have a similar level of importance moving forward.

[00:23:25] Paul Roetzer: Those models in your company, those large language models are going to be built on open or closed models. And so you're going to have these decisions you're going to be making in the months and years ahead of which models you are building around. And that's basically what this debate is all about is am I going to build on open models, closed models?

[00:23:44] Paul Roetzer: Which companies am I going to work with? Am I going to have a single large language model company or am I going to have. An array of them based on uses and nobody actually has a clue. So if this seem, if this topics is a little bit out there for you, a little bit abstract, overwhelming, don't worry. Welcome to the club.

[00:24:00] Paul Roetzer: Everyone is trying to figure this out right now, and it's really hard to do that. So just kind of stay tuned, keep an eye on this space, you know, be thinking about this within your organization, but no, you're not alone. If this is a little bit, complicated at the moment. All right. Speaking of a little complicated at the moment.

[00:24:21] Paul Roetzer: So the last main topic today is related to, something I wrote Friday morning. I was actually, I was, where was I on Friday? I was traveling back from, oh, no, Thursday. I was traveling to Charleston, South Carolina. So I wrote this flying to South Carolinand then I was coming back Friday. So basically I've, I've had a lot of listeners come up to me.

[00:24:48] Paul Roetzer: It's amazing now, when I'm out at these shows giving talks, how many people will come up and say: hey, we love the podcast, been listening to it. Which is awesome, by the way. But I also get this: hey, your tone has really changed, like in the last five, ten episodes. You know, when I first started listening,

[00:25:05] Paul Roetzer: It was all like really excited and everything was. You know, cool and innovative and I feel like it's just gotten heavier and there's like more of these topics that you spend like, you know, Geoff Hinton saying humanity is done for, and, you know, AI and politics and the things that I like, think about and worry about.

[00:25:21] Paul Roetzer: And the reality is like those are the things people often ask me about when I'm given talks at conferences. A lot of times the first questions, because you know, you give this information to people, you explain AI in a very approachable way, and then all of a sudden smart people start connecting the dots of the impact it's going to have on them, their careers, their businesses, their kids.

[00:25:39] Paul Roetzer: And so that's usually what people want to talk about. And so I spend all day answering questions about the potential negative side effects of ai. I also like have a daughter who at 11 is very, very tuned in to artificial intelligence. She, she knows more about AA than probably most CEOs, because I, I, I expose them to it and I explain how things work and I want them to kind of understand this stuff.

[00:26:07] Paul Roetzer: And part of what I'm trying to do is figure out what does the future look like for them. So I teach them about this stuff. So, She as a 10, you know, an artist, and that's what she wants to do for her career. She's not like the biggest fan of ai. And so a lot of times when we have these conversations, it's actually, she doesn't really want to listen to much about it because she doesn't like it.

[00:26:32] Paul Roetzer: And so I explain to her a lot like, well, I don't have to like it either. I'm trying to make it make sense to people so we can figure this out and do it in the most positive way. So there was a bit of a mix this past week of, some personal conversations that I had had and some things I was thinking about and, and working on.

[00:26:51] Paul Roetzer: And then just like this ongoing, weight of dealing with a lot of this risk and uncertainty and the fears around it and trying to help people through that. And I realized like sometimes it just weighs on me and it's just, it's just a lot and it's, I can kind of get caught in the downward spiral of thinking about all the negative stuff.

[00:27:09] Paul Roetzer: And I don't take the time to remember what got me started in this originally and why I was actually excited about ai. So anyway, so I, on Thursday on the flight to Charleston, I, I wrote, I just started writing about like, well, what, what excites me about it? And actually that Wednesday, the day I kind of had a conversation with my daughter, someone had asked me at a conference, what is the most exciting thing about AI to you?

[00:27:32] Paul Roetzer: And I didn't have an answer on the spot. So mixed all this stuff that question last week actually triggered me to write something when I realized like, why don't I have an answer to this question about what excites me? So on the flight I said, I'm going to, I'm going to write what excites me. So I thought I would kind of end today before the rapid fire with the things I am excited about.

[00:27:52] Paul Roetzer: So despite all the. Negative stuff and the, the kind of the downsides of how this could go wrong. I think that there's just massive opportunity in ai and so I wanted to kind of call attention to some of those things. So I'll just, I'll kind of read you what I, what I shared. So the first thing is an explosion of entrepreneurship.

[00:28:12] Paul Roetzer: So I said we're about to see an explosion of entrepreneurship in startups that will reinvent industries and create millions of jobs over the next, next decade. So if you think about like, It's really hard to take existing companies and have to change them fast, to, to accommodate ai. So if you think about like a big marketing agency or a law firm, or a medical practice or, I don't know, software developers, SaaS companies, like the disruption could happen really quickly and it's hard to figure out how to handle that.

[00:28:46] Paul Roetzer: And that's, that's a daunting task. But if you flip in and say, but what if I'm starting from scratch and I'm just going to build a more efficient company, a smarter company, I'm going to infuse AI from the beginning. So I'm not going to have to lay anybody off because I'm just going to hire a fewer people, but I'm still going to create jobs.

[00:29:03] Paul Roetzer: And so when I think about these AI native companies that can just be built from the ground up. With fewer human and financial resources with access to like G P T four and other tools that can help you in the planning process, reduce your need to rely on outside providers. So it's like way more cost efficient, way more real-time knowledge.

[00:29:23] Paul Roetzer: It's almost like a built-in advisor. That's a really exciting thing because you can take your domain knowledge and your experience and you can go build something and you don't need the resources it used to take to build a company. Everything's at your fingertips. So as an entrepreneur myself, I've started, you know, a couple companies.

[00:29:42] Paul Roetzer: The idea of being able to start and build a company in today's age is incredible. And so again, if you don't get caught up in. The negative aspects of potential job loss and knowledge work. And you're just thinking about building from the, from the, from the beginning, in a smarter company. That's really cool.

[00:30:00] Paul Roetzer: And so I love the idea of the future of entrepreneurship and I think we're going to go through kind of a real emergence in the startup world, with these kinds of companies. The second is new career paths. So, I do get asked a lot about like, you know, what do you think the jobs of the future will be?

[00:30:18] Paul Roetzer: What are those titles going to be? What are people going to do with AI that maybe we're not thinking about today? And the short answer is like, I'm not really sure, like I've thought about a lot about this and I, I don't know exactly what they're going to be, but, What I was saying here is like, don't wait for someone to tell you what it's going to be.

[00:30:36] Paul Roetzer: Don't wait for me to tell you what the new title or job is going to be. Go seek knowledge, go learn about ai, ma, mix it with your experiences and experimentation with ai and then connect the dots and like create your own career path. So I'll give you an example. A couple years ago I was doing a talk for a healthcare system.

[00:30:54] Paul Roetzer: It was like 150 marketing and public affairs and communications employees. And I just did kind of the intro to AI and talked about the implications within communications and public affairs. And afterwards a lady came up to me and she said, is AI ops like a role? She was like a manager level person.

[00:31:11] Paul Roetzer: And I said, I, not that I've seen. And she said, man, because listening to you talk, all I can think about is. The work that's going to be required to infuse the smarter technology, the impact it's going to have on our team, and the need for reskilling and upskilling, the need to develop new processes and workflows.

[00:31:27] Paul Roetzer: And we don't have anybody in our company who can do that. Maybe that's what I should do. And I was like, that's perfect. Like go do that. And that's what I'm saying as an example, like once you think about the impact this stuff's going to have, and you understand, especially if you're at a bigger company where there's going to need to be a lot of support.

[00:31:44] Paul Roetzer: Go create the role. Like raise your hand and walk into your boss's office. Say, I think this, we're going to need this position in the future and just go do it. So this idea of new career path is really cool. The other one is change agents. List number three, change agents emerging from everywhere. And so what I said in this post, I don't care if you're the intern or the CMO, you can be a change agent your company needs, there's uncertainty and fear about AI and organizations of all sizes.

[00:32:11] Paul Roetzer: So what can you do about it? Raise your hand to be a part of or lead an internal AI council that defines policies and procedures and explores what's possible. Start an AI book club with your coworkers. Take an AI course with your team and build an action plan. Create an AI committee that demos and tests new technology, and I ended that one with whatever you choose.

[00:32:32] Paul Roetzer: Don't wait for the business world to get smarter around you. The fourth thing is more time, and I've talked about this on, i, I, I don't know episode it was, Mike wouldn't remember maybe if he was here, but there was a whole episode where we really talked about my feeling that AI is maybe the best chance we have to create more time to expand the time in our lives.

[00:32:54] Paul Roetzer: Like we can't get more hours out of the day, but we can get more out of those hours is the basic premise. And I think this is going to be a choice. I actually had a few people comment on the LinkedIn post about this last week that I was like crazy if I thought it was going to be more time, that it was just going to make people work more and there's going to be more productive.

[00:33:13] Paul Roetzer: And my response was, I, I think it's a choice. So I'm building a company right now where I'm going to choose to give that time back to people so I don't need to make another 5% profit margin like I would rather. We enjoyed our lives, and so I'm going to focus and make choices and as I build this company to take that some of the time we gained from AI and give it back to us as humans to enjoy our lives.

[00:33:40] Paul Roetzer: That is maybe an exception to the rule. Like I, I, I want to be optimistic and think other companies will make similar choices. But that's basically the premise here is like, part of the reason I got into AI is I was trying to find a way to extend time to spend more time with my family and friends and do more things I enjoyed.

[00:33:59] Paul Roetzer: And so I want to continue to believe that it can be a path to that. So that's four. Five was a renaissance and human creativity. And so I said as ai, creativity is quickly commoditized when supply is seemingly infinite. It is human creativity that will be desired and cherished. Will grow to appreciate human creativity in new ways and the true creatives among us, such as my 11 year old daughter, it will flourish.

[00:34:26] Paul Roetzer: And then in the post, I actually shared a painting my daughter had done of this like cute sea village. It's adorable. So if you haven't seen it, go to my LinkedIn profile and you can, you can see that picture. And then the last thing that I'm very excited about, Are scientific breakthroughs that lead, that lead to improved lives.

[00:34:44] Paul Roetzer: It says, AI amplifies human intelligence and is likely to achieve superhuman intelligence across many domains. This will give scientists and researchers the ability to solve some of humanity's biggest challenges, climate change, hunger, disease, poverty, abundant clean energy, interplanetary travel, which I'm excited about.

[00:35:02] Paul Roetzer: And answer some of our deepest questions, such as are we alone in this past universe? So those were my six things. I would love to hear yours go on the LinkedIn post and leave a comment with the things you're excited about. I was looking ask Mike if Mike was with us this week, so maybe I'll ask him next week what he's excited about, but.

[00:35:20] Paul Roetzer: If we don't get bogged down by all the negative headlines in the press and the, you know, the kind of the clickbait stuff and some of these more extreme fears, I do think there's a lot to be excited about, and I hope you know, you, you balance your thoughts with that as well. All right, so that's it for the main topics.

[00:35:39] Paul Roetzer: We got a few rapid fires. I'll move through through these pretty quickly. But some really interesting stuff. Last week I actually had to cut a couple things cause there's just so much to cover. So first is, OpenAI released code interpreter, a plugin for ChatGPT. I don't have access to it yet, but I know some people do.

[00:35:58] Paul Roetzer: I. So it's kind of getting rolled out slowly is pretty incredible though from, from what I've seen online. So there's a Professor Wharton, school professor Ethan Mollick, who I believe we've mentioned before on this show. He shared in a tweet how he had used it and he said, code interpreter has turned G P T into a first rate data analyst.

[00:36:19] Paul Roetzer: Not a data analysis tool, but a data analyst. It is capable of independently looking at a dataset. Figuring out what is interesting, developing an analytical strategy, cleaning data, testing its strategy, adjusting to heirs and offering advice based on its results. And he had a whole article about this that again, we'll link to in the show notes.
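To ground what that profile-clean-analyze sequence looks like in practice, here is a minimal sketch of the kind of steps being described, written as ordinary pandas code. The sales.csv file and its columns are hypothetical placeholders, and this illustrates the workflow, not Code Interpreter's actual internals; the plugin writes and runs code like this on its own.

```python
# A minimal pandas sketch of the kind of steps Code Interpreter
# chains together autonomously: profile, clean, then look for signal.
# "sales.csv" and its columns are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("sales.csv")

# 1. Profile: what is in this dataset, and where is it incomplete?
print(df.describe(include="all"))
print(df.isna().sum())

# 2. Clean: drop duplicate rows, fill numeric gaps with column medians.
df = df.drop_duplicates()
numeric = df.select_dtypes("number").columns
df[numeric] = df[numeric].fillna(df[numeric].median())

# 3. Analyze: test a simple strategy, e.g. which numeric columns
#    move together, and surface the strongest relationships.
corr = df[numeric].corr().unstack().sort_values(ascending=False)
print(corr[corr < 1.0].head(10))  # top correlated column pairs
```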

[00:36:43] Paul Roetzer: It really seems like code interpreter is going to be a big deal. So it's early. Like I said, just kind of came out, just starting to see some sample use cases shared, but keep an eye on that space. It seems like it's going to have some bigger implications down the road. I. The next rapid fire was the White House AI meeting.

[00:37:03] Paul Roetzer: So the White House met with CEOs last week on advancing responsible artificial intelligence, innovation. Vice President Harris and senior administration officials met May 4th with four CEOs. It was Sam Altman of OpenAI. Dario Amadio. Amad. I'm the CEO of Anthro. Satya Nadela Chairman C of Microsoft, and Sundar Pacha, the alphabet, CEO.

[00:37:28] Paul Roetzer: Now interesting. If you don't know Dario, that's sort of like the one name that maybe didn't, fit there for you. He is the former VP of research at OpenAI and they have raised 1.3 billion. Might explain why they were at the table. Interesting to know who wasn't there. Meta. So we just talked about Meta's LlaMA model, but apparently the White House isn't factoring in open source models and their significance when they did the invite list.

[00:37:57] Paul Roetzer: So Zuckerberg and Yann LeCun would've been the obvious people from meta that might have been at the table. And then stability, AI is a major player in the. Open source play, especially with their, stable diffusion, which is, you know, big image generation. And they just recently came out with a language model as well.

[00:38:14] Paul Roetzer: But a ma musta, mustak would've been an interesting person maybe to have at the table as well. I. But regardless, so the meeting as as this from the White House, part of a broader ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.

[00:38:37] Paul Roetzer: They announced a series of actions to promote responsible innovation. Yada yada. Okay. The few things that they mentioned that are worth noting here, new investments in power res, new investments to power responsible American AI research and development. This is actually a good thing. National Science Foundation is announcing 140 million in funding to launch seven new national AI research institutes.

[00:39:01] Paul Roetzer: That is a positive development. No, no negative commentary there. Public assessments, next. Public assessments of existing generative AI systems. Administration is calling, is announcing an independent commitment from leading AI developers, including anthro, Google, hugging Face, Microsoft, Nvidia, OpenAI and stability, AI to participate in public evaluation of AI systems.

[00:39:24] Paul Roetzer: Some of the commentary I saw earlier that one is just like some skepticism as to how they're going to execute that. But again concept, good, good progress. And then the last was policies to ensure the US government is leading by example on mitigating AI risks. And harnessing AI opportunities. So I don't know.

[00:39:43] Paul Roetzer: I mean, the way I look at this is I get that some people are skeptical and I, I certainly have my own opinions on some of this stuff, but at the end of the day, I feel like they're doing something and that's good. Like it's, it's better than nothing. And so they're making these positive steps and obviously some voters just for PR purposes, but a lot of this is hopefully actually showing a greater sense of urgency by the government.

[00:40:04] Paul Roetzer: So all in all, worth noting and keeping an eye on. The next rapid fire is the FTC coming in hot. So on May 1st they put up a blog post that was called the Luring Test. You know, obviously a take on the turning test. No high profile actions, but a lot of rhetoric at the moment. So these are just two quick quotes from this blog post that made me laugh.

[00:40:27] Paul Roetzer: Given these many concerns about the use of new AI tools, it's perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, These reductions might not be a good look.

[00:40:50] Paul Roetzer: This earlier, one of the topics we talked about, the firing of ethical AI teams, I think is what they're referring to there. And then they went on to say, if you haven't, or if we haven't made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools in ways that can have actual and substantial impact on consumers.

[00:41:15] Paul Roetzer: So again, You know, it, it's, it's worth noting that the Federal Trade Commission is paying attention. I guess the one that I thought most interesting last week, another rapid fire topic is Hollywood writers go on strike, and AI is actually mentioned as one of the things that's playing into this strike.

[00:41:34] Paul Roetzer: So there was a Vice article that said The Writer's Guild of America, the group that represents writers in the entertainment industry is now on strike for the first time in 15 years, which is impacting TV and film production among other concerns, the WGA a's members are demanding that production companies regulate how AI is used when producing scripts over fears.

[00:41:54] Paul Roetzer: They will repla be replaced by ai. The article a quote. If we don't strike for this right now, the AI technology will advance so quickly that it will no longer be possible to negotiate a fair contract in the context of ai. The article said that outside of Netflix headquarters, there was a couple hundred people from the W G A W G A marching, and there were signs that said writers generate all of it.

[00:42:20] Paul Roetzer: Don't let ChatGPT write Yellowstone. I told ChatGPT to make a sign and it sucked. Don't Uber writing. So, it went on to say the Alliance of Motion Picture and television, television producers, which represents a number of major entertainment companies, including Netflix, Amazon, apple, and Disney.

[00:42:42] Paul Roetzer: Told the Guild that rather than ban the use of AI as as source material, they would be open to annual meetings to discuss advancements in technology. This response immediately turned on alarm bells for writers in the guild who realized that executives were more than willing to try and replace writers with ai.

[00:43:01] Paul Roetzer: And the final excerpt I'll mention here says, we basically came to the table and said, scripts are written by writers, and writers are people. And they came back with the dystopian proposal of, well, what if they aren't? John Goleman, a TV comedy writer and member of W G A told Motherboard at Wednesday's protest.

[00:43:23] Paul Roetzer: Rather than opening up a discussion about how AI can be integrated into the industry and what protections for writers need to be in place once that happens, Goleman said, the reaction was just once a year, we'll update you with how many of you will re be replaced with machines. So, I, I mean, it's the first kind of big, contract negotiation.

[00:43:46] Paul Roetzer: I've seen AI come to play in publicly, so, It's going to be really interesting to see how this plays out and what sort of concessions are made with ai and then whether or not that has any implications on other industries. Really, really interesting stuff actually. So keep an eye on the Hollywood writer strike.

[00:44:07] Paul Roetzer: I. A couple other, final product notes to wrap the week up. Box ai. If you're a box user, if you're document storage and, and knowledge management. They announced box AI, a breakthrough in how you can interact with your content. It leverages leading AI models such as G P T 3.5 and four to let you ask questions, summarize, pull out insights, and generate new content.

[00:44:30] Paul Roetzer: So if you're a box user, you now have GP T4 baked into your knowledge. And you can query it and talk to it. So that's interesting. Inflection ai. Another one, we've mentioned them before. Mustafa Solomon is the co-founder and CEO along with Reid Huffman, who also was the co-founder of LinkedIn.

[00:44:52] Paul Roetzer: Solomon, if you aren't familiar, was the co-founder and head of Applied AI at DeepMind. We talked about DeepMind last week, that research firms within Google merged with Google Brain last week, so he was one of the co-founders of that with De Sabba. He was also the VP of ai, product management and AI policy at Google.

[00:45:09] Paul Roetzer: Anyway, so. Inflection has raised 225 million. My impression was that they were working on human machine interfaces, like the automated, like action transformer kind of stuff, where they were going to enable the machine to take actions on your behalf. I, I thought that's what they were working on. And if you go to Reid Huffman's LinkedIn profile, where it says he's the co-founder of inflection, now this is actually what it says under Reed's, profile.

[00:45:38] Paul Roetzer: And this is relevant, so, Listen for a second. Throughout the history of computing, again, Reid Huffman's profile, humans have had to learn to speak the language of machines. In the new paradigm, machines will understand our language. Recent advances in AI promised to fundamentally redefine human machine interaction.

[00:45:58] Paul Roetzer: We'll soon have the ability to relay our thoughts and ideas to computers using the same natural conversational language we use to communicate with people. Over time, these new language capabilities will revolutionize what it means to have a digital experience. So again, I, I may have just been wrong that they were working on like the AI agents stuff and having machines take actions, but what they introduced was not what I thought they were going to introduce.

[00:46:22] Paul Roetzer: So, the, here, here's kind of the synopsis of it, from the Forbes article. Whereas other chatbots, so they, they released a chatbot named PI is the short of this, where other chatbots might provide a handful of options to answer a query pi, which stands for personal intelligence, follows a dialogue focused approach, ask pi a question and it will likely respond with one of its own through 10 or 20 such exchanges.

[00:46:47] Paul Roetzer: PI can tease out what our user really wants to know or is hoping to talk through more like a sounding board than a repackaged Wikipedia answer. Solomon said. And unlike other chatbots, PI remembers a hundred turns of conversation with logged in users across platforms, supporting web browser, phone app, iOS only to start WhatsApp and s m s messages, Facebook messages, and Instagram dms.

[00:47:10] Paul Roetzer: Ask PI for help planning a dinner party in one and it will check in how the party went when you talked to another. Solomon said It's really a new class of ai. It's distinct in the sense that a personal AI is one that really works for you as an individual. Eventually inflection, CEO added pi will help you organize your schedule prep for meetings and learn new skills.

[00:47:31] Paul Roetzer: The response seemed kind of loop warm honestly to this. I looked into it a little bit. I don't know. I mean, it, it seems like a conversational version of Chet g p t. It, it's not what I was expecting them to release. My initial reaction was I wonder if there was just pressure to get something to market.

[00:47:49] Paul Roetzer: But it seems like might have a little trouble differentiating. This in the market, in terms of like adoption and everything. So, I don't know. I mean, I could be wrong. It might might be amazing. But my initial feeling was check it out for yourself, see if it's, it's viable to you. They certainly have a ton of money and some legit people behind it.

[00:48:10] Paul Roetzer: But again, it, it just felt initially to me like they just had to rush something to market almost like Google did with Bard. Like just get something out there and. So, so it doesn't look like you're doing nothing, but I might be wrong. I hope I'm wrong. It's really cool. And then the final one is Slack, G p T.

[00:48:26] Paul Roetzer: Slack. Getting into the game, announcing on their blog, they're unveiling Slack, G P t, our vision for generative AI. In Slack, customers rely on Slack to house their institutional knowledge captured in channels about every project, team, or topic. They also securely integrate their tools in Slack using our open, extensible platform and partner ecosystem.

[00:48:46] Paul Roetzer: We're building Slack, g p t on the same foundation, bringing trusted generative AI to where your team already works. So they say they're going to have an AI ready platform to integrate and automate with your language model of choice. Going back again to the whole language model thing, you're going to pick the ones you're going to work with and then integrate it into Slack.
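For the developers listening, here is a rough sketch of what a "bring your own model" Slack integration can look like today, using Slack's existing Bolt for Python SDK and the OpenAI chat API (pre-1.0 SDK style). Slack GPT's actual integration points weren't public at the time of recording, so treat the wiring below as an assumption-laden illustration, not Slack's implementation.

```python
# Hypothetical sketch: a Slack bot that forwards @-mentions to a
# language model of your choice and posts the reply in-channel.
# Uses Slack's Bolt for Python and the pre-1.0 openai library.
import os

import openai
from slack_bolt import App

openai.api_key = os.environ["OPENAI_API_KEY"]

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

@app.event("app_mention")
def answer_mention(event, say):
    """On @-mention, send the message text to the model and reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # swap in whichever model you choose
        messages=[
            {"role": "system", "content": "You are a helpful Slack assistant."},
            {"role": "user", "content": event["text"]},
        ],
    )
    say(response.choices[0].message.content)

if __name__ == "__main__":
    app.start(port=3000)  # point your Slack app's event URL here
```

The point of the design is that the model is just one swappable line; the Slack plumbing stays the same whether you call OpenAI, Claude, or an open source model behind your own endpoint.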

[00:49:04] Paul Roetzer: Whether you partner with built-in apps like OpenAI's ChatGPT, or Anthros Claude, or Build Your Own Custom Integration. We'll have a set of AI features built directly in Slack, including AI powered conversation summaries and writing assistance, and a new Einstein G B T. Isn't that name taken? I always find it interesting, like in, sorry, this is a total slide note.

[00:49:25] Paul Roetzer: Tech companies don't seem to care about IP at all. Like they just pick names that are, and I think, sorry, slack. Like if anyone from Slack is listening, this is meant to be like an attack on Slack, but like, Einstein. G B T is Salesforce's thing. Now, did Salesforce take it to somebody else? I don't know, but they're using that name.

[00:49:40] Paul Roetzer: Nobody trademarks anything. They just like pick random general names that like 10 other people already have and then they just like roll with it. Anyway, that, I'm sorry that was a total side rant, but sometimes it drives me nuts that SaaS companies don't like. Put some thought into their trademarks anyway.

[00:49:58] Paul Roetzer: So a new Einstein G P T app that lets you surface AI powered customer insights from trusted Salesforce customer 360 data and data cloud. All right, that was a lot and that was solo. And I haven't been taking a drink of water in like 45 minutes, so I'm going to wrap episode 46. Mike should be back with us for episode 47.

[00:50:20] Paul Roetzer: So I appreciate you listening to me ramble on my own, for the last 45 or so minutes. Hopefully this was really insightful to you. It was certainly a lot to cover. But we will be back next week and, if you love the podcast, please do subscribe. Give it a five star rating. It's, you know, we, we read any reviews and I'd love to hear from, listeners as well.

[00:50:42] Paul Roetzer: So don't hesitate to reach out on LinkedIn. And otherwise, have a fantastic week and good luck keeping up with the AI news for the week. We will be back to summarize it for you once again next Tuesday. Thanks a lot everyone.

[00:50:55] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:51:17] Paul Roetzer: Until next time, stay curious and explore AI.

Related Posts

[The Marketing AI Show Episode 63]: Elon Musk’s Quest to Shape the Future of AI, Hands-On with Google Duet AI, Time’s Top 100 People in AI, and HubSpot’s AI Roadmap

Cathy McPhillips | September 12, 2023

In this episode of The Marketing AI Show, our team tests out Google Duet, plus Time's 100 People in AI, Musk, and HubSpot's announcements at INBOUND.

[The Marketing AI Show Episode 27]: Head-to-Head AI Writing Tools Test, Will ChatGPT Replace Google, and MyHeritage AI Time Machine

Cathy McPhillips | December 21, 2022

We put AI writing tools to the test…and the results were interesting. Plus, Google’s future in a ChatGPT world, and MyHeritage’s AI Time Machine.

[The Marketing AI Show Episode 44]: Inside ChatGPT’s Revolutionary Potential, Major Google AI Announcements, and Big Problems with AI Training Are Discovered

Cathy McPhillips | April 25, 2023

This week's Marketing AI Show covers ChatGPT’s potential, more announcements from Google, and problems with AI training.