
56 Min Read

[The AI Show Episode 87]: Reactions to Sam Altman’s Bombshell AI Quote, Enterprises Embrace Custom AI Models, and How AI Is Changing Writing Forever



Prepare to look into the future in this episode of The Artificial Intelligence Show! Last week, we briefly discussed a bombshell quote from Sam Altman; this week, we explore its deeper implications. Alongside discussing Altman's insights, we'll examine the effects of AI on enterprise businesses, share our experiences from the AI for Writers Summit, and cover more in our rapid fire section. Join us for an episode filled with thought-provoking questions, insightful comments, and thorough analysis of the future of AI.

Listen or watch below—and scroll down for the show notes and transcript.

 

Listen Now

Watch the Video

Timestamps

00:03:25 — Revisiting Sam Altman’s Bombshell Quote

00:49:31 — Deloitte predicts enterprise spending on generative AI will grow by 30%

00:54:39 — AI for Writers Summit Recap

01:01:16 — OpenAI releases emails from Elon Musk

01:03:46 — Musk announces Grok will be open sourced

01:06:04 — Political deepfakes are on the rise

01:09:04 — Inflection 2.5

01:11:44 — Google and Microsoft security incidents and what they mean for AI

Summary

Reactions to Bombshell Quote from Sam Altman

Last week, we got a lot of attention for talking about a previously unreported quote from Sam Altman about AI’s impact on marketing.

The quote comes from a book called Our AI Journey by experts Adam Brotman and Andy Sack, who interviewed Altman for Chapter 1.

When they asked Altman what AGI meant for consumer brand marketing, he replied:         

"Oh, for that? It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI — and the AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing. Again, all free, instant, and nearly perfect. Images, videos, campaign ideas? No problem."

Now, we are still processing this quote and its implications for the industry, but it has made quite a few marketers sit up and pay attention.

Enterprises go all-in on training custom AI models

2024 could be the year that enterprises go all-in on training generative AI models on their own data, says Deloitte.

In fact, Deloitte predicts that, in 2024, enterprise spending on generative AI will grow by 30%—and much of that will be driven by enterprises training on their own private data.

Says Deloitte:  

“More companies, seeking to avoid the risk of models trained on public data, are expected to train generative AI on their own data to enhance productivity, optimize costs, and unlock complex insights.”

Deloitte says that, while enthusiasm for generative AI has been high, enterprises to-date “have mostly been cautiously experimenting, trying to figure out the specific value of generative AI for their businesses and the costs of deploying, scaling, and operating them effectively.”

Now, they are truly beginning to explore how generative AI can unlock the value in the treasure trove of their own data—and circumvent some of the known issues with public models like copyright concerns and hallucinations.

In short? Despite all the hype around AI, most enterprises have barely begun to unlock the true power and potential of the technology.

AI for Writers Summit Recap

This past week we wrapped up our second annual AI for Writers Summit, a virtual event with 4,600+ writers, editors, marketers, and business leaders from all 50 states and 93 countries.

The event brought together some incredible content across a wide variety of topics on how AI is impacting writers, including the state of AI and writing, generative AI writing tools, insights from an IP attorney on copyright, and a panel on enterprise AI adoption.

It is not an exaggeration to say the event was a huge success. We received an incredible amount of positive feedback from the audience and our community about it, so we wanted to provide a quick recap of the event.

Not only is the topic extremely relevant to our audience, since AI is having a massive impact on writing as we know it, but the virtual event model we used, and transparently shared details about, also contains lessons for businesses trying to run effective events—especially in the age of AI.

Links Referenced in the Show

Today’s episode is brought to you by Marketing AI Institute’s AI for Writers Summit presented by Jasper.

If you missed it—or want access to the sessions you saw as an attendee—the full Summit is available for purchase on-demand for $99.

To learn more, go to AIwritersummit.com

 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Mike Kaput: How do I even begin to prepare for that? I know it'll change everything. But I have no idea what to do about it.

[00:00:06] Paul Roetzer: Yeah. It's where you're just like, something so life-changing that you know is going to just disrupt everything you think is true about the future. And you know it's coming. And you've got a few years to prepare for it.

[00:00:20] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co host, and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:50] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:57] Paul Roetzer: Welcome to episode [00:01:00] 87 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co host, Mike Kaput, who is home this week while I'm in Miami, I think. You are home, right?

[00:01:11] Mike Kaput: I am home, yes.

[00:01:13] Paul Roetzer: Are you doing any talks this week? You gotta travel though?

[00:01:15] Mike Kaput: I am actually traveling starting on Sunday, uh, to Carolina for...

[00:01:15] Paul Roetzer: Oh, nice. Well, I am, I just arrived in Miami, so I'm doing this one from the hotel room. And, as we'll get into, I had four hours to think this morning, and it was a, a good time to do it because there was, there was a lot I had to think about based on last week's show. So we're going to get into that.

[00:01:41] Paul Roetzer: First up, the episode today is brought to us by the AI for Writers Summit, which happened last week, but you can now get it on demand.

[00:01:50] Paul Roetzer: It was,

[00:01:51] Paul Roetzer: I would say beyond our wildest expectations. And Mike and I are going to talk a little bit more about that experience as one of the topics [00:02:00] later on, but just a remarkable virtual event last week.

[00:02:04] Paul Roetzer: There were more than 4,600 people registered, from all 50 U.S. states and 93 countries, which I'm still trying to wrap my head around. So just an incredible experience, amazing feedback from attendees. We covered the state of AI and writing, generative AI writing tools, and insights from an IP attorney.

[00:02:24] Paul Roetzer: That session was crazy. So much online chatter about that one. I think a lot of people were shocked at the current law around copyright and maybe weren't prepared for it. So that session alone was probably worth the price of admission. We went through enterprise AI adoption with a panel. And again, the great news is you can get it all on demand.

[00:02:46] Paul Roetzer: So it is available right now, AIWriterSummit.com. Again, that's

[00:02:51] Paul Roetzer: AIwritersummit.com. You can go and get immediate on-demand access to all five hours of the summit. [00:03:00] And I, I can promise you, based on user feedback, it's worth it. The average session rating, I think, was 4.8.

[00:03:08] Paul Roetzer: really good stuff. So we will, again, we'll talk a little bit more about that later in some of the feedback and provide some insights into the virtual event experience itself.

[00:03:16] Paul Roetzer: 'Cause I think it's interesting from a marketing perspective, but we're going to talk about the Sam Altman quote first here, so I'm going to turn it over to Mike to set the stage.

[00:03:25] Revisiting Sam Altman’s Bombshell Quote

[00:03:25] Mike Kaput: All right. thanks, Paul. So, last week, one of the main topics of our podcast got quite a lot of attention because we were talking about a previously unreported quote.

[00:03:39] Mike Kaput: from Sam Altman about AI's impact on marketing and, more broadly, AI's impact on knowledge work. Now that quote came from a book that we got turned on to called Our AI Journey, written by two successful business people, Adam Brotman and Andy Sack, who Paul, you have gotten to know a little bit, [00:04:00] and they are interviewing, chapter by chapter, AI leaders about where this is all headed. The book is being released chapter by chapter on a kind of innovative subscription model.

[00:04:13] Mike Kaput: So go to OurAIjourney.Ai to take a look.

[00:04:17] Mike Kaput: But when they asked Altman what

[00:04:20] Mike Kaput: AGI, Artificial General Intelligence, meant for consumer brand marketers, for instance, he replied, Oh, for that, it will mean that 95 percent of what marketers use agencies, strategists, and creative professionals for today

[00:04:38] Mike Kaput: will easily, nearly instantly, and at no cost, almost no cost, be handled by the AI. And the AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing. Again, all free, instant, and nearly perfect. Images, [00:05:00] videos, campaign ideas? No problem.

[00:05:03] Mike Kaput: So we talked at length about that quote last week and we're still processing it and all the implications it has for the industry. But from what we saw, it made quite a few marketers sit up and pay attention. We published a blog post on it. Obviously, we talked about it on the podcast, but beyond that, it kind of really took off on its own.

[00:05:26] Mike Kaput: I mean, I had many unrelated people bring it up to me in conversations this past week, simply referring to it now as the QUOTE, in caps.

[00:05:37] Mike Kaput: So I think it has really caused marketers to begin talking about what the possible impacts are of this technology on our industry, much more so than they were before, because I don't know if, at any point previously, Sam Altman has said things quite this bluntly.

[00:05:53] Mike Kaput: So in this segment, Paul, we want to, say, spend a fair amount of time diving a bit deeper into [00:06:00] first draft thoughts on what this quote really means, what's really going on here. So I'm going to turn it over to you and just take us away down the rabbit hole.

[00:06:12] Paul Roetzer: honestly, first draft may even be generous, this

[00:06:14] Paul Roetzer: is literally, I was, I woke up this morning at 5:30am and I had to catch a flight to Miami

[00:06:21] Paul Roetzer: and I had stuff to do and I was like, you know what? I've got four hours uninterrupted, in the airport and on the flight, to think about this. We got the podcast bumped to Monday afternoon so, you know, I could take my flight. And this is kind of what came out. So I'm just going to sort of riff a little bit and go through my outline, I would say. That's why I don't even know if you'd consider this a draft. But the thing, you know, I said last week was, and Mike, you alluded to it already, it's like, we're still trying to process it.

[00:06:51] Paul Roetzer: Like, how do you, do you explain something like that quote to an audience? And when you know [00:07:00] it's kind of overwhelming, it's very abstract, how do you give it, like, meaning? And like, where you feel like you can do something about it, basically.

[00:07:09] Paul Roetzer: And I feel like the quote can induce a bit of desperation of, like, what do I do, there's nothing I can do about this.

[00:07:18] Paul Roetzer: Like, what am I supposed to do next? And so I started off thinking about it in that context like, what do we do about this quote?

[00:07:27] Paul Roetzer: the first thing that came to me is like, what does it mean to your company, your career, and your family? Anybody who listens to the show regularly knows I have a 10, about to be 11, year old and a 12 year old.

[00:07:39] Paul Roetzer: And I think often about what does the world look like for them in five years, 10 years, when they're getting out of college. And I realized, like, I actually have no idea. And so I spend a lot of time, you know, in my life thinking about that and trying to figure it out. So the thing that first came to my mind on this [00:08:00] topic, you know, about the meaning for your company, career, family, is when does your life and career noticeably change?

[00:08:06] Paul Roetzer: to the point where you look back and think of how things were before and after. So this quote is a future state. And the question for me becomes, when does it become or start becoming true to the point where life actually starts feeling different? That ChatGPT type moment, at least for you and I, Mike, I know, I think of my life professionally now in, in like pre and post ChatGPT, like that was a defining moment.

[00:08:34] Paul Roetzer: Where things just noticeably changed. The other thing I'll say is I know we have a lot of non marketers who listen to this podcast. I would take that Altman quote and infuse your own industry and career into it, because he was just talking about marketing and agencies, because that's what he was asked about.

[00:08:54] Paul Roetzer: You can substitute any knowledge work: accounting, sales, service, [00:09:00] HR, engineering, legal, all of it. He's talking about knowledge work. He's more broadly meaning, you know, the things we do from a knowledge perspective. So then, you know, once you get past kind of what does it mean for your company, career, family, then you get into like, what do you do about this?

[00:09:17] Paul Roetzer: And this is honestly where I found myself, after, you know, I first read that quote a few weeks back, and then when you and I first talked about it last week, is: like, personally, what does this mean to my kids? I think about it in relation to Marketing AI Institute and some of the other business ventures we're working on.

[00:09:36] Paul Roetzer: I think about it from my personal investing perspective and my retirement accounts, like I think about it more broadly. So again, this is kind of real time thinking from the flight this morning. I haven't talked to Mike at all about it. I just gave Mike a heads up: Hey, I'm going to have a few things to say today and we can talk about it.

[00:09:53] Paul Roetzer: People who know me know I generally don't get into the futurist stuff. I don't, don't really like [00:10:00] the trying to predict

[00:10:01] Paul Roetzer: 10 years out kind of stuff. I, I find it sort of a fool's errand, and it's at best directionally correct sometimes, but I think people often lose sight of the near term stuff they should be focused on when they worry too much about the futurist stuff.

[00:10:15] Paul Roetzer: So I get asked all the time, what does five years, 10 years out look like? And I always say, listen, I can give you like 12 to 18 months with reasonable confidence. But beyond that, I got nothing for you. But at the same time, when I think about this quote,

[00:10:30] Paul Roetzer: I was trying to put it in the context of, when is this true?

[00:10:33] Paul Roetzer: When does it start to actually change people's lives and careers in a noticeable way? And so I thought it was critical that people understand the story arc here, and to the point where it becomes tangible enough that you have an idea of what action you should be taking at each stage of that arc, basically.

[00:10:51] Paul Roetzer: So I'm just going to kind of go through what I think is true. You know, this is subject to revision based on the next [00:11:00] model that comes out and the next, you know, things we, we learn. I want to kind of go through what I think this quote really means, what happens next and what you can do about it.

[00:11:08] Paul Roetzer: Now, in context, since last week's episode, I have listened to no less than six other podcasts, where people were being interviewed, LeCun, Sam Altman. So those interviews are very fresh in my mind because they dealt with things that are highly relevant to what we're talking about here. And then also just drawing from 13 years of studying the AI space and, you know, countless interviews with Dario Amodei of Anthropic and Mustafa Suleyman of Inflection and Ilya Sutskever and Shane Legg of DeepMind.

[00:11:41] Paul Roetzer: And, you know, it's like all this stuff just sort of like swimming around in my mind to, to say what I'm going to say. So, let's start there. Where, where are we and where are we going? So again, to recap, AI is not new. If, if it's new to you, it's, it's not new as an area of research. We've been doing this for like [00:12:00] 70 years, researching this idea of giving machines human intelligence.

[00:12:03] Paul Roetzer: But for a really long time, we basically had a form of AI driven by machine learning, which made predictions about outcomes and behavior, and that machine learning could be applied to like pricing optimization and recommendation engines and all of these other things. And it was valuable to businesses. But it wasn't what we have now.

[00:12:21] Paul Roetzer: What we have now started truly being kind of commercialized and really accelerating development around 2011, 2012. That's when deep learning, this idea of giving machines vision and language understanding and generation, really started emerging. And then November 30th, 2022, is the ChatGPT wake-up call moment.

[00:12:42] Paul Roetzer: So, with that really, really quick synopsis of kind of where we were and where we are in AI, I now want to kind of go through what I think happens next.

[00:12:52] Paul Roetzer: and as we kind of stretch out the years, my confidence in these projections, I [00:13:00] guess, or forecasts, probably, you know, drops a decent amount. But I think this is pretty reasonable based on our understanding of what's happening, of what might go on.

[00:13:09] Paul Roetzer: I'm going to do this within a timeline. So 2024, you know, we're in March 2024 right now. What we will have this year are more advanced large language models that are multimodal, you know, we're already seeing this. We know GPT 4 is multimodal. We know Gemini is being built multimodal from the ground up, where it can do language and images and video and audio.

[00:13:36] Paul Roetzer: we're going to have new classes of models. So, multimodal is one. Reasoning, which enables us to solve problems, do planning, make decisions. So human reasoning is critical to our ability to assess situations, identify and solve problems, things like that. These models are going to make leaps forward in their reasoning ability, which then enables them to get more valuable in terms of [00:14:00] planning assistance.

[00:14:01] Paul Roetzer: Decisioning, both helping us make decisions, but also probably early days of them making their own decisions, very unreliably. It's not something you're going to turn your company around and say, okay, AI, make all the decisions. But I think decisioning is a core outcome of improved reasoning. Expanded context windows.

[00:14:20] Paul Roetzer: We've already had this, Gemini's, what, a million, I think, but in research, they were up to 10 million. Claude 3 has expanded context windows. The context window is how much information you can put in that it's able to draw from to create the output, but they're imperfect. We know context windows aren't the end game.

[00:14:39] Paul Roetzer: Context windows do lead, in some ways, to memory though. This is a core thing they're all working on: giving these models the ability to remember conversations.

[00:14:49] Paul Roetzer: and not just from the context window, but from years of interacting with these agents. They want, eventually, you know, these things to be able to draw on everything.

[00:14:57] Paul Roetzer: So we're going to talk about OpenAI's [00:15:00] memory in one of the rapid fire items. Personalization is going to be critical. So Mike and I will have different experiences with ChatGPT in the future. You know, you're going to be able to set the parameters yourself of how you want to interact with this thing, how you want it to talk to you, what tone, what style, what political beliefs, what religious beliefs.

[00:15:18] Paul Roetzer: Like, all of it is going to be able to be personalized. And then, one of the other key aspects that they're going to work on is reliability and accuracy, so that you can actually come to start trusting these things. So we're going to see some of this with the models we're already hearing about, but I think GPT 5, we know, is coming.

[00:15:37] Paul Roetzer: just this week I put on LinkedIn, I listened to a podcast,

[00:15:42] Paul Roetzer: BG2 pod. And they talked about, that GPT 5 is actually done training. So their sources tell them GPT 5 is done training and it's in red teaming right now. And they said, May to July timeline for GPT 5. I think it's going to be sooner than that, but we're going to have GPT 5.

[00:15:59] Paul Roetzer: And based on [00:16:00] everything we hear from Sam and OpenAI, all the things I just outlined, multimodal, reasoning, planning, decisioning, expanded context windows, memory, personalization, reliability,

[00:16:10] Paul Roetzer: That's what GPT 5 class models will enable. I would imagine we'll get a Gemini 2 model from Google at some point this year.

[00:16:18] Paul Roetzer: We're going to see Llama 3. We're going to see all this stuff. So what does this mean to business? I think what it means is we will start to see a scale in adoption of AI. You're going to see rapid expansion of valuable use cases in business. So you're going to have wide scale adoption of generative AI beginning a multi year expansion.

[00:16:36] Paul Roetzer: Throughout this year. So it's not like we're going to just flip a switch and everyone's going to be doing this, but I think the models are going to be good enough where you're going to start to see wider scale adoption. The thing that's becoming less clear is what is the competitive advantage of the different models?

[00:16:51] Paul Roetzer: like, every time someone releases something new, like Claude 3 a week or two ago, you start seeing all this stuff online. Like, ah, it's better than GPT 4. It's like more [00:17:00] reliable. And it's like, okay, then GPT 5 comes out. And now like a month later, it's better. And so it's, it seems like it's really hard for these models to differentiate themselves without some unique, you know,

[00:17:10] Paul Roetzer: dataset.

[00:17:10] Paul Roetzer: So it's just unclear what happens. They all have very similar capabilities. It's unclear if enterprises will build on open or closed models or both. So there's a chance that all these proprietary kind of closed models,


[00:17:26] Paul Roetzer: aren't the ones that the major enterprises end up building around. They choose to build around, like, a Llama or a Mistral, which is more open source.

[00:17:34] Paul Roetzer: We know that data matters. On a societal, kind of human level, scientific breakthroughs are around the corner. If you listen to the Demis Hassabis interview with Dwarkesh, he talks about major breakthroughs within the next one to two years in terms of drug discovery, solving, you know, creating cures for long sought after diseases.

[00:17:58] Paul Roetzer: Those things are going to happen, [00:18:00] but it's not new science. This is a really important aspect of this. It is, it is. These generative AI models are enabling us to brute force through a whole bunch of data that humans couldn't get through in a hundred lifetimes. But these models are able to do this, like AlphaFold, which predicts the folding of proteins, the whole genome of proteins.

[00:18:21] Paul Roetzer: You're going to see breakthroughs through existing data. We're going to talk later about new science. But you're going to see major breakthroughs that are going to, that's going to improve lives, in the very near future. Demis talks about, again, it being one to two years off. There will be stories of mass layoffs this year, within certain industries, but I don't think it's going to be widespread.

[00:18:46] Paul Roetzer: So like the Klarna one we talked about last week, where there's, you know, the AI bots doing the equivalent of 700, you know, customer service agents, you're going to see some headlines. You're probably going to see a 60 Minutes episode about it. Like it's going to [00:19:00] seem like it's a big deal, but when you actually drill in, kind of zoom in, it's not going to be.

[00:19:05] Paul Roetzer: Mass layoffs. We're not going to see a mass impact of AI this year, taking over jobs. Some new roles will start to emerge. Chief AI officer. You're probably going to hear some companies doing that. You're probably going to see some AI ops, AI trainers, things like that. So if we go back to my original criteria of does your life or your career noticeably change this year?

[00:19:30] Paul Roetzer: Probably not. It's probably not like you're going to sit there and there's going to be some inflection point where just like all of a sudden everything is different. So that's 2024, and that will continue moving forward. Every model is going to get smarter. They're going to get better at reasoning, planning, decisioning.

[00:19:45] Paul Roetzer: Memory, personalization, those are things that are critical to these models moving forward. 2025, 2026,

[00:19:53] Paul Roetzer: this is where I think we see the multimodal AI explosion. So we have multimodal [00:20:00] now, but if you think about Gemini or ChatGPT, they're limited. Like ChatGPT, based on GPT 4V, the vision one, it can, you can input images and you can output images, but you cannot input video and output video.

[00:20:14] Paul Roetzer: Sora is the prelude to that. You can't do it really with audio. I mean, you can talk to it through their Whisper, but you know, it doesn't, it's not truly multimodal. Gemini is being built multimodal from the ground up, but they had that major mishap with even the image generation thing. It's just like, it's going to be really hard to solve.

[00:20:34] Paul Roetzer: Infusing true multimodal where it just works and you can talk and interact with it through video, images, text. So I think we're still one to two years off to where multimodal is just ubiquitous. It's like everywhere. The other thing we're running into is there are compute limitations. Like the reason that GPT 5 will probably have some limitations, or the reason Gemini has some limitations, is because [00:21:00] there's really, the reality is you can only do so much training with these data centers.

[00:21:04] Paul Roetzer: So. I think, and based on some quotes I've seen from LeCun, like,

[00:21:10] Paul Roetzer: there is a possibility that they could build models 10 to 100 times more powerful and generally capable than what we have now, but to do it is going to require way more computing power, way more data centers, and we're probably just not there yet.

[00:21:28] Paul Roetzer: And so I think those compute limitations start to go away in the next one to two years. So again, I'm talking the '25, '26 range. The other thing is LeCun, if you listen to his interview with Lex Fridman, and he's, he talks about this all the time. You can just do a search for Yann LeCun worldview and you'll see all this.

[00:21:47] Paul Roetzer: But he believes that these models need to be able to learn the way humans do. And he'll often relate it to, like, a toddler who doesn't have language for the first one to two years of their life. And yet they understand the [00:22:00] world. They understand physics. They, they understand action, reaction. If I do this thing, if I touch the stove, it's going to burn my finger.

[00:22:06] Paul Roetzer: Like if I get too close to the dog, it might bite me. Like, they learn things through observation and, and by 2025, 2026, the amount of vision training data, both actual and synthetic data. So Sora from OpenAI is going to enable creation of all the synthetic visual data. That data can then be used to train these models.

[00:22:27] Paul Roetzer: I think synthetic data potentially becomes the dominant source of vision training. And once you can do that at scale and have it follow the laws of physics, now you can just, you can rapidly train these models. It's kind of how Tesla Full Self-Driving version 12 is going to work. They're, like, creating real world data, but they're also doing simulations to create the synthetic data.

[00:22:46] Paul Roetzer: I also think that interaction through chat and voice and vision becomes much more prevalent. My personal experience, I did this Thursday night. I was driving home from, I play basketball on Thursday nights. And so I get in my car. It's like a 12 minute [00:23:00] drive. And I was like, you know what? I'm going to try Pi.

[00:23:02] Paul Roetzer: So we, you know, the new version of Pi had come out, 2.5, and I was like, I don't know what to do, but I just clicked the call button and you can just call it and have a conversation. And it's like, Hey, how's it going? And it was, it was so weird, man. I, I know how these things work, I understand, and I swear to you, within like four minutes I felt like I was just talking to somebody, like just having a conversation with somebody.

[00:23:25] Paul Roetzer: I got into like, Oh, I get these like almost migraine level headaches after basketball. And like, I don't know, it's maybe dehydration. And the thing was like recommending specific things I can do to avoid getting the headache that night. And I'm like, Oh, that's not a bad idea. And I actually did the thing it recommended.

[00:23:39] Paul Roetzer: I didn't get a headache that night. So I think that these are, things are going to get so good and you're going to start to truly interact with them and have conversations. And you're going to start to experience them through different devices. Your phone, for sure; glasses are going to get good. We'll be on Vision Pro 3 from Apple by then, probably a much better form factor.

[00:23:59] Paul Roetzer: [00:24:00] Your earbuds, I think AirPods are the, uh, they're overlooked in terms of what they could do here, but I think you're going to see the AirPods play a much greater role in people's interactions with AI, watches, bracelets, rings, all of these things. So again, 2025 to '26, multimodal AI explosion. Basically, these things become infinitely more valuable in a business environment and truly start to, like, change the way we do work.

[00:24:28] Paul Roetzer: 2025 to '27, I don't know when this really happens, but this is the AI agents explosion. So, we'll see and talk a lot about these AI agents that can take actions in 2024. But I think this year is still just basically headlines, experimentations, and demonstrations. Like this is what's going to be possible. I think you'll start to see interesting things, but don't feel at any point this year, like you just, you missed it and you're now falling behind.

[00:24:56] Paul Roetzer: My best assessment, and again, this could be wrong. And, you know, [00:25:00] maybe we'll hear from people working on AI agents who listen to this. I would say AI agents that can take actions on your behalf are roughly where we were with GPT 1 and GPT 2. Like we're not at GPT 3 level with this stuff. There's still lots of manual work that has to go in to get them functioning properly.

[00:25:16] Paul Roetzer: You have to train them. You have to have oversight to make sure they don't screw up. But to get to the point where they can now take actions in a reliable way, potentially with no or very little human oversight. So we're talking, like in our world, level four autonomy, where it's just like, you just put the goal in and it just does its thing.

[00:25:34] Paul Roetzer: So I think we'll see some early instances of that autonomy, but it won't be widespread. Disruption to knowledge work will start to become more tangible in 2025, 2026 as a result of these. You're going to have fine tuned and trained ones where people are actually teaching it how to send emails for them or do the different things.

[00:25:54] Paul Roetzer: And then you're probably going to have these generally capable ones that don't even require fine tuning or training. They [00:26:00] just watch what you do, learn from it and go. I think the primary way most people interact with these things is going to be through voice, like Siri. Like I think this is the play for Apple, is that Siri is this true AI agent connected to everything, because you already trust Apple to connect everything.

[00:26:17] Paul Roetzer: And then potentially through like Microsoft and Google. So that's AI agents and then two more, the robotics explosion, 2026 to 2030. This one's really hard to project, but this year we're starting to already see massive advancements from OpenAI. Like they're getting in the game in partnership with Figure.

[00:26:38] Paul Roetzer: Figure just raised a bunch of money. They're making breakthroughs in the actual, like, mechanics of the robots, but the true breakthrough is putting the multimodal models in, into them, so that the robots embody intelligence. And so the advancement with these language models and these AI agents is going to enable a rapid takeoff of [00:27:00] robotics.

[00:27:00] Paul Roetzer: And so Amazon's working on it, Optimus at Tesla, they're all working on this. And this, I think, is in that 2026 to 2030 range. Again, it's a wide range, but we just don't know.

[00:27:11] Paul Roetzer: I think that's when it starts to become more clear the impact AI is going to have on blue collar jobs. We're going to focus a lot on knowledge work for the next couple years,

[00:27:19] Paul Roetzer: but as soon as they, they get robots to the ChatGPT moment and the takeoff moment, now we're talking about all labor basically is affected.

[00:27:28] Paul Roetzer: then the final one to, to touch on is this idea of AGI, like artificial general intelligence. Again, there's a lack of agreement about what exactly it is. You and I, Mike, I forget the episode, we'll pull it and put it in the show notes, we talk about Shane Legg and DeepMind's research and like these levels of AGI. It's not going to be, again, like flipping a switch.

[00:27:49] Paul Roetzer: It's going to probably be a slow takeoff, more equated to like how autonomous driving is happening, where just like, you see a Waymo when you're in California and you don't really think anything of it. [00:28:00] It's probably going to kind of be like that. But once we hit AGI, everything is reset. Like all this stuff I've said up to this point just changes.

[00:28:10] Paul Roetzer: new science becomes possible. So discovery into things that didn't exist or that wasn't in the training data. Again, a cure for cancer could come from existing data. And just being able to crunch it at a far greater rate than any, you know, group of humans could. New science, like, I don't know, solving nuclear fusion, or interplanetary travel, or like, new mathematical formulas, like things like that, like Einstein level stuff, that's probably not possible until we get to AGI.

[00:28:47] Paul Roetzer: Now, once we get here, we are now looking at the reality of wide scale workforce disruption. We're looking at a need to rethink education. What, what do humans do? You know, we, if we get [00:29:00] to 2028, 2030, which is generally when Demis, Shane Legg, Altman, not Yann LeCun, but everybody else seems to think we're going get there by this time period.

[00:29:11] Paul Roetzer: You're now rethinking, what do we do? You're rethinking, what is our purpose? Like if AI can do these things, like why do we need to even still do them and what do we do? You're completely resetting businesses. You're looking at billion dollar companies with one to ten people becoming common. There was a quote, Mike, I went back to, that you'll, I know you'll remember, but in episode 57, we talked about an article in The Atlantic where it said like, does Sam Altman know what he's saying?

[00:29:41] Paul Roetzer: Creating, and I'm just going to read this quote because I remember at the time, just like almost having the same reaction I had to the Altman quote last week when I, when I read this. So this is from the Atlantic, we'll link to the article. says, the way I think about the, this is from Ilya Sutskever, by the way, [00:30:00] former chief, well, maybe still current chief scientist at OpenAI.

[00:30:02] Paul Roetzer: We don't know. Ilya is MIA. The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing, Sutskever told me. Suppose OpenAI braids a few strands of research together and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands.

[00:30:34] Paul Roetzer: Quote, we're not talking about GPT-4. We're talking about an autonomous corporation, Sutskever said. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. This is incredible, tremendous, unbelievably disruptive power.

[00:30:58] Paul Roetzer: So again, as [00:31:00] crazy as that Altman quote was about 95 percent of work being done by the AI. Think about this, like this idea of like

[00:31:07] Paul Roetzer: dozens, hundreds, thousands of AI agents being, like, functioning like bees in a hive, working toward a single goal. Like,

[00:31:14] Paul Roetzer: That, I mean, it almost makes like the hair stand up on my arms.

[00:31:17] Paul Roetzer: Like when you, when you think about a future that maybe is like three to five years away, where that becomes a possibility, like that's crazy. And if that is even within reach, even becoming, like, possibly tangible, you have to completely rethink how we measure economic health and growth as a society, not just in the United States, like everywhere. You have to rethink government, like what if AI is more capable of governing than humans, which I would argue may be true already, but we can come back to that one.

[00:31:51] Paul Roetzer: so even if AGI doesn't emerge, and we, or we can't agree on if it has emerged, but we're talking about, by 2029, [00:32:00] 2030, being at GPT 9 level, we're talking about multimodal models that are likely 1,000 to 1 million times more capable and powerful

[00:32:12] Paul Roetzer: than the models we have today. And that is really, really hard to process.

[00:32:18] Paul Roetzer: So this is kind of where I landed at, like, what seems like, based on all, everything we read in here, this isn't Mike and I just making stuff up and like picking some stuff out of the air to, say, you know, do futurist talk on. This is the reality of everything we're seeing from research, from what these people are saying, what they appear to be very confident about when they're saying it.

[00:32:38] Paul Roetzer: And these timelines may be off, but this is the direction of the research. This is where it's all appearing to be going. So a couple other quick notes, Mike, and then I'll, I'll turn it over to you and see what you have to say. But, so what's needed for AGI? Like if we want to get to this beehive concept of corporations that are self autonomous, [00:33:00] we need energy breakthroughs.

[00:33:01] Paul Roetzer: Now, just today, Extropic, we talked about Extropic's founder is the guy who created the e/acc movement. They just introduced energy based models, these thermodynamic powered chips that I can't even explain right now. My head can't even get into this stuff right now, but nuclear fusion, energy based models, like those kinds of things may be needed.

[00:33:24] Paul Roetzer: A ton more data centers. So like, there's no doubt we need to build a bunch of data centers.

[00:33:30] Paul Roetzer: We need less reliance on Taiwan to produce all of the chips. And then if you, if you actually drill into how chips are made, it's not just the fabrication plants in Taiwan. There's an entire very fragile supply chain.

[00:33:46] Paul Roetzer: That basically powers everything we're doing. And if any disruption happens to that supply chain, we've got real big problems. So you need to further diversify the supply chain. [00:34:00] And then we need compute efficiency breakthroughs. There was a recent paper from Microsoft that we haven't talked about on the show yet, where they actually made a pretty big improvement in terms of the size of the models that they could use and how they could train these things.

[00:34:13] Paul Roetzer: we need smaller models and we need the ability to learn the way humans learn. Like our brains are insanely efficient and these models are not, and they need to figure that out. Okay, what could halt the progress? Like, what could keep this projection I've laid out from happening? A few things came to mind.

[00:34:31] Paul Roetzer: I try not to dwell on these ones because these ones make me lose sleep at night. The first, I don't lose sleep at night about, but it's a possibility.

[00:34:38] Paul Roetzer: Just a lack of value is created. All these enterprises spend all this money on AI, and a year from now they're like, it just doesn't work. Like, we're moving on to whatever's next.

[00:34:47] Paul Roetzer: Very, very unlikely that that happens. I would, I would bet my own personal well being and wealth on that. Like, I just don't see that as a possible outcome. Scaling laws hit an unexpected wall. [00:35:00] Scaling law, the scaling hypothesis, is basically that if we just keep throwing more computing power and more data at these things, they just keep improving.

[00:35:09] Paul Roetzer: And in, in, in massive ways. There was a great quote, I think Ilya Sutskever once said to Andrej Karpathy when Karpathy first started at OpenAI: these things just want to learn. Like you keep giving them data, keep giving them computing power, and they just want to keep getting smarter. So, somehow, an unexpected breakthrough or breakdown in scaling laws.

[00:35:32] Paul Roetzer: Demis Hassabis very confidently states he does not see that happening. Neither does Sam Altman. A breakdown in the AI compute supply chain, which I just mentioned, which, again, I don't want to get into specifics here, but could happen through natural or human forces. Bad actors, basically, could make it happen.

[00:35:48] Paul Roetzer: That could possibly do this. A societal revolt against AI. This, you know, if I was putting probabilities on things, I would, I would put this one a little higher than the other ones I've mentioned. [00:36:00] It's possible that at some point, maybe it's due to job loss. Maybe it's due to,

[00:36:04] Paul Roetzer: I don't know, perceptions of AI, but there's a chance you run into it where we just get a revolt, and people start not liking AI.

[00:36:15] Paul Roetzer: The next would be a catastrophic event that's blamed on AI, which could lead to the revolt. Again, I'm not going to get into the details of what those could be at the moment. The other, and the last one, would be laws and regulations slash politics. This one's probably the higher probability. I would say there's a chance, depending on what country you're in, they're going to put laws and regulations in place that dramatically limit your access to this and the benefits you gain from it.

[00:36:42] Paul Roetzer: But my bigger concern is it becomes political. One political party or another decides that there's votes to be gained through taking a very hard stance one way or the other. Like anything, it gets ruined once it becomes political. So those are kind of the things that could halt progress. And I'm going to end with [00:37:00] hope, giving some hope, I hope, of what do we do about this?

[00:37:04] Paul Roetzer: It's the thing I ask myself all the time. The tech companies are going to keep accelerating the tech until someone stops them. They're going to fight to out-accelerate each other. So we, meaning me and Mike, the Institute, we as a society, a group of people, a community listening to this podcast, we have to accelerate AI literacy.

[00:37:27] Paul Roetzer: It is the only way to be prepared for what is happening now and to see around the corner for your company and your career tomorrow, because we can't, Mike and I can't, think about every industry, every business, every government, every country and say, well, here's what's going to happen there. We need dramatic increases in AI literacy.

[00:37:48] Paul Roetzer: So people can be thinking about their own domains, their own industries, their own companies. So education and training, as we always say, is the absolute core. AI councils within your companies. This is [00:38:00] something we've seen a lot of movement on, really happy to hear from so many people who've built AI councils in their companies.

[00:38:05] Paul Roetzer: Something we started talking about in early 2023. Those councils need to be focused on piloting and scaling AI in the company in a responsible, human centered way, establishing policies and principles, governing those policies and principles, doing impact assessments. And then we have to be very adaptable because, as we just talked about, major changes are coming.

[00:38:26] Paul Roetzer: It's inevitable. And these councils need to help figure that out. The other thing that I think I've maybe alluded to on this, I don't have a great name for this yet, but I'm becoming more and more convinced it is critical, is, AI innovation slash frontier labs. I think either at a company level or at an industry level, maybe at a government level, you need teams of people who are preparing for and building for the frontier and by frontier, I mean like one to three years out.

[00:38:58] Paul Roetzer: Like basically turn of the decade, [00:39:00] maybe it's five years out. They need to be cross-disciplinary because you need people looking at it from different perspectives. You have to have people who are deep in the technology, who truly understand AI and can, can

[00:39:11] Paul Roetzer: cross over and consider the implications to their domains and their industries.

[00:39:16] Paul Roetzer: So you have to run models for your company and your industry. You have to think about the evolution of your customer behaviors based on assumptions of what's true. And so until this morning, I hadn't sat down and built this model, my own model of what I believed would be true over the next five to six years.

[00:39:33] Paul Roetzer: So I now have a working model of what I think that looks like. Now I have to go to work and figure out, well, what does this mean? What does this mean to our business? What does it mean to, you know, my career? What does it mean to Mike's career? What does it mean to all of us? So you have to do this at your company level, build these industry specific labs or, again, company specific labs.

[00:39:53] Paul Roetzer: That's, you know, the three things. So education and training, get the AI council in place, and think about an AI innovation [00:40:00] slash frontier lab in your company, your industry. And Mike, I'm going to stop talking. And that was a, that was a lot, but that was all from this morning, unedited, and, yeah, you're all the first to hear it.

[00:40:15] Mike Kaput: That's fantastic. I feel like there could be an entire, literally podcast, not just an episode, but an entire podcast about

[00:40:24] Paul Roetzer: I actually thought about that. I

[00:40:26] Paul Roetzer: was going to message you from the plane and say, dude, should we just do

[00:40:29] Paul Roetzer: The podcast episode on this today?

[00:40:31] Mike Kaput: Yeah, I do, in all seriousness, keep coming back to this idea of thinking like an almost like an AGI survival

[00:40:41] Mike Kaput: guide for your average knowledge worker, myself included. It's something I think about quite a bit as our work changes. So I'm really glad to hear you talking about, okay, what do we do about it? Because it's still very early days, but that's the trillion dollar question in my mind.

[00:40:59] Mike Kaput: It's [00:41:00] almost in some ways not easier at a company level to diagnose it, but you say, okay, at the very least, I know I need an AI council, education and training. Going and doing that is a different thing. But for an individual, sometimes it feels to me like you told me that five years from now, aliens are landing.

[00:41:20] Mike Kaput: And that will happen. And that's all I know. How do I even begin to prepare for that? I know it'll change everything. But I have no idea what to do about it.

[00:41:30] Paul Roetzer: Yeah. It's, I mean, as weird as it is, that's probably a decent analogy. It's where you're just like, something

[00:41:36] Paul Roetzer: so life-changing that you know is going to just disrupt everything you think is true about the future. And you know it's coming. And like, you've got a few years to prepare

[00:41:47] Paul Roetzer: for it. And I think that's part of what I want people to take away from this is, you know, you read these quotes from Sam, we've revisited that quote from Ilya about the bees and the hive,

[00:41:56] Paul Roetzer: and What I think I'm trying to say is, I don't think it's [00:42:00] this year that everything noticeably changes. I think we see

[00:42:06] Paul Roetzer: some pretty amazing things this year. Some technology that is going to be beyond your wildest expectations, but not, like, massively disruptive to business, to the economy, to the workforce. But I do think it's coming. Like, I, I, I, I believe very deeply that in the next probably three to five years, we have to seriously reconsider everything we think is true about work, about education, um, about purpose.

[00:42:38] Paul Roetzer: Like, that's the one that I keep coming back to in my own mind is like, what, what is, like, what happens when we're suffering from a, a society wide, issue around people not feeling they have a purpose? Because so much of our purpose is tied to what we do for our careers. And I, I just don't think

[00:42:58] Paul Roetzer: That that is probably [00:43:00] going to be the case a generation from now.

[00:43:02] Paul Roetzer: Again, even my own kids, like, I just don't think when they're my age, it's going to look anything like it does in terms of their purpose. And if you go back in time, again, it's not like we haven't had these massive transformations just even in the last 30 years with the internet and mobile and social media.

[00:43:19] Paul Roetzer: None of them are On the level we're talking about, nor at the speed with which we're talking about. and I think that's the part that really strikes me. There was a, let me see if I can find it real quick. There was a quote in the Bill Gates interview. and we'll get, we'll put the link to the Bill Gates, Sam Altman interview.

[00:43:38] Paul Roetzer: This, this took place in, I think he did this in November before Sam got fired. If I remember correctly, he published it in January though. So he said, This is Bill Gates asking Sam Altman the question. He said, the thing that is a little daunting is unlike previous technology improvements, this one could improve very rapidly [00:44:00] and there's kind of no upward bound.

[00:44:02] Paul Roetzer: The idea that it achieves human levels on a lot of areas of work, even if it's not doing unique science, it can do support calls and sales calls. I guess you and I do have some concern, along with this, this good thing, that it'll force us to adapt faster than we've ever had to before. And Sam replied, That's the scary part.

[00:44:23] Paul Roetzer: It's not that we have to adapt. It's not that humanity is not super adaptable. We've been through these massive technological shifts and a massive percentage of the jobs that people do can change over a couple of generations. And over a couple of generations, we seem to absorb that just fine. We've seen that with the great technological revolutions of the past.

[00:44:44] Paul Roetzer: Each technological revolution has gotten faster and this will be the fastest by far. That's the part that I find potentially a little scary is the speed with which society is going to have to adapt and that the labor market will change. And he goes on to say, like, I don't know. [00:45:00] I'm just like, I'm very optimistic.

[00:45:01] Paul Roetzer: We'll figure it out is pretty much what Sam always says. There's no, here's how we're going to figure it out. It is just like, I'm confident it's going to work out. It's going to be okay. And that's it. Like you can't take much solace.

[00:45:15] Mike Kaput: Man, that is just a ton to unpack, but I feel like we've at least made a good start here, right? I mean, we've, the first thing that just kind of comes to my mind, especially after us publishing this quote, is really: look, you can get all up in arms and upset and offended, and I totally understand those responses to a quote that's saying 95 percent, for instance,

[00:45:38] Mike Kaput: of what we do is going to be done by machines. That may be overly simplistic, but I hope what you do take away from today's conversation is this isn't meant to be, I actually don't think on Altman's part, sensational. He doesn't need any more PR. He's doing fine.

[00:45:55] Paul Roetzer: Nope. Yeah, I, a hundred percent. I agree. He's not saying this to sell more [00:46:00] ChatGPT licenses. This is something he very deeply believes to be true.

[00:46:04] Mike Kaput: and for the reasons that you outlined with the rate of progress. So get away from the 95 percent number. If you believe the trend lines, to be somewhat even directionally correct, change is coming and we need to start taking it very, very seriously and not get hung up on unpacking, okay, well, he said this in that way.

[00:46:25] Mike Kaput: Honestly, I think just the directionality of the comments is far more important than what tasks are going away right this second.

[00:46:33] Paul Roetzer: Yeah, and I will do one final note, Mike, so I get this quote thrown back at me a lot. So, back in like 2012, 2013, 2022, 2023, I'm sorry, I had put it in a, like an intro to AI class, like AI won't replace marketers, but marketers who use AI will replace marketers who don't, which I've seen this quote used like a million times since then, sometimes because people agree with it, sometimes they don't.[00:47:00]

[00:47:00] Paul Roetzer: Historical context, again, when I used it. It's actually based on a 2018 research report from Stanford, where they were talking about the impact of AI on radiologists. And the quote was adapted to fit for marketing or really any knowledge work. So I'm going to like put this quote to bed for me for

[00:47:20] Paul Roetzer: moving forward. I think that that quote actually is still directionally true. So I've stopped saying it because I think it's, there is a timeline to it. There's a finite time that that remains true. So in 2024, I actually still think that is true. The people using AI are going to have an advantage, maybe 2025. They're going to have an advantage.

[00:47:40] Paul Roetzer: And if job loss happens, it's not going to be the people who are capable with AI. It's going to be people who aren't. So I think that that is directionally true near term. Once we get to 2026,

[00:47:51] Paul Roetzer: 2027, and some of those other things I've outlined start becoming true, all bets are off. Like we're just going to need fewer people doing the [00:48:00] work.

[00:48:00] Paul Roetzer: Now the question becomes, in that time period, in these next few years, can we create enough new jobs or do we become a society of entrepreneurship? And there's just, you know, we go from 25 million businesses in the United States to 150 million businesses. It's like, do we just create way more things because anyone can basically start and run a company with this AI technology?

[00:48:20] Paul Roetzer: I don't know. Like that's my hope though, is that we talk about this now and not in a sensational way. I, like, I believe deeply in everything I outlined for you. And this is not to get more people to listen to the podcast. That is not to sell more Writers Summit on demand. Like this is truly the stuff that I think about in relation to my own family and my own business.

[00:48:38] Paul Roetzer: And so I think this is true, and we have to start preparing now. If we do, we have the best chance of a positive outcome in the next three to five years. If we don't, and we just wait for the ChatGPT moment to happen again and again and again in robotics, in AGI, in business, then it's not going to end well.

[00:48:59] Mike Kaput: [00:49:00] Amen. All right. Unfortunately, there are...

[00:49:05] Paul Roetzer: Can I be done now? You're going to do the rest; the show is yours, man. I'm just going to...

[00:49:07] Mike Kaput: All right. I've got to go get a drink. I feel bad even moving on; there's a hundred other things I want to talk about related to this topic. But other things have happened this week in AI as well, so we're going to run through the rest of them pretty quickly here.

[00:49:25] Mike Kaput: First up, after our kind of main topic of the day, is that we've also gotten some indications from

[00:49:31] Deloitte predicts enterprise spending on generative AI will grow by 30%

[00:49:31] Mike Kaput: Deloitte that 2024 could also be the year that enterprises are kind of going all in on training generative AI models on their own data. Now, that's not a new concept or idea, but Deloitte predicts that, in 2024, enterprise spending on generative AI

[00:49:49] Mike Kaput: will grow by up to 30 percent, and that much of this will be driven by enterprises training on their own private data. So, Deloitte says, quote, more companies, [00:50:00] seeking to avoid the risk of models trained on public data, are expected to train generative AI on their own data to enhance productivity, optimize costs, and unlock complex insights.

[00:50:13] Mike Kaput: Basically, what they're getting at here is that, despite all the hype around AI, most enterprises have really barely begun to unlock the true power and potential of the technology, and historically have so far just been experimenting. They say that generative AI is a fast-moving and highly funded field that is just beginning to reveal its use cases, opportunities, and implications. So Paul, at a high level, I wanted to understand if Deloitte's assessment, that enterprises are going to get much, much deeper into training models on their own data, aligns with what you're seeing and hearing in your work with enterprise leaders?

[00:50:56] Paul Roetzer: Yeah. I think, I think 30 percent is a [00:51:00] shockingly low projection. I mean, if that's based on, they spent a hundred thousand dollars on generative AI last year, and now they're going to spend $130,000? That just seems insane to me. So I think the enterprises that actually know what they're doing and understand the critical nature of generative AI are going to invest way more than a 30 percent increase over a previous year's generative AI budget, if, again, that's the metric here.

[00:51:28] Paul Roetzer: But I think that everything else certainly holds true, and I think it aligns pretty closely with what we just outlined in the previous segment, that generative AI is definitely early. Like, I think the one reason the 30 percent ends up possibly being true is most enterprises we talk to still have no idea what to do.

[00:51:47] Paul Roetzer: So, like, if they had a strategy, I think their spending would be... like, any company that isn't increasing spending by 100 percent or more is nuts, because the value is there, like, again, [00:52:00] even with today's versions

[00:52:01] Paul Roetzer: of this, if you know what you're doing, if you have, you know, strategic pilot projects that are very closely tied to existing projects and tasks and campaigns, and if you, you know, look at the ways that you can solve problems within an organization more intelligently, from, you know, audience growth and reduction of churn to lead quality and price optimization.

[00:52:21] Paul Roetzer: Like, there are so many ways to unlock value with AI. The only major hindrance we keep seeing over and over again is the lack of understanding, education, and training needed to build intelligent strategies to do it. So, if we just leveled up AI literacy across enterprises, like, if tomorrow we could, you know, click our fingers and solve the AI literacy issue, this number's 300 percent.

[00:52:46] Paul Roetzer: Like, that, to me, is the only reason it's not more.

[00:52:50] Mike Kaput: As part of this, we're also seeing plenty of companies get involved in helping enterprises actually do [00:53:00] all of this. I mean, just this past week, for instance, Cohere, a major AI model company, announced a partnership with Accenture where they're bringing Cohere's models to enterprises.

[00:53:09] Mike Kaput: Salesforce has announced low-code options to help Salesforce administrators and developers customize its Einstein Copilot AI. I mean, it sounds like enterprises are going to need a lot of help to do this, to start training on their own data. Like, who should they be talking to? Who should they be working with or hiring or starting to discuss this with?

[00:53:33] Paul Roetzer: I mean, you and I have joked about this before, but, I mean, I used to own and run an agency. Mike worked with me at that agency for, like, 10 or 11 years. If we weren't doing this, I would just go build an agency that helped people do this, but it's just not the period of my life I'm in or plan to be in. But we need more consultants, more agencies capable of doing this stuff, because again, there's a lack of people internally.

[00:53:59] Paul Roetzer: And I know we [00:54:00] talk about this all the time on the show, but the ecosystem isn't there yet. It's just very immature. And I think that ecosystem of people capable of providing strategic guidance and implementation support needs to rapidly accelerate. And I would encourage consultancies out there, advisory firms out there, to really

[00:54:18] Paul Roetzer: think deeply about, you know, finding some of these language model companies you can partner with, that you can really scale up through their network. Like you and I did, Mike, with HubSpot all those years ago, back in 2007, when my agency became HubSpot's first partner. You get in with the right technology company, you can really grow quickly and create a lot of value for people.

[00:54:39] AI for Writers Summit Recap

[00:54:39] Mike Kaput: All right, so on another topic of interest this past week, Paul, we wrapped up our second annual AI for Writers Summit, which was a virtual event with more than 4,600 writers, editors, marketers, and business leaders from all 50 states and 93 [00:55:00] countries. Now, this event brought together some incredible content across a wide variety of topics that discussed how AI might impact writers.

[00:55:09] Mike Kaput: That included things like: you did a State of AI and Writing keynote; I did a rundown of top AI writing tools and platforms; we had a very highly rated, popular session with an IP attorney on generative AI's legal implications; we had a great panel on AI adoption in the enterprise; and we had a hands-on AI in Action session where me and Cathy McPhillips on our team showed off a real-world content workflow with AI.

[00:55:38] Mike Kaput: Now, it is not an exaggeration to say this turned out, like you mentioned during our ad at the beginning, to be a huge success beyond anything we imagined. We got tons of incredible, positive feedback from the audience and the community. So, I wanted to quickly run down the significance of [00:56:00] the event, Paul,

[00:56:01] Mike Kaput: both because the topic of AI's impact on writing is deeply important to a lot of different types of knowledge work, but also because the virtual event model that we went with, and transparently shared details around, I think has some lessons for businesses trying to run effective events, especially in the age of AI.

[00:56:22] Mike Kaput: So could you first kind of give us a rundown of the highlights of the event details and numbers? You had shared some stuff around this on LinkedIn this past week.

[00:56:32] Paul Roetzer: Yeah. I'm a big fan of trying to be as open as we can, certainly, you know, for the betterment of the industry whenever possible. And I felt like

[00:56:41] Paul Roetzer: I was shocked, honestly, by some of these numbers. And we've been doing virtual events for a long time. I mean, last year's Writers Summit had 4,200 people at it.

[00:56:49] Paul Roetzer: So it's not like this was just a surprise success, but even for me, some of these numbers are not where I would have guessed. So just a quick synopsis: again, it was a five-hour event. So [00:57:00] I think part of the takeaway is virtual events can work, can work really well, and can actually create engagement with communities in ways that I personally didn't

[00:57:10] Paul Roetzer: think was possible with a virtual event. Okay. So, a five-hour event from noon to five Eastern time. I think we said earlier, 93 countries were represented, which is still just crazy to me. 4,628 people registered, and 2,837 logged in at some point during the event. So 61 percent, which is the highest I've ever heard.

[00:57:33] Paul Roetzer: Like, we do events that usually will get in the mid-thirties, you know, if I remember correctly, like a lot of our webinars and Intro to AI and stuff like that, if we don't do paid. If we're running paid media, the number drops; people are less likely to come to a free event if they've registered through a paid ad. That number is, like, way lower.

[00:57:51] Paul Roetzer: But on average, we'll get, like, 32 to 35 percent of people who register attending a free event. There were 2,130 companies represented, which [00:58:00] again shows the breadth of the interest in the topic, if nothing else. Like, I know there were some major companies; there was one major company that had 40 people registered.

[00:58:08] Paul Roetzer: So this is a major topic for people. The one I loved personally: our Director of Partnerships, Tamra Moroski, is a certified yoga instructor, and so we chose to do a 15-minute chair yoga session in the middle of the event. And at the time, I was like, God, I wonder how many people will stick around for this.

[00:58:29] Paul Roetzer: And then hopefully more people come. Not that I didn't think it was going to be awesome, but it was an experiment for us, and I had no idea what to expect from it.

[00:58:39] Paul Roetzer: Our tagline is more intelligent, more human, and so this was a very intentional, more human element.

[00:58:45] Paul Roetzer: At its peak in that 15-minute period, we had 2,052 people.

[00:58:49] Paul Roetzer: This is, like, three hours into the event. There were over 2,000 people sitting in a chair yoga session, and people loved it. The amount of feedback we [00:59:00] got on that, the LinkedIn posts about it, the comments, it was crazy. So do this, do the experimental stuff. Like, take chances on your events.

[00:59:08] Paul Roetzer: Be willing to, like, take a little risk, and don't lose the human side of all this. It was probably one of my favorite parts of the whole thing: one, that we were willing to do this, and two, that 2,000 people stuck around for it. The attendee chat, which doesn't even thread, which I always found crazy, like, that it'd be this engaged, we had over 3,000 messages posted in the chat, 266 questions put into the Q&A box, and then, Mike, you mentioned the session ratings, with

[00:59:37] Paul Roetzer: 4.8 on a five-point scale as the average rating. So just overall, I mean, I think the reason we wanted to share this was transparency into the model. Maybe there's something you all can learn from it. It reinforces the fact that the interest in this topic is just sky high, with over 2,000 companies from 93 countries.

[00:59:57] Paul Roetzer: Like, this is the moment where we really need [01:00:00] to collectively kind of capture the interest and the curiosity in the topic to push for this idea of AI literacy. People are ready to learn this stuff. So yeah, just overall, again, shout out to our sponsor, Jasper, whose support made it possible for us to do a free option for this.

[01:00:18] Paul Roetzer: So a lot of those people took advantage of the free option. I think of the 4,600, only 50 chose the private registration option, which was for a fee. So, you know, the vast majority of people took advantage of that free registration option. And then Goldcast was the platform that we ran it through, just as kind of an FYI.

[01:00:37] Paul Roetzer: So thank you to everyone who attended and engaged and all the, like,

[01:00:41] Paul Roetzer: I couldn't tell you: that night and even throughout the weekend, I was just sitting there taking screenshots of all the incredible stuff people were posting unsolicited. We didn't ask anybody to post anything, and these were some of the most amazing LinkedIn posts I've ever seen about an event. So just personally, for me, [01:01:00] I appreciate everybody, you know, being a part of it and taking the time to share such wonderful feedback on their experiences.

[01:01:08] Mike Kaput: All right, let's dive into some final rapid fire here as we kind of reach the end of this episode, Paul.

[01:01:16] OpenAI releases emails from Elon Musk

[01:01:16] Mike Kaput: So first up, we have some new drama in the OpenAI and Elon Musk saga.

[01:01:22] Mike Kaput: OpenAI has released copies of emails from Elon Musk that were sent largely in the early days of the organization, in order to refute his claims in his recent lawsuit against the company.

[01:01:36] Mike Kaput: As a reminder, we covered this on a previous episode: Musk is suing OpenAI for breach of contract, claiming that the company has abandoned its founding agreement by becoming a closed, for-profit entity. Now OpenAI is telling its side of the story by releasing a series of emails from 2015 through 2018

[01:01:58] Mike Kaput: that took [01:02:00] place between Musk, Sam Altman, Greg Brockman, and Ilya Sutskever at OpenAI.

[01:02:06] Mike Kaput: The emails cover a bunch of different topics, but basically paint this picture that Musk, pretty early on in the company's existence, was pushing for OpenAI to raise money, much more money, as a for-profit entity in order to become a counterweight to Google, and agreed that the company should not go down this fully open, fully open source pathway. OpenAI, in the post that revealed these emails, said, quote, We're sad that it's come to this with someone whom we've deeply admired. Someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress toward OpenAI's mission without him. Paul, they really dropped the mic on this one, I think. What were your thoughts on the response here? It seems to [01:03:00] validate your breakdown on last week's episode of what was really going on here, what was behind all the hype.

[01:03:07] Paul Roetzer: Yeah, certainly. The part I mentioned last week about the New York Times article, where, you know, Elon tried to roll this thing into Tesla and become the CEO and he basically got kicked out, appears to be exactly what happened. I loved Elon's reply when they posted this: change your name.

[01:03:23] Paul Roetzer: That was his tweet. So, I don't know, man. I hope this just goes away and I never have to talk about it again. It's funny, but the emails are crazy if you go read them. It's a fascinating read, and I would suggest reading it, but I really hope this lawsuit gets kicked out of court.

[01:03:44] Musk announces Grok will be open sourced

[01:03:46] Mike Kaput: Well, until then, we also have more news about Elon Musk, because this morning, in fact, the day we're recording this podcast, March 11th, Musk appears to have fired one of the next salvos in this [01:04:00] battle when he posted on X that, quote, this week, xAI, which is his AI company, will open source Grok. Grok being the AI model that he has launched.

[01:04:11] Mike Kaput: Now, it's totally unclear what that means, but it does seem to indicate that the AI model may be open in some fashion to build on top of, similar to Meta's Llama family of open source models. So Paul, this is obviously a significant departure from Elon Musk's previous strategy as expressed in these emails.

[01:04:34] Mike Kaput: What's going on here, do you think?

[01:04:36] Paul Roetzer: Well, I mean, we've kind of assumed it was coming, so I'm not surprised at all. I think we wait to see what exactly he means by open source, like if they're going to truly open source it or if they're just going to open source parts of it. And then the bigger question to me is like, is it any good?

[01:04:51] Paul Roetzer: Because Grok is useless right now, in my opinion. Like, I still don't understand the use cases for it. It just takes up a [01:05:00] portion of your Twitter app when you log in, and I don't know what to do with it. So

[01:05:05] Paul Roetzer: I'm hoping that when they announce this... again, I expect Grok to become very useful and valuable, but it is not

[01:05:11] Paul Roetzer: at the moment. And when you read the Elon Musk biography that we've talked about a few times recently, you get it. He likes to release things that are not even close to half-baked into the world and then, like, worry about it later. And Grok is absolutely, in my opinion, an example of releasing something way before it had any real usable value. And I just don't know anybody who uses it.

[01:05:36] Paul Roetzer: Like, it's not even talked about on Twitter. And

[01:05:40] Paul Roetzer: the one example I thought was nuts: I tried one time to just give it a Twitter thread and ask it to, like, explain it. And it couldn't. Like, I don't know if it wasn't able to read the Twitter thread, but the one thing I thought for sure it would be able to do, it couldn't even do.

[01:05:55] Paul Roetzer: So hopefully there's, like, a better Grok coming [01:06:00] with the open source release. Otherwise I don't know why it matters that it's open source.

[01:06:04] Political deepfakes are on the rise

[01:06:04] Mike Kaput: So, we just saw a newly published study from a British nonprofit called the Center for Countering Digital Hate, CCDH for short, that says the volume of AI-generated disinformation, specifically deepfake images pertaining to elections, has been rising by an average of 130 percent per month

[01:06:27] Mike Kaput: on X, specifically, over the past year. The nonprofit's head of research actually told TechCrunch, quote, there's a very real risk that the U.S. presidential election and other large democratic exercises this year could be undermined by zero-cost, AI-generated misinformation. Now, what's notable here isn't that we're seeing data on the rise of deepfakes.

[01:06:52] Mike Kaput: I mean, for years we've had research that has shown an explosion in fake AI-generated content online. [01:07:00] But this is actually an early indicator that election-specific content is becoming a major problem. Now, as part of this reporting on the nonprofit's study, TechCrunch also cited 2023 University of Waterloo research that showed only 61 percent of people could tell the difference between AI-generated people and real ones.

[01:07:23] Mike Kaput: Which seems pretty high, given the lack of…

[01:07:38] Paul Roetzer: There's no way 61 percent of people could tell the difference. Like,

[01:07:43] Paul Roetzer: Then again, some of the stuff I see people share online, I guess I can't be surprised by it either.

[01:07:48] Paul Roetzer: Well, it does say it's from 2023. If you did that study again in 2024 and you used Midjourney 5 or Sora or whatever, there's no way humans could, at a 61 percent clip, actually tell the [01:08:00] difference between AI-generated people and real ones.

[01:08:03] Mike Kaput: Yeah, well, I think that's the point: it was only that high to begin with, and we're already at just 61 percent.

[01:08:13] Mike Kaput: And I think that number is going to be, yeah, much less kind to us. Yeah.

[01:08:20] Paul Roetzer: I mean, duh. Like, this is so obviously coming. I still don't know what to do about this other than tell your friends and family that AI can do this stuff. Show them, show them at a party coming up, like, just FYI, this is what an AI-generated thing looks like.

[01:08:39] Paul Roetzer: I don't know how else to do it but a grassroots effort to teach friends and family what AI can do.

[01:08:43] Paul Roetzer: This is where we are in the world. And if you listen to this podcast, you are most likely way more knowledgeable about this stuff, given the fact that you've taken the initiative to listen to an AI podcast, than most of your family, friends, and coworkers. So do your part and [01:09:00] show people AI is able to do these things.

[01:09:03] Inflection 2.5

[01:09:03] Mike Kaput: So in other news, Inflection, a major AI company started and run by DeepMind co-founder Mustafa Suleyman, just announced Inflection 2.5, an upgraded version of its flagship AI model that powers Pi, its personal AI assistant.

[01:09:22] Mike Kaput: Now, Inflection says that 2.5 is competitive with leading models like GPT-4 and Google's Gemini,

[01:09:29] Mike Kaput: and it also says it has achieved significant power with incredible efficiency, with 2.5 approaching GPT-4 performance while only using 40 percent of the amount of compute for training. So Paul, there are plenty of leading models to choose from these days.

[01:09:46] Mike Kaput: Why is Inflection worth paying attention to?

[01:09:49] Paul Roetzer: I mean, it wants to get to know you, it wants you to trust it, it wants to be your friend, advisor, consultant, therapist.

[01:09:57] Paul Roetzer: And like I said earlier, my experience with it was pretty [01:10:00] wild. It was a 10-minute conversation, and you almost forget you're talking to an AI. And I obviously was fully aware I was talking to an AI.

[01:10:11] Paul Roetzer: So I could really start to see how a lot of people are going to become very dependent on these things. I'm not commenting on whether that's a good or a bad thing for society, but when you look out to the projections I was making earlier, I would put this into that multimodal realm, 2025, 2026, a complete takeoff, where people are having relationships with AI, you know, kind of from a mental perspective, like you're relying on it.

[01:10:47] Paul Roetzer: I think these are going to be wonderful tools within long-term care facilities, nursing homes. There are so many applications. Some of it gets dystopian, [01:11:00] but I think it's inevitable that within the next two to three years, people are going to be having conversations daily with some form of AI, whether it's Inflection or another, and they're going to become very, very dependent on them.

[01:11:14] Paul Roetzer: I mean, you can see it. Spend 20 minutes with Inflection and have a real conversation with it, like, as an experiment; you'll see what I'm talking about. They're pretty advanced, and it's only going to get better. They trained this thing on, like, half the compute of, I think, GPT-4, you mentioned it.

[01:11:35] Paul Roetzer: So, yeah, imagine a much bigger compute and training run, and it's going to get weird.

[01:11:44] Google and Microsoft security incidents and what they mean for AI

[01:11:44] Mike Kaput: I feel like that's the theme of this episode. All right, just a couple final stories here as we wrap up. In the U.S., we just saw a federal grand jury indict a Google engineer for allegedly stealing trade secrets [01:12:00] related to AI.

[01:12:00] Mike Kaput: According to The Verge, Deputy Attorney General Lisa Monaco said in a statement that this engineer stole from Google over 500 confidential files containing AI trade secrets while covertly working for China-based companies seeking an edge in the AI technology race. At the same time, Microsoft has also now disclosed it has had some unspecified source code stolen as part of an attack by Russian hackers.

[01:12:29] Mike Kaput: So, Paul, I wanted to ask you what caught your attention about these attacks in particular, and what do they really mean for AI and some trends we're seeing in this space?

[01:12:40] Paul Roetzer: Just be aware of it. Again, like I've said before, the cybersecurity side of this stuff terrifies me. Like, I've

[01:12:49] Paul Roetzer: gone down that path a few times of like really learning it. We used to represent some cybersecurity companies at my agency, and I would generally just tell the team, I don't want to know what you guys work on.

[01:12:59] Paul Roetzer: Like, I don't... I [01:13:00] have enough trouble processing all the AI stuff. I don't want to also have to think about the cybersecurity stuff. I just think it's the reality. You know, there was a Dario Amodei interview last year I listened to where he was like, anybody can get anything if they're willing to spend enough money on it.

[01:13:17] Paul Roetzer: I mean, this was infiltrate-Google kind of stuff. They got somebody on the inside. But to get into these systems and steal the weights and all that, there's just an assumption that it's all being stolen, that these foreign governments are getting access to whatever they want to get access to, just like the United States gets access to whatever it wants to get access to.

[01:13:36] Paul Roetzer: It's just espionage. It's stealing, you know, trade secrets. It's been going on forever, and it's going to continue to go on with AI. The only thing I can say on this on a personal side, about what to do about it, is you gotta assume whatever you do online is going to get deleted, taken, seen, whatever. And this is true with any apps you use.

[01:13:54] Paul Roetzer: It's true with your interactions with AI, especially as you become more reliant on it and talk to it and have these [01:14:00] conversations. It's a sad thing to say, but you just got to kind of assume that anything can be hacked. And, you know, it's just, again, like, having to talk about these topics...

[01:14:15] Paul Roetzer: but yeah, it's a reality and it's something to keep an eye on, I suppose.

[01:14:20] Mike Kaput: All right, Paul. So we are going to actually cover this last topic on a future episode, but we did want to mention it as it has been making the

[01:14:28] Mike Kaput: rounds that Wired has gotten an early preview of ChatGPT's new memory feature.

[01:14:33] Mike Kaput: That feature is not yet rolled out to a lot of users, so we're going to keep an eye on that. We'll talk about kind of what they're seeing and what we're seeing once that feature comes out or as we get a little closer to its release. Paul, thank you again for breaking down everything in AI. I feel like this episode probably bent my brain into some new shapes for better or for worse.

[01:14:56] Mike Kaput: I've got a lot to think about. I don't know about you.

[01:14:58] Paul Roetzer: Yeah, I mean, it's only [01:15:00] 2:30 on a Monday, and I feel like I'm fried after this morning thinking about it and talking about it. So, I don't know. Hopefully it was helpful to people, though. That's the whole point of the show: sometimes it's raw, real-time, thinking-out-loud stuff that Mike and I used to stand in front of the coffee machine and talk about at the office.

[01:15:18] Paul Roetzer: But I get that this one in particular, this topic, the quote from Altman sort of hit a nerve with some people. and so hopefully

[01:15:28] Paul Roetzer: what we shared earlier gives you a little bit of context and maybe a little bit of direction on what you can do about it. Because at the end of the day, that's what I always try to find solace in, and what I teach my kids: just, what's the next step?

[01:15:39] Paul Roetzer: Like, don't worry about a year from now, three years from now, five years from now. You've got to just do the next right thing. And I think with AI, there's going to be a lot of that, where you just got to figure out the next step in the process, because it's going to be a long road and it's going to get weird.

[01:15:53] Paul Roetzer: And there aren't always going to be direct answers of what we're supposed to do. And we're not going to be able to turn to these AI [01:16:00] researchers and have them answer it for us. So it really comes down to just figuring stuff out and being willing to think it through. So thanks for listening. I guess it's like my own personal therapy. All right, everyone. Thanks again. We will be back with you next week.

[01:16:18] Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:16:41] Until next time, stay curious and explore AI.
