Marketing AI Institute | Blog

[The AI Show Episode 153]: OpenAI Releases o3-Pro, Disney Sues Midjourney, Altman: “Gentle Singularity” Is Here, AI and Jobs & News Sites Getting Crushed by AI Search

Written by Claire Prudhomme | Jun 17, 2025 12:15:00 PM

​​​​o3 Pro is here. Sam Altman thinks the singularity might be too.

This week, Paul and Mike dive into OpenAI’s o3-Pro reasoning model and what makes it fundamentally different. They explore Sam Altman’s bold claim that the singularity has begun, Meta’s superintelligence ambitions, and Disney’s high-stakes lawsuit against Midjourney.

They also break down search traffic freefalls, mechanized job automation, and whether GPTs or projects are better for scaling AI workflows, among other topics, in our rapid-fire section.

Listen or watch below—and see below for show notes and the transcript.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:04:54 — o3 Pro

00:18:33 — Disney Sues Midjourney

00:28:53 — The Singularity Is Nearer

00:50:14 — AI and Jobs: Saying the Quiet Part Out Loud

00:56:27 — OpenAI and Google Deal

00:58:46 — AI and Google Search

01:02:38 — Ohio State’s New AI Fluency Initiative

01:06:08 — xAI Data Center Environmental Scandal

01:10:58 — Kalshi’s AI-Generated NBA Finals Ad

01:15:18 — What Happens When AI Goes Down?

01:19:19 — Meta Crackdown on “Nudify” Apps

01:21:59 — Updates to GPTs, Using Projects vs. GPTs

Summary:

o3-Pro

OpenAI has launched o3-Pro, a new AI reasoning model that marks a significant leap in capability.

o3-Pro builds on the earlier o3 model, which was designed not just to chat but to think. These models don’t just generate answers. They solve problems step by step. That makes them 

especially strong in domains like coding, science, and math.

What sets o3-Pro apart is its depth. It's slower, pricier, and much more compute-heavy. But it’s also more precise. In benchmark tests, it outperformed top rivals like Claude 4 Opus and Gemini 2.5 Pro, especially in high-level science and math reasoning.

Early users say o3-Pro isn’t just better. It’s fundamentally different. It needs rich context to shine, but when fed the right inputs, it doesn’t just help you think. It helps you plan, prioritize, and execute with uncanny clarity. Think less chat assistant, more strategic co-pilot.

It also shows big gains in tool use, or knowing not just how to use external tools, but when to call on them.

The trade-off? It’s not for quick questions. This is an AI designed for deep work, and it seems like it demands thoughtful prompting to unlock its full potential.

Disney Sues Midjourney

Disney and NBCUniversal have filed a joint lawsuit against AI image generation company Midjourney, accusing the company of mass copyright infringement.

It’s the first time Hollywood’s biggest studios have taken direct legal action against a generative AI company. The studios claim Midjourney used their characters—like Elsa, Darth Vader, and the Minions—to train its model and create lookalike images, all without permission.

The lawsuit includes striking examples of generated content nearly identical to iconic movie scenes. Disney and NBCU say they reached out to Midjourney to resolve the issue privately, but the company allegedly ignored them and continued to release even more “infringing” versions of its tool.

The complaint calls Midjourney a “bottomless pit of plagiarism” and says its actions threaten the foundations of U.S. copyright law. 

The fact that the famously litigious Disney is involved is significant. As one expert put it to the publication New Scientist: “It’s Disney, so Midjourney are f****d, pardon my French.”

The Singularity Is Nearer

We got a couple indications this past week that AI insiders aren’t just building artificial general intelligence, but possibly artificial superintelligence.

First, Sam Altman published an essay titled The Gentle Singularity, where he argues the singularity—a hypothetical point where artificial intelligence surpasses human intelligence—has quietly begun. 

In the essay, he argues that humanity has crossed the event horizon toward digital superintelligence, and that it’s a bit quieter than anyone expected. 

He outlines a near future where scientific breakthroughs arrive faster than we can imagine—thanks to AI that not only assists, but helps design the next generation of AI. By 2027, he predicts robots will be handling real-world tasks. And by 2030, productivity could be an order of magnitude higher than it was in 2020.

Altman calls this a “gentle” singularity because each wonder quickly becomes mundane. We get used to all the progress, and it becomes normal.

Second, at the same time, Meta is making a bold new bet on superintelligence.

Mark Zuckerberg has launched a secretive new AI division aimed squarely at building superintelligence. To kickstart it, he’s personally recruiting dozens of top AI researchers from rivals like Google and OpenAI—and placing Alexandr Wang, founder of Scale AI, at the helm.

He’s able to do that because Meta is acquiring a 49% stake in Scale, which is best known for labeling the data that trains AI systems. (The deal values Scale AI at $28 billion.)

Meta is hoping that Wang’s team and infrastructure can help fix what Zuckerberg sees as a performance lag in Meta’s Llama models.

The lab’s mandate? Beat the competition to AGI—and embed it across Meta’s ecosystem, from chatbots to smart glasses.

This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.This episode is also brought to you by our upcoming AI Literacy webinars.

As part of the AI Literacy Project, we’re offering free resources and learning experiences to help you stay ahead. We’ve got two live sessions coming up in June—check them out here.

Read the Transcription

[00:00:00] Paul Roetzer: So as a company, we've had this conversation internally about like organic search, and I actually said to Mike, I don't even care about organic search. Like I honestly don't even know what ours is anymore. The organic traffic we get from Google, it was a KPI we used to look at very closely, but like I just assume it's going to zero.

[00:00:17] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and marketing AI Institute Chief Content officer Mike Kaput.

[00:00:38] As we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.

[00:00:54] Welcome to episode 153 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my [00:01:00] co-host Mike Kaput. We are recording this on, oh boy, Mike. It is Friday the 13th. I just realized it was the 13th. 

[00:01:07] okay. So it's 2:30 PM Eastern time on Friday, the 13th of June. this will drop on Tuesday as usual, but I am traveling, so we needed to get this in on a Friday.

[00:01:20] I'm not exaggerating. There was what, like 18 topics that ended up being newsletter only this week, if I'm not mistaken. 

[00:01:29] Mike Kaput: Yeah, yeah. It was quite a few. 

[00:01:31] Paul Roetzer: It was, it was a crazy week. And the thing that was so fascinating, I told Mike this, like was, we were going through, figure out what was gonna make the cut to talk about, and again, we're doing this like a day earlier than we normally do.

[00:01:43] So there's probably some things even from today that could have cut. It even made the cut. every, every single thing we cut were things I wanted to talk about. Like, these weren't just like quick little updates. There was just a, I feel like each week there's more and more like [00:02:00] significant news across different elements of AI where we could just be talking about this stuff nonstop.

[00:02:08] Yeah, for sure. If Mike and I had nothing else to do, I think we honestly are at the point where we could just do this as a daily podcast and not run out of things to do. We just do like three to five things every day. but we have other things to do, don't we? Plenty else going on. All right. Well, so we do have quite a bit to cover, so we'll get into it in a moment.

[00:02:30] this episode is brought to us, Scott by MAICON 2025. Again, this is our marketing AI conference that we started in 2019 through our Marketing AI Institute brand. This is the sixth Annual Marketing AI conference. It is happening October 14th to the 16th in Cleveland. The 14th is optional workshop days.

[00:02:50] We've got four workshops planned there. You can go read about those on the website. And then the 15th and 16th are the full event days. There's general sessions, dozens of [00:03:00] breakout sessions, amazing exhibit hall. So definitely check that out. We are, get the, again, at the Cleveland Convention Center right across from the Rock and Roll Hall of Fame, in front of Lake Erie.

[00:03:10] if you haven't been to Cleveland, you'll love it. It is our hometown. So I'm, a little bit biased, but everyone who comes in has an amazing experience, so check that out. It is MAICON.AI. You can check out the agenda so far. There's still much more to be added and the speaker list so far as we continue to add speakers, throughout the summer as well.

[00:03:32] So again, that is MAICON.AI, and this episode is also brought to us by our two upcoming webinars that we talked about last week as well. As part of our AI literacy project, we offer a collection of free resources and learning experiences. We have one coming up on June 19th. That is our five essential steps to scaling AI class.

[00:03:52] I teach that every month. So this is a free class that teaches five fundamental steps for any organization of any size to [00:04:00] scale AI in a responsible way. it's about a 30, 35 minute presentation with 25 minutes of Ask Me anything. So that has happening on June 19th. You can register, the link will be in the show notes.

[00:04:11] You can also go to smart rx.ai and click on the link there. and then we also have June 25th, we have the AI Deep Dive, Google Gemini Deep Research for Beginners. So in that one, I'm gonna walk through a project that I did on, episode 1 49, which I'm actually gonna talk about again in a, a couple minutes here.

[00:04:31] but again, that is June 25th. That is also a free webinar through SmarterX.AI. I think it's under the education link, but we will again include that link in the show notes. okay. We, we had some lawsuits, we have some new models. We've got Sam Altman promising the singularity is near, and that is just the main topics, Mike, but let's start there.

[00:04:54] o3-Pro

[00:04:54] Mike Kaput: Yeah. You know, little Steph here. Alright. So first step, Paul [00:05:00] OpenAI has launched oh three Pro. This is their new AI reasoning model. It builds on the earlier oh three model. And these models, the reasoners, are not just designed to generate answers in chat, but to actually think they solve problems step by step.

[00:05:17] They're especially strong in domains like coding, science, and math. And o3 Pro is slower pricier and much more compute heavy than its pre predecessors. It's also more precise in benchmark tests. It has outperformed rivals like Claude Four Opus and Gemini 2.5 Pro, especially in very high level science and math reasoning, and some early users say o3 Pro isn't just better.

[00:05:44] It is fundamentally different in how it operates. It needs a lot of rich context, it seems, from some early experiments to actually shine. But when it's fed the right inputs, it can really help you plan, prioritize, and execute at an extremely [00:06:00] high level of sophistication. It also shows some big gains in tool use.

[00:06:05] It knows not just how to use external tools, but also it's gotten a lot better on when to call on them. Now, this is not really designed as of right now for quick questions. It's designed for deep work. It takes a long time and demands thoughtful prompting to unlock its full potential. It thinks for a long time you have to put more thought into the prompts, so definitely.

[00:06:27] kind of a sometimes model depending on how quickly you're trying to get your work done. Now, Paul, first up, what are your initial impressions ofo3pro? I know you've been using the normalo3quite a bit lately. 

[00:06:43] Paul Roetzer: Yeah, it's slow. It's slow. So I mean, it's, the main difference is it just takes longer to think, which then enables all these benefits.

[00:06:53] But, yeah, I mean, you and I think both useo3all the time. you know, certainly when I'm using deep [00:07:00] research, which I use all the time, both within Google, Gemini, and OpenAI GPT deep Research product. soo3is like fundamental for, for that. honestly, Mike, like when I looked at the show notes for today, I was like, wait a second.

[00:07:16]o3PRO is just this week. Like I was like, really? I don't know if I just had a crazy week or what, what happened, but I. It just felt like it had already been like a month since this, since this happened. So I had to dive in because I was having some personal confusion. I was working on updating a couple of custom gpt, which we'll kind of talk about at the end today.

[00:07:39] But as of June 12th, which is yesterday when we're recording this, you can now choose custom models in your custom GPT. So prior to now, 4.0 was like the standard model that worked with all the gpt and we used, you know, dozens of gpt all, you know, within our company all the time. And [00:08:00] so I was having to rebuild problems.

[00:08:02] GPT, which is one I introduced last year for a workshop I'm gonna be running. and so I started going, I was like, oh my God. Like now you can pick all these models for the GPT and as the creator of it, you get to set like a recommended model. Your users can actually choose whichever model they want in the GPT, which then maybe is like, well, is it even gonna work if they choose O three?

[00:08:25] Like I, and I still actually don't know the answer to this because I've been in testing all morning. so I found a webpage that we will put in the show notes that is, from OpenAI. It's a models page, which is actually really helpful because you can compare the models. It's like when you're shopping for an iPhone and you want to compare like the 16 pro to the 16 to the 14th, whatever.

[00:08:47] This lets you do side by side comparisons. And they also kind of break down their different models. So there, there's, I don't know, it's like eight to 10 categories, but the main ones are their reasoning models, their [00:09:00] flagship chat models, and this is their definitions, their cost optimized models and their image generation models.

[00:09:06] So follow along at home or in your car or on your walk or wherever you're doing right now. Follow along with me for a minute of like, what. These each are, because in our chat chippy team license right now, which is what we use at SmarterX, there are eight models to pick from when I want to go in and do a chat.

[00:09:28] So let's start real quick and we'll put in context whato3PRO is by doing this. So reasoning models are there o series models. Now this is from OpenAI O series models that excel at complex multi-step tasks. now the way I often teach this is why reasoning is relevant. It involves the ability to think logically, analyze situations, evaluate evidence, and solve problems.

[00:09:54] In simple terms, it makes the models more intelligent, generally capable and human-like. So for [00:10:00] perspective, the first reasoning model we all got access to was oh one in September, 2024. That was anybody who was at MAICON with us that year. I guess that was just eight months ago. They introduced that about an hour and a half before Mike and I were going on stage for the closing keynote.

[00:10:18] So oh oh one came in September, 2020 4 0 3. They skipped o2 for IP purposes. They, somebody else owned o2 had a trademark on it at o3. Came out April, 2025. So this is all sounding kind of like recent. It is like Mike and I talked extensively about it then. So these reasoning models enable multi-step problem solving.

[00:10:40] They enable us to see the chain of thought that the model's going through. That's kind of the magic now of watching these models think and seeing how they think. In theory, it reduces hallucination and errors, gives greater contextual understanding. They can perform higher level cognitive tasks and then their ability to draw conclusions and make decisions.[00:11:00] 

[00:11:00] So OpenAI explains o3. So again, if you go into ChatGPT today and you click your dropped on what model you want to pick o3, it defines as uses advanced reasoning. In their model page, it says it's a well-rounded and powerful model across domains. Really good at math, science, coding, visual reasoning also excels at technical writing and instruction following.

[00:11:25] they say to use it for multi-step problems that involve analysis across text code and images. From a specification standpoint, it has 200,000 context window tokens of context, a hundred thousand output, its knowledge cutoff as May, 2024. so then, okay, so that's oh three. So that's the, up until now, that was their best reasoning model.

[00:11:47] So now we have oh three Pro, which just came out, and they say best at reasoning. That is the description in ChatGPT. So it's basicallyo3except it's bigger and better. And it [00:12:00] like spends more time in compute to do things. Yeah. So they say, they use reinforcement learning to think before they answer the model does.

[00:12:08] And perform complex reasoning uses more compute to think harder and provide consistently better answers. it has access to tools that make ChatGPT useful, like the web. It can analyze files, it can reason about visual inputs, write code, understand code, personalized responses, using memory and more.

[00:12:26] They recommend it for challenging questions. Where reliability matters more than speed and waiting a few minutes is worth the trade off. they say in expert evaluations, reviewers consistently prefer oh three over oh one in every tested category, and especially in domains like science, education, programming, business, and writing help.

[00:12:46] Reviewers also rated o3 pro consistently higher for clarity. Comprehensiveness instruction following and accuracy image generation is not supported witho3Pro, right now. So [00:13:00] it is, slow. It is expensive as you, if you're using the API to build stuff. It is available for pro and team right now, and it said Enterprise and EDU is coming.

[00:13:10] Well, it'll be this week when you're listening to this. I think the week of whatever that is, the 16th or something, June 16th. And then beyond that you have the traditional chat models like, 4o, which they say is great for most tasks. 4.1, which is great for quick coding and analysis. I honestly have no idea when to use 4.1.

[00:13:31] And then 4.5 preview is good for writing and exploring ideas. And Mike, I have lost complete track of time. 4.5 preview just came out like. Wasn't that in the last like four weeks? Like didn't, isn't that a newer one that we just talked about too? 

[00:13:48] Mike Kaput: Feel like we need It's like dog ears, right? Crazy If we need like AI time that's like, I think it was in the last four weeks, but it feels like it was like a half a year ago.

[00:13:57] Paul Roetzer: Yeah. And then, and then just to confuse the average [00:14:00] user even more in your chat, chippy t dropdown you can also choose from. And if you're building custom gpt, you can choose this o4 mini, which is their fastest and at advanced reasoning and o4 mini high, which is great at coding and visual reasoning.

[00:14:20] all this to say, Mike, o3 is great. I use it all the time. Yeah, it sounds like our takeaway here is o3 PRO is probably better. It just takes a lot longer. and I don't know if there's limitations in o3 PRO right now, like if I can only use it like, I think o3 is like. What is it, like a hundred or 200 uses a week or something like that.

[00:14:44] Like there was some weird limitation that they keep raising every three weeks, so you don't really know. Anyway, so as of right now, my current personal way I approach this is I use 4o for most things. It's great. when I'm [00:15:00] using my custom gpt, like my Co-CEO GPT, those are always using 4.0 and I'm very happy with how they perform.

[00:15:08] oh three is the one I predominantly use for deep research or if I'm using some, doing a more complex like strategy or thinking project. And so my assumption here is if you follow a similar path where you like, you like o3 as a reasoning model and you use it all the time, then test o3 pro.

[00:15:28] And what I I think happens, and again, I think this, I assume by now they have decided internally this is what's gonna happen. I think GPT-5comes out in July or August. My guess is it is a reasoning model and a traditional chat model. Like I think o3 Pro and 4.5 get married and a little bit smarter and better, and they become GPT five I think [00:16:00] that's what happens.

[00:16:00] So I've, I don't know if that's helpful for anybody. I'm literally just like kind of thinking out loud here because this is kind of complex. 

[00:16:07] Mike Kaput: Yeah. I've, I've struggled with this too, just it's what I struggle with. Anytime a new reasoning model comes out is like, how do I actually evaluate this? Because Yeah, I know it's smarter.

[00:16:16] I've run some o3 protests, like I can tell that this is a better output for the same prompt than I even got with o3. Great. I probably won't use it as much for a lot of the stuff I'm doing just because of how long it takes, but that'll eventually be solved. But I did, we, we will include this in the show notes.

[00:16:35] There's a substack called Latent Space, and they made a really good point that I'm eager to dive into a bit further in my own experiments is the whole idea here that they found was really helpful. They said, you know, the key I discovered to actually test this thing was not to chat with it. Instead treat it like a report generator.

[00:16:54] Give it context, give it a goal, let it rip, which is how you should be probably using these reasoning [00:17:00] models anyway. But it is tempting to just jump in, be like, here's a quick chat. Let me test down SB Pro. And if like, no, we might actually need to take a step back and like you mentioned with those more strategic, more complex tasks, like pick one of those instead of like 10 random things to try here and maybe go deep on one thing.

[00:17:19] Paul Roetzer: Yeah. And I do, I don't know about you, but I do increasingly find myself when I'm working on high value tasks, I almost always use. Gemini 2.5 Pro and oh three. Yes. Like anything I'm doing with reasoning, I always put the prompts and I'll keep the prompt thread, like the follow on conversation. I will just have that exact same conversation in both models and I'm just kind of, and then I'll often actually throw it into 4.0 also, if it's not actually like a deep research project, if it's just like a reasoning, thing I'll, I'll sometimes just put it into four Oh.

[00:17:52] Just to see what it would get without the reasoning. Right. Just kinda the standard output based on training data. So yeah, I [00:18:00] will often have three tabs open with three different models, and I will give the same project, all three of 'em. And yeah, it's just, I think what I keep finding is there isn't just one model that's always best at everything.

[00:18:12] Yep. Like, so 

[00:18:14] Mike Kaput: Yeah. That's such good advice too, because I get so many questions about what, what should I use? Should I use this, should I use that? And it's like, well, probably realistically the answer is you should be cycling between a few if you can. 

[00:18:26] Paul Roetzer: Yeah. And I know, again, I know people love Claude and like you'd certainly throw that into the mix if, if that's your go-to model too.

[00:18:33] Disney Sues Midjourney

[00:18:33] Mike Kaput: Alright, our next big topic this week, we've got some drama. Disney and NBC Universal have filed a joint lawsuit against AI image generation company midjourney. They are accusing midjourney of mass copyright infringement. It is the first time Hollywood's biggest studios have taken really direct legal action against a generative AI company like this.

[00:18:56] And the studios claim that Midjourney used some of their [00:19:00] characters like Elsa Darth Vader the minions to train its model and create lookalike images without permission. The lawsuit includes pretty striking examples of content that was generated that looks nearly identical to iconic movie scenes.

[00:19:16] Disney and NBC Universals say they reached out to Midjourney to resolve the issue privately, but the company allegedly ignored them and continued to release even more what they call infringing versions of its tool. The complaint calls Midjourney a quote, bottomless pit of plagiarism and says its actions threaten the very foundations of US copyright law.

[00:19:38] Now it's really important, we'll talk about this, that the famously lawsuit, loving Disney is involved, is a very significant factor here because as one expert put it to a journalist at the new scientist quote, it's Disney. So Midjourney or Ed, pardon my French, and he did not say effed in. [00:20:00] It was a great quote.

[00:20:00] So Paul, maybe you can start by unpacking this for us. I guess first I'm curious like why has Disney waited so long to do this? We've known these tools are producing images that are a problem for quite some time. Why Mid Journey specifically getting kind of in the cross airs here, 

[00:20:20] Paul Roetzer: why Disney waited? I have no idea.

[00:20:23] I mean, I think all these, you know, Hollywood studios, they all are using and planted deeply, integrate AI into what they do. So I mean, like, they're all benefiting from generative AI and they probably understand it's leaned on their stuff. I mean, that's pretty obvious. It's not hard to figure that out.

[00:20:44] I don't know, like maybe it's just been everybody's trying to do back, you know, backroom licensing deals and figure out ways to get these labs to at least put filters in so that the stuff they trained on can't just be requested and shown. I mean, that's what we saw [00:21:00] originally with like the, image generation stuff from ChatGPT you would say, show me something to the Simpsons and it would.

[00:21:05] Start to output and you'd watch the Simpsons characters showing up, and then I'd be like, oh, sorry. Can't do that. Due to licensing reasons. And then when they came out with the new one, they're like, oh, screw it. and then xAI is like, screw it. And like everybody's just kinda like, all let's just go. And it was almost like we talk about the product as the time.

[00:21:20] Everybody just kind of reached to like, they don't give a shit phase of IP infringement. Like, so I don't know what goes on behind the scenes and like what decisions these AI companies are making. They all know they trained on the data. We all know they trained on the data. We all know the models are capable of outputting that data and, and the images and the videos and the audio to look and sound exactly like the training data.

[00:21:45] It's not like this is a big secret. So, and it's not like this is the first lawsuit, like just episode 1 52, we talked about Reddit and Anthropic, and I think at the time I said, well, if they did, it's in the data. This isn't hard to figure out if Anthropic [00:22:00] took the data or not. So in this case. And it's 110 page, filing the lawsuit.

[00:22:08] And it's, it's very obvious, like if you go look at the links, we'll put in the show notes. I mean, it is literally just outputting the exact characters from Disney movies. Yeah, it's crazy when prompted to do it. There's no disputing this. Like, so my initial reaction, I put this on X I said start with Midjourney for legal precedent, then take on Google Open Eye, meta x ai and others who have likely done the same thing.

[00:22:31] ed Newton Rex, who we have mentioned numerous times on the podcast, the CEO fairly trained and he, notoriously left stability AI over his disagreements with how they were doing, using copywriter materials. He was the VP of audio at stability AI prior to leaving, and he's very, very vocal on x, defending, you know, the creators basically.

[00:22:56] So. He replied to me, absolutely. I said, that's what [00:23:00] I was assuming seems like the obvious reason. Easier path to precedent than settle with the others and do licensing deals and hopefully in the process establish mechanisms to compensate the creators and artists. So that was what I was setting. And then I actually put this on LinkedIn as well, some expanded version of this.

[00:23:16] And I said on my LinkedIn post, like, Hey, I would love to hear comments from actual legal experts in my network. I am not a legal expert, but this seems pretty obvious what's going on. So, our friend Sharon Toerek, who, is an IP attorney and founder and owner of Legal and Creative Toric Law Firm, she replied.

[00:23:35] So here's an actual legal expert commenting on this situation. She said, midjourney looks to have taken its cue from big AI on this. Why else would you ignore a cease and desist demand from a huge copyright holder? And as you pointed out, Mike Disney does not mess around. she continued there potentially waiting out the New York Times and other similar copyright holder cases.

[00:23:56] Pending against OpenAI to see if there's a roadmap for avoiding [00:24:00] infringement liability altogether. And if not, to get big AI's blueprint for working out licensing deals with creators Parentes for pennies on dollars of worth, depending on the copyright owner. She continued and I agree that Midjourney and companies similarly sized are best first targets for setting precedents.

[00:24:19] They're less well-funded defendant than OpenAI for sure. So far the US copyright office is holding tight, somewhat on creator's rights. We'll see if the court cases proceed similarly. So I then said could lead to a bunch of settlements, licensing deals and hopefully some mechanisms for the compensating creators.

[00:24:36] And then like, I'll just call out a couple of quotes. 'cause to, I mean, this was the New York Times article, this stuff, this was everywhere by, you know, middle of the week. Everybody's got articles on this, but some of these are very telling, so. 110 page lawsuit. Contends Midjourney helped itself, helped itself to countless copywriter, works to train its software, which allows people to create images and soon videos that blatantly incorporate and copy Disney and Universal's famous [00:25:00] characters.

[00:25:00] the one quote you mentioned, but also said, midjourney is the quintessential copyright free rider and a bottomless pit of plagiarism. AI startups like Midjourney, which was introduced in 2022, train their software with data scraped from the internet and elsewhere, often without compensating creators practice as a result of in lawsuits from authors, artists, record labels, news organizations.

[00:25:22] and then you kind of alluded to this, Disney and Universal, the first major Hollywood studios to file copyright infringement lawsuits. midjourney lawsuit indicates that Disney Universal, two of the most powerful traditional entertainment companies have been biding their time while taking aim at Midjourney for infringing on prominent characters like Darth Vader, the Minions, the frozen princesses, et cetera.

[00:25:43] Um. The lawsuit reads like a shot across the bow to AI companies in general, as a quote says, we are bullish on the promise of AI technology and optimistic about how it can be used responsibly as a tool to further human creativity. Some Horatio Gutierrez, [00:26:00] Disney's general counsel and then he continued, but piracy as piracy in the fact that it's done by an AI company does not make it any less infringing, as you said, they sent a cease and desist.

[00:26:10] It was ignored by Midjourney. universal then sent a cease and desist last month that was also ignored. And they're asking, them to pay damages, but don't inate how much. They also want a judge to stop midjourney from offering its forthcoming video service without appropriate copyright protection measures, which I assume Mike means.

[00:26:31] Listen, if you trained on the stuff, that's a problem. You gotta compensate us for it. Yep. But you need to stop it from being able to be created when asked for. Which then took me to the VO three thing we ended last week with, and the fun of like these models and these storm trooper vlogs and like all this funny stuff.

[00:26:51] And like even with the image generation, the ability to like take pictures and turn 'em into anything you can imagine. The Studio Ghibli thing we talked about, like [00:27:00] I am constantly in this personal struggle, Mike of it is so fun to do these things and it is hilarious to look at them and like it is a lighthearted side of ai.

[00:27:10] Yeah. but it is also the work of creators that is being stolen to make all this possible. And I sometimes just really struggle with my own personal use of it and enjoyment of it, knowing that this is all happening behind the scenes. And I think it's like both things can be true here. Yeah. I think creators should be compensated.

[00:27:31] I think the model company should take more responsibility and I also think it's awesome technology that drives creativity and is entertaining as hell. 

[00:27:41] Mike Kaput: Yeah. And with, Disney owning Star Wars, that storm Trooper, vlog guy, whoever made those, he's like deleting his account and leaving the country right now or something after seeing these lawsuits come through.

[00:27:53] Paul Roetzer: I mean, unless Google has a deal with Disney that I don't know about. Yeah. Yeah. I mean, mid Journeys cook. [00:28:00] Like they're, I don't, I don't know how they don't just go under, like, I don't, I don't know, like if Disney wants them gone, they're gone is basically, I think the premise here, like, I don't know how they can't win the lawsuit.

[00:28:13] Right. Maybe some legal precedent gets set that said it was okay to steal all this stuff and recreate it, and we change copyright law completely. Like, I guess that's a possible outcome, but again, I'm not a lawyer. I don't understand how anyone could win a case like this. Yeah. Like it's. 

[00:28:29] Mike Kaput: Yeah, and they might be hoping for a Hail Mary from the current administration, especially with their work around removing people in charge with the copyright office that are putting out stuff related to fair use.

[00:28:41] So, well 

[00:28:42] Paul Roetzer: that one would not shock. Like, I mean that's, that is probably like there's a greater chance of that happening than them winning this case. 

[00:28:53] The Singularity Is Nearer

[00:28:53] Mike Kaput: All right. Our third big topic this week, we have gotten a couple indications that AI insiders [00:29:00] aren't just focused on artificial general intelligence, but possibly artificial super intelligence.

[00:29:06] So first up, Sam Altman published an essay titled The Gentle Singularity, where he argues the singularity, which is this hypothesized point where AI surpasses human intelligence. He argues it has quietly begun. In the essay, he argues that humanity has crossed what he calls the event horizon. Towards digital super intelligence.

[00:29:30] But what's interesting is this is all happening a bit quieter than anyone expected. We don't yet have robots on the streets or superhuman AI running things, but AI systems are outperforming humans in lots of cognitive tasks. And the phase we're entering, he says, will feel more like acceleration than disruption.

[00:29:53] He outlines the near future where scientific breakthroughs arrive faster than we can imagine, and by 2027, [00:30:00] he predicts robots will be handling real world tasks by 2030. Productivity could be an order of magnitude higher than it was in 2020. Now he calls this kind of a gentle singularity because each of these wonders he argues, is just going to quickly become kind of normal life.

[00:30:17] We get used to all the progress. It just becomes mundane and we go on living our lives. Now at the same time we got news that Meta is making a bold new bet on super intelligence. Mark Zuckerberg has launched a secretive new AI division, aimed squarely at building super intelligence to kickstart it. He's personally recruiting dozens of top AI researchers and he has placed Alexander Wang, the founder of Scale ai.

[00:30:45] At the head of this, he's able to do that because Meta is looking to acquire a 49% stake in scale ai, which is best known for labeling the data that trains lots of the top AI systems. Now, this deal value scale [00:31:00] AI at 28 billion meta is hoping that Wang and his team and infrastructure can help fix what Zuckerberg sees as a performance lag in Meta's a llama AI models and their mandate is to beat the competition to AGI and possibly super intelligence, then embed those across META'S ecosystem.

[00:31:20] So Paul, let's first. Start here with Altman's essay. There are some big claims in here. Altman didn't invent the concept of the singularity, but he thinks we're approaching some version of it. What, what do you think? 

[00:31:36] Paul Roetzer: there's so many ways to go with this conversation. So in, episode 1 29, we actually, we had a main topic that was literally just titled Super Intelligence.

[00:31:49] And so I was going back and trying to figure out what led us to talk about it at that point. and it was a Sam Altman tweet. So on January 4th, 2025, [00:32:00] Sam tweeted, I always wanted to write a six word story. Here it is near the singularity, unclear which side meaning are we before or after the singularity?

[00:32:10] Has it already occurred? and so I then kind of shared the story of like this idea of, super intelligence. And so again, you can go back and listen to episode 1 29. But what I shared at that point was there was a, a, a paper published by, the Google DeepMind team Shane Legg, who kind of coined the term AGI, the levels of AGI for operationalizing progress on the path to AGI.

[00:32:40] So in that paper, DeepMind tried to lay out these sort of different levels of artificial intelligence, level zero being no AI level five being super human. And so in their paper, level five is super intelligence. And [00:33:00] so the highest level in their matrix is termed combined performance in generality.

[00:33:04] the definition means that level five, general AI or artificial super intelligence will be able to do a wide range of tasks at a level that no human can match. So they define superhuman performance as outperforming 100% of humans. So when we're talking about super intelligence, people have different definitions.

[00:33:22] But that is like the Google DeepMind definition. in terms of this Sam's most recent essay, he likes these essays. He's, been writing more of them it seems lately. in February of this year, we had three observations from Sam. We'll put the links to each of these in there. I'm not gonna dive into each of these right now.

[00:33:40] In January of this year, we had reflections, from Sam in May of last year, or, yeah, may of this year actually. We had GPT-4o where he kind of talked about the new model and the implications. But the one I want to linger on for a minute is Moore's Law for everything. This is from March [00:34:00] 16th, 2021.

[00:34:01] Anyone who's heard me give a a, a keynote, I will often reference this article because it was a moment in time when everyone wasn't listening to Sam yet there, you know, certainly within Silicon Valley and the tech world, but I. Generally speaking, when Sam wrote things, it didn't like change the world and people's perspective on things.

[00:34:22] and, so I'm just gonna read a couple of quick, paragraphs from this one because it sets the stage for the gentle singularity one. So again, March, 2021, Moore's Law for Everything Altman wrote. My work at Opening Eye reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe.

[00:34:43] Software that can think and learn will do more and more of the work that people do. Now, even more power will shift from labor to capital. If public policy doesn't adapt accordingly, most people will end up worse off than they are today. So again, remember this is a year and a half before Chad [00:35:00] CPT. Think of what he was saying, what he was predicting, and the time period we find ourselves in.

[00:35:07] So he continued in the next five years, which would put us up to 2026. Computer program, programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly line work and maybe even become companions. And then the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of everything.

[00:35:31] The coming change will center around the most impressive of our capabilities, the phenomenal ability to think, create, understand reason to the three great technological revolutions, the agricultural, the industrial, and the computational. We will add a fourth. The AI revolution. This revolution will generate enough wealth for everyone to have what they need if we as a society make it reasonable, re make it responsibly.

[00:35:58] So I share [00:36:00] that before I comment on the, this gentle singularity one because, I. Certainly Sam can be perceived as a hype man who's trying to raise the value of his companies and, you know, raise more money and do all these things. but as someone who's like followed his work and his writings for like a decade now, he generally writes things that he has seen or that he is very confident are going to be true in the near future based on things that he has seen or the trajectory of the things that they're building.

[00:36:31] So my personal experience is he, he, he's not really someone who tries to overhype things. He's someone who actually sort of sees more of the future than most of us get access to, and he tries through his words to prepare people for that future. so then when we get into this, this, the gentle singularity, a book, both you and I read Mike Super Intelligence Path, dangers and Strategies from Nick Bostrom, I think it came out in [00:37:00] 2014.

[00:37:01] They all read that book too. I actually listening to Empire of AI right now from Karen Hao that we mentioned on the show a couple weeks ago, and she tells the story of the creation of OpenAI and the significance of that book and Bostrom's thinking to Demis Hassabis and Elon Musk and Sam Altman in those, that time period, in that 2014, realm.

[00:37:25] Mm-hmm. So this idea of super intelligence and then even going further back to singularity, like this is not new stuff for these people. They have thought about this. They have worked towards these concepts. So this singularity is sort of this, in theory hypothe a hypothetical point where AI surpasses human intelligence, leading to the rapid and uncontrollable technological advancements.

[00:37:48] So it suggests that AI becomes self-improving and it can create these super intelligent machine machines that are beyond human comprehension. So when we talk about singularity, we [00:38:00] are. We are now not just talking about AGI, where it's like generally capable of doing what the average human does. We are talking about an AI that is beyond any human that has ever lived, like at everything.

[00:38:11] Mm-hmm. 

[00:38:12] And so that's what you have to understand him to mean when he is talking about the singularities, talking about the moment when super inte intelligence has arrived. And his tweet from January is like, maybe it's here, maybe it's not. But we're close to it either way. So a couple of excerpts, he says, we have recently built systems that are smarter than people in many ways and are able to significantly amplify the output of people using them.

[00:38:36] The least likely part of the work is behind us. the scientific insights that got us to systems like GPT-4 and o3 were hard won, but will take us very far. So what he's saying is. The really unknown parts already happen. Like we proved that intelligence could exist, that it could reason that it could think that it create that understand, hmm, now it's just solve a few [00:39:00] roadblocks and like we get there is kind of what he's saying.

[00:39:02] So he said in some big sense ChatGPT is already more powerful than any human who has ever lived. 2025 has seen the arrival of agents that can do real cognitive work writing computer code will never be the same. 2026 we'll likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.

[00:39:26] This generally aligns with the AGI timeline episode that I did. and you can we'll put the show notes and that was at 1 42. I forget what episode that was. That sounds right. But yeah, we'll drop 

[00:39:38] Mike Kaput: it in the nets. 

[00:39:39] Paul Roetzer: So yeah, when I laid out the AGI timeline, nothing he's saying here changes my perspective so far.

[00:39:45] He then continued. A lot more people will be able to create software and art, but the world wants a lot more of both. And experts will probably still be much, probably carries a lot of weight here. Experts will probably still be much better than novices as long as they embrace the new tools. [00:40:00] Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change and one many people will figure out how to benefit from.

[00:40:09] So this is kind of like the source of his optimism. He talks about, we do not know how far beyond human level intelligence we can go, but we are about to find out, talks about already we live in incredible digital intelligence and some initial shock. Most of us are pretty used to it. So he is basically saying like, things happen.

[00:40:27] These things got crazy smart and we just sort of adapted to it and he thinks singularity is gonna be something similar. It's just gonna happen and we're gonna adapt. I did think this was interesting and I saw a lot of people sort of citing this one. He said as data center production gets automated, the cost of intelligence should eventually converge to near the cost of electricity.

[00:40:48] And then in parentheses, he put this, people are often curious about how much energy a ChatGPT query uses. The average query uses about 0.34 watt hours about [00:41:00] what an oven would use in a little over one second, or a high efficiency light bulb would use in a couple of minutes. It also uses about point, I can't even see how many zeros that is.

[00:41:12] 0 0, 0, 0. Eight. Five gallons of water, roughly one 15th of a teaspoon. Mm. I've never seen those nu like numbers like that before. Mike? No. Broken out? I dunno. You have either. Okay. he said there'll be, there will be hard parts, like whole classes of jobs going away. But on the other hand, we will be getting much, richer so quickly that will be able to seriously entertain new policy ideas.

[00:41:38] Looking forward. This sounds hard to wrap our head around, but probably living through it will feel impressive, but manageable. and then he kind of wraps with, and again, I'm just pulling out excerpts. There's a, this, this is like probably like a 2,500 word article. we, in parentheses, the whole industry, not just openair, are building a brain for the world.

[00:41:56] It'll be extremely personalized and easy for everyone to use. We [00:42:00] will be limited by good ideas. For a long time, technical people in the startup industry have made fun of the idea guys quotes, people who had an idea, and were looking for a team to build it. It now looks to me like they are about to have their day in the sun, meaning being able to build things isn't gonna be the hard part anymore, right?

[00:42:18] It's gonna be the people with the ideas to build things. So I don't know if you have any other thoughts on that before we talk about scale, but it's, it's a lot. And I want, again, for people who haven't been following AI for years or maybe listening to this podcast for the last couple years, and this is all still kind of new to you and you're trying to figure out who is Sam and.

[00:42:35] What is OpenAI and why is it so important, and why is we talking about them all the time? Sometimes I like to just like provide a little bit of a historical context as to kind of who they are, where they are, and I would actually, I'm not through the whole empire of AI book yet. 

[00:42:49] Mike Kaput: Yeah. 

[00:42:49] Paul Roetzer: But, it does do a really good job in the first couple of chapters of teeing up how Sam got where he is and became so [00:43:00] powerful.

[00:43:00] and it's very complimentary to the Genius Makers book by Cade Metz that we always recommend Mike. 

[00:43:07] Mike Kaput: Yeah, the only thing I'll say here is I love Sam's essays typically, and I agree with you. I don't really read into these like, oh, he is hyping anything up. But I do have to say that when you write that there are very hard parts, like whole classes of jobs going away.

[00:43:22] We probably won't adopt a new social contract all at once. When we look back in a few decades, the gradual changes will have amounted to something big and then in the same breath you say, it'll probably feel impressive, but manageable to live through is insane to me. Like we're speaking out both sides of our mouth here and I get what he's getting at and I don't think it's necessarily malicious, but we glossed real hard over these parts.

[00:43:47] Paul Roetzer: Yeah, and I think Mike, it's like we see this with all of them. I mean, Dario Amodei is the only one who's sort of broken. Exactly right. The thread lately. But I mean, Demi Aaba who is, you know, by far the guy I admire the most in this [00:44:00] space, he constantly is like, yeah man, this is gonna happen really fast and we're not ready.

[00:44:05] And like, it's gonna be amazing. We're gonna solve all diseases and travel the universe. But like, it may just destroy jobs like I, and it's for philosophers and you know, sociologists and economists to figure out, so. Yeah, I think that a lot of these leaders have to have this like undying optimism. Yeah.

[00:44:26] That what they're doing will change the world in a super positive way and that there's gonna be hard parts, but at the end of the day, they believe so deeply that what they're doing will have a net positive impact on society. That they have to do it and like they hope someone else figures out how to pick up the pieces along the way.

[00:44:46] Mike Kaput: And I'm by no means a dor about this. I'm very excited too. Yeah. I just think like we've had way smaller disruptions to jobs that have had huge impacts on society than what I think's coming, so, 

[00:44:58] Paul Roetzer: agreed. 

[00:44:59] Mike Kaput: [00:45:00] Alright, let's quickly talk about META'S ambitions here. Yeah. Because they are specifically calling this new effort like a super intelligence lab.

[00:45:07] They are talking about, you know, kind of pursuing AGI, but also the super intelligence issue. There's the whole scale AI thing. What's going on here? 

[00:45:16] Paul Roetzer: So, Alexander Wang, we have talked about, numerous times on the podcast. I went back and looked, we had episode 1 39. Again, we'll drop links in anytime I, you know, cite these different, episodes.

[00:45:27] We'll always drop the links in. But we talked about Wang's role in a report titled Super Intelligence Strategy. Mm-hmm. which was designed to address rapidly emerging risks of super intelligent a AGI. So he co-authored this report with Dan Hendricks, the director of the Center for AI Safety, and an advisor to XI and scale ai.

[00:45:48] and then Eric Schmidt, who the former Google, CEO and Chairman. So it's just interesting, he played this role in like, we're trying to figure out how to keep AI safe while accelerating this. And [00:46:00] they proposed a framework that mirrors Cold War nuclear strategies, calling for a balance of deterrence, non-proliferation and competitiveness.

[00:46:08] So that was just a few months ago. In episode one 17 in October, 2024, I think that's when we kind of like introduced Andrew Wang and I was saying like, Hey, this is a name you guys should, should know. Our listeners should be aware of this guy. Yeah, because at age 27, he had become a major power broker in the AI industry.

[00:46:28] So the company, just for a little perspective, they have, employ hundreds of thousands of hourly workers to fine tune data for AI models. They position themselves as a hybrid human AI system for producing high quality data at a low cost. there was a interview we cited at the time where he talked about like the three pillars of ai.

[00:46:49] He had done a podcast interview and those three pillars being the models, the compute and the data, the compute has been powered by people like Nvidia. So that's where the chips come in. [00:47:00] The algorithmic advances, like the models, those have been led by the large labs like OpenAI and others. Then the data piece of those three pillars, that was scale.

[00:47:09] So he basically positioned scale as like a data foundry. for context on his view on jobs and AGI, because I think it becomes very relevant in this instance. He said at, at one point, 80, 80 plus percent of jobs that people can do purely a computer. So digital focused jobs, this is how he's defining AGI AI can accomplish those jobs.

[00:47:32] It's not imminent, it's not immediately on the horizon, so on the order of four plus years, but you can see glimmers and depending on the algorithmic innovation cycle that we talked about before, that could be much sooner. So this is somebody who again, is like very, you know, in on the AGI talk and his timelines, the deal itself is pretty unique.

[00:47:51] So 14.3 billion is the actual final investment from meta according to Bloomberg. Values scale at 29 billion [00:48:00] post-money, but steals their CEO from them and some of the other top talent from scale. So this is very similar to like what we saw with inflection and some of these other acquihires where Microsoft or Google, which did the character ai, the big labs, the big companies that probably can't get through regulatory, on an acquisition, a straight acquisition.

[00:48:21] They just acquihire the top people from that company. Said, company continues existing but without their top leaders. And so that's what happened here. So they put in it's worth 29 billion post money. they had 870 million in revenue in 2024, and they're expecting 2 billion in revenue. this year. Wang will stay as the director of the board.

[00:48:46] They announced a new interim CEO, Jason Droge, who was the founder of Uber Eats, and it was a, a venture partner at a VC firm, met us taking 49% stake. And then, as you kind of illuminated, like [00:49:00] it's, it's a large part because meta was just struggling. Yeah. Like Zuckerberg I think was embarrassed by the launch of Llama four.

[00:49:06] We talked in the last two weeks about a total reorg of Meta's AI teams internally, and Zuckerberg like does not wanna lose. And he, I don't think they've made the progress. He was hoping when he pivoted last year from their metaverse and the 10 billion they put into that, it's gonna be chump change compared to what they're gonna put into trying to win at ai.

[00:49:27] I mean, they're gonna be spending hundreds of billions of dollars on, on this initiative if he's the guy to do this or not. I don't know, like, I don't know him. I've, I've certainly seen some opinions that he's more of like a, a front man CEO who's great at raising funding and building relationships and maybe not like the technical lead per se for this kind of thing, but.

[00:49:51] I don't know, but like we said last year, this is a name to watch and here we go, like eight months later, it's, he's now one of the probably top 10 [00:50:00] most important people in the space. 

[00:50:01] Mike Kaput: We'll have to do like a predictions episode at some point where 

[00:50:05] Paul Roetzer: we, 

[00:50:05] Mike Kaput: I feel like 

[00:50:06] Paul Roetzer: every episode we're kind of making these weird predictions without calling him that.

[00:50:09] Yeah. Like, you know, shutting down the internet. Get to that one in a minute. 

[00:50:14] AI and Jobs: Saying the Quiet Part Out Loud

[00:50:14] Mike Kaput: Alright, let's dive into this week's rapid fire. First up, we have a few more reports of what we call people saying the quiet part out loud, so to speak. So first we got a profile in the New York Times about the AI startup Mechanize, which we first mentioned on episode 1 45.

[00:50:33] This is a company that, according to one of its founders, has the goal of fully automating work. The profile outlines how mechanize is building training environments where AI agents simulate the TA daily tasks of jobs like software engineering. Learn through trial and error, and if the agent succeeds and get the reward, if it fails, it tries again and they hope to basically teach AI how to do all these white collar jobs.

[00:50:59] [00:51:00] They say full automation of the economy is a 10 to 30 year project, but there's no plan for what happens to the displaced workers in the meantime. At the same time, the CEO of Gig work platform Fiver did a very blunt interview on the 20 VC podcast with Harry Stebbings in it, among many other things. He said that a failure to adapt to AI will lead to people becoming poor or a burden on society.

[00:51:26] He said he tells his team, the expectation is they should aim to automate a hundred percent of their work so they can free up a hundred percent of their time to focus on tasks that cannot be automated. In one aside, during this. Short but impactful interview, he argues that thanks to ai quote, copyright is dead.

[00:51:43] Essentially it is dead. It's a notion from 1710 and it's dead overnight. Last but not least, we also got reports. The CEO of ai, defense tech company, Palantir, says he is worried AI could unleash deep societal upheavals that many in power [00:52:00] are ignoring. Paul, this is just another story in, this ongoing narrative about AI's impact on jobs.

[00:52:07] Why are all these leaders feeling more comfortable saying this stuff? I feel like if we had been hearing this six to 12 months ago, people would be freaking out. 

[00:52:16] Paul Roetzer: Yeah. If the, if the Fiverr CEO sounds familiar. That was episode 1 47. We talked about his internal memo and in, in that he wrote to his team, AI is coming for your jobs.

[00:52:27] Heck, it's coming for mine too. So this is not like, the first time we're hearing from him. The mechanized one. Yeah, we talked pretty in depth about that one as well when, when we first kind of learned about them. But, I'll just call it a couple of, good pieces here from Kevin Rus, who's, you know, just a great writer, New York Times.

[00:52:47] his lead, I just love. So he starts the article years ago when I started writing about Silicon Valley's efforts to replace workers with artificial intelligence. Most tech executives at least had the decency to lie about [00:53:00] it. 

[00:53:00] Mike Kaput: I thought that was such a great line. 

[00:53:02] Paul Roetzer: That was so good. So he, he then continues, quote, we're not automating workers, we're augmenting them.

[00:53:10] The executives would tell me, our AI tools won't destroy jobs. There'll be helpful assistance that will free workers from mundane drudgery. And then he, he wrote, of course, lines like those, which were often intended to reassure nervous workers and give cover to corporate automation plans. Said more about the limitations of the technology than the motives of the executives.

[00:53:29] Back then, AI simply wasn't good enough to automate most jobs, and it certainly wasn't capable of replacing college educated workers in white collar industries like tech consulting and finance. That is starting to change, RUS rights. So, mechanized approach to automating jobs using AI's on a technique known as reinforcement learning, which we talked about.

[00:53:52] This is exactly what I was saying. I don't remember what episode it was where I said I'm not even convinced that the current models aren't already [00:54:00] AGI if you just provide reinforcement learning on top of them for industries. And so like the article, New York Times goes on and says, mechanize is creating new training environments for these models.

[00:54:10] Essentially elaborate tests that can be used to teach the models what to do in a given scenario and judge whether they've succeeded or not. Mechanize is starting with computer programming and occupation, where reinforcement learning has already shown promise, but it hopes that the same strategy could be used to automate jobs in many other white collar fields.

[00:54:28] This is exactly what I was saying, right? Just take the core model, give it a bunch of examples in the legal industry, hr, find whatever it is, and you just train it. So if you remember episode 1 49, we talked about this idea of figure out which industries are gonna be impacted, which professions by the total addressable market of the salaries in those industries.

[00:54:49] And so the webinar that we talked about at the beginning, the deep dive into deep research, I was used, deep research to create a project where the [00:55:00] hypothesis was this exact concept. So the prompt I gave deep research then was I have a theory that today's most advanced AI models could already be considered AGI if they are post trained on data specific to jobs and professions.

[00:55:13] I'm assuming in definition of AGI, of AI systems that can perform. At or above average human. The motivating factor for developers and entrepreneurs to build these AGI like systems could be the total addressable market of the salaries in a given profession. And then I asked it to run that analysis. So here you go.

[00:55:29] This is, mechanize is doing this exact thing. And my guess is by this time next year, there will be dozens of these kinds of companies that are doing it for, for specific verticals and specific industries. It's it again, like I think some point last year I finally was like, I don't understand why people aren't talking about AI impact on jobs.

[00:55:47] And we sort of had that like moment where I was just kind of like, why aren't we talking more? This is, I feel very, very similar to this. Like a year from now, people will look back and be like, well of course that was gonna happen. Okay, well why aren't we talking about it? [00:56:00] Like this is inevitable, that this is what happens.

[00:56:02] This is how venture capital works. Like you find massive markets, you take tech, you train it on that industry and you eliminate jobs. Like this is, I can't comprehend how. This isn't actually like understood and being proactively addressed, like this is absolutely what is going to happen. 

[00:56:23] Mm. 

[00:56:23] And here's an article that tells you it's what they're doing.

[00:56:27] OpenAI and Google Deal

[00:56:27] Mike Kaput: All right, next up. CNBC has reported that OpenAI has officially crossed the $10 billion mark in annual recurring revenue. That figure includes revenue from consumer subscriptions, business tier chat, GPT tools and API usage. Notably, it excludes licensing money from Microsoft or one-off enterprise deals.

[00:56:46] Now what's also interesting, another report says that to keep this machine going, the company has also, according to Reuters, just signed a major cloud computing deal, not with its longtime partner [00:57:00] Microsoft, but with Google. So according to Reuters, OpenAI will begin using Google Cloud to train and run its AI models.

[00:57:07] until now, Microsoft Azure was OpenAI's exclusive infrastructure provider, but with all the skyrocketing demand. This is forcing OpenAI to diversify. So Paul, this seems like a bit of a surprise. I think you had even posted this saying, I didn't see this one coming. Yeah. 

[00:57:26] Paul Roetzer: Yeah. It's, it's wild. I mean, I'm not even sure what to think about it other than if you don't believe that OpenAI and Google think this future is possible, that we're always talking about this podcast, like build the data centers, build the energy infrastructure, build the intelligence, like, just like this obvious path that they see that anyone's willing to do deals with anyone to bring that future to life.

[00:57:53] Like it's just, yeah, the companies that seem like the fiercest of competitors. Now, now, now if [00:58:00] Xai in opening, I do a deal. All bets are off. Like, I can't see that right now. Yeah. But. Everybody's invested in everybody, all of them trade researchers, all the not trade, all of them poach researchers from each other all the time.

[00:58:14] They all go to the same parties. Like, it, I don't know, it'd be so fascinating someday to like read the story or like watch a soap opera about this because it's crazy how, how, like one week you're talking about, and honestly like other parts of the company are probably still like fighting with each other and don't like each other, but like whatever, do this deal.

[00:58:35] Big companies, you know, lots of different divisions. But yeah, I did not see that one coming. I was very surprised I had to do it. I actually did go find a second source on that one just to make sure it was like real. 

[00:58:46] AI and Google Search

[00:58:46] Mike Kaput: All right. Some more Google related news. The Wall Street Journal reports that news sites in particular are getting crushed by Google's new AI tools.

[00:58:56] Publishers like the Huff Post, Washington Post and Business [00:59:00] Insider have seen their Google search traffic plunge by over 50% in the last three years. Because Google's new AI features are answering user questions directly with no clicks required. In fact, according to the report, executives at publications like The Atlantic and the Washington Post now talk openly about preparing for a post search era.

[00:59:23] Google, however, still insists it is driving high quality traffic to publishers from things like their AI overviews. Now, at the same time, a separate article also revealed in the Verge that Google is offering buyouts to employees in its core search organization. According to internal memos obtained by the Verge, the voluntary exit program is aimed at workers who don't feel aligned with Google's current strategy.

[00:59:50] The buyouts are available to US staff and search marketing, research and core engineering, but not in DeepMind Cloud or YouTube. So [01:00:00] Paul, based on these numbers, it is getting pretty tough out there for companies that rely largely on organic traffic to drive growth. Also interesting to see Kuku buying out employees in the search division.

[01:00:14] Like, do we need to start reading between the lines here? 

[01:00:18] Paul Roetzer: So as a company, we've had this conversation internally about like organic search, and I think it was just like last month, I actually said to Mike, like, I don't even care about organic search. Yeah. Like, I don't, I honestly don't even know what ours is anymore.

[01:00:32] The organic traffic we get from Google, it was a KPI we used to look at very closely. you may still look at it, Mike, but like, I just assume it's going to zero. Like I really do. Yeah. I just, I just assume that like the future organic search just isn't gonna matter. Now that's not gonna be true for every type of business.

[01:00:49] We're a, you know, B2B company, like different industries are gonna treat it differently, but. I just kind of assume that it's just dead like that. It's just gonna be [01:01:00] very, very different and like, let's just move on with our lives. When I heard the news about the offer to buyouts in the search department, I was like, okay, they're, they're doing it like they're, they're gonna do the thing people assume they wouldn't do, which is make the move to cannibalize their own core products if they have to.

[01:01:18] Yep. And it actually led me back, I don't remember Mike if you read this one back in the day, but one of my favorite books when I was running my agency was called Will and Vision. 

[01:01:26] Mm-hmm. 

[01:01:27] And said how late comers grow to dominate markets. And they basically talked about like the enduring companies had those two components, like a vision for a market that other people didn't, but they had will to, to actually like do something.

[01:01:41] And when you break down, like what does will mean? The one thing that always stood out to me was relentless innovation is like a characteristic of having the will. In the book, they define it as enduring market leaders continually innovate, even if it means disrupting their existing successful products.

[01:01:58] This is [01:02:00] something that Google outwardly hasn't appeared willing to do. I'm not condoning layoffs or like anything like that. I'm just saying like they know where this is going. I mean, the data on Chad GT's adoption is off the charts. Yeah. It's a whole generation of new users that are just not gonna use Google search, and I think they have to accept that like they've lost that.

[01:02:20] That generation isn't gonna, so now you've gotta make your play and you, you have to kind of move to where the markets are gonna go. So you need that vision and you need the will to disrupt yourself to get there, or else you're going to get obsoleted in the thing that you've owned for 25 years. 

[01:02:38] Ohio State’s New AI Fluency Initiative

[01:02:38] Mike Kaput: Next up, Ohio State University is going all in on AI with a bold new initiative to make AI fluency a core part of every undergraduate degree.

[01:02:48] Starting this fall, all first year students will begin learning how to use and think critically about AI regardless of their major. According to an announcement from Ohio State, [01:03:00] quote, all undergraduates will be introduced to generative AI basics and the required general education launch seminar. Gen AI workshops will be integrated into the first year success series, part of the university's required survey course that helps students acclimate to college life.

[01:03:16] Additional workshops will be offered to equip students at all levels with experience in AI tools and application. And the new unlocking generative AI course will be offered and open to all majors. Students will gain essential skills to interact effectively with ai. Craft prompts that inspire creativity and explore AI's impact on society.

[01:03:37] Apparently faculty will also get support too, including funding and resources to weave AI into their courses. And Ohio State is also developing hack hackathons, internships and prototyping workshops to help students across all levels build with AI in real world context. So Paul, this is obviously, you know, kind of in our backyard still early on this, we've only kind of gotten an [01:04:00] announcement to go on, but this does seem really interesting.

[01:04:02] I kind of found myself nodding along as I read through this approach. 

[01:04:06] Paul Roetzer: Yeah, I couldn't love this anymore. I mean, I almost went to Ohio State. I was really, really close. I went to Ohio University, but, I'm a big Ohio State fan. so I love that it's happening in, in Ohio, our home, home state. but I just love it as a blueprint.

[01:04:22] You know, again, to your point, Mike like it, they haven't done it yet. This is like a plan that they have. But in terms of a blueprint for what to do at higher education and not even just higher education like high schools. Yeah. You know, potentially grade school like this is, this is exactly what we've been saying needs to be happening.

[01:04:37] So, I had, there, there was a university I talked to back in like 2019 or so, and I told them like, you need an intro to that class and everyone has to take it, like every major needs to take it. And then like, you can then carry it through. So, I mean, it's, it's been a few years, but like I'm really happy to start seeing this happening at universities and I hope it happens at more, I mean, [01:05:00] we're heading into the 25, 26 school year.

[01:05:02] I hope we hear a lot more about schools that are doing these kinds of things. Yeah. And I especially love the teach the teachers part. Like that is the fundamental thing to success. You cannot do a program like this where you aren't starting with the teachers, professors themselves. And so, and I think the issue they're gonna run into is you're gonna have a whole bunch of professors who don't wanna be a part of this.

[01:05:20] Yes. Like. Yeah. And schools are hard to change. You have tenure, you have lots of roadblocks to implement this well, but it's gotta start somewhere. And this seems like a really good starting point. 

[01:05:32] Mike Kaput: Yeah. I haven't dove too deeply into what all these major schools are doing, but this felt like one, which I don't always feel like I see, where it was just baked into everything.

[01:05:43] It wasn't just like a new major or like a possible course. It's like, oh, okay. Like you are going to be, AI is gonna be infused into everything you are doing as a first year. 

[01:05:53] Paul Roetzer: Yep. Yeah. And like I said, in past episodes, like if I was a parent and I had a kid who was heading to college, you know, like going [01:06:00] into senior high school, like Ohio State just jumped to the top of my list Yeah.

[01:06:03] Of places I would like them to look at. 

[01:06:08] xAI Data Center Environmental Scandal

[01:06:08] Mike Kaput: Now this next topic is actually from a story back in May, but it's been flying pretty far under the radar, so we thought it was kind of worth mentioning this week. XAI is under fire in Memphis actually for running one of the region's largest sources of air pollution without proper permits.

[01:06:26] This was according to some reporting from Politico In just 11 months, X AI's massive ai data center called Colossus has deployed 35 methane gas turbines to power its operations. And for context, those turbines generated enough electricity for 280,000 homes. But the problem is they also produce more nitrogen oxides, which are a key contributor to smog than nearby power plants and oil refineries do.

[01:06:54] And apparently none of them have. Pollution controls, and this does not [01:07:00] help that this site is located in South Memphis. It's an overwhelmingly black community that already struggles with high asthma rates and a history of industrial pollution. Residents have said they weren't informed about this project.

[01:07:12] They're now dealing with chemical smells, breathing issues, X AI claims. The turbines are temporary and don't require permits. Community groups, environmental lawyers, former EPA officials that talk to Politico argue XAI is violating the Clean Air Act and putting lives at risk. So Paul, in your view, as you're kind of reading this, and we've been kind of going back and forth on this story for a couple weeks here, is this more a story of XAI specifically cutting corners, disregarding regulation?

[01:07:42] Or is this a bigger problem in ai? 

[01:07:45] Paul Roetzer: Well, I think we wanted to touch on this one because I think the environmental impact of AI is, is a very important topic. but it's also something I get asked about a lot Yeah. When I go out to talk. So I think more and more people are starting to just connect [01:08:00] the dots of the bigger macro level stories related to ai.

[01:08:03] And so I think, you know, these ones around the environment are just very important to call out to people. Some people may just be unaware that this is an issue. So you have, you know, obviously the energy and the impact just from standard just training and use of ai. But then you do have stuff like this where just skirting regulations or going around regulations, I don't know that this is like this specific instance of these temporary generators and stuff is like a more widespread thing.

[01:08:29] Yeah. This is probably more of an Elon Musk thing and it's historically how Elon does stuff. He always, whether it's SpaceX or Tesla or Neuralink or any of his companies, he just pushes the limits of. What's legally allowed and sometimes is willing to just go beyond legal limits in, in lieu of progress.

[01:08:50] Yeah. And so without judgment, like it just is what it is. This is who he is, it's what, how he runs his [01:09:00] companies. and it's understandable if it, if stuff like this is very, very upsetting to people. again, it goes to this whole, you know, the current administration and their regulations and their thoughts on clean air.

[01:09:13] I don't honestly see this being something that's gonna rise to the level of concern for them. I think that there's a lot of things that, progress and acceleration will take precedent over stuff like this. Right. and so maybe if they pay a fine or something like that, but I don't, I mean, this is, they're just gonna do what they have to do.

[01:09:34] But we want people to be aware of these things. And then if they're topics that are of interest to you to know. To go pull on that thread and go deeper on it if it's something you're, you're really, passionate about. 

[01:09:46] Mike Kaput: It's also just worth a quick reminder, just the story in general as well. Like, you know, it's easy when you read about this stuff if you're not really paying that close of attention to think like, oh, okay, data center, what are you upset about?

[01:09:57] Like, there's a bunch of servers created a hundred jobs, servers next [01:10:00] to your house. Like it's functionally an industrial facility, you know, it's like, for better or for worse, I'm not judging that, but like, this is not like a server quietly humming with like a light on in a dark room. It's, it's enormous like factory like facility almost, you know?

[01:10:16] Paul Roetzer: Yeah. And we just had, like in Cleveland, so we have a IX center, which is out near the airport. There's exposition center. They do like car shows, boat shows, all that stuff. And I don't know if it's like publicly confirmed yet, but they're shutting it down and it's gonna become an Amazon, data center.

[01:10:32] Yeah. Because there's actually a. power supply on site. So like all these labs are just basically trying to find existing energy infrastructure, whether it's next to a nuclear power plant or next to an electrical grid, whatever. Yeah. And just like, nah. And again, it's, you can, like, some news stories will be like, Hey, it's gonna create 200 jobs and it's gonna be great.

[01:10:53] And it's like, okay, but what's the environmental impact of this? Yeah, right. 

[01:10:58] Kalshi’s AI-Generated NBA Finals Ad

[01:10:58] Mike Kaput: All right. Next [01:11:00] up there is a popular company called Calci, which is a prediction market where you can bet on the outcome of real world events. And they're getting a little more popular because they just released this incredible fully AI generated ad during the NBA finals.

[01:11:17] Now this ad, which was made with VO three, which we'll talk about in a second, again, is a 32nd spot that features fully AI generated video and audio that looks and sounds hyper realistic and it features. A bunch of different characters in crazy situations making pretty wild bets on a bunch of different events.

[01:11:36] So before I talk to you real quick about how this video is made, I want you, if you are watching to take a look at this ad, we're gonna play it real quick here. 

[01:11:45] ad: Indiana gonna win, baby. We're in Florida asking people what they put their money off. I'm all in on okc, indiana got that dog in them. Will egg prices go up this month? I think we'll hit $20. 

[01:11:59] Mike Kaput: Now [01:12:00] what's really jaw dropping here outside of just the ad itself being awesome is how it was made. So the ads creator posted about what went into this. He started off by posting quote, I love that this was shown next to $400,000 commercials and it cost me like 400 bucks to generate.

[01:12:17] Then he detailed how over a couple days he used Gemini and chat JBT to write scripts based on some initial ideas from himself and the Calci team. He then used Gemini to take the scripts, convert those into prompts for Veo three, which is Google's latest video model. He then said he ran about three to 400 generations in Veo to get the 15 usable clips that made this up.

[01:12:41] And he posted it cost about $400 and it took, quote, one person two to three days. That's a 95% cost reduction versus traditional ads. He did say however, quote, just because this was cheap doesn't mean anyone can do it. I've been a director 15 plus years. Brands still pay a premium for [01:13:00] taste. The future is small teams making viral brand adjacent content weekly, getting 80 to 90% of the results for way less.

[01:13:09] So Paul, I, this was really cool to see even I think with both of us coming from the agency world, I couldn't help watch this and think traditional expensive advertising is pretty good. 

[01:13:20] Paul Roetzer: I mean, it seems like a bit of an inflection point. Yeah. It's not like this is the first time someone's made something super clever with ai.

[01:13:26] Right? But it's this context of an NBA finals and a major ad spot, and like, it wasn't an AI studio that did it, or a, a ad, you know, studio that did it. It was this guy. and so there's that party that's like, man, the impact this is gonna have on creative studios, good and bad. There's the, you know, there's the downstream of, well there's that, that there's the studio that does it.

[01:13:51] But then like, what about all the people who would've been involved in making this thing, right? Not just the creatives, but the actors and, [01:14:00] you know, everyone involved in the supply chain to build an ad. Like you start to think about the downstream effect of all that. And then on the positive side you say, but anybody can now be a creator.

[01:14:09] And yes, taste and experience definitely still matters. But now all of a sudden you just, the barriers are gone to like, create something. You can take a couple classes and grind on it for like 30 days and figure out how to get really good at using Google flow or you know, in VO three and Right. All of a sudden, like you can just create anything.

[01:14:27] Like, I don't know. I mean there's, you can't put it back in the box like, this is the future we're heading towards. But I do think this in retrospect will probably end up being like a kind of a, a pretty significant moment from a creative perspective 

[01:14:39] Mike Kaput: and just we can't underrate the fact this is kind of going viral because like it or hate it.

[01:14:44] I feel like there's gonna be a fair amount of executives going to their agencies being like, well we, they just made this, why can't we do it for 400%? For sure. Yeah. Which I know it's like a worst nightmare sometimes for agencies, but like that's going to happen. 

[01:14:58] Paul Roetzer: Yeah. And think about how much like going into the [01:15:00] Super Bowl next year, like think about how much money these brands spend to make ads.

[01:15:04] Millions of dollars. They're all gonna be looking at their agency and being like, yeah, no, like I want something like that, and then I want it to be super clever and you've got $5,000 or $50,000 or whatever. It's 

[01:15:18] What Happens When AI Goes Down?

[01:15:18] Mike Kaput: All right. Next up, Paul, we really messed up because we joked a couple times this past week that we should never have done last week's segment on AI and cybersecurity because we jinx this whole thing and now it's Friday the 13th, and we're in real trouble probably because this past week we saw OpenAI have a significant outage on Wednesday, though it was short-lived.

[01:15:39] Thankfully, that was followed by a huge Google Cloud related outage that knocked out tons of popular online apps and services in the same week. We also got an interesting report in Fortune that researchers uncovered a critical security flaw in Microsoft 365 co-pilot. This is dubbed echo leak. It is what experts call a zero click [01:16:00] vulnerability.

[01:16:01] That means you can trigger it without the user doing anything. The attack basically works by sneaking hidden instructions into a seemingly harmless message. Copilot reads this automatically. It's in an email. It obeys the hidden commands and unknowingly leaks internal data, and it also covers its tracks.

[01:16:19] Now, Microsoft says it's patched this issue. But Paul, what I'm curious to get your take on, just like this is not the only issue, the issues we're gonna see with all these outages, security threats, like what do we do about this the further we go down the road of AI transformation? I feel like the harder it's going to be to get anything done if AI tools go down.

[01:16:40] Paul Roetzer: Yeah, I don't know. I mean, I don't, redundancies is the first thing, but like to your point, I mean it was, I think it was CloudFlare that was the original issue. But I mean, it was Spotify, AWS, Etsy Box, MailChimp, Google Cloud, discord, Shopify, OpenAI, Twitch, like everybody basically [01:17:00] runs through this service.

[01:17:00] Yeah. So when it went down, everything went down and yeah, I mean, I must have got a dozen messages, like text messages, LinkedIn messages, like, Hey, you, it's your, you and Mike's fault. But you guys had to go talk about what happens when, like, AI models and workers go down. God. So yes, we apologize if we jinxed the internet apparently.

[01:17:18] yeah, the, I don't, I don't have the answers. Like I think right now there is this argument for redundancy. I think it needs to be built into contingency planning and organizations as they become more reliant on agents and these different platforms like an OpenAI or Anthropic or Salesforce or Microsoft, like whatever you're reliant on.

[01:17:41] Yeah, they've all gone down before, like everybody doesn't have a hundred percent uptime, but the more your workers are dependent upon these agents to get anything done. I mean, it's almost like, you know, you're doing math and like you say, okay, you don't get the calculator for the next five hours and it's like, or spreadsheets [01:18:00] that can do formulas and I gotta like go back and do math.

[01:18:02] Well, if you have a generation of workers who never learned how to do stuff manually, then like, what whatcha gonna do? So, I don't know, I think right now it's probably one of those, like, it needs to start being part of your contingency planning. If we come to Penon agents and they're producing X amount of our output, what happens if we lose that output for 24 hours, 36 hours, whatever it is.

[01:18:24] I think it's just real things that need to start becoming part of actual strategic planning. 

[01:18:30] Mike Kaput: And maybe a small thing that can be helpful is like one of the best ways I think to. Be deploying AI in the first place is by really diligently documenting your existing workflows. Yeah. And then trying to apply AI to that.

[01:18:42] So if you're going through that process anyway, at the very least, hopefully you have some documentation or can build some processes in place to have this stuff live somewhere where I can go find if the Yeah. Agent stops working. 

[01:18:53] Paul Roetzer: When we did our AI for B2B Marketer Summit and I interviewed Andrew Au from Intercept.

[01:18:58] Yeah. That was what he was [01:19:00] saying. Like the first step for them as their agency was like doc clear documentation of workflows. And then once you have those workflows documented, then you can build the agent stuff around it. But in this case, yeah, the workflows are critical just so you even know how things happened.

[01:19:13] Yeah. And what you know, what these models are even doing that maybe you need to replace for a little while. 

[01:19:19] Meta Crackdown on “Nudify” Apps

[01:19:19] Mike Kaput: All right. A couple more topics here to round out the week. This one is a tough one, but we gotta talk about it. Meta is cracking down on. Apps that digitally undress people without their consent using ai.

[01:19:33] This week, meta filed a lawsuit against a company called Joy Timeline, which had been running hundreds of ads for what they call notify apps across Facebook, Instagram, messengers and threads. the tools are primarily marketed to men. They often target women, including celebrities. They've been linked to blackmail, sextortion, other forms of digital abuse, and some of them can end up in the hands of minors.

[01:19:55] So Meta says this company repeatedly tried to evade its ad review [01:20:00] system. The company claims to have removed many of the offending ads blocked related URLs, but it's getting harder to enforce as these apps use more and more sophisticated tactics to avoid detection. And it's not an isolated incident.

[01:20:15] There were investigations last year by 404 media and some lawsuits around non-consensual deepfake tools that are really quickly proliferating. So Paul, this is like a. Topic that definitely can make you sick to your stomach, but like we have to talk about it just briefly because it's not like a doomsday prediction.

[01:20:35] It's not like a one-off crazy headline. Like, this is happening fairly often and people and parents should probably be aware that it is possible 

[01:20:45] Paul Roetzer: and school leaders. Yeah. I mean it kind of, I I guess it starts to fall into these like societal impact ones, like the AI companion Yeah. Grieving, like these, these sort of topics that aren't always easy or fun to [01:21:00] talk about, but there has to be a level of awareness around what's going on and what the technology is capable of doing so that we can be proactive about it.

[01:21:10] And, you know, part of this is a technology platform story and like, is Facebook doing enough to, you know, curtail this stuff? Part of it is people are just disturbing, like, like who the hell creates this? Like, why? It's, it's just like. I mean, there's parts you're just like sad for humanity that someone decided like, Hey, let's go make some money by creating a product like this and then like people chose to work for a company to run ads for them to like, I don't know, like, yeah, there's parts where you just wanna be ignorant to things like this happening around the world, but that doesn't do anybody any of it.

[01:21:48] So yeah, I guess we're just trying to kind of shine a light on some of these things. So there's a level of awareness and whatever we can individually do proactively about it. 

[01:21:59] Updates to GPTs, Using Projects vs. GPTs

[01:21:59] Mike Kaput: [01:22:00] Alright, our final topic this week, couple things related to GPTs. So we kind of alluded to this. OpenAI has just released an upgrade for GPTs.

[01:22:08] So from now you now on, you can actually choose from the full set of ChatGPT models when you're configuring AGI PT. So previously if you built AGI PT, it would default I think, till our GPT-4 oh. But now you can specify which model your GPT should use. and also users can switch models apparently while they're using your GPT.

[01:22:32] So if you have already published GPT using older models, you'll probably wanna revisit those and test out how to optimize them or make them better even for new models, new options, especially reasoning models. And then also Paul, we wanted to talk a little bit about kind of a debate that sprung up in some of the comments we've seen of like using GPTs versus using projects.

[01:22:55] They have overlapping capabilities. In some cases you could use them for similar things. In [01:23:00] certain cases though they're different. Maybe walk me through the implications of the new GPT upgrade and then GT's first projects. 

[01:23:09] Paul Roetzer: Yeah, so this is, Mike and I were literally debating whether we even talk about this one.

[01:23:12] 'cause neither of us was like super confident as we were getting ready to record this about if we could even explain this properly. Then I was like, well maybe the fact that we can't figure this out is actually worth talking about. So I had put something up on LinkedIn a couple weeks ago about how custom GPT were like a great starting point and somebody commented like, yeah, they're great, but projects are better.

[01:23:33] And I was like, does he know something about projects I don't know about? Like I didn't, I didn't think they were a replacement to projects or, or to GPTs. I thought they were like complimentary in different uses. Yeah. So I started like doubting myself. So I actually went into ChatGPT and I said, how do you know when to use each one of these things?

[01:23:52] Like what is the difference in, what's the similarities? And ChatGPT wrote something and I was like, that doesn't even make sense. Like that's not very good. [01:24:00] So I had to go do more research on my own again and go to the help pages. So I don't know, without making this a main topic, here's the gist of where I think Mike and I landed.

[01:24:10] Mike, if missing something to say it. Projects basically function, function like folders in your drive. Like if, if you want to create different things, different files, share them in that same file. So you can go back and reference those chat threads. So you may have 20 different chats that you've had related to, I don't know, let's say, business strategy and all of live in a bus business strategy project.

[01:24:33] Yeah. And it could be about compensation, it could be about tech, it could be about whatever. So all of those live in a, a project and that you can use deep research, you can use voice mode, you can, you know, do images, like all of that just sits in a folder. So that's what projects is. They're basically like folders.

[01:24:49] GPTs is something you create where you're building an AI assistant that's tailored on specific instructions. You can give it files and its knowledge base, and then [01:25:00] you can use that assistant for that specific thing, and you can share that assistant with other people. Right? So like internally at Smart, we build GPTs all the time.

[01:25:11] Then we share them with each other. Like, here's, here's a great one if anybody wants to use this one. Projects don't work that way. It's not like, Hey, here's a great project, it's just a folder. So I think, if I'm wrong, if someone from OpenAI listens to this and is like, Hey, that's not how it is, then like, one, put it somewhere on your website.

[01:25:28] Right? We put the pieces together. but two, I would love clarification, but best I can currently interpret that as how to think. 

[01:25:40] Mike Kaput: That would be my interpretation. I'm sure there's, I'm sure you could do the same things in certain cases in a project that you would in AGI PT, but to me the use cases are often a little different 

[01:25:52] Paul Roetzer: for each one.

[01:25:52] Yeah. The one thing that I would add that I don't know if they fixed yet, so I let, let's say I'm using my CO C-E-O-G-P-T, which I [01:26:00] use all the time. I have a conversation in there about, recruiting. I cannot take that. One thread and move it into a project. So they don't let you take custom GPT chats. Yes.

[01:26:14] And add them to projects, which is frustrating. Very, very frustrating. Yeah. 

[01:26:19] Mike Kaput: And just one kind of final note here on these GPT updates, like, I'm excited to be able to like switch the models that GPTs use, but the fact that the user can also switch them. Yeah, I get that feels like a recipe for, I mean I guess I'm not producing a lot of gpt, so like everyone, the general public needs to use, but this feels like a recipe for disaster because I, people know how to select models correctly in the first place.

[01:26:43] Sometimes I don't even, so 

[01:26:46] Paul Roetzer: yeah, so I, the thing I'm doing with mine, Mike, is like the ones I have that are public that were built using 4.0 Yeah. That like are based, I'm going in and updating in the instructions in the backend, that four oh is the recommended model [01:27:00] to make sure that it's, at least that's the one that's being primarily used.

[01:27:03] 'cause I have no idea if someone goes in and picks oh three. Oh three pro if it breaks the GPT, like if it is, if it's even gonna do what it's supposed to do anymore. Right, right. 

[01:27:12] Mike Kaput: And while it does have that little label next to it that you can indicate a model is recommended by the creator, it's like you can just ignore that.

[01:27:20] Totally. Yeah. Yep. Alright, Paul, that is a wrap on a busy week in ai. Appreciate as always you breaking everything down for us. 

[01:27:29] Paul Roetzer: Yeah. And again, there was 18 things that didn't make the cut. So a reminder, we've got two newsletters. We've got this week in AI that comes out on Tuesdays, right Mike? That comes out on Tuesdays when the podcast drops and that's got, links to everything.

[01:27:44] and then I publish a exec, AI insider on Sundays. That's through Smart RX ai. You can go subscribe to that one. And on the Sunday one, I'm sort of previewing some of what's to come with an editorial upfront. And then that this week in AI includes like a bunch more links. [01:28:00] So, yeah, this would be a good week to be subscribed to the newsletters, I would say.

[01:28:05] Paul Roetzer: All right, thanks everyone. And as a reminder, we will have a second episode this week. AI answers will drop on Thursday and that is based on the last class that we did, which was 

[01:28:19] Mike Kaput: Intro to ai. 

[01:28:20] Paul Roetzer: Was it Intro? Yeah. Yes. 'cause Scaling's gonna, yes, we did an intro to AI class last week and, or maybe that was this week.

[01:28:27] I am completely lost right now. the most recent intro to AI class we did, we had like 70 or 80 questions during that class. So we are gonna do our best to get through as many of those as we can. So Cathy McPhillips will be back with me for an AI answers episode 1 54. That will be alright. Thanks everyone.

[01:28:47] Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, [01:29:00] downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earn professional certificates from our AI Academy and engaged in the Marketing AI Institute Slack community.

[01:29:11] Until next time, stay curious and explore ai.