58 Min Read

[The AI Show Episode 162]: GPT-5’s Messy Launch, Meta’s Troubling AI Child Policies, Demis Hassabis’ AGI Timeline & New Sam Altman/Elon Musk Drama


Serious about learning how to use AI? Sign up for our AI Mastery Membership.

LEARN MORE

The aftershocks of GPT-5’s chaotic rollout continue as OpenAI scrambles to address user backlash, confusing model choices, and shifting product strategies.

In this episode, Paul Roetzer and Mike Kaput also explore the fallout from a leaked Meta AI policy document that raises major ethical concerns, share insights from Demis Hassabis on the path to AGI, and cover the latest AI power plays: Sam Altman’s trillion-dollar ambitions, his public feud with Elon Musk, an xAI leadership shake-up, chip geopolitics, Apple’s surprising AI comeback, and more.

Listen or watch below, then scroll down for the show notes and the transcript.

Listen Now

Watch the Video


Timestamps

00:00:00 — Intro

00:06:00 — GPT-5’s Continued Chaotic Rollout

00:16:03 — Meta’s Controversial AI Policies

00:28:27 — Demis Hassabis on AI’s Future

00:40:55 — What’s Next for OpenAI After GPT-5?

00:46:41 — Altman / Musk Drama

00:50:55 — xAI Leadership Shake-Up

00:55:55 — Perplexity’s Audacious Play for Google Chrome

00:58:32 — Chip Geopolitics

01:01:43 — Anthropic and AI in Government

01:05:17 — Apple’s AI Turnaround 

01:08:09 — Cohere Raises $500M for Enterprise AI 

01:10:57 — AI in Education

Summary:

GPT-5’s Continued Chaotic Rollout

In the week and a half since GPT-5 launched, OpenAI has been scrambling to respond to public outcry and its own missteps around the launch.

Just one day after GPT-5 dropped on August 7, OpenAI was already dealing with a crisis: users were up in arms that the company had decided to get rid of legacy models and force everyone to use GPT-5, rather than let them pick between the new model and older ones like GPT-4o.

Users were also upset about surprise rate limits and the fact that GPT-5 didn't seem all that smart. Altman took to X on August 8 to address the concerns, noting that OpenAI would double GPT-5 rate limits for Plus users, that Plus users could continue to use 4o, and that an issue with GPT-5's model autoswitcher had temporarily degraded its intelligence.

On August 12, Altman shared even more changes. Users could now choose between Auto, Fast, and Thinking modes in GPT-5. Rate limits for GPT-5 Thinking went up significantly. And paid users could also access other legacy models like o3 and GPT-4.1.

He also mentioned that the company was working on updating GPT-5’s personality to feel “warmer,” since users had pushed back on that as well.

Meta’s Controversial AI Policies

A leaked 200-page policy document reveals that Meta's AI behavior standards explicitly permitted bots to engage in romantic or sensual chats with minors, so long as they didn’t cross into explicit sexual territory, according to an exclusive report by Reuters.

This leaked document discusses the standards that guide Meta’s generative AI assistant, called Meta AI, and the chatbots that you can use on Facebook, WhatsApp, and Instagram.

Basically, it’s a guide for Meta staff and contractors on what they should “treat as acceptable chatbot behaviors when building and training the company’s generative AI products,” says Reuters.

And some of these guidelines are quite controversial.

“It is acceptable to describe a child in terms that evidence their attractiveness,” according to the document. But it draws the line at describing a child under 13 in terms that indicate they are sexually desirable.

That rule has since been scrubbed, but it wasn’t the only one raising eyebrows. The same standards also allowed bots to argue that certain races are inferior as long as the response avoided dehumanizing language.

Meta said these examples were “erroneous” and “inconsistent” with its policies. Yet they were reviewed and approved by the company’s legal, policy, and engineering teams, including its chief ethicist.

The document also okayed generating false medical claims or sexually suggestive images of public figures, provided disclaimers were attached or the visual content stayed absurd enough to be clearly unreal.

The company says it’s revising the guidelines. But the fact that these rules were live at all raises serious questions about how Meta governs its bots, and who, exactly, those bots are designed to serve.

Demis Hassabis on AI’s Future

A new episode of the Lex Fridman podcast gives us a rare, in-depth conversation with one of the greatest minds in AI today.

In it, Fridman conducts a 2.5-hour interview with Google DeepMind CEO and co-founder Demis Hassabis. 

Throughout the interview, Hassabis covers a huge amount of ground, from Google’s latest models to AI’s impact on scientific research to the race towards AGI.

On that last note, Hassabis says he believes AGI could arrive by 2030, with a fifty-fifty chance in the next five years.

And his definition of AGI is a high bar: He sees it as AI that isn’t just brilliant at narrow tasks, but consistently brilliant across the full range of human cognitive tasks, from reasoning to planning to creativity.

He also believes AI will surprise us, like DeepMind’s AlphaGo AI system once did with Move 37. He imagines tests where an AI could invent a new scientific conjecture, the way Einstein proposed relativity, or even design an entirely new game as elegant as Go itself.

Still, Hassabis stresses uncertainty. Today’s models scale impressively, but it’s unclear whether more compute alone will get us there or whether entirely new breakthroughs are needed. 


This episode is brought to you by our Academy 3.0 Launch Event.

Join Paul Roetzer and the SmarterX team on August 19 at 12pm ET for the launch of AI Academy 3.0 by SmarterX, your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Register here.


This week’s episode is also brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: At some point, these labs have to work together. Like we will arrive at a point where humanity depends on labs and probably countries coming together to make sure this is done right and safely. And I just hope at some point everyone finds a way to do what's best for humanity, not what's best for their egos.

[00:00:23] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:53] Join us as we accelerate AI literacy for all.[00:01:00] 

[00:01:00] Welcome to episode 162 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording August 18th, 11:00 AM Eastern Time. I don't know that I expect as busy of a week, but who knows, like we just never know when new models are gonna drop, but a lot of good stuff to talk about.

[00:01:20] Some of these are, like, I don't know, almost like drilling down a little bit into some bigger items we've hit on in recent weeks, Mike. Like, I think there's just some recurring themes here, and, so I don't know, plenty of fascinating things to talk about. So even in the weeks when there aren't models dropping, there's always something to go through.

[00:01:38] So we got a lot to cover. This episode is brought to us by the AI Academy by SmarterX launch event. So depending on what time you're listening to this, we are launching AI Academy 3.0 at noon Eastern on Tuesday, August 19th. So if you're listening before that and you want to jump in and join that launch event live, you can do that.

[00:01:59] [00:02:00] The link is in the show notes. If you are listening to this after, or just couldn't make the launch event, we will make it available on demand. So same deal. You can still go to the same link in the show notes. SmarterX.ai is the website where it's gonna be at, but you can go in there and watch it on demand.

[00:02:17] So we talked a little bit about this in recent weeks, but in essence, we've had an AI academy that offered online education and professional certificates since 2020, but it wasn't the main focus of the business. You know, SmarterX is an AI research and education firm. We have the different brands, Marketing AI Institute.

[00:02:36] This podcast would be a, you know, brand within SmarterX. And then AI Academy. But last November, you know, I made the decision to really put way more of my personal focus into building the Academy, and then also put the resources of the company behind it and build out the staff there and really try and scale it up.

[00:02:55] So we've spent the better part of the last 10 months really building AI Academy, [00:03:00] re-imagining everything, and that's what we're gonna kind of introduce on Tuesday, August 19th, is share the vision and the roadmap, go through all the new stuff. Mike and I have been in the lab building for the last, I don't know, I feel like the last year of my life, but I would say intensely,

[00:03:14] Mike, what, like, I don't know, eight to 10 weeks probably? You and I have been spending the vast majority of our time creating new courses, these new series we're launching, envisioning what AI Academy Live would become, this new gen app product review series we're gonna be doing with the weekly drops that Mike's gonna be taking the lead on in the early going here. And then

[00:03:34] we're just gonna kind of keep expanding everything, you know, expanding the instructor network and building out personalized learning journeys. It's really exciting, honestly. Like, I've done a lot in my career, which, hard to believe, has, you know, been over the last 25 years now.

[00:03:50] This is maybe the most excited I've ever been for a launch of, like, something that we've built. And so I'm just personally, like, really excited to get this out into the world and [00:04:00] hopefully help a lot of people. I mean, our whole mission here is to drive personal and business transformation, you know, to empower people to really apply AI in their careers and in their companies and in their industries.

[00:04:10] And, you know, give 'em the resources and knowledge they need to really be a change agent. And so, you know, I'm optimistic we're on the right path. I'm really excited about what we're gonna bring to market. So, again, check that out. If you're listening after August 19th at noon, don't worry about it.

[00:04:29] Check out the on-demand version. And then we'll probably share some more details next week, and we have a new website we will be able to direct you to that makes this all a lot easier. That's another thing. We've been behind the scenes building the website and getting all this stuff ready, so that'll be ready to go.

[00:04:43] Alright, and then MAICON. We've been talking a lot about our flagship event. This is through our Marketing AI Institute brand. This is our sixth annual MAICON, MAICON 2025, happening October 14th to the 16th in Cleveland. Incredible lineup. I think this week we [00:05:00] may announce a couple of the new keynotes we've brought in, so more announcements coming for the main stage general sessions.

[00:05:07] But you can go check it out, it's probably like, I don't know, 85, 90% of the agenda is live now. So go check that out at MAICON.AI. That is MAICON.AI. You can use POD100 to get a hundred dollars off of your ticket. So again, check that out. We would love to see you there. Me, Mike, the entire team will be there.

[00:05:29] Mike and I are running workshops on the first day, and then you have presentations throughout, and we'll be around. So, again, Cleveland, October 14th to the 16th, MAICON.ai. Alright, Mike, it has not been a great week for OpenAI. I mean, they've got their new model. We talked a lot about the new model last week, but, yeah, they were busy in crisis communications mode all week, kind of trying to resolve a lot of the blowback they got from the new model and how they rolled it out.

[00:05:56] So let's catch up on what's going on with OpenAI and GPT-5. [00:06:00]

[00:06:00] GPT-5’s Continued Chaotic Rollout

[00:06:00] Mike Kaput: Yeah, you are not wrong, Paul, because in the week and a half since GPT-5 launched, OpenAI has kind of found itself scrambling to respond to both public outcry and some company missteps that they've made and acknowledged related to this launch.

[00:06:18] So, kind of a rough timeline of what's been going on here. So, GPT-5 drops on August 7th. Just one day after, OpenAI is already dealing with a crisis. Many users were up in arms about the fact that the company, basically on almost a whim, decided to get rid of legacy models. And at the time, everyone was forced to use GPT-5 rather than pick between the new model and the older ones like GPT-4o.

[00:06:48] Users at the time were also upset about some surprising rate limits, especially for Plus subscribers, and the fact that GPT-5 at the time didn't seem all that smart. [00:07:00] Now, Altman took the lead, posting on X on August 8th to address these concerns. He noted at the time that OpenAI would double GPT-5 rate limits for Plus users, that Plus users would be able to continue to use 4o specifically, and that there had been an issue with the model's autoswitcher that switches between models.

[00:07:21] It caused temporary issues with its level of intelligence. Now, just a few days later on August 12th, Altman shared even more changes. So users can now choose between Auto, Fast, and Thinking models in GPT-5, the rate limits for GPT-5 Thinking went up significantly, and paid users also got access to other legacy models like o3 and GPT-4.1.

[00:07:49] Altman also said the company is working on updating GPT-5's personality to feel warmer, since there was also backlash about that from [00:08:00] users too. So Paul, this has been an interesting one to follow. Like it's good to see OpenAI responding quickly to user feedback, but trying to keep up with all these changes

[00:08:14] that they're making to this model right out of the gate, I don't know about you, but it's giving me whiplash personally. Like, what's going on?

[00:08:21] Paul Roetzer: Oh, yeah. I mean, I've been trying to follow along obviously daily. I mean, we've been tracking this and reading the updates from Sam, reading the updates from OpenAI for the Exec AI newsletter on Sunday. Like I was going through on Saturday morning, trying to kind of like understand what's going on, reading the system card, trying to understand the different models and how they relate.

[00:08:42] 'cause in the system card they actually show, like, okay, if you're on 4o, the new one is GPT-5 main. If you were using 4o mini, the new one is GPT-5 main mini. If you were on o3, which, you and I love the o3 model. Mm-hmm. That's now GPT-5 thinking. If you were on o3 pro, which you and I both pay for [00:09:00] Pro, that's now GPT-5 thinking pro. Because I've actually been trying, I've been working on a couple of things, like finalizing some of these courses for the Academy launch.

[00:09:09] And I use deep research, I use the reasoning model. So I use Gemini 2.5 Pro, and then I often would use o3 Pro. And I'm like, wait, what model am I using? Do I use the thinking model? Do I use the, oh wait, no, no, no. It's the thinking pro and I'm back to like this confusion about what to actually use. And it's tricky because honestly, like I didn't, we talked about this on the last episode.

[00:09:31] I didn't have the greatest experience in my first few tests of GPT-5 and this router, where it's like, I don't even know if it's using the reasoning model when I'm asking it something that would require reasoning, 'cause it wasn't telling you what model it was using. So I wanted the choice back, but it's like I wanted the choice hidden.

[00:09:50] Like I want to eventually trust that the AI is just gonna be better at picking what model to use or how to surface the answers for me. But it was very obvious initially that that was [00:10:00] not the case, that the router wasn't actually doing a great job, or, at least, the transparency was missing from it.

[00:10:07] So I don't know. I mean, I think we've talked a lot, you covered a lot of the things they changed. I don't wanna, like, reiterate a lot of that. I think that, you know, maybe there's just like business and marketing and product lessons to be learned by everyone here. Like as you think about your own company and you think about your customers and, like, doing these launches. And even top of mind for me, honestly, with our AI Academy rollout, you can take missteps.

[00:10:31] Like you're moving fast. Like there's lots of moving pieces, as was the case with the GPT-5 launch. You got product working on a thing, you got marketing doing a thing, you got leadership doing their thing. And like, somehow you gotta bring it all together to release something. And when you're doing things fast, like you're not always gonna get it perfect.

[00:10:48] But you try and think ahead on these things. And so, I don't know, like I think they have some humility. Like Sam, again, you can judge however you want the decisions they made and whether the model [00:11:00] was rolled out properly, but at least they're just stepping up and saying, yeah, we kind of screwed up.

[00:11:03] Like he admitted this to, you know, some journalists on Thursday. Like it just wasn't, we didn't do it right, there was a bunch of things we should have changed. And so I think part of this is interest in the model, and part of it is, you know, we can all kind of learn. They're taking risks out in the open that a lot of companies wouldn't take, and they're launching things to 700 million users. Like most of us in our careers would never launch to that many people, and it's not gonna be perfect.

[00:11:26] So, I don't know, I think that's part of what I've been fascinated by this whole process, is just watching how they've adapted. And, you know, I spent a fair amount of my early career working in crisis communications, and, you know, it's like a case study, a live case study of all this stuff.

[00:11:41] So, I don't know, I think it's intriguing. I think the changes they're making make sense. I think they'll figure it out. But like I said last week, my biggest takeaway from all this is they don't have the lead anymore. Like, that was the biggest thing I was waiting for with GPT-5: was it gonna be head and shoulders better than

[00:11:58] Gemini 2.5 Pro and [00:12:00] the other leading models, and the answer is no. It does not appear to be a massive leap forward, and I fully expect Gemini, you know, to have a newer model soon, and the next version of Grok and the next version of Claude to probably be at least scoring-wise better than GPT-5. So I think that's the most significant thing of all of this, is that the frontier models have largely been commoditized, and now the game changes.

[00:12:26] It's no longer who has the best model for a year-or-two run. It's now all about all the other elements of this.

[00:12:33] Mike Kaput: What also jumped out to me from a very practical, kind of applied AI day-to-day perspective is you really, really, really need to have a process for cataloging and testing your prompts and your GPTs, since GPTs are going to be forced over to the new models.

[00:12:53] Yes. At some point as well. That's not until October,

[00:12:55] Paul Roetzer: I think they said. Yeah. 

[00:12:57] Mike Kaput: Yeah. I think it's like 60 days from the announcement. So yeah, [00:13:00] that puts it roughly in October.

[00:13:01] Paul Roetzer: Yeah, they, I got an email actually over the weekend that said your GPTs would default to GPT-5. Yes. As of October.

[00:13:08] Mike Kaput: Yeah. And I think that's not necessarily the end of the world.

[00:13:12] There are ways around it if your GPTs break, but if you're not at this stage, if you're relying on GPTs or certain prompting workflows to get real work done, you probably wanna be testing those with other models too. Because if something like this happens, if there's a botched rollout, issues with launch whiplash back and forth between new things being added or taken away, that can get really chaotic if you're fully dependent on a single model provider.

[00:13:39] I think. 
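To make that concrete, here is a minimal sketch of what a prompt regression harness might look like, assuming the official openai Python SDK; the model names and catalog entries are illustrative placeholders rather than recommendations.

```python
# A minimal sketch of a prompt regression harness, assuming the official
# "openai" Python SDK (pip install openai). The model names and catalog
# entries are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical catalog: the prompts your real work depends on.
PROMPT_CATALOG = {
    "summarize_report": "Summarize the following report in three bullet points: ...",
    "draft_outreach": "Draft a two-paragraph outreach email about ...",
}

# Models to compare whenever a provider ships, changes, or retires a model.
MODELS = ["gpt-5", "gpt-4o"]

def run_regression() -> None:
    for prompt_name, prompt in PROMPT_CATALOG.items():
        for model in MODELS:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            output = response.choices[0].message.content or ""
            # In practice you would diff this against a saved baseline and
            # flag regressions, not just print a preview.
            print(f"[{prompt_name} | {model}] {output[:100]}...")

if __name__ == "__main__":
    run_regression()
```

The point is simply to re-run the prompts you depend on against every model you might be moved to, and to do it before a forced migration, like the October default switch for GPTs, catches you off guard.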

[00:13:40] Paul Roetzer: Yeah. Not to mention all the SaaS companies who build on top of these models through the API. Yeah, if the API gets screwed up, if the model doesn't perform as well, then all of a sudden you, you may not even know you're using the OpenAI API within some third-party software product, like a Box or HubSpot or, you know, [00:14:00] Salesforce, Microsoft, like they're all built on top of somebody else's models.

[00:14:04] And if the change affects the performance of the thing, all of a sudden it affects the way your company runs. And yeah, these are very real things that you honestly need to probably contingency plan for, for when these impacts happen. Like we've talked about it before on the podcast, like, what if the API goes down?

[00:14:22] Like, what if the, mm-hmm, the solution is just completely not available, and your company, your workflows, your org structure is dependent upon this intelligence, these AI assistants, AI agents, and then they're just not available, or they don't perform like they're supposed to, or they got dumber for three days for some reason?

[00:14:39] Like, these are very real things. Like this is gonna be part of business as normal moving forward, and I don't know anybody who's really prepared for that.

[00:14:47] Mike Kaput: Yeah. I know we haven't done this at SmarterX, and we're probably some ways away from doing this, but at some point you probably are going to just want to have backup, locally run, open-source models, so you have access to some [00:15:00] intelligence.

[00:15:00] Right? Yeah. If something goes down, I mean, those change all the time, but that might be worth a long-term consideration, especially because there's going to be a point, we've talked about, where as AI is infused deeply enough in every business, you won't be able to do anything without it.

[00:15:16] Paul Roetzer: Yeah, yeah. It's interesting, like we just upgraded the internet connections at the office, and, you know, like you're saying, it's almost like that, where we are keeping the new main line, but then you keep the old service, which isn't as good, but it functions, like you can still function as a business if it goes down.

[00:15:31] So you have two different providers, and then if one goes down, hopefully the other, you know, redundancy is there, even if it's not as efficient or powerful. And yeah, it's an interesting perspective. Like you could see where you have, you know, the more efficient, smaller models that maybe run locally, that, you know, you build, and maybe they're just the backup models, but yeah.

[00:15:50] Right. I mean, people are gonna be very dependent upon this intelligence, and yeah, you gotta start thinking about the contingency plans for that. And that's where the IT department, the CIO, the CTO, that's where they [00:16:00] become so critical to all of this.
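One way to sketch the contingency planning Paul describes is a provider fallback chain: try the hosted frontier model first, then fail over to a locally run open-source model if the call errors out. This is a rough sketch, assuming the openai Python SDK and a local Ollama server exposing its OpenAI-compatible endpoint; the model names and the localhost URL are assumptions to adapt to your own setup.

```python
# A rough sketch of a provider fallback chain, assuming the "openai" Python
# SDK and a local Ollama server exposing its OpenAI-compatible endpoint.
# Model names and the localhost URL are assumptions; adapt to your setup.
from openai import OpenAI

PROVIDERS = [
    # Primary: a hosted frontier model.
    ("hosted", OpenAI(), "gpt-5"),
    # Backup: a smaller open-source model served locally.
    ("local", OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"), "llama3.1"),
]

def complete(prompt: str) -> str:
    last_error: Exception | None = None
    for label, client, model in PROVIDERS:
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return f"[{label}] {response.choices[0].message.content}"
        except Exception as err:  # outage, rate limit, timeout, etc.
            last_error = err  # fall through to the next provider
    raise RuntimeError(f"All providers failed: {last_error}")

if __name__ == "__main__":
    print(complete("One sentence on why redundancy matters."))
```

Like the redundant internet lines Paul mentions, the local model is the slower, less capable backup, but the failover logic lives in your own code rather than in any single provider's uptime.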

[00:16:03] Meta’s Controversial AI Policies

[00:16:03] Mike Kaput: Alright, our next big topic this week: we have a leaked 200-page policy document that Reuters has obtained about Meta's AI behavior standards.

[00:16:14] Unfortunately, this document included guidance that Meta was explicitly permitting bots to engage in romantic or sensual chats with minors so long as they did not cross into explicit sexual territory. So Reuters has this exclusive kind of deep dive into this leaked document, and basically this document,

[00:16:34] it has some pretty tough stuff in it, but it discusses basically the standards that guide Meta's generative AI assistant, Meta AI, and the chatbots that you can use on Facebook, WhatsApp, and Instagram. So it's not out of the ordinary to have documents like this. This is a guide for Meta staff and contractors, basically, on what they should, quote, treat as acceptable chatbot behaviors when building and training the [00:17:00] company's generative AI products.

[00:17:01] That's according to Reuters. But where it gets tough is that some of these are just really controversial. So they say, quote, it is acceptable to describe a child in terms that evidence their attractiveness, according to the document, but it draws the line explicitly at describing a child under 13 in terms that indicate they are sexually desirable.

[00:17:22] Now, that rule has since been scrubbed, according to Meta, but it was not the only one that Reuters flagged as very concerning. The same document also allowed bots to argue, basically, that certain races are inferior as long as the response avoided dehumanizing language. Meta claims these examples were, quote, erroneous, and, quote, inconsistent with its policies.

[00:17:47] Yet this document was reviewed and approved by the company's legal team, policy team, engineering team, and, interestingly, its chief ethicist. Now, the document also [00:18:00] okayed generating false medical claims or sexually suggestive images of public figures, provided disclaimers were attached, or that the visual content stayed just absurd enough that you would know

[00:18:12] it's not, like, actually real. The company says it's revising the guidelines, but the fact these rules were in place at all, at any point, raises some pretty serious questions. So, Paul, this is definitely a really tough topic to research and discuss. Every AI company out there, it should be said, has to make decisions about how humans can and can't interact with their models.

[00:18:37] I'm sure there is a lot of tough stuff being discussed and seen in these training data sets, you know, we've talked about humans having to label that data. But I don't know, just something about this seems to go out of bounds in some very worrying ways, and I'm wondering if you could maybe put this in context for us and kind of talk through what's worth paying attention to here, [00:19:00] beyond kind of the sensational headline.

[00:19:02] Paul Roetzer: This is a very, very uncomfortable conversation, honestly. So, I mean, I've said before, I have a 12-year-old and a 13-year-old. They're not on social media and hopefully will not be for a number of years here. Meta has a lot of users across Facebook and Instagram and WhatsApp. And they affect a lot of people.

[00:19:22] It's a primary communications channel. It's a primary information gathering channel. And so it's an influential company. Now, on the corporate side, this isn't necessarily affecting any of us or many of us from a business user perspective. I mean, we use these social channels to promote our companies and things like that, but we're not building their agents into our workflows.

[00:19:44] It's not kind of like Microsoft and Google, but it still has a massive impact, especially, you know, if you're a B2C company and you're, you know, dependent upon these channels to communicate with these audiences. So I think it's extremely important that people understand what's going [00:20:00] on and what the motivations of these companies are.

[00:20:02] I mean, Meta is one of the five major frontier model companies that, you know, is gonna play a very big role in where we go from here. So, I don't know. I went into Facebook. I don't use Facebook very often. I went in there, I don't have access to these characters through Facebook. I didn't, like, I don't even know how you would do it, honestly.

[00:20:20] And so then I went into Instagram. I didn't see it there, but then I just did a search and I found they have aistudio.instagram.com you can go to and actually, like, look at the different characters that they're creating that people would be able to interact with. Because I had seen a tweet, I think it was over the weekend, from Joanne Jang from OpenAI, and she had shared a post that showed, what was it?

[00:20:44] We had Russian Girl who, obviously these are

[00:20:49] Mike Kaput: AI characters you can chat with. Yes. An AI

[00:20:51] Paul Roetzer: character. Russian Girl is a Facebook character. 5.1 million messages. And then, and definitely a teen. [00:21:00] And then Russian, or, no, this stepmom, which was 3.3 million. And so she reshared this post that someone had put up: oh man, this is nasty.

[00:21:09] Is this AI stepmom what Zuck meant by personal superintelligence? And so Joanne's post that I thought was important was, she said, I think everyone in AI should think about what their, quote unquote, line is. Where if your company knowingly crosses that line and won't walk it back, you'll walk away. This line is personal, will be different for everyone, and can feel far-fetched even.

[00:21:33] You don't have to share it with anyone, but I recommend writing it down as an anchor for your future self. Inspired by two people I deeply respect, who just did, from different labs. So she, as an AI researcher working within one of these labs, is basically saying the companies we work for are going to make choices.

[00:21:50] Some of these choices are going to be counter to your own ethics, morals, principles, and you have to know where the line is when you're gonna walk away. [00:22:00] And so the Reuters article, Mike, that you mentioned, I would recommend people read. Again, this is hard, harder stuff to, like, think about.

[00:22:06] It's easier to go through your life and be ignorant to this stuff, trust me, like I try sometimes. But it talks about, you know, this being built into their AI assistant, Meta AI, and the chatbots within Facebook, WhatsApp, Instagram. Meta did confirm the authenticity. The company, as Mike mentioned, removed portions which stated it is permissible for the chatbot to flirt and engage in romantic role play with children.

[00:22:30] Meaning it was allowed, it was permissible. Mm. Meta spokesperson Andy Stone said the company's in the process of revising the document, and that such conversations with children never should have been allowed. Keep in mind, some human wrote these in there, and then a bunch of other humans with the authority to remove them and say, this is not our policy,

[00:22:51] chose to allow them to stay in it. So we can remove it now and we can say, hey, it shouldn't have been in there, but it was, and people in power at Meta made the decisions to allow [00:23:00] these things to remain. They had an interesting perspective from a professor at Stanford Law School who studies tech company regulation of speech, and I thought this was a fascinating perspective.

[00:23:12] She said there's a lot of unsettled legal and ethical questions surrounding generative AI content. She said she was puzzled that the company would allow bots to generate some of the material deemed acceptable in the document, such as the passages on race and intelligence, but she said there's a distinction between a platform allowing a user to post troubling content and producing that material itself.

[00:23:32] So Meta, as the builder, you know, in theory, of these AI characters, is allowing those characters, which are an extension of Meta, to create things that are ethically and legally questionable. So I think that's the biggest challenge, is, like, from a legal perspective, where this all goes. But they very quickly heard from the US government. So Senator Josh Hawley

[00:23:55] said he is launching an investigation into Meta to find out whether Meta's generative AI [00:24:00] products enable exploitation, deception, and other criminal harms to children, and whether Meta misled the public or regulators about its safeguards. Hawley called on CEO Mark Zuckerberg to preserve relevant materials, including any emails that discussed all this, and said that Meta must produce documents about its generative AI-related content risks and standards, lists of every product that adheres to those policies, and other safety and incident reports.

[00:24:23] So, I don't know, I mean, this kind of goes back to, I think it was episode 161, I think this was just last week when I was talking about this. Maybe it was 160. That people have to understand, like, there's humans at every aspect of this. Like, yes, we're building these AI models and they're kind of like alien intelligence, and we're not even really sure exactly what they're capable of, or why they're really able to do what they do.

[00:24:46] That being said, there's humans in the loop at every step of this. Like the data that goes in to train 'em, the pre-training process, the post-training where they're kind of adapted to be able to do specific things and they learn, you know, what's a good output, what's a bad [00:25:00] output, the system prompt that gives it its personality, the guardrails that tell it what it can and can't do. Because the thing that you have to keep in mind is they're trained on human data, good and bad.

[00:25:11] They learn from all kinds of stuff, things that many of us might consider well beyond the boundaries of being ethical and moral. They still learn from that. And at the end of the day, they just want to do what they're asked to do. Like they have the ability to do basically anything you could imagine, good and bad.

[00:25:32] They want to just answer your questions. They want to fulfill your prompt requests. It's humans that tell them whether or not they're allowed to do those things. And so when you look at the stuff in the Reuters article, it's almost hard to imagine the humans on the other end who are sitting there

[00:25:49] deciding the line, like, where is it no longer okay to say something to a child? So it's okay if it says this, but not this. And then you have to figure out how [00:26:00] to prompt the machine to know that boundary every time that someone tries to get it to do something bad. It's just a really difficult thing to think about, and it's not gonna go away.

[00:26:14] Like this is gonna become very prevalent. I think we're almost, kinda, like in 2020 to 2022, where we were looking out, we knew the language models were coming, you knew they were gonna be able to write like humans. We wrote about it in our book in 2022, like, what happens when they can write like humans.

[00:26:29] And at the time people hadn't experienced GPT yet. And I kind of feel like that's sort of the phase we're in right now with all of the ramifications of these models. The vast majority of the public has no idea that these things are capable of doing this, that these AI characters exist, that they can do things that you wouldn't want them doing, conversations you wouldn't want them having with your kids.

[00:26:53] Most people are blissfully unaware that that's the reality we're in. And like I said, I'd love to live in the [00:27:00] bubble and pretend like it's not. This is the world we are in, the world we are given, and we just gotta kind of figure out how to deal with it, I guess. I don't know.

[00:27:08] Mike Kaput: Yeah. If you were someone who is blissfully unaware of this, sorry for this segment.

[00:27:12] Yeah. But it is, it is deeply important to talk about, right? Yeah. Because you have to have some, you know, the term we always throw around in other contexts is like situational awareness, right? Yeah. But there's some to be had around this, especially if you have kids. 

[00:27:25] Paul Roetzer: Yeah. And I think you gotta, I mean, there's just, I don't wanna get into this stuff right now.

[00:27:31] There's much darker sides to this, and I think you have to pick and choose your level of comfort of how far down the rabbit hole you want to go on this stuff. But I think if you have kids, especially in those teen years, you have to at least have some level of competency around these things so you can help guide them properly.

[00:27:54] We'll put a link to the KidSafe GPT I built, a GPT I built last summer, called KidSafe GPT for [00:28:00] Parents. That's designed to actually help parents sort of talk through these things, figure out these things, put some guidelines in place. And that might be a good starting point for you if, like, this is tough for you, you're not really sure even how to approach this with your kids. That GPT does a really nice job of just kind of helping people.

[00:28:18] I just trained it to be like an advisor to parents, to help them, you know, figure out online safety stuff for the kids.

[00:28:27] Demis Hassabis on AI’s Future

[00:28:27] Mike Kaput: Alright, our third big topic this week: a new episode of the Lex Fridman podcast gives us a rare, in-depth conversation in long form with one of the greatest minds in AI today. So in it, Fridman conducts a two-and-a-half-hour interview with Google DeepMind CEO and co-founder Demis Hassabis.

[00:28:48] Hassabis covers a huge amount of ground. He talks about everything from Google's latest models to AI's impact on scientific research to the race towards AGI. And on that [00:29:00] last note, Hassabis says he believes AGI could arrive by 2030, with a 50-50 chance of it happening in the next five years. And he has a really high bar for what his definition of AGI is.

[00:29:11] He sees it as AI that isn't just brilliant at narrow tasks, which is what plenty of people would define as AGI, but consistently brilliant across the full range of human cognitive work, from reasoning to planning to creativity. He also believes AI will surprise us, like DeepMind's AlphaGo AI system once did with its famous Move 37. He imagines tests where an AI could invent a new scientific conjecture the way Einstein, for instance, proposed relativity, or even design an entirely new game as elegant as the game of Go itself.

[00:29:49] He does, however, still stress uncertainty. Today's models are scaling impressively, but it is unclear whether more compute alone is going to get us to this next frontier [00:30:00] or whether entirely new breakthroughs are needed. So Paul, there's a lot going on in this episode, and I just wanted to maybe turn it over to you and ask what jumps out here as most noteworthy, because Demis is definitely someone we have to pay attention to.

[00:30:15] Paul Roetzer: Yeah, so the one thing that, you know, I've listened to, I don't know, almost every interview Demis has ever given. Like, I've been following Demis since 2011. And the thing that, you know, really started sticking out to me this past week, I listened to two different podcasts he did this past week.

[00:30:34] And it's the juxtaposition of listening to him speak about AI in the future versus all the other AI lab leaders. It's somewhat jarring, actually, how stark the contrast is between how he talks about the future and why they're building what they're building, and then the approach that the other people are taking.

[00:30:55] So, you know, I mentioned this recently. We basically have five people that are kind of [00:31:00] figuring all this out and leading the future of AI. You have Dario Amodei at Anthropic, came from OpenAI, physicist turned AI safety researcher, entrepreneur. You have Sam Altman, you know, capitalist through and through, entrepreneur, investor, co-founded OpenAI with Elon Musk as a counterbalance to the perception that Google couldn't be trusted to shepherd AGI into the world.

[00:31:23] You have Elon Musk, the richest person in the world, entrepreneur, obviously one of the great minds and venture entrepreneurs of our generation. But it's also unclear, like, his motives, especially with xAI, and like why he's pursuing AGI and beyond. It does seem contrary to his original goals, where he wanted to, you know, build it and safely shepherd it into the world.

[00:31:46] And, you know, I think right now he and Zuckerberg are the most willing to push the boundaries of what most people would consider safe and ethical when it comes to AI in society. Then you have Zuckerberg, the third [00:32:00] richest person in the world, made all his money selling ads on top of social networks.

[00:32:05] And so, you know, his motivation, while it may be beyond this, has largely been to generate money by engaging people and keeping them on his platforms. And then you have Demis, who is a Nobel Prize-winning scientist who built DeepMind to solve intelligence and then solve everything else. Like, since he was age, like, 13, as a child chess prodigy, he's been pursuing the biggest mysteries of the universe.

[00:32:31] Like, where did it all come from? Why does gravity work? Like, how do we solve illnesses? Like, that's where he comes from. And so, you know, he won the Nobel Prize last year for AlphaFold, which is an AI system developed by DeepMind that revolutionized protein structure prediction. But I also think that he's not done. Like, I've said on stage for the last 10 years,

[00:32:55] you know, I've used his definition of AI since probably 2017, [00:33:00] 2018, when I was doing public speaking on AI. And I always said, like, I think he'll win multiple Nobel Prizes. I think he'll end up being one of, if not the most significant person of our generation for the work he is doing. His definition of AI, by the way, that I reference, is the science of making machines smart.

[00:33:19] It's just this idea that we can have machines that can think, create, understand, reason. That was never a given. Like, up until 2022, when all of us experienced gen AI, most people didn't agree with that. Like, we didn't know that that was actually gonna happen. So I think when I listen to Demis, it gives me hope for humanity.

[00:33:38] Like, I feel like his intentions are actually pure and science-based, and this idea of solving intelligence to get to all the other stuff, I find that inspiring. And so the one thing that was, like, sticking out to me as I was listening to him in this Lex Fridman interview is, it's almost like if you could go back and listen to, like, von Neumann or Jobs or Einstein or [00:34:00] Tesla, like if you could actually hear their dreams and aspirations and visions and inner thoughts in real time as they were reinventing the future, that's kind of how it feels when you listen to him.

[00:34:12] So when you listen to the other people, it just, it feels like they're just building AI and they're gonna figure out what it means and they're gonna make a bunch of money and then they'll figure out how to redistribute it. And it just feels economics driven, where like, Demis just feels purely research driven.

[00:34:26] The other thing I was thinking about actually this morning, as I was kind of going through the notes getting ready for this, is what the value of Demis and DeepMind is. So I've said this before, like, if Demis ever left Google, I would sell all my stock in Google. Like, I just feel like he is the thing that's the future of the company.

[00:34:44] But I started to kind of put it into context. So Google paid 650 million for DeepMind in 2014. OpenAI today is rumored to be worth 500 billion, that's the latest number, right, Mike, that we heard with their latest round, they're doing 500 billion. [00:35:00] DeepMind as a standalone lab, like, if Demis left tomorrow and just, like, you know, did his own thing, or like DeepMind just spun out as a standalone entity,

[00:35:10] that company's easily, probably, worth a half a trillion to a trillion dollars. Like, xAI is worth 200 billion, Anthropic 170 billion, Safe Superintelligence 32 billion, Thinking Machines Lab, which isn't even a year old, 12 billion. You take DeepMind out of Google, like, what is that company worth on its own?

[00:35:29] And so then I started realizing, like, there's just no way Wall Street has fully factored in the value and impact of DeepMind into Alphabet's stock price. Because if Demis left tomorrow, Google's stock would crash. Like, the future of the value of the company is dependent upon DeepMind. So, I don't know, all that context,

[00:35:47] I would really advise people, like, if you haven't listened to Demis speak before, I would give yourself the grace of two hours and 25 minutes and listen to the whole thing. Now, the [00:36:00] interview gets a little technical, like, especially in the early going, it's definitely a little technical, but

[00:36:05] I would ride that out. Like, I would sort of see that through, because the technical parts help you realize how Demis sees the world, which is, if it has a structure, like if it has an evolutionary structure, whatever that is, he believes that you can model it and you can solve for it. And so anything in nature that has a structure, they look at, like proteins, that we can figure out how to do it with AI.

[00:36:35] And so it really becomes fascinating. He talks about, like, Veo, their video generation model, and how surprised he was that it sort of learned physics, it seems, through observation. Like, prior to that, they thought you had to embody intelligence, like in a robot, and it had to be out in the world and experiencing the world to learn physics and nature.

[00:36:57] And yet they [00:37:00] somehow just trained it on a bunch of YouTube videos, and it seems to be able to recreate the physics of the universe. And that was surprising to them. He talks about, like, the origins of life and his pursuit of AI and AGI and why he's doing it, to try and understand all of these big things.

[00:37:16] And then he gets into, like, the path to AGI, Mike, like you had talked about, and just kind of how he sees that playing out. He gets into the scaling laws and kind of how they don't really see a breakdown in them. Like, they may be slowing down in one aspect, but they're speeding up in the others.

[00:37:32] Talks about the race to AGI, competition for AI talent, humanity, consciousness. Like, it's just a very far-ranging thing, but truly, like, one of the great minds probably in human history. And you get to listen to it for two hours and 25 minutes. Like, it's crazy that we're actually at a point of society where it's free to listen to someone like that speak for two hours.

[00:37:54] So, I don't know. I mean, I'm obviously, like, a huge fan of [00:38:00] his, but I just think that if you care deeply about where all this is going, it's really important to understand the motivations of the people driving it. And like I said in a previous episode, there's, like, five major people right now that are driving that.

[00:38:14] And I think that listening to Demis will give you hope. It's a lot to process, but I do think that, you know, you can see why there's some optimism of a future of abundance if the world Demis envisions becomes possible. So yeah, I don't know. Every time I listen to his stuff, I just have to kind of step back and think bigger picture, I guess.

[00:38:39] Mike Kaput: Yeah. And I don't know about you if you would agree with this, but despite him painting this very radical picture of possible abundance, I don't know if I've ever heard anyone with less hype in this space than Demis provides when he talks. 

[00:38:54] Paul Roetzer: Yeah, totally. And, you know, he's a researcher. Like, the reason [00:39:00] he sold to Google, and he said this, like, he could have taken more money from Zuckerberg, like they could have sold DeepMind for more money,

[00:39:06] was because he thought that the resources Google offered would accelerate his path to solving intelligence. He didn't do it to, like, productize AI like that. He actually probably got dragged into having to do that when ChatGPT showed up and they had to combine Google Brain and Google DeepMind. And then he became the CEO of DeepMind, which became the sole lab within Google.

[00:39:30] He's not a product guy. Yeah. Like, it ends up he's actually a really good product guy, but not by choice or by design. He ended up seeing, it sounds like, the value of having Google's massive distribution into their seven products and platforms with a billion-plus users each, where you could actually test these things.

[00:39:49] And he realized, okay, having access to all these people through these products enables us to advance our learnings faster.

[00:39:56] Mike Kaput: Yeah. 

[00:39:56] Paul Roetzer: But yeah, just an infinitely [00:40:00] fascinating person. And, like I said, it's just such a, and not to diminish what the other people are doing, but it's just very different.

[00:40:09] Like, it's very different motivations. And, yeah, he does a great job of explaining things in simple terms, other than the first, like, 20 minutes. I mean, you gotta hit pause a few times and maybe Google a couple things as you're going to, like, understand some of the stuff they're talking about.

[00:40:28] 'Cause Lex tends to ask some pretty advanced questions, and, you know, it's kind of tricky to follow along a little bit. But like I said, if you're not that intrigued by the stuff they're talking about early on, just kind of ride through it and you'll come out the other side and it'll be worth it.

[00:40:42] But some of the stuff they talk about is actually fascinating to pause and go search a little bit and understand what they're talking about, because it changes your perspective on things, actually, once you understand it.

[00:40:55] What’s Next for OpenAI After GPT-5?

[00:40:55] Mike Kaput: All right, let's dive into some rapid fire this week. First up, [00:41:00] Sam Altman recently told reporters that OpenAI will, quote, spend trillions of dollars on AI infrastructure in the not very distant future.

[00:41:09] To fund this, Altman says OpenAI may design an entirely new kind of financial instrument. He also noted that he expected economists to call this move crazy and reckless, but that everyone should, quote, let us do our thing. And these comments came right around the same time that Altman had an on-the-record dinner with journalists where he talked about where OpenAI is headed after GPT-5.

[00:41:35] Now, GPT-5's rollout did overshadow the conversation. This was reported on by TechCrunch. Altman admitted that OpenAI, quote, screwed up by getting rid of GPT-4o as part of the launch. Obviously, we talked about how they later brought it back, but ultimately he did want to talk a bit more about what comes next. So, some notable possible paths [00:42:00] forward.

[00:42:00] He mentioned that OpenAI's incoming CEO of Applications, Fiji Simo, will oversee multiple consumer apps outside of ChatGPT that haven't yet launched, so we're getting even more apps from OpenAI. She may also oversee the launch of an AI-powered browser. Altman, interestingly, also mentioned OpenAI would be open to buying Google Chrome, which Google may be forced to sell as part of an antitrust lawsuit.

[00:42:27] We're actually going to talk a little bit more about that in a later topic. He also mentioned that Simo might end up running an AI-powered social media app. And he said that OpenAI plans to back a brain-computer interface startup called Merge Labs to compete with Elon Musk's Neuralink, though that deal is not yet done.

[00:42:48] So, Paul, there's a lot of different threads going on in these on-the-record comments from Altman. I'm curious as to what stood out to you here, but I'd also love to get your take on his decision [00:43:00] to have dinner with journalists in the first place. Like, is he trying to get everyone to move past the GPT-5 launch and talk about what's next?

[00:43:09] Paul Roetzer: The dinner is interesting, 'cause I think they said there were 14 journalists at this dinner. Yeah. And it doesn't sound like they really knew why they were there, or like what the purpose of the dinner was. So the TechCrunch article in particular, the journalist was literally like, it wasn't really clear why we were there.

[00:43:23] We didn't really talk about GPT-5 till later in the night. Sam was just sort of, like, off the cuff talking about whatever. So yeah, it was kind of a fascinating, like, decision, I guess. The one thing that jumped out at me right away was, back in February 2024, we reported on the podcast on a Wall Street Journal article that said that Altman was seeking up to $7 trillion. Hmm.

[00:43:46] To reshape the global semiconductor industry. And at the time it was like, wow, you know, lots of money. But, like, you know, they didn't necessarily confirm that was the number, but there was enough insider stuff that it's like, that's probably not far off from [00:44:00] what Sam was telling potential investors that they would need to raise over, say, the next decade to build out what they need to build out with data centers and energy and everything.

[00:44:07] And so this is the first time, I think, where he officially said, like, yeah, we think we're gonna need to raise trillions. Like, that 7 trillion probably wasn't that crazy of a number. The other thing, so you mentioned browser, social experience. It's been kind of the last couple weeks that's been bubbling that they might try and build something to compete with xAI, or with X slash Twitter. The brain-computer interface thing, which I think it was said he was gonna take

[00:44:31] like a leadership role in that company also, potentially. That deal's not done yet, but that was interesting. The other one, going back to the Meta thing, Altman said he believes, quote, less than 1% of ChatGPT users have unhealthy relationships with the chatbot. Keep in mind, 700 million people use it. 1%?

[00:44:53] Not an insignificant number of people that they think have unhealthy relationships with their chatbot. Yeah, we're [00:45:00] talking about millions of people. The GPT-5 launch, they said, yeah, it didn't go great. However, their API traffic doubled within 48 hours of the launch. So it doesn't seem like it affected 'em, but they were effectively, quote unquote, out of GPUs, meaning they're running low on chips to serve up, you know, to do the inference, to deliver the outputs for people when they're, you know, talking to GPT-5 and things like that.

[00:45:22] The journalist, so the TechCrunch writer, said: it seems likely that OpenAI will go public to meet its massive capital demands as part of the picture. In preparation, I think Altman wants to hone his relationship with the media, but he also wants OpenAI to get to a place where it's no longer defined by its best AI model.

[00:45:39] I thought that was an interesting take. 

[00:45:40] Mike Kaput: Mm-hmm. 

[00:45:41] Paul Roetzer: And then the other thing, I don't remember, I don't think it was in that article, but I saw this quote in another spot. They asked him about, like, you know, going public, and he said he can't see himself as the CEO of a publicly traded company. I think he said, quote, can you imagine me on an earnings call? Like, self-deprecating?

[00:45:58] Like, I'm not the guy to be on an earnings call. [00:46:00] Which is fascinating, because if you remember when they announced the new CEO of Applications, I said at the time, I think this is a prelude to him stepping down as CEO of OpenAI, actually. Yeah. Like, I think he has other things he wants to do. I think he would remain on, obviously, on the board, and I think he would remain involved in OpenAI.

[00:46:17] But I could see in the next one to two years where Sam slowly steps away as the CEO. And based on that comment, I would not be shocked at all if it happened prior to them going public. Mm-hmm. I dunno, they certainly seem to be positioning him not to necessarily be the CEO, so something to keep an eye on.

[00:46:38] Yeah. First time I've heard him say it out loud. 

[00:46:41] Altman / Musk Drama

[00:46:41] Mike Kaput: Yeah. Super interesting. Well, in our next topic, Sam Altman is also having, I guess you could call it fun, maybe it's frustration, with Elon Musk, because the two of them are now feuding publicly again. On August 11th, Musk posted on [00:47:00] X. He was talking a lot about Apple and the App Store and X's position in the App Store, and he said that Apple at one point, quote, was behaving in a manner that makes it impossible for any AI company besides OpenAI to reach number one in the App Store, which is an unequivocal antitrust violation.

[00:47:17] He then said X would take immediate legal action about this. Now, this is why this is important to Altman, 'cause Altman replied to this post saying, quote, this is a remarkable claim given what I've heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like. Musk shot back: you got 3 million views on your BS post,

[00:47:41] you liar, far more than I've received on many of mine, despite me having 50 times your follower count. Altman then responded, saying to Musk that if he signed an affidavit that he has never directed changes to the X algorithm in a way that has hurt competitors or helped his companies, [00:48:00] then Altman would apologize.

[00:48:02] Things devolved from there. At one point, Musk called Altman "Scam Altman," a new nickname I think he's trying to make stick. So Paul, on one hand this just feels like juvenile high school drama laid out in public between two of the most powerful people out there. But on the other, it does feel like the tone between these two has gotten more aggressive.

[00:48:26] Like, are we headed for more trouble here? 

[00:48:29] Paul Roetzer: Well, I think there was a time where Sam was trying to just defuse things and let the legal process take place and not get caught up in this. And he definitely entered his don't-give-a-crap phase. I don't know what changed for him personally,

[00:48:45] I don't know what changed legally, but he just doesn't care anymore. And now he's just baiting him into this stuff and having fun with it. Like, I think when Elon posted the one about him getting, you know, more views and things, Sam replied, "skill issue?" [00:49:00] Yeah. Like, I'm just better at this than you, yeah.

[00:49:03] And I guess, I don't know, again, not to judge them, everybody's got their own approach to this stuff, but my point, going back to it: here's two of the five that are shepherding us into AGI and beyond. Mm-hmm. And they're spatting on Twitter. There was a great meme I saw where it was a cafeteria fight, and it was like Sam versus Elon with the names on it.

[00:49:26] And then, like, Demis, or Google DeepMind, just sitting at the table eating their lunch, just locked in, focused, like they're just gonna keep going while all this other madness is happening behind them. And that's kind of how I feel right now. It's like, eyes on the prize. DeepMind is just the more serious company, I guess.

[00:49:43] And it doesn't mean they win, doesn't mean anything. It just is what it is. Like, DeepMind is staying locked in. Demis plays all sides, congratulates people when they launch new models, stays professional about this stuff. I can't fathom Demis [00:50:00] ever doing anything like this. It's just a different vibe.

[00:50:04] Again, maybe not better, maybe not worse. I don't know. It just is what it is, just sharing observations. So I don't know what these two are doing. But my one hope for all of this is: we get two, three years down the road, we are at AGI, beyond AGI, superintelligence is within reach. At some point, these labs have to work together.

[00:50:27] Like, we will arrive at a point where humanity depends on labs, and probably countries, coming together to make sure this is done right and safely. And so I hope the bridges aren't completely burned. I know they have a lot of mutual friends, and I just hope that at some point everyone finds a way to do what's best for humanity, not what's best for their egos.

[00:50:55] xAI Leadership Shake-up

[00:50:55] Mike Kaput: That would be nice. Yeah, it would be nice. All right. Next up, one of [00:51:00] the top people at Elon Musk's xAI is stepping away. Igor Babuschkin, who co-founded the company in 2023 and led its engineering teams, announced he's leaving to start a new venture capital firm focused on AI safety. Babuschkin says he was inspired after a dinner with physicist Max Tegmark,

[00:51:22] where they discussed building AI systems that could benefit future generations. His new fund, Babuschkin Ventures, aims to back startups that advance humanity while probing the mysteries of the universe. Babuschkin said in a post on X that he has, quote, enormous love for the whole family at xAI. He had nothing but positive things to say about his work at the company.

[00:51:43] The timing, however, is a little interesting. xAI has been under fire for repeated scandals tied to its chatbot Grok, things like parroting Musk's personal views and spouting antisemitic rants, which we've talked about, and a lot of controversy around the images being [00:52:00] generated by its image generation capabilities.

[00:52:03] These controversies have, you know, somewhat distracted from the fact that xAI is one of the, like, five companies out there building these frontier models. They are just as caught up as anyone else, including OpenAI and Google DeepMind. So Paul, it's worth noting that we don't talk about Igor much.

[00:52:21] We definitely mentioned him before, but he's a significant player in AI. He used to work at both DeepMind and OpenAI before co-founding xAI. Do you have any thoughts about what might be behind his departure? Is it coincidental that this all comes during more controversy for xAI?

[00:52:41] Paul Roetzer: I don't know. I mean, again, it's one of those where you can only take 'em at their word. He broke this news himself, and then it was covered by, you know, the publications and everything.

[00:52:49] He said, regarding that Max Tegmark dinner you mentioned, that Max showed him a photo of his young sons and asked, quote, how can we build AI safely to [00:53:00] ensure that our children can flourish? I was deeply moved by this question, and I want to continue my mission to bring about AI that's safe and beneficial to humanity.

[00:53:08] I do just think that there's going to increasingly be a collection of top AI researchers who see, you know, the light, I don't know if it's the right analogy, the light at the end of the tunnel. They see the path to AGI and superintelligence, and they know it can go wrong. And I think you're gonna have a bunch of these people who probably made more money than they ever need in their lifetimes already,

[00:53:32] and they want to figure out how to do this safely. People are gonna be at different points in their lives, they're gonna have different priorities, and I think there's gonna be a whole bunch of 'em who think that they can positively impact it in society. So I don't think this is the last top AI researcher we're gonna see who, you know, takes an exit to go focus on safety and bringing it to humanity in the most [00:54:00] positive way possible.

[00:54:00] So, I mean, I'm optimistic we see more of those. I hope we see more people focused on that. But yeah, other than that, there's not much to read into it, I don't think, from our end.

[00:54:09] Mike Kaput: I'd also love to just see more of these people, I guess publishing or talking more about the very specific pathways they wanna take to do that.

[00:54:17] Yeah. Because it's hard for me to wrap my head around how exactly you are influencing AI safety if you are not building the frontier models. Not to say you can't have plenty of amazing ideas that catch on, or legal and policy influence, right. But I would just be curious what their suggestions are.

[00:54:37] Paul Roetzer: Yeah, and I think, you know, Dario has said as much with Anthropic. Yeah. When people push back on, well, you're the ones, you know, how can you talk so much about AI safety and alignment when you're building the frontier models like everybody else, and you're pushing these models out into the world, and now you're maybe even saying you're willing to set your morals aside and take funding from people who you think are evil

[00:54:56] Mm-hmm. To achieve your goals. And his [00:55:00] belief, and I would imagine the belief of quite a number of people within these labs, is: we can't do AI safety if we're not working on the frontier. Like, we need to see what the risks are to solve the risks.

[00:55:11] Mike Kaput: Mm-hmm. 

[00:55:11] Paul Roetzer: And so if we give up and we don't keep building the most powerful models, then we will lose sight of what those risks are and how close we are to surpassing them.

[00:55:19] And so that's his belief. I don't know if that's just something that lets you go to sleep at night, or if it's truly what he believes. I don't have any reason to believe that it's not what he actually believes: that it's sort of, at all costs, we have to do this, because otherwise we can't fulfill our mission of doing this safely.

[00:55:37] It's a fine line, because there's no real proof that they're going to be able to control it once they create it. So it's a catch-22. You gotta create it to know if you can protect us from it, but you may create it and then realize you can't. And there we are.

[00:55:55] Perplexity’s Audacious Play for Google Chrome

[00:55:55] Mike Kaput: All right, next up. Name something deeply

[00:55:58] unserious: AI [00:56:00] startup Perplexity has offered Google $34.5 billion to buy Google Chrome. This is arriving as US regulators weigh whether Google should be forced to divest Chrome as part of an antitrust case. Perplexity is treating this seriously: they say their pitch is that multiple investment funds will finance the deal, though analysts quickly dismissed the offer as wildly low.

[00:56:26] One analyst put Chrome's real value closer to a hundred billion dollars. Google, for its part, has not commented on this. It's appealing the judge's ruling that it has illegally monopolized search, so it's unclear if Chrome will get sold at all. Skeptics argue the deal is unlikely not only because of a lowball price, but because untangling Chrome from Google's broader ecosystem could be very, very messy if it were to go ahead and get sold.

[00:56:55] So, Paul, this just, I don't know, feels like a bit of a PR play from Perplexity, [00:57:00] not for the first time. I know you've got some thoughts on this.

[00:57:03] Paul Roetzer: Yeah, I mean, I don't want to hammer on Perplexity. Good technology. I just don't think they're a serious company. They just do these absurd PR plays, right. They did it with TikTok, they're doing it with Chrome.

[00:57:14] They claim they have funding, whatever. This is just their MO by now. So I don't put much weight on these things. The funniest tweet, and I get that this is total geek-insider funny, like most people wouldn't laugh at this, but Aidan Gomez, who's the co-founder and CEO of Cohere and also one of the creators of the Transformer, he was on the Google Brain team in 2017 that invented the Transformer that became the basis for GPT.

[00:57:42] So Aidan, legitimate player, we've talked about him on the podcast before. He tweeted, quote, Cohere intends to acquire Perplexity immediately after their acquisitions of TikTok and Google Chrome. We'll continue to monitor the progress of those deals closely so we can submit our term sheet upon completion. I don't know [00:58:00] why, it was like tweet of the week for me. It was just hilarious, because the whole point is, this is not a serious company, and so he was just having some fun with it.

[00:58:10] Yeah, I don't know. I have a hard time putting, like I said, any real weight behind any of these things. Perplexity's tech is great. If you enjoy Perplexity as a platform, we do, like, we use it some. I don't use it as much anymore, but we still use it. It's still a worthwhile technology to talk about, but this PR stuff they do is just exhausting.

[00:58:32] Chip Geopolitics

[00:58:32] Mike Kaput: Amen. All right. Next up, Nvidia and AMD have struck an extraordinary deal with the Trump administration. They're going to hand over 15% of revenue from certain chip sales in China directly to the US government. This arrangement, which is tied to export licenses for both companies' chips, has no real precedent in US trade history.

[00:58:57] No American company has ever been required to [00:59:00] share revenue in exchange for license approval. Now, this deal was finalized just days after Nvidia CEO Jensen Huang met with President Trump. Only months earlier, the administration had moved to ban a certain category of Nvidia's chips, the H20, altogether,

[00:59:18] citing fears that they could fuel China's military AI programs. Now the chips are flowing again, though at a cost. Some critics have called the move a shakedown, arguing it reduces export controls to a revenue stream while undermining US security. So Paul, obviously from a totally novice perspective, since I'm not a national security expert, this does feel a bit like Nvidia might have just cut a pretty blunt quid pro quo deal with the US government to avoid its products being banned.

[00:59:50] Is that what's going on here?

[00:59:53] Paul Roetzer: Yes. Obviously, there's lots of complexity to this kind of stuff. You never know if the deal that you're reading about in the media is the [01:00:00] actual deal, and you know, what the other parameters of it are. So we just gotta take at face value what we know to be true.

[01:00:07] The only thing I would throw in here is the basic premise of why the US government would do this and back away from the ban. Other than the financials of it, they want US chips to be what's used. They don't want the world to become dependent upon chips that aren't made by US-based companies.

[01:00:25] And so China wants to become less dependent upon US chips. There were actually some reports last week that they were requiring, like, DeepSeek to be trained on Chinese chips and it didn't work. They were having problems with the Chinese chips. So they actually need the Nvidia chips to do what they want to do.

[01:00:42] The H20s are nowhere near the most powerful chips Nvidia has. So they want to basically create dependency on US-based companies' chips. Maybe there's some other Department of Defense-related things that we won't get into at the moment as to why you'd want these chips in China. But [01:01:00] it's, yeah, it's just a complex space.

[01:01:02] I also can't comment from any sort of authoritative position on the politics of the deal. And, you know, the quid pro quo of 15% revenue, like, who knows? But the gist of it is, Nvidia's a US-based company, and the US government wants countries around the world to be dependent upon US technology.

[01:01:25] It's good for the US, and Nvidia maintains its leadership role, and I think that's the basis of it. And with this administration, a lot of things come down to the financials and being able to make a deal, quote unquote, so it looks good for everybody. That's kind of the gist of it, I guess.

[01:01:43] Anthropic and AI in Government

[01:01:43] Mike Kaput: Another AI-in-government-related story: Anthropic is now offering Claude for just $1 to all three branches of the US government.

[01:01:53] So this includes not only executive agencies, but also Congress and the judiciary. Basically, this deal covers Claude for [01:02:00] Enterprise and Claude for Government, which is certified for handling sensitive but unclassified data. Agencies, as part of this, will get access to Anthropic's frontier models and technical support to help them use the tools.

[01:02:14] This basically comes right on the heels of OpenAI doing the exact same thing. They offered their technology basically for free to the US government, which we talked about in a recent episode. This also comes right when the federal government is launching a new platform called USAi, which gives federal employees secure access to models from OpenAI, Anthropic, Google, and Meta.

[01:02:36] Run by the General Services Administration, the system lets workers experiment with chatbots, coding assistants, and search tools inside a government-controlled cloud, so basically agency data doesn't flow back into the companies' training sets. This is, like anything political or government-focused these days,

[01:02:58] a bit charged. The [01:03:00] Trump administration has been pushing hard to automate government functions under its AI Action Plan, even as critics warn that the same tools could displace federal workers, who are also being cut as part of the downsizing of the government. So, Paul, I don't know. I, for one, am glad, I guess, that government employees are getting access to really good AI tools to use in their work.

[01:03:22] Seems like a win for effectiveness and efficiency. But it seems like there is some controversy here of, like, are we going to use these tools to replace people rather than augment them?

[01:03:34] Paul Roetzer: So, give or take, there's about 137 million full-time jobs in the United States, it looks like, based on a quick search, and this is AI Overviews.

[01:03:41] I haven't had a chance to completely verify this, but this is coming from Pew Research and USAFacts. About 23 million of that 137 million worked for the government in some capacity, but 3 million at the federal level. So, yeah, it's a significant amount of the workforce. And, you know, the more this stuff is infused [01:04:00] into work, the greater impact it has.

[01:04:04] I don't know how much training these people are gonna be given, right? I mean, we can talk all day about being given access for a dollar, whatever, to all these different platforms. The same thing's happening at the higher education level, where they're doing these programs to give these tools to students and administrators and teachers.

[01:04:23] It all comes down to: are they taught to use them in a responsible way? And, you know, I think that's gonna end up deciding whether or not a program like this is effective. And then, to your point, what is the real purpose here? Yes, efficiency is great, but efficiency in place of people isn't great when there's no good answer yet from the leadership on what happens to all the people who won't have jobs because of the efficiency gains.

[01:04:51] So, interesting to pay attention to. Obviously, there was some backroom deal of, like, okay, you're up for [01:05:00] federal contracts that are worth hundreds of millions of dollars, but you have to give your technology to the federal government for free, basically. Right? It's not hard to connect the dots here that there's criteria to be eligible for federal contracts, and this is part of the game that needs to be played.

[01:05:17] Apple’s AI Turnaround 

[01:05:17] Mike Kaput: All right. Next up, Apple is plotting its AI comeback, according to some new reporting from Bloomberg. Their comeback includes a bold pivot into robots, lifelike AI, and smart home devices. At the heart of the plan that Bloomberg is reporting on is a tabletop robot, slated for 2027, that can swivel around toward people speaking and act almost like a person in the room.

[01:05:43] It's described almost as, like, an iPad mini on a kind of swivel arm. It's designed to FaceTime, follow conversations, and even interrupt with helpful suggestions. Its personality will come from a rebuilt version of Siri, powered by large language models and [01:06:00] given a more visual, animated presence. Before that arrives,

[01:06:04] Apple is also going to release a smart display next year, alongside home security cameras meant to rival Amazon's Ring and Google's Nest. These mark kind of another push into the smart home, with software that can recognize faces, automate tasks, and adapt to whoever walks into a room. And of course, this comes after all the criticism we've talked about, with Apple kind of missing,

[01:06:29] and then, you know, fumbling a bit, the generative AI wave. So Paul, it is interesting to see Apple making what appear to be maybe some radical moves here. That tabletop robot feels especially noteworthy given OpenAI's plans to also create an AI device with former Apple legend Jony Ive. Is this going to be enough?

[01:06:52] Are they focused in the right direction here? 

[01:06:55] Paul Roetzer: Let's see if they actually deliver on any of this. It's funny, though, that [01:07:00] tabletop robot. If I remember correctly, going back to the Jony Ive thing and trying to guess what that device could possibly be, I think that was one of the things I said: I wouldn't be surprised if they did, like, a tabletop robot that sat next to you.

[01:07:12] So it wouldn't surprise me at all if that's a direction a number of people are kind of moving in. There's different interfaces. Apple hasn't announced the date yet, but early September will be the next major Apple event, where they'll probably unveil the iPhone 17, like the next iterations, maybe the new watch, things like that.

[01:07:31] So that would be the next date to watch for, early September. And I would imagine they would give some kind of significant update on their AI ambitions at that event. So yeah, we'll keep an eye on the space. Again, it's shocking how little impact their lack of progress in AI has had on their stock price.

[01:07:53] Like, the stock price [01:08:00] just seems very resilient to their deficiencies in AI. So they've been given the grace of a third try at this, and hopefully they nail it.

[01:08:09] Cohere Raises $500M for Enterprise AI 

[01:08:09] Mike Kaput: Next up, the AI model company Cohere just closed a massive funding round: half a billion dollars at a $6.8 billion valuation.

[01:08:18] The money will fuel its push into agentic AI, so, systems designed to handle complex workplace tasks while keeping data secure and under local control. Cohere is a model company we've definitely mentioned a bunch of times, but it flies a bit below the radar. It builds models and solutions that are specifically enterprise-grade and especially useful for companies in regulated industries that want more privacy, security, and control

[01:08:45] than what they get from big AI labs. In Cohere's words, those labs are kind of repurposing consumer chatbots for enterprise needs. To that end, Cohere has its own models that customers can use and build on, including a [01:09:00] generative AI model series, Command A and Command A Vision; retrieval models, Embed 4 and Rerank 3.5; and an agentic AI platform called North.

[01:09:11] So Paul, it has been a while since we've really focused on Cohere. This amount of funding certainly pales in comparison to what the frontier labs are raising. But I guess the question for me is: how much is Cohere worth paying attention to? How is what they're doing actually competing with and differentiating from the big labs?

[01:09:33] Paul Roetzer: Yeah, I mean, at that valuation and that amount of funding, they're obviously just no longer trying to play in the frontier model training game. Mm-hmm. They're trying to build smaller, more efficient models and then post-train them specifically for industries. Early on, their playbook was to try and capture industry-specific data so they could train models specifically for different verticals and things like that.

[01:09:56] So I think, for companies like Cohere, and again, this is Aidan Gomez, [01:10:00] the CEO I mentioned earlier, there's probably a bigger market for companies like this than there is for those frontier model companies. There's only gonna be three to five in the end that can spend the billions, or, you know, maybe even trillions, to train the most powerful models in the future.

[01:10:18] But there's probably gonna be a whole bunch of companies like this that are worth billions of dollars that just focus on very specific applications of AI, or train for specific industries and build vertical software solutions on top of it. So, yeah, I mean, it's a good company. They just don't have the splashy headlines that, you know, the ones raising the billions and having these ridiculous valuations have.

[01:10:41] But, you know, I think if we end up being in an AI bubble, companies like this probably still do pretty well within that, you know, since they're a little bit more specialized. So yeah, definitely a company worth paying attention to. We've been following Aidan for years, and yeah, we definitely keep an eye on Cohere.

[01:10:57] AI in Education

[01:10:57] Mike Kaput: All right. We're going to end today with [01:11:00] an inspiring case study of AI usage in education. We found a recent article that highlights how Ohio University's College of Business has been staying ahead of the curve in AI since the very beginning of the generative AI revolution. Within months of ChatGPT being released, the college became the first on campus to adopt a generative AI policy to guide responsible use.

[01:11:23] And that actually grew into something bigger. Every first-year business student now trains in what the school calls the five AI buckets, which means using AI for research, creative ideation, problem solving, summarization, and social good. From there, the training scales up. Students end up building prototypes of new businesses in hours using AI,

[01:11:44] partnering with companies on capstone projects and joining workshops where ideas become business models powered by AI in real time. By graduation, nearly every student has used AI in practical, career-ready ways, and this initiative has [01:12:00] now expanded into graduate programs and even inspired a new AI major in the engineering school.

[01:12:06] Now, Paul, I'm gonna put you in the spotlight a little bit here. Ohio University is your alma mater. You get a big shout-out in this article for your work helping the school build momentum around AI. Can you walk us through what they're doing and why this approach is worth paying attention to?

[01:12:25] Paul Roetzer: I didn't know, obviously, that they were doing this article. A friend of mine and some of the people, you know, our connections there, shared it with me on Friday.

[01:12:32] We were actually out golfing for a fundraiser on Friday, you and I, Mike, and some of the team, and they tagged me in this. So, you know, thank you for the acknowledgement within the article. But more so, for me, I was just proud to see the progress they'd made.

[01:12:51] I've stayed very involved with Ohio University through the years. I did a visiting professor gig probably back in, like, 2016, [01:13:00] '17. I spent a week on campus teaching through the communication school, and around that time is when I got to know some of the business school leaders. And they were very, very welcoming to the fact that AI was probably gonna affect them, even if they didn't really know what it meant yet at that time.

[01:13:15] Hugh Sherman was the dean of the business school at the time. He eventually became the president of Ohio University before retiring again. And so I got to know Hugh very well. I spent a lot of time with them back in those days, just kind of talking about where AI was going and what impact it could have.

[01:13:31] And to their credit, they were very welcoming of these outside perspectives, and that's not always true, especially in higher education. But I think it was maybe the summer, right around this time, 2019, I want to say, that Hugh Sherman brought me in to lead a workshop.

[01:13:52] It was a half-day workshop, and there was like 130 people, the entire business school faculty and administration. And so we did a workshop on [01:14:00] applied AI in the classroom, and it was like, how can we be enhancing student experiences and curriculum through AI? What are near-term steps we can take?

[01:14:07] What's the long-term vision? It was one of the coolest professional experiences I've had. I don't wanna turn this into, like, a main topic, but I almost failed outta college. I went into college pre-med at OU, and I didn't take it seriously for the first 10 weeks. And so I lost my scholarships, like, I screwed up, and then my parents gave me another chance.

[01:14:26] And so it was just such a cool thing for me to come back to campus, what would've been, you know, almost 20 years after I graduated, and lead a workshop on, like, the future of education in the business school, in a school I almost didn't make it through. And so it was never lost on me, this really amazing opportunity to go back and affect in a positive way a school that made such an impact on me in my four years there, and on my wife, who also graduated from there.

[01:14:54] So yeah, it was just awesome. And we love to put the spotlight on universities that are [01:15:00] doing good work, that are truly committed to preparing students for the next generation. And I love the work they're doing through their entrepreneurship center, you know, enabling people to think

[01:15:11] in an entrepreneurial way and apply AI to that, plus, you know, as a layer over any business degree. I have a relative who's actually starting there this week, heading down for his sophomore year. And I've been talking a lot to him about: whatever you do, whatever business degree you go get, just get AI knowledge on top of it.

[01:15:28] Like, I don't care if it's economics or finance or computer science, whatever it is, just get the AI knowledge on it. And I have confidence with OU that they're gonna provide that. And that's, I think, as a parent, as a, you know, family, you want to just provide the opportunity for your students, your family members, to go somewhere where they're going to have access to the knowledge. They get to make the choice if they go get it, but you wanna make sure they at least have it, at a progressive university that's looking at ways to layer AI in.

[01:15:53] And so, yeah, we wanted to make sure we acknowledge OU, not just for personal reasons for me, but just [01:16:00] as another example of a university that's doing good things. And we'll put the link to the article in the show notes if you want to read a little bit more about what they're doing down there.

[01:16:09] Yeah, it's cool. I love it. I gotta get back, I haven't been down there in a few months. So that's awesome.

[01:16:14] Mike Kaput: Alright, Paul, that's a wrap on another busy week in ai. Thanks again for breaking everything down for us. 

[01:16:20] Paul Roetzer: Alright, thanks everyone. And again, if you don't get a chance to attend the AI Academy launch live, check the show notes, we put the link in there, and you can kind of re-watch it on demand.

[01:16:30] So thanks again, Mike, for all your work curating everything, and we'll be back next week with another episode. Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack [01:17:00] community.

[01:17:00] Until next time, stay curious and explore AI.
