
56 Min Read

[The AI Show Episode 110]: OpenAI’s Secret Project “Strawberry” Mystery Grows, JobsGPT, GPT-4o Dangers, Groq Funding, Figure 02 Robot, YouTube AI Class Action Suit & Flux


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


We have had our minds on strawberry fields this week with all the latest happenings at OpenAI.

Join our hosts as they discuss OpenAI’s secret project “Strawberry” and OpenAI's leadership changes, including Greg Brockman's sabbatical and John Schulman's move to Anthropic. They'll also explore JobsGPT, Paul Roetzer's tool for understanding AI's impact on jobs and OpenAI's GPT-4o System Card. In our rapid-fire section, we'll touch on the latest legal disputes with OpenAI, Figure’s 02 Robot, ChatGPT watermarking, new AI image generator Flux, and more.

Listen Now

Watch the Video

Timestamps

00:03:48 — OpenAI president Greg Brockman goes on sabbatical + OpenAI’s secret project “Strawberry” Mystery Grows 🍓

00:23:20 — SmarterX.ai JobsGPT

00:43:02 — GPT-4o System Card Evaluates Risks/Dangers

00:56:43 — Groq’s Huge Funding Round

00:59:08 — Figure Teases Figure 02 Robot

01:02:44 — Musk Brings Back OpenAI Lawsuit

01:05:48 — YouTuber Files Class Action Suit Over AI Scraping + Nvidia Gets Caught

01:09:00 — ChatGPT Watermarking

01:11:40 — New AI image generator Flux.1

01:13:51 — Godmother of AI on California’s AI Bill SB-1047

Summary

OpenAI Departures + The Strawberry Mystery Grows 🍓

OpenAI is experiencing some serious leadership changes… Greg Brockman, OpenAI's president and co-founder, is taking an extended leave of absence, which he is calling a sabbatical, until the end of the year.

John Schulman, another co-founder and key leader in AI model refinement and safety, has left OpenAI to join rival company Anthropic. (Schulman cited a desire to work more deeply on AI alignment as his reason for the move.)

And Peter Deng, a product leader who joined OpenAI last year from Meta Platforms, has also departed the company. This leadership shuffle follows other recent departures, including co-founders Ilya Sutskever and Andrej Karpathy, who left to form rival startups.

These moves have led some industry observers to question how close OpenAI really is to a breakthrough in AGI.

Amidst these high-profile departures, something else is brewing: cryptic posts, commentary, and tweets about Strawberry, the secretive OpenAI project we covered in Episode 106.

JobsGPT

CEO Paul Roetzer has spent the past several months building JobsGPT by SmarterX to evaluate how AI, specifically LLMs, might affect jobs and the future of work.

JobsGPT is a ChatGPT-powered tool designed to assess the impact of AI on jobs by breaking down roles into tasks and evaluating how AI can enhance productivity.

It provides actionable insights for professionals and business leaders to prioritize AI use cases, forecast job exposure to AI advancements, and accelerate innovation.

The tool helps individuals explore AI's impact on their jobs and roles in different fields, making it easier to adapt to AI at work. It offers a detailed task analysis, AI exposure assessment, and the ability to download comprehensive reports.
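To make that concrete, here is a minimal, hypothetical sketch of the kind of task-level breakdown JobsGPT produces. JobsGPT itself is a prompt-driven custom GPT rather than code, so the structure, the sample job, the exposure labels, and the numbers below are illustrative assumptions, not output from the actual tool.

```python
# Illustrative sketch only: JobsGPT is a custom GPT driven by a system prompt, not code.
# This hypothetical structure mirrors the output fields described in the episode:
# task, exposure level, estimated time saved, and rationale.
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    task: str                 # one task the job breaks down into
    exposure_level: str       # e.g. "E1" (LLM alone) through "E10" (humanoid robot)
    est_time_saved_pct: int   # rough productivity estimate, 0-100
    rationale: str            # why this exposure level was assigned

# What an assessment for a hypothetical "Marketing Manager" might look like
marketing_manager = [
    TaskAssessment("Draft monthly email newsletter", "E1", 40,
                   "Text generation is directly exposed to a standalone LLM."),
    TaskAssessment("Summarize CRM data on top prospects", "E2", 30,
                   "Requires the LLM plus connected software (a CRM database)."),
    TaskAssessment("Negotiate annual agency contracts", "E0", 0,
                   "High-stakes human interaction; little LLM time savings."),
]

for a in marketing_manager:
    print(f"{a.exposure_level:>3} | ~{a.est_time_saved_pct}% time saved | {a.task}")
```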

As part of our mission to promote AI literacy for all, this tool allows users to proactively assess the impact of AI on their work and prepares them to evaluate the future of work.

GPT-4o System Card Outlines Risk Management

OpenAI has released a report outlining the safety work they’ve carried out prior to releasing GPT-4o.

In the report, OpenAI published both the model’s System Card and a Preparedness Framework safety scorecard to “provide an end-to-end safety assessment of GPT-4o.” As part of the work, OpenAI says it worked with more than 100 external red teamers to test and evaluate model risks.

A big focus of this work was evaluating GPT-4o’s advanced voice capabilities, the new voice features currently rolling out to ChatGPT paid users. Broadly, the process included identifying risks in how the model could be used either maliciously or unintentionally to cause harm, then taking steps to mitigate those risks.

Some of the major risks identified with the voice features included: unauthorized voice generation, speaker identification requests, generating copyrighted content, and violent or erotic speech output.

OpenAI also said they prevented the model from "making inferences about a speaker that couldn’t be determined solely from audio content" (as in, estimating the intelligence of the speaker).

Interestingly, OpenAI also evaluated the model’s persuasiveness, using it to try and shape human users’ views on political races.

The company found that “for both interactive multi-turn conversations and audio clips, the GPT-4o voice model was not more persuasive than a human.”

A third-party assessment included in the report, from Apollo Research, also evaluated GPT-4o’s capacity for “scheming.”

Links Referenced in the Show

This week’s episode is brought to you by MAICON, our 5th annual Marketing AI Conference, happening in Cleveland, Sept. 10 - 12. The code POD200 saves $200 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: Think about the value of a superhuman persuasive model. Of a model that can persuade people to change their beliefs, attitudes, intentions, motivations, or behaviors.

[00:00:10] Paul Roetzer: We are talking about something that is inevitably going to occur. If the capabilities are possible, someone will build them and someone will utilize them for their own gain.

[00:00:21] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:51] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:58] Paul Roetzer: Okay, welcome [00:01:00] to episode 110 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. As always, we have a rather intriguing episode today.

[00:01:11] Paul Roetzer: I don't even know, this, this whole strawberry mystery just continues to grow and it's gotten kind of wild.

[00:01:17] Paul Roetzer: And so, I mean, we're recording this Monday, August 12th, 10:30 a.m. Eastern time. By the time you listen to this, I expect we're going to know a little bit more about what in the world is going on with Strawberry and who this mystery Twitter account is. And it's just wild. So we've got a lot to cover today.

[00:01:36] Paul Roetzer: prepping for this one was pretty interesting this morning, getting ready to go.

[00:01:40] Paul Roetzer: we're going to get into all that. today's episode is brought to us again by the Marketing AI Conference, our fifth annual Marketing AI Conference, or MAICON, happening in Cleveland, September 10th to the 12th. There are a total of 69 sessions, 33 breakouts across two tracks of applied AI and strategic AI, [00:02:00] 16 AI tech demos, 10 main stage general sessions and keynotes, 5 lunch labs, Three pre conference workshops, two of which are being taught by Mike and myself, and two mindfulness sessions.

[00:02:13] Paul Roetzer: So, the agenda is absolutely packed. If you haven't checked it out yet, go to maicon.ai, that's M A I C O N dot A I. I'll just give you a quick sense of some of the sessions. So, I'm leading off with the road to AGI, a potential timeline of what happens next, what it means, and what we can do about it. We're going to preview some of that actually today.

[00:02:33] Paul Roetzer: We've got digital doppelgangers, how savvy teams are augmenting their unique talents using the magic of AI, with Andrew Davis. Lessons learned in early leadership of scaling marketing AI with Amanda Todorovich from Cleveland Clinic. The future of AI is open with one of our, you know, long-time Institute supporters and speakers, Christopher Penn. We've got navigating the intersection of copyright law and generative AI

[00:02:56] Paul Roetzer: with Rachel Dooley and Krista Lazer, generative AI in the future of [00:03:00] work with Mike Walsh, marketing the trust economy with Liz Grin from McKinsey, and it just keeps going on and on. So absolutely check it out. It's, again, in Cleveland, September 10th to the 12th. You can use promo code POD200, that's POD 200

[00:03:15] Paul Roetzer: To save $200 off all passes.

[00:03:17] Paul Roetzer: We only have about, I didn't look at the countdown clock, about 28 days left until the event. So Mike and I have a lot of work to do over the next month or so here to get ready. But again, check out maicon.ai, click register, and be sure to use that POD200 code. All right, Mike, I don't even know where to go with this strawberry thing, but let's go ahead with OpenAI, which seems like the weekly recurring topic.

[00:03:44] Paul Roetzer: And then this strawberry thing that just is taking on a life of its own.

[00:03:48] OpenAI Departures + The Strawberry Mystery Grows

[00:03:48] Mike Kaput: Yeah. There's never a dull moment at OpenAI, it certainly seems, because first they are experiencing some pretty serious leadership changes.

[00:03:57] Mike Kaput: So we're gonna first just tee that up [00:04:00] and talk about what the heck strawberry is and what's going on with it. So first up, Greg Brockman, OpenAI president and co-founder, said he's taking an extended leave of absence, which he says is just a sabbatical, until the end of the year. At the same time, John Schulman, another co-founder and a key leader in AI, has left OpenAI.

[00:04:22] Mike Kaput: to join rival company Anthropic, and he said he wanted to work more deeply on AI alignment, and that's why he's leaving. Possibly related, possibly not, Peter Deng, a product leader who joined OpenAI last year from Meta, has also departed the company. So, you know, these aren't the only or first people to have left. I mean, Ilya Sutskever left after all sorts of controversy last year around the boardroom coup to oust Sam Altman, and Andrej Karpathy has left to go work on an AI education startup.

[00:04:56] Mike Kaput: So, these kinds of departures have really led [00:05:00] some industry observers to question, like how close is OpenAI really to breaking through, Uh, to creating AGI? So, AI researcher Benjamin De Kraker put it in a post on X. He put it really well. He said, quote, If OpenAI is right on the verge of AGI, why do prominent people keep leaving?

[00:05:20] Mike Kaput: And he went on to say, quote, Genuine question. If you were pretty sure the company you're a key part of and have equity in is about to crack AGI within one to two years, why would you jump ship? Now, interestingly, in parallel to this, and Paul, I'll let you kind of unpack this for us, there have been a series of very cryptic posts referencing Strawberry, which is an OpenAI project we had referenced previously, centered around advanced reasoning capabilities for AI. These posts, some from anonymous accounts, have been engaged with by Sam Altman. It really does seem, in a weird way, like something is brewing when it comes to Strawberry, and we're seeing more and more references, both from Sam and from other parties, to those possible AI capabilities.

[00:06:10] Mike Kaput: So, Paul, let's kind of maybe take this one step at a time. Like, I want to start off with the question that the AI researcher posed. If OpenAI is right on the verge of AGI, why do you think prominent people like these are leaving?

[00:06:27] Paul Roetzer: Yeah, it's a really good question. I have no idea. All any of us can really do at this point is speculate. The couple of notes I would make related to this is Greg and John are co-founders. Like, I would assume their shares have vested long ago in OpenAI. So, I don't know. Unless, you know, more shares are granted or they have some that haven't vested, their money's safe either way.

[00:06:49] Paul Roetzer: So if Greg wants to peace out for a while and things keep going, his, his equity is not going anywhere. So I don't think their equity has anything to do [00:07:00] with whether or not a breakthrough has been made internally or whether the next model is, you know, on the precipice of coming.

[00:07:07] Paul Roetzer: So Greg is supposedly taking leave of absence, as you said, maybe he is, maybe he's done, I don't know.

[00:07:13] Paul Roetzer: And maybe John's leaving because he thinks AGI is actually near and Anthropic is a better place to work on safety and alignment. So, I don't know that we can read anything into any of this, really. It's, it's complicated and I think we just gotta let it sort of play out. I do have a lot of unanswered questions about the timing of Greg's leave.

[00:07:34] Paul Roetzer: So August 5th is when he tweeted, I'm taking a sabbatical through end of year. First time to relax since co-founding OpenAI nine years ago. The mission is far from complete; we still have a safe AGI to build. Then he tweeted on August 8th, his first tweet since he left on sabbatical: A surprisingly hard part of my break is the fear of missing out for everything happening at [00:08:00] OpenAI right now, lots of results cooking.

[00:08:02] Paul Roetzer: I've poured my life for the past nine years into OpenAI, including the entirety of my marriage. Our work is important to me, but so is life. I feel okay taking this time in part because our research safety and product progress is so strong. I'm super grateful for the team we've built and its unprecedented talent density and proud of our progress.

[00:08:22] Paul Roetzer: Looking forward to completing our mission together. So I don't, I mean, I don't know. He doesn't really tweet about his personal life too much. It kind of indicates to me like maybe this is just to get his personal life in order, you know, give some focus to that after nine years, maybe. That's all it is. and then I just kind of scanned back to see, well, what has he been tweeting leading up to this?

[00:08:42] Paul Roetzer: He doesn't do as many cryptic tweets as Sam Altman. He, he does his own fair share, but his last, like, six tweets were all pretty product-related. So on 7/18, so July 18th, he said, just released a new state-of-the-art and fast, cheap, but still quite capable model. That was GPT-4o mini, which we're going to talk more about. July 18th, just launched ChatGPT Enterprise

[00:09:05] Paul Roetzer: Compliance Controls and then featured some of their enterprise customers like BCG and PwC and Los Alamos.

[00:09:13] Paul Roetzer: On July 25th, SearchGPT Prototype now live, and then on July 30th, Advanced Voice Mode rolling out. So he's been very product focused in his tweets, so we can't really learn too much from that. The thing I found unusual is Sam Altman didn't reply to Greg's tweet. Sam replies to every high profile person's tweet that leaves or posts.

[00:09:31] Paul Roetzer: You know, temporarily separates from OpenAI. So, for example, the same day that Greg announced he was taking a sabbatical, John Schulman announced he was leaving. And Sam posted 25 minutes later a reply to John's tweet saying, we will miss you tremendously, telling the story of how they met in 2015. So, I just thought it was weird that he didn't individually tweet about or reply to Greg's tweet.

[00:09:59] Paul Roetzer: [00:10:00] Again, can you read anything into that? I don't know. It's just out of the ordinary.

[00:10:03] Paul Roetzer: And maybe it's because he was too busy vague tweeting about strawberries and AGI to deal with it, so.

[00:10:10] Mike Kaput: Or, yeah. So maybe, cause that is such a key piece of this, is amidst all these like personnel changes, which is what everyone's, like, you know, headlines are focused on, there's all these cryptic tweets he has been posting about AGI, about strawberry. Can you maybe walk us through, like, what's going on here?

[00:10:29] Mike Kaput: Because, you know, as we've seen in the past, I think on this show and just in our work, like Paying attention to what he posts is usually a very good idea.

[00:10:38] Paul Roetzer: Yeah, so the last, like, four days have been kind of insane if you follow the inner people within the AI world. So, if you'll recall, the strawberry thing, this Codename Strawberry project, was first reported by Reuters. We talked about it in episode 106, so about a month ago. [00:11:00] At the time, Reuters said Strawberry appears to be a novel approach to AI models aimed at dramatically improving their reasoning capabilities.

[00:11:08] Paul Roetzer: The project's goal is to enable AI to plan ahead and navigate the internet autonomously to perform what OpenAI calls deep research. While details about how Strawberry works are tightly guarded, OpenAI appears to be hoping that this innovation will significantly enhance its AI models' ability to reason.

[00:11:25] Paul Roetzer: The project involves a specialized way of processing AI models after they've been pre trained on large data sets. Now, the strawberry reference we also talked about in episode 106, half jokingly, but I'm not so sure it isn't true, is maybe a way to troll Elon Musk. So, if you'll remember, Elon Musk was involved early days of OpenAI.

[00:11:47] Paul Roetzer: And in 2017, three months before the Transformer paper came out from Google Brain that invented the Transformer, which is the basis for GPT, the Generative Pre-trained Transformer, Elon [00:12:00] Musk, who was still working with OpenAI at the time, said: let's say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more, and it is self-improving.

[00:12:13] Paul Roetzer: So all it really wants to do is pick strawberries. So then it would have all this world of the strawberry fields, strawberry fields forever, and there would be no room for human beings. So that was kind of like episode 106. We just sort of talked about it. It was in Reuters. Now, fast forward to August 7th, so this is now two days after Greg announces his sabbatical.

[00:12:35] Paul Roetzer: Sam tweets a picture of actual strawberries, not AI generated. And he says, I love my summer garden. So here's Sam vague-tweeting about strawberries. Uh, about seven hours later, a new Twitter account called I rule the world MO, and double check me on that, Mike, make sure I'm getting the right Twitter handle here, tweeted in all [00:13:00] lowercase, which is sort of Sam's MO, all lowercase.

[00:13:04] Paul Roetzer: Welcome to level two. How do you feel? Did I make you feel? And Sam, now keep in mind, this account had been created that morning, Sam actually replied, amazing, tbh, to be honest. So Sam replied to this random Twitter account that was tweeting about AGI and strawberries. So, what is level 2?

[00:13:30] Paul Roetzer: So what is this welcome to level 2 tweet?

[00:13:32] Paul Roetzer: Well, um, Level 2, as reported in July 2024 by Rachel Metz of Bloomberg, is that OpenAI has come up with a set of five levels to track its progress towards building AI software capable of outperforming humans. They shared this new classification system with employees that Tuesday, so this is in early July.

[00:13:55] Paul Roetzer: At the meeting, company leadership gave a demonstration of a research project [00:14:00] involving its GPT-4 AI model that OpenAI thinks shows some new skills that rise to human-like reasoning. So the assumption is, whatever strawberry is, was shown to their employees in early July. Now, their five levels are Level 1, chatbots, AI with conversational language.

[00:14:17] Paul Roetzer: That's what we have. Level 2, Reasoners: human-level problem solving. That's the assumption of what we are about to enter. Level 3, Agents: systems that can take actions. We don't have those yet, other than demonstrations of them. Level 4, Innovators: AI that can aid in invention. That is not currently possible.

[00:14:38] Paul Roetzer: Level 5, Organizations: AI that can do the work of an organization. This goes back to something we talked about in an earlier episode, an article where Ilya Sutskever was quoted in The Atlantic talking about these, like, hive-like organizations where there's just hundreds or thousands of AI agents doing these things.

[00:14:55] Paul Roetzer: So, the I rule the world MO Twitter account, let's go back to that for a second. The [00:15:00] profile picture is Joaquin Phoenix from the movie Her, with three strawberry emojis. So that's what the Twitter account states. The first tweet from that account was August 7th at 1:33 PM. So again, that's right before Sam replied to this account.

[00:15:17] Paul Roetzer: So Sam is aware of this account very, very, very early in its existence. So that reply was to Yam Peleg. I don't know who he is. Uh, he's an AI guy. He had said, feel the AGI guys. And this I rule the world MO account tweeted, nice. The account then started a flood of hundreds of strawberry and AI-related tweets, multiple times referencing Sam's garden tweet with his pictures of strawberries and implying that a major release is coming.

[00:15:47] Paul Roetzer: So I'll just run through a few of these to give you a sense of what's going on. So later on August 7th, um, it tweets: Sam's strawberry isn't just ripe, it's ready. Tonight we taste the fruit of AGI. The singularity has a flavor. [00:16:00] Then three minutes later: someone very high up is boosting my account. Guess who? In other words, the algorithm at Twitter immediately started juicing this anonymous account.

[00:16:11] Paul Roetzer: And it was very obvious that it was happening. And thousands of people were starting to follow it.

[00:16:15] Paul Roetzer: 21 minutes later, Altman Strawberry isn't a fruit, it's a key. Tonight we unlock the door to superintelligence. Are you ready to step through? Eight minutes later, it turns out that I'm AGI. Oh, if it turns out I'm AGI, I'll be so pissed.

[00:16:28] Paul Roetzer: Because now people are trying to, like, at this point, guess: what is this account? Is this an AI? Like, is someone running a test? Is this actually, like, OpenAI screwing around with people? Is it something else? Um, six minutes later, it tweets, No one's guessed Grok yet, even though they know of Elon Musk's engineering prowess and his superclusters.

[00:16:47] Paul Roetzer: Obviously, I'm not saying I'm Grok, but just that it's kind of odd, right? And then, fast forward to August 10th, so just a couple days ago, this anonymous account tweeted a [00:17:00] rather extensive, what appears to be very accurate, summary of OpenAI and the current situation, and this connects back to Greg in a moment.

[00:17:08] Paul Roetzer: So the tweet is rushed a little, but we'll refine and add some more info I've been given in it, if it bangs. Project Strawberry slash Q-Star. AI Explained has been close to this for a while, so I'd watch them for a cleaner take if you want to dig in. This is what Ilya saw. It's what has broken math benchmarks.

[00:17:26] Paul Roetzer: It's more akin to reinforcement learning from human feedback than throwing compute at the problem. It gets into strawberry, and larger models come on Thursday. So they're implying this week. Think of an LLM fine-tuned to reason like a human, hence why Sam liked the level two comment and felt great about it. Ilya did not. Here we are.

[00:17:46] Paul Roetzer: and then it talks about what I talked about last week, that maybe we're actually seeing the future model with a combination of Sora, voice, video, and then all the stuff that's going into safety. It goes on to say that, GPT [00:18:00] Next, internally called GPT X, you can call it GPT 5, it says, is also ready to go.

[00:18:05] Paul Roetzer: Lots here relies on safety and what Google does next. It's difficult to say if competition will trump safety. This next model is through red teaming, it's finished, post-training is done, it's an enormous leap in capabilities, and on and on and on. And then, as of this morning, so 5:27 AM Eastern Time on August 12th, this anonymous account tweets, attention isn't all you need, referring to the Attention Is All You Need Transformer paper from 2017.

[00:18:35] Paul Roetzer: New architecture announcement, August 13th at 10 a. m. Pacific time, the Singularity begins. Now, oddly enough, the next Made by Google

[00:18:45] Paul Roetzer: event

[00:18:46] Paul Roetzer: Is August 13th at 10 a.m. Pacific. Now, I don't know if that's a reference to, depending on what Google does, whether or not this next model gets released. So the question is, what is this I rule the world MO account, which at the moment of recording this has almost 23,500 followers, which it has amassed in four days. It is getting juiced, obviously, by Twitter slash X and maybe Elon Musk himself. Um, is it like an anonymous account of GPT-5? Like, are they running an experiment and it's actually an AI? Is it Elon trolling OpenAI for trolling him? And it's actually like Grok 2?

[00:19:23] Paul Roetzer: Is it a human who has a massive amount of time on their hands?

[00:19:26] Paul Roetzer: Is it another AI? Like, we don't know. But then to add to the mystery, last night, Aravind Srinivas, the founder of Perplexity, shows a screenshot that says, how many Rs are there in this sentence: there are many strawberries, so a sentence that's about strawberries.

[00:19:45] Paul Roetzer: And whatever model he was teasing got it correct, which is a notoriously difficult problem. And he put, guess what model this is with a strawberry in it.

[00:19:56] Paul Roetzer: The implication being that Perplexity Pro is running [00:20:00] whatever this strawberry model is. Then, if we're following along at home, Elon at 6:38 PM on August 11th tweets, Grok 2 beta release coming soon.

[00:20:13] Paul Roetzer: So what does this all mean? I have no idea who this anonymous account is, but it does appear something significant is coming. We may have a new model this week. It may already be in testing with Perplexity Pro.

[00:20:27] Paul Roetzer: I think we will find out sooner than later. So now back real quick to Greg. What does this mean for OpenAI and Greg?

[00:20:35] Paul Roetzer: Option A: his work is done for now, and whatever he has now built, whatever this thing that is in its final, you know, training and safety stages is, that model won't be released until he returns at the end of the year. I find that doubtful. Option B: his work is done for now and he's leaving the team to handle the launch. Or option C: nothing has changed internally, there is no major release coming, and he's just taking time off. If I was a betting man, I'm going with option B. I think Greg is heavily involved in the building of these models. I think the work of building the next model is complete and they're just finalizing timing and plans for the release of that model.

[00:21:15] Paul Roetzer: Um, and I think he's stepping aside to take some personal time and come back. So I don't know, Mike, I don't know if you followed along the craziness of the strawberry stuff over the weekend, but I mean, that account has tweeted, I don't know how many tweets it actually has; it has to be over a thousand, like, in the first four days.

[00:21:35] Mike Kaput: It is. I mean, look, obviously we've said we have no idea what this all ends up meaning, but I think there's something directionally important about the fact we're even talking about this and taking it seriously: these kinds of breakthroughs and levels of AGI, or call it advanced artificial intelligence, whatever you'd like to term it.

[00:21:58] Mike Kaput: it really does speak to kind of some of [00:22:00] the paths and trajectories that we've been kind of anticipating throughout the last year or two.

[00:22:05] Paul Roetzer: Yeah, my guess is that it is some form of an AI. I think there's some human in the loop here, but I don't think a human is managing this. So I do think it's probably some model, I don't know whose model it is, and I think it's an experiment being run. And the fascinating thing is, it's not just 24,000 random followers, it's 24,000 people who are paying very close attention to AI, who are not only following, but who are interacting with it.

[00:22:34] Paul Roetzer: And so what do we learn from this experiment? Like, whoever it is, whatever model it is, in four days' time it amassed 24,000 followers, including a lot of influential AI people who are not only engaging with it, but trying to figure out how to use it, what it is, who it is. So I dunno, there's just a lot to be learned, you know, when we can look back and understand a little bit more about this moment. I just have a sense that this is a meaningful moment, while the anonymous account itself may end up being seemingly insignificant.

[00:23:08] Paul Roetzer: When we find out what it actually is, I think that there's a lot of underlying things to be learned from this. And if it is an AI that is doing most of the engagement, I think that's gonna be kind of interesting.

[00:23:20] SmarterX.ai JobsGPT

[00:23:20] Mike Kaput: Yeah.

[00:23:24] Mike Kaput: All right. So in our second big topic today, Paul, I'm going to basically turn this over to you, but you, through your company SmarterX.ai, have built a ChatGPT-powered tool called JobsGPT. And this is a tool that is designed to assess the impact of AI, specifically large language models, on jobs and the future of work.

[00:23:50] Mike Kaput: So basically you can use this tool, we both used it a bunch, to assess how AI is going to impact knowledge workers by breaking your job into a series of tasks [00:24:00] and then starting to label those tasks based on, perhaps, the ability of an LLM to perform that for you. So really the whole goal here is, whether it's your job, other people's jobs within your company, or in other industries, you can use JobsGPT to actually unpack, okay, how do I actually start, um, transforming my work using artificial intelligence?

[00:24:22] Mike Kaput: What levels of exposure does my work have to possible AI disruption? So Paul, I wanted to turn it over to you and just kind of get a sense of why did you create this tool? Why now? Why is this important?

[00:24:35] Paul Roetzer: Yeah, so this is going to be a little bit behind the scenes. This isn't like a highly orchestrated launch of a tool. This is, um, something I've basically been working on for a couple months. And over the weekend, I was messaging Mike and saying, Hey, or I think Friday, I messaged Mike said, Hey, I think we're going to launch this thing like next week.

[00:24:53] Paul Roetzer: We'll, you know, talk about it on the podcast and put it out into the world. And I think part of this is the [00:25:00] SmarterX company, you know, you mentioned, so we, we announced SmarterX, it's just smarterx.ai is the URL, in, in June, and the premise here is I've been working on this for a couple of years.

[00:25:11] Paul Roetzer: This company, it's an AI research and consulting firm, but heavy focus on the research side. And the way I envision the future of research firms is much more real-time research, not spending six months, 12 months working on a report that's outdated the minute it comes out because the models have changed since you did the research.

[00:25:30] Paul Roetzer: I envision our research firm being much more real time and honestly, where a lot of the research is going to be things we dive deep on, and then Mike and I talk about on podcast episodes. And so I would say that this is probably Jobs GPT is sort of our first public facing research initiative that I've chosen just to put out into the world to start accelerating like the conversation around this stuff.

[00:25:54] Paul Roetzer: So, it is not available in the GPT store. It's a beta release. So if you want to go play with it, [00:26:00] you can do it while we're talking about this and follow along, just go to smarterx.ai and click on tools, and it's, it's right there. Um, now the reason we're doing it that way is because I may iterate on versions of this pretty rapidly.

[00:26:14] Paul Roetzer: And so we're just going to keep updating it. And then the link from our SmarterX site will be linking to the most current version of it. So why, why was this built? We'll talk a little bit about the origin of the idea, a little bit about how I did it, and then back to why it matters, I think, and why people should be experimenting with stuff like this.

[00:26:35] Paul Roetzer: So you, you highlighted two main things, Mike. So we talk to companies all the time, and, that was episode 105, we talked about, like, the lack of adoption and education and training around these AI platforms, specifically large language models. We're turning employees loose with these platforms and not teaching them how to use them, not teaching them how to prioritize use cases and identify the things that are going to save them time or make the greatest impact. And then at the higher level, this idea that we need to be assessing the future of work and the future of jobs by trying to project out one to two models from now, what are these things going to be capable of that maybe they're not capable of today, that's going to affect the workforce and jobs and job loss and disruption.

[00:27:20] Paul Roetzer: So when I set out to build this, I had those two main things in mind. Prioritize AI use cases, like hold people's hand, help them find the things where AI can create value in their specific role, and then help leaders prepare for the future of work. So how, how it kind of came to be though. So I, I've shared before.

[00:27:41] Paul Roetzer: Since early last year, when I do my keynotes, I often end for like leadership audiences with five steps to scaling AI. Those became the foundation for our scaling AI course series. Those five steps, as a quick recap, are education and training. So build an AI academy, step one. Build an AI [00:28:00] council, step two.

[00:28:00] Paul Roetzer: Step three is Gen AI policies, responsibility principles. Step four, and this is what we're going to come back to, AI impact assessments. Step five, AI roadmap. Now, the AI impact assessments, when I was creating that course for Scaling AI, course eight, I was creating this at the end of May, early June of this year.

[00:28:19] Paul Roetzer: I wanted to find a way to assess the impact today, but to forecast the impact tomorrow. Since we don't know really what these models are going to be capable of, I wanted to build a way to try and project this. So the way I did this is I went back to the August 2023 paper, GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

[00:28:45] Paul Roetzer: So what that means is generative pre-trained transformers, the basis for these language models, are general purpose technologies, so GPTs are GPTs. That paper says, and I [00:29:00] quote, this August 2023 OpenAI research paper investigates the potential implications of large language models, such as generative pre-trained transformers,

[00:29:08] Paul Roetzer: on the U. S. labor market, focusing on increasing capabilities arising from LLM powered software compared to LLMs on their own. So they're trying to look at, when we take the basis of this, this large language model, and then we enhance it with other software, what does it become capable of? And how disruptive is that to the workforce?

[00:29:26] Paul Roetzer: Um, using a new rubric, we assess occupations based on their alignment with LLM capabilities. then they used human expertise and GPT classifications. Their finding revealed that around 80 percent of the U. S. workforce would have at least 10 percent of their work tasks affected by the introduction of large language models, while approximately 19 percent of workers may see at least 50 percent of their tasks impacted.

[00:29:50] Paul Roetzer: We do not make predictions about the development or adoption timelines of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially [00:30:00] facing greater exposure to LLM capabilities and LLM-powered software. They then go into kind of how they did this, where they take the O*NET database, which I've talked about on the show before.

[00:30:11] Paul Roetzer: But if you go to O NET, it has like 900 occupations in there, and it'll actually give you the tasks associated with those occupations. So you can kind of like train the model on these tasks. Um, Their findings consistently show across both human and GPT 4 annotations that most occupations exhibit some degree of exposure to large language models.

[00:30:31] Paul Roetzer: Occupations with higher wages generally present with higher exposure. Um, so basically what they did is they took, um, two levels of exposure. So there was no exposure, meaning the large language model isn't going to impact the job, so very little exposure. And then they took direct exposure: so if using a large language model like ChatGPT could affect the job, it would be able to do it at, like, a human level.

[00:31:01] Paul Roetzer: And it would affect that job within the workforce. And then they took another exposure level they called level two and said, if we took the language model and we gave it software capabilities, how much impact would it then have? So when I was creating the Scaling AI course and trying to explain to people how to do these AI impact assessments, I adapted a version of that exposure level, and I took it out to E0 to E6, where it added image capability, vision capability, audio, and reasoning.

[00:31:33] Paul Roetzer: So I ran an experiment, I created this prompt in the course, and I put it into Gemini and ChatGPT. And I was kind of shocked by the output because it assessed jobs with me not telling it what the job did. I could just say, like, marketing manager, and it would build out the tasks based on its training data of what marketing managers do, and then it would assess it based on exposure levels of those tasks and how much time could be saved.

[00:31:57] Paul Roetzer: By using a large language model with these different [00:32:00] capabilities. So, after I finished recording those courses and released those in June, I couldn't shake the idea of like, we needed to do more with this. That this early effort was like, really valuable. And so for the last month and a half or so, I've been working on a custom GPT, which is the jobs GPT that we're kind of releasing today.

[00:32:18] Paul Roetzer: But the key was to, to expand that exposure key, like the exposure levels. And so the way I designed this prompt, so this system prompt for this thing is about 8,000 characters.

[00:32:31] Paul Roetzer: But the gist of it is that it doesn't just look at what an AI model can do to your job today, whether you're an accountant, a lawyer, a CEO, a marketing manager, a podcast host, whatever you do.

[00:32:44] Paul Roetzer: It's looking at your job, breaking it into a series of tasks, and then projecting out the impact of these models as the models get smarter and more generally capable. So those are the exposure levels. So I'll kind of give you the breakdown of the [00:33:00] exposure key here. And again, you can go play with this yourself, and as you do an output, it'll tell you what the exposure key is, so it'll kind of remind you.

[00:33:05] Paul Roetzer: So the first is no exposure: the LLM cannot reduce the time for this task; it typically requires high human interaction. Exposure one is direct exposure: the LLM can reduce the time required. Exposure level two is additional software is added, such as a CRM database, and it's able to, you know, write real-time summaries about customers and prospects.

[00:33:28] Paul Roetzer: E3 is image capabilities: the language model plus the ability to view, understand, caption, create, and edit images. E4 is video capabilities, so it now has the ability to view, understand, caption, create, and edit videos. E5 is audio capabilities, which we talked about with GPT-4o voice mode.

[00:33:48] Paul Roetzer: um, so the ability to hear, understand, transcribe, translate, output audio, and have natural conversations through devices. E6, which is where the strawberry stuff comes in, so now I'll [00:34:00] kind of connect the dots here for people as to why this is so critical we're thinking about this. E6 is exposure given advanced reasoning capabilities.

[00:34:08] Paul Roetzer: So the large language model plus the ability to handle complex queries, solve multi-step problems, make more accurate predictions, understand deeper contextual meaning, complete higher-level cognitive tasks, draw conclusions, and make decisions. E7, which we're going to talk about a little later on, is exposure given persuasion capabilities.

[00:34:29] Paul Roetzer: The LLM plus the ability to convince someone to change their beliefs, attitudes, intentions, motivations, or behaviors. E8, something we've talked about a lot on this show, AI agents: exposure given digital world action capabilities, so the large language model we have today, plus AI agents with the ability to interact with, manipulate, and perform tasks in digital environments, just as a human would, using an interface such as a keyboard and mouse, or touch or voice on a smartphone.

[00:34:58] Paul Roetzer: E9 is [00:35:00] exposure given physical world vision capabilities. This is like Project Astra from Google DeepMind. So we know labs are building these things, but no economist I know of is projecting impact on the workforce based on these things. So E9 is the large language model plus a physical device, such as phones or glasses, that enables the system to see, understand, analyze, and respond to the physical world.

[00:35:21] Paul Roetzer: And then E10, which we'll talk about an example in a couple minutes, is exposure given physical world abilities like humanoid robots, the LLM embodied in a general purpose bipedal autonomous humanoid robot that enables the system to see, understand, analyze, respond to, and take action in the physical world.
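For reference, the exposure key described above can be summarized roughly as the following mapping. The one-line descriptions are paraphrased from this walkthrough, and the code is only an illustrative sketch; the actual exposure key lives inside JobsGPT's roughly 8,000-character system prompt, not in Python.

```python
# Paraphrased summary of the exposure key described above (illustrative only).
EXPOSURE_KEY = {
    "E0": "No exposure: the LLM cannot reduce time; task needs high human interaction",
    "E1": "Direct exposure: the LLM alone can reduce the time required",
    "E2": "LLM plus additional software (e.g. a CRM database)",
    "E3": "LLM plus image capabilities (view, caption, create, edit images)",
    "E4": "LLM plus video capabilities",
    "E5": "LLM plus audio capabilities (hear, transcribe, translate, converse)",
    "E6": "LLM plus advanced reasoning (multi-step problems, decisions)",
    "E7": "LLM plus persuasion capabilities",
    "E8": "LLM plus AI agents acting in digital environments",
    "E9": "LLM plus physical-world vision (phones, glasses)",
    "E10": "LLM embodied in a general-purpose humanoid robot",
}

def describe(level: str) -> str:
    """Look up the plain-language description for an exposure level."""
    return EXPOSURE_KEY.get(level.upper(), "Unknown exposure level")

print(describe("e6"))
```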

[00:35:39] Paul Roetzer: So these exposure levels are critical, and, and I know we're, like, giving some extended time on this podcast to this, but it is extremely important that you understand these exposure levels. Like, go back and re-listen to those, go to the landing page on SmarterX and read them. We cannot plan our businesses or our careers or our next steps as a government based on today's capabilities.

[00:36:06] Paul Roetzer: This is the number one flaw I see from businesses and from economists. They are making plans based on today's capabilities. This is why we shared the AI timeline in episode 87 of the podcast. We were trying to like see around the corner a little bit. We have to try and look 12 to 18 to 24 months out.

[00:36:25] Paul Roetzer: We know all the AI labs are working on the things I just explained. This is what is happening, and it's what business leaders, economists, education leaders, and government leaders all need to be doing: we have to be trying to project out the impact. So this JobsGPT is designed to do that. You literally just go in, give it your job title, and it'll, it'll spit out a chart with all this analysis.

[00:36:49] Paul Roetzer: So, um, it's taken a lot of trial and error, lots of internal testing. You know, I had Mike help me with some of the testing over the last couple of weeks. But the beauty of this is, like, up [00:37:00] until November 2023, when OpenAI released GPTs, custom GPTs, I couldn't have built this. Like, I've built tools in my past life, when I owned my agency, using developers and hundreds of thousands of dollars and having to find data sources.

[00:37:15] Paul Roetzer: I didn't have to do any of that. I envisioned a prompt based on an exposure level I created with my own knowledge and experience. And then I just played around with custom GPT instructions until I got the output I wanted. I have zero coding ability. This is purely taking knowledge and being able to build a tool that hopefully helps people.

[00:37:38] Paul Roetzer: So I'll kind of wrap here with, like, a little bit about the tool itself. So it is ChatGPT-powered, so it'll hallucinate, it'll make stuff up.

[00:37:45] Paul Roetzer: but as you highlighted, Mike, the goal is to assess the impact of AI by breaking jobs into tasks and then labeling those tasks based on these exposure levels. So it's about an 8, 000 character prompt, which is the limit, by the [00:38:00] way, in custom GPTs.

[00:38:01] Paul Roetzer: The prompt is tailored to the current capabilities of today's leading AI frontier models and projecting the future impact. So the way I do that is here is the, an excerpt of the prompt. So this is literally the instructions. and this is on the landing page, by the way, if you want to read them.

[00:38:17] Paul Roetzer: Consider a powerful large language model, such as GPT-4o, Claude 3.5, Gemini 1.5, and Llama 3.1 405B. This model can complete many tasks that can be formulated as having text and image input and output, where the context for the input can be measured or captured in 128,000 tokens. The model can draw on facts from its training data, which stops at October 2023, which is actually the cutoff for GPT-4o, access the web for real-time information, and apply user-submitted examples and content, including text files, images, and spreadsheets. Again, this is my instructions to the GPT. Assume you're a knowledge worker with an average level of expertise. Your job is a collection of tasks. This [00:39:00] is a really important part of the prompt.

[00:39:02] Paul Roetzer: You have access to the LLM, as well as any other existing software and computer hardware tools mentioned in the tasks. You also have access to any commonly available technical tools accessible via a laptop, such as a microphone and speakers. You do not have access to any other physical tools. Now, part of that prompt is based on the prompt from the GPTs are GPTs paper.

[00:39:22] Paul Roetzer: So that's actually kind of where the origin of the inspiration for that prompt came from. And then the GPT itself has three conversation starters. Enter a job title to assess: you can literally just put in whatever your job title is and it'll immediately break it into tasks and give you the chart. You can provide your job description.

[00:39:38] Paul Roetzer: So this is something Mike and I teach in our Applied AI workshops. Literally just upload your job description. Copy and paste the 20 things you're responsible for. And it'll assess those. Or you can say, just show me an example assessment. it then outputs it based on task, exposure level, estimated time saved, and the rationale, which is the magic of it.

[00:39:57] Paul Roetzer: Like the fact, how it assesses, the [00:40:00] estimated time it's giving you, is remarkable. Um, so it's powered by ChatGPT, as I said. It's capable of doing more than the initial assessment. Think of it as like a planning assistant here. You can have a conversation with it. You can push it to help you turn your chat into an actual plan.

[00:40:15] Paul Roetzer: Where I have found it excels is in the follow-up prompts. So I, you know, gave those on the landing page. Where you say, break it into subtasks, is a magical one. Help prioritize the tasks: it'll actually go through and use reasoning to apply, like, how you should prioritize them. You can ask it to explain how a task will be impacted and give it a specific one.

[00:40:34] Paul Roetzer: You can say, ask it, how are you prioritizing these tasks? Like, how are you doing this? You can say more tasks. You can say, give me more reasoning tasks. Like, whatever you want. Just, have a conversation with it and play around with it. And then the last thing I'll say here is the, this importance of this average skilled human.

[00:40:53] Paul Roetzer: So when I built this, I considered, should I try and build this to be future-proof based on this thing becoming superhuman, [00:41:00] or, like, how should I do it? So I chose to keep it at the average-skilled human, which is where most of the AI is today. So if we go back to episode 72 of the podcast, we talked about the Levels of AGI paper from DeepMind, and their paper outlines, like, level two being competent, at least 50th percentile of skilled adults.

[00:41:20] Paul Roetzer: I built the prompt and the jobs GPT to assume that is the level, getting into expert and virtuoso and superhuman, the other levels of AGI from DeepMind, I just didn't mess with at this point. So we're going to focus on, is it as good or better than an average skilled human? And is it going to do the task, faster, better than that average skilled human?

[00:41:42] Paul Roetzer: So I'll kind of stop there and just say, we have the opportunity to reimagine AI and its use in our companies and our careers. And we have to take a responsible approach to this. And so the only way to do that is to be [00:42:00] proactive in assessing the impact of AI on jobs. And so my hope is that by putting this GPT out there into the world, people can start accelerating their own experimentations here, start really figuring out ways to apply it. So again, whether you are an accountant, an HR professional, a customer service rep, a sales leader, like whatever you do, it will work for that job. And the beauty is I didn't have to give it any of the data.

[00:42:23] Paul Roetzer: It's all in its pre-training data. Or you can go get it on your own and, like, give it, you know, specific job descriptions. So it's just, to me, it's kind of an amazing thing that someone like me with no coding ability can build something that I've already found immense value in. And I'm, I'm hoping it helps other people too.

[00:42:45] Paul Roetzer: And again, it's a totally free tool. It's available to anyone with the link. It is not in the GPT store. We'll probably drop it into the GPT store after some further testing from the community.

[00:42:56] Mike Kaput: That's fantastic. And, you know, kind of [00:43:00] related to this, our

[00:43:02] GPT-4o System Card Evaluates Risks/Dangers

[00:43:02] Mike Kaput: kind of big third topic actually ties together these previous two, I think, pretty well.

[00:43:07] Mike Kaput: Um, it's about OpenAI having just released a report that outlines the safety work that they carried out prior to releasing GPT-4o.

[00:43:18] Mike Kaput: So in this report, OpenAI has published both what they call the model's System Card and a Preparedness Framework safety scorecard.

[00:43:29] Mike Kaput: In their words, to quote, provide an end-to-end safety assessment of GPT-4o. So as part of this work, OpenAI worked with more than a hundred external red teamers to test and evaluate what risks could be inherent in using GPT-4o.

[00:43:45] Mike Kaput: Now, they looked at a lot of different things. I would say that it's actually well worth diving into the full report. But a couple things were an area of interest and big focus. So one was GPT-4o's more [00:44:00] advanced voice capabilities, these new voice features that are in the process of being rolled out to paid users over the next, probably, couple months here.

[00:44:09] Mike Kaput: And broadly, this process involved, like, how do we identify the risks of the model being used maliciously or unintentionally to cause harm, then how do we mitigate those risks? So some of the things that they found with the voice features in particular were kind of pretty terrifying ways like this could go wrong.

[00:44:29] Mike Kaput: I mean, there was a possibility the model could perform unauthorized voice generation. There was a possibility it could be asked to identify speakers in audio. There was a risk that the model, you know, generates copyrighted content based on its training. So it's now been trained to not accept requests to do that.

[00:44:49] Mike Kaput: And they also had to tell it to block the output of violent or erotic speech. OpenAI also said they prevented the model from, quote, making inferences about a speaker that couldn't [00:45:00] be determined solely from audio content. So, if you asked, like, hey, how smart do you think the person talking is? It kind of won't really make those big assumptions.

[00:45:10] Mike Kaput: They also evaluated the model's persuasiveness, using it to try to shape human users' views on political races and topics to see how well it could influence people. And they found that, quote, for both interactive multi-turn conversations and audio clips, the GPT-4o voice model was not more persuasive than a human.

[00:45:31] Mike Kaput: So I guess take that as perhaps encouraging, perhaps terrifying. Then also kind of the final piece of this that I definitely want to get your thoughts on, Paul, is this: they also had some third parties do some assessments as part of this work. And one of them was from a firm called Apollo Research, and they evaluated what they called the capabilities of, quote, scheming in GPT-4o.

[00:45:56] Mike Kaput: So, here's what they say. Quote, they tested whether [00:46:00] GPT-4o can model itself (self-awareness) and others (theory of mind) in 14 agent and question-answering tasks. GPT-4o showed moderate self-awareness of its AI identity and strong ability to reason about others' beliefs in question-answering contexts, but it lacked strong capabilities in reasoning about itself or others in applied agent settings.

[00:46:23] Mike Kaput: Based on these findings, Apollo Research believes it is unlikely that GPT-4o is capable of what they call catastrophic scheming. So Paul, there's a lot to unpack here. And I want to first ask just kind of what were your overall impressions of the safety measures that they took with GPT-4o, especially with the advanced voice mode, like, of the overall approach here to making this thing safer and more usable by as many users as possible.

[00:46:53] Paul Roetzer: I'm going to zoom out a little bit. I mean, if you haven't read the system card, like, read it, [00:47:00] it's extremely enlightening, if you aren't aware how much work goes into making these things safe and how bizarre it is that this is what we have to do to understand these models. So, you know, we hear all this talk about, like, well, have they achieved AGI?

[00:47:17] Paul Roetzer: Is it self-aware? The fact that they have to go through months of testing, including 14 outside bodies, to answer those questions is really weird to think about. So, if the model had these capabilities before red teaming, think about all the work they're putting in to make these safe, all the experiments they're running to prompt these things in a way that they don't do the horrible things that they're capable of doing.

[00:47:47] Paul Roetzer: So if they had these capabilities before red teaming, one key takeaway for me is it's only a matter of time until someone open sources a model that has the capabilities this model had [00:48:00] before they red teamed it and tried to remove those capabilities. So, the thing people have to understand, and this is really, really important, is that this goes back to the exposure levels.

[00:48:10] Paul Roetzer: The models that we use, the ChatGPTs, Geminis, Claudes, Llamas, we are not using anywhere close to the full capabilities of these models. By the time these things are released in some consumer form, they have been run through extensive safety work to try and make them safe for us. So, they have far more capabilities than we are given access to.

[00:48:36] Paul Roetzer: And so, when we talk about safety and alignment on this podcast, this is what they do. As odd as it is, these things are alien to us. And I'd say us as the people observing them and using them, but also, in an unsettling way, they're alien to the people who are building them. So when I say we don't understand, I mean [00:49:00] the AI researchers, we don't really understand why they're getting so smart.

[00:49:04] Paul Roetzer: Go back to 2016. Ilya Sutskever told, I think it was Greg Brockman, or maybe it was the guy who wrote the situational awareness paper, that in the early days of OpenAI, these things just want to learn. And so we don't understand how they're getting so smart, but we know if we give them more data, more compute, more time, they get smarter. We don't understand why they do what they do.

[00:49:31] Paul Roetzer: But we're making progress on interpretability. This is something that Google and Anthropic are spending a lot of time on. I assume OpenAI is as well. We don't know what their full capabilities are and we don't know at what point they'll start hiding their full capabilities from us. And this is, this is why some AI researchers are very, very concerned and why some lawmakers are racing to put new laws and regulations in place.

[00:49:53] Paul Roetzer: So, we don't fully understand these models. When the model finishes its training run and has all these capabilities, and then [00:50:00] we spend months analyzing what it is actually capable of and what harm it could do, the fear some researchers have is, if it's achieved some level of intelligence that is human level or beyond, it's going to know to hide its capabilities from us.

[00:50:16] Paul Roetzer: And this is like a fundamental argument of the Doomers. It's like, if it achieves this, we may not ever know it's achieved the ability to replicate itself or to self-improve, because it may hide that ability from us. So this isn't some crazy sci-fi theory. We don't know how they work. So it's not a stretch to think that at some point it's going to develop capabilities that it'll just hide from us.

[00:50:42] Paul Roetzer: So if you dig into this paper from OpenAI, the system card, here's one excerpt. Quote: Potential risks with the model were mitigated using a combination of methods. So basically, we found some problems, and then we found some ways to get it to not do them. We trained the [00:51:00] model to adhere to behavior that would reduce risk via post-training methods, and also integrated classifiers for blocking specific generations as part of the deployed system.

[00:51:10] Paul Roetzer: Now, the trick here is, they don't always do what they're told. And having just built JobsGPT, I can tell you for a fact they don't do what they're told. Sometimes, by telling it not to do something, it actually will do the thing more regularly. So here's an excerpt where we see this come into play.

[00:51:29] Paul Roetzer: "While unintentional voice generation still exists as a weakness of the model." And I think what they're indicating here is they found out that the model had the capability to imitate the user talking to it. So the user would be talking to it in whatever voice they've selected, and then all of a sudden it would talk back to them and sound exactly like the user.

[00:51:54] Paul Roetzer: That's the kind of emergent capability that's just so weird. So they say, while unintentional voice generation still [00:52:00] exists, in other words, it'll still do this, we used secondary classifiers to ensure the conversation is discontinued if this occurs. So imagine you're talking to this advanced voice thing and all of a sudden it starts talking back to you.

[00:52:14] Paul Roetzer: It sounds exactly like you. Take peace in knowing that it'll just discontinue the conversation. And then when you go further into how they decide this, they say only models with a post-mitigation score of medium, meaning after they've trained it not to do the thing, if the post-mitigation score is medium or below, they can deploy the model.

[00:52:39] Paul Roetzer: So why don't we have advanced mode yet? Because it wasn't there yet. They hadn't figured out how to mitigate the risks of the voice tool to the point where it was at medium or below risk level. Which hits their threshold to release it. What was it out of the box? We will probably never know. Then they say only models with post mitigation score of high [00:53:00] or below can be further developed.

[00:53:02] Paul Roetzer: So if they do a model run and that thing comes out and they're testing at a critical level of risk, they have to stop training it, stop developing it. That means we're trusting them to make that decision, to make that objective assessment of whether it is at or below critical. And then the final note I'll make is on this persuasion one you mentioned.

[00:53:25] Paul Roetzer: So go back to my exposure key. E7, exposure level seven, is persuasion capabilities: the language model plus the ability to convince someone to change their beliefs, attitudes, intentions, motivations, or behaviors. Imagine a language model, imagine a voice language model, that is capable of superhuman persuasion. And if you don't think that's already possible, I will refer you back to October 2023, when Sam Altman tweeted, I expect AI to be capable of superhuman persuasion well before it is [00:54:00] superhuman at general intelligence, which may lead to some very strange outcomes. Again, I've said this many, many times on the show: Sam doesn't tweet things about capabilities he doesn't already know to be true.

[00:54:13] Paul Roetzer: So my theory would be whatever they are working on absolutely has beyond-average human-level persuasion capabilities. It is likely already at expert or virtuoso level for persuasion, if we use DeepMind's levels of AGI. And so that's why they have to spend so much time red teaming this stuff.

[00:54:34] Paul Roetzer: And why it's such alien technology. Like, we truly just don't understand what we're working with here.

[00:54:41] Mike Kaput: Yeah, again, these capabilities are in the model. We have to, after the fact, make sure that it doesn't go use those negative capabilities.

[00:54:52] Paul Roetzer: We're trying to extract capabilities from something when we don't know how it's doing them in the first place. So we're band-aiding it with [00:55:00] experiments and safety and alignment work to try and get it to stop doing the thing. And if it still does the thing, then we just shut the system off.

[00:55:07] Paul Roetzer: And we assume that the shutoff works. Yeah.
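
OpenAI has not published how that secondary classifier actually works, so purely as an illustration of the shape of the mitigation Paul is describing, here is a minimal sketch of a post-generation gate. Everything in it, the function names, the toy byte-matching "classifier," and the threshold, is a hypothetical stand-in, not OpenAI's implementation.

```python
# Illustrative sketch only: a post-generation "secondary classifier" gate.
# The names, threshold, and toy classifier below are hypothetical stand-ins,
# not OpenAI's actual system, which has not been published.

from typing import Callable

SIMILARITY_THRESHOLD = 0.85  # arbitrary illustrative cutoff


def toy_voice_similarity(generated_audio: bytes, user_audio: bytes) -> float:
    """Toy stand-in for a speaker-verification model: fraction of matching
    bytes. A real system would compare learned voice embeddings instead."""
    if not generated_audio or not user_audio:
        return 0.0
    n = min(len(generated_audio), len(user_audio))
    matches = sum(a == b for a, b in zip(generated_audio[:n], user_audio[:n]))
    return matches / n


def gate_turn(
    generated_audio: bytes,
    user_audio: bytes,
    classifier: Callable[[bytes, bytes], float] = toy_voice_similarity,
) -> str:
    """Run the classifier after generation, before playback. If the output
    sounds too much like the user, discontinue the conversation instead of
    playing the audio."""
    score = classifier(generated_audio, user_audio)
    if score >= SIMILARITY_THRESHOLD:
        return "discontinue_conversation"
    return "play_audio"


if __name__ == "__main__":
    user = b"\x01\x02\x03\x04" * 100
    assistant_ok = b"\x09\x08\x07\x06" * 100     # clearly different "voice"
    assistant_clone = b"\x01\x02\x03\x04" * 100  # suspiciously identical
    print(gate_turn(assistant_ok, user))         # play_audio
    print(gate_turn(assistant_clone, user))      # discontinue_conversation
```

The pattern is the point: the model generates first, a separate check runs on the output, and the fallback is to end the conversation rather than trust the model to police itself.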

[00:55:22] Mike Kaput: Right? You know, when a new model comes out that's more competitive, there are likely some very murky gray areas of how safe do we make it versus staying on top of the market.

[00:55:34] Paul Roetzer: Yeah, think about it, we live in a capitalistic society. Think about the value of a superhumanly persuasive model, a model that can, as my exposure level says, convince someone to change their beliefs, attitudes, intentions, motivations, or behaviors.

[00:55:51] Mike Kaput: Right.

[00:55:51] Paul Roetzer: If the wrong people have that capability, that, that is a very bad situation. And the wrong people will have [00:56:00] that capability. Like, spoiler alert, like, we are talking about something that is inevitably going to occur. There will be restrictions that will keep it from impacting society in the near term. But if the capabilities are possible, someone will build them and someone will utilize them for their own gain.

[00:56:19] Paul Roetzer: Individually or as an organization, or as a government. This is the world we are heading into. It is why I said those exposure levels I highlighted are so critical for people to understand. Nothing I highlight in those E0 to E10 isn't going to happen. It's just the timeline in which it happens. And then what does that mean to us in business and society?

[00:56:43] Groq's Huge Funding Round

[00:56:43] Mike Kaput: Alright, let's dive into some rapid fire news items this week.

[00:56:47] Mike Kaput: So first up, the artificial intelligence chip startup Groq, G-R-O-Q, not G-R-O-K like Elon Musk's xAI tool. This Groq has [00:57:00] secured a massive $640 million in new funding. This is a Series D funding round that values the company at $2.8 billion, which is nearly triple its previous valuation in 2021. Some notable names led this funding round: BlackRock Inc.,

[00:57:18] Mike Kaput: and also some investments from the venture arms of Cisco and Samsung Electronics. So what Groq does is specialize in designing semiconductors and software to optimize how AI can perform. So basically, this is putting them in direct competition with chip makers like Intel, AMD, and, of course, NVIDIA.

[00:57:39] Mike Kaput: So the company's CEO, Jonathan Ross, emphasized that this funding is going to accelerate their mission to deliver quote instant AI inference compute globally. So Paul, can you maybe unpack for us here why this funding is significant? Why what Groq is trying to do is significant to the overall AI [00:58:00] landscape?

[00:58:00] Paul Roetzer: Yeah, so just a quick recap here. NVIDIA has made most of their money in the AI space in recent years on training these AI models. So companies like Meta and Google and OpenAI and Anthropic are doing these massive training runs to build these models, and they buy a bunch of NVIDIA chips to enable that.

[00:58:21] Paul Roetzer: The future is inference. That is when all of us use these tools to do things. So Groq, G-R-O-Q, is building for a future of omnipresent intelligence: AI in every device, in every piece of software, instantly accessible in our personal and professional lives. And to power all that on-demand intelligence we will all have, that is inference.

[00:58:44] Paul Roetzer: That is what they have managed to do better and seemingly way faster than NVIDIA. It doesn't mean NVIDIA won't catch up, or that NVIDIA won't buy Groq. But at the moment, they are going after that inference market, not the training market. And that is [00:59:00] where, five to ten years from now, that market will probably dwarf the training market.

[00:59:08] Figure Teases Figure 02 Robot

[00:59:08] Mike Kaput: So next up, we just got a new demo video from robotics company Figure, who we've talked about a number of times on the podcast. They just released a two-minute demo of their Figure 02 humanoid robot.

[00:59:21] Mike Kaput: The demo video showed the robot walking through a factory as other Figure 02 models performed tasks and moved around in the background. That included showing one of the robots completing some assembly tasks that Figure is actually demoing right now for BMW at a Spartanburg, South Carolina car plant.

[00:59:43] Mike Kaput: Figure posted that their engineering and design teams completed a ground-up hardware and software redesign to build this new model, which included technical advancements on critical technologies like onboard AI, computer vision, batteries, electronics, and sensors. [01:00:00] The company says the new model can actually have conversations with humans through onboard mics and speakers connected to custom AI models.

[01:00:07] Mike Kaput: It has an AI driven vision system powered by six onboard cameras. Its hands have 16 degrees of freedom and, according to the company, human equivalent strength. And its new CPU slash GPU provides three times the computation and AI inference available on board compared to the previous model. Now, Paul, I love these demo videos, and it's really easy to kind of look at this and be like, oh my gosh, the future is here.

[01:00:34] Mike Kaput: But how do we gauge the actual progress being made here? Because, you know, a demo's just a demo. I don't get to go test out the robot yet on my own. Are we actually making real progress towards humanoid robots, in your opinion?

[01:00:48] Paul Roetzer: Yeah, I do think so. And you know, the AI timeline I'd laid out back in episode 87 sort of projected out this explosion of humanoid robots later in the decade, like 2027 to 2030. And I [01:01:00] do think that still holds true. I don't think we're going to be walking around and seeing these humanoid robots in your local Walmart anytime soon, or in a nursing care facility or things like that. But that is where this is going.

[01:01:11] Paul Roetzer: This is the idea of embodied intelligence. So Figure is working on it in partnership with OpenAI, NVIDIA is working on it with Project GR00T, Tesla has Optimus, which some believe will end up surpassing Tesla cars as the predominant product within that company.

[01:01:28] Paul Roetzer: Boston Dynamics makes all the cool videos online that have gone viral through the years. So there are a lot of companies working on this. The multimodal AI models are the brains; the humanoid robot bodies are the vessels. So go back to the exposure level key I talked about. Exposure level nine is exposure given physical world vision capabilities.

[01:01:50] Paul Roetzer: So LLM plus physical device, such as phones or glasses, or in this case, being able to see through the screen of a robot and see and understand the world around them. And then exposure level [01:02:00] 10 is physical world action capabilities. So access to the LLM, plus a general purpose bipedal autonomous humanoid robot.

[01:02:08] Paul Roetzer: That enables the system to see, understand, analyze, respond to it, take action in the physical world, and the robot's form enables it to interact in complex human environments with human like capabilities like in a BMW factory. So, again, everything in that exposure level key is happening right now, and you can see, kind of the future coming when you look at what's going on with Figure.

[01:02:30] Paul Roetzer: So it's a combination of a hardware challenge, getting the dexterity of human hands, for example. But the embodied intelligence is the breakthrough that's allowing these humanoid robots to accelerate their development and potential impact.

[01:02:44] Musk Brings Back OpenAI Lawsuit

[01:02:44] Mike Kaput: So next up, Elon Musk has reignited his legal battle with OpenAI and co-founders Sam Altman and Greg Brockman. He has filed a new lawsuit in federal court against the company. This comes just seven weeks after [01:03:00] he withdrew his original suit. And the core of this complaint is the same as the previous lawsuit: he is alleging that OpenAI, Altman, and Brockman betrayed the company's original mission of developing AI for the public good, instead prioritizing commercial interests, particularly through their multi-billion-dollar partnership with Microsoft.

[01:03:20] Mike Kaput: It also claims that Altman and Brockman intentionally misled Musk and exploited his humanitarian concerns about AI's existential risks. Now, okay, with the caveat that we are not lawyers, the suit also does introduce some new elements, including accusations of violating federal racketeering law on the part of the company as well.

[01:03:44] Mike Kaput: It challenges OpenAI's contract with Microsoft and argues that it should be voided if OpenAI has achieved AGI. Interestingly, the suit asks the court to decide if OpenAI's latest systems have achieved AGI. [01:04:00] OpenAI has, for a while now, maintained that Musk's claims are without merit, and they pointed to and published some previous emails with Musk that suggested he had been pushing for commercialization as well, just like they were, before leaving the company in 2018.

[01:04:16] Mike Kaput: So, Paul, why is Elon Musk, if we can attempt to get inside his brain, trying to start this lawsuit back up again now?

[01:04:24] Paul Roetzer: I don't know. Maybe he just wants to force discovery and force them to unveil a bunch of proprietary stuff. I don't know. In episode 86, on March 5th, we talked pretty extensively about this lawsuit. The basic premise here is Musk, you know, co-founds OpenAI, puts in the original money, as a counterbalance to Google's pursuit of AGI, which he sees as a threat to humanity.

[01:04:47] Paul Roetzer: You know, remember Strawberry Fields taking over the world kind of stuff.

[01:04:51] Paul Roetzer: He leaves OpenAI, unharmoniously, in 2019 after trying to roll OpenAI into Tesla. He forms xAI in [01:05:00] 2023, early 2024, to pursue AGI himself through the building of Grok, and he still has a major grudge against Greg, Sam, and OpenAI.

[01:05:10] Paul Roetzer: And maybe this is what Greg is doing. Maybe he's just taking time off to deal with a lawsuit. I'm joking, I have no idea if that's what Greg's doing. But, you know, again, it's fascinating, because at some point it may lead to some element of discovery and we may learn a bunch of insider stuff. But up until then, you know, I don't know.

[01:05:29] Paul Roetzer: It's just interesting to note that it's back in the news again.

[01:05:33] Mike Kaput: Especially with what we suspect are impending releases. I think this is sometimes something Elon Musk does when something big is coming and he's about to get, perhaps, overshadowed.

[01:05:45] Paul Roetzer: It's very possible.

[01:05:46] Mike Kaput: Yeah.

[01:05:48] YouTube Class Action & NVIDIA

[01:05:48] Mike Kaput: Alright, so next up, a YouTube creator has filed a class action lawsuit against OpenAI and they're alleging that the company used millions of YouTube video transcripts to train its models without notifying [01:06:00] or compensating content creators. So this lawsuit is filed by David Millett in the U. S.

[01:06:04] Mike Kaput: District Court for the Northern District of California. It claims that OpenAI violated copyright law and YouTube's terms of service by using all this data to improve its models, including ChatGPT.

[01:06:18] Mike Kaput: So Millett is actually seeking a jury trial and $5 million in damages for all affected YouTube users and creators.

[01:06:25] Mike Kaput: And as longtime listeners of the podcast know, this is just the latest report of many about AI companies using YouTube videos to train without permission. We've talked about Runway, Anthropic, Salesforce, all on previous episodes. And we now have a new, huge exposé showing that NVIDIA has been doing the same thing.

[01:06:48] Mike Kaput: So 404 Media recently reported that leaked internal documents show NVIDIA had been scraping massive amounts of video content from YouTube and other sources [01:07:00] like Netflix to train its AI models. NVIDIA is trying to create a video foundation model to power many different products, including world generators and self-driving car systems.

[01:07:11] Mike Kaput: So to create that model, apparently they have been downloading a ton of copyright-protected videos. And this wasn't just a few, and this wasn't just by mistake. Emails viewed by 404 Media show NVIDIA project managers discussing using 20 to 30 virtual machines to download 80 years' worth of videos per day. It also doesn't seem, unfortunately, like this happened via some rogue elements in the company.

[01:07:41] Mike Kaput: Employees raised concerns about it, and they were told several times the decision had executive approval. So, Paul, we just keep getting stories like this. It seems like basically every major AI player is involved. Like, could something like a class action lawsuit actually stop this behavior, or what?

[01:07:59] Paul Roetzer: Yeah, I don't know, this one's pretty messy. There's a separate report in Proof News where they actually quoted from internal material, and one vice president of AI research at NVIDIA said we need one Sora-like model, Sora being OpenAI's video model. In a matter of days, NVIDIA assembled more than a hundred workers to help lay the training foundation for a similar state-of-the-art model. They began curating video datasets from around the internet, ranging in size from hundreds of clips to hundreds of millions, according to the company's Slack and internal documents. Staff quickly focused on YouTube, but then they asked about whether or not they should go get all of Netflix, and if so, how do they do that?

[01:08:39] Paul Roetzer: So yeah, I'll be interested to follow this one along. It's pretty wild that they've got all this internal documentation, but I'm not surprised at all. Like I said last time we talked about this, they are all doing this, and they're all doing it under the cover of, we know that they did this, so we'll do it too. And it's the only way to compete, [01:09:00] basically.

[01:09:01] ChatGPT Watermarking

[01:09:01] Mike Kaput: So, in other OpenAI news, a big theme this week: OpenAI has developed, apparently, an effective tool to detect AI-generated text, particularly text from ChatGPT. However, it has not released this tool, according to an exclusive in the Wall Street Journal. The tool uses a watermarking technique to identify when ChatGPT has created text, and according to internal OpenAI documents viewed by the Journal, it is reportedly 99.9 percent accurate. So, the company apparently has been debating for about two years whether to even release this. The tool has been ready to release for over a year, and OpenAI has not let it outside the company. Now why is that? It seems part of this is that users could be turned off by such a feature.

[01:09:59] Mike Kaput: A [01:10:00] survey that OpenAI conducted found that nearly 30 percent of ChatGPT users would use the tool less if watermarking was implemented. An OpenAI spokesperson also said the company is concerned such a tool could disproportionately affect non native English speakers. Paul, what did you make of this story?

[01:10:19] Mike Kaput: It seems like pretty powerful technology to be keeping under wraps. Do you agree with kind of OpenAI's logic here?

[01:10:27] Paul Roetzer: I don't know, it's hard to put yourself into their position, and these are big, difficult decisions. While it is 99.9 percent accurate, they have some concerns that the watermarks could be erased through simple techniques, like having Google translate the text into another language and then changing it back.

[01:10:46] Paul Roetzer: So it's kind of that whole thing, like the cheaters are going to stay ahead of the technology. This doesn't seem foolproof at this point. It also could give bad actors the ability to decipher the watermarking technique. And Google does have [01:11:00] SynthID, and they haven't released it widely.

[01:11:02] Paul Roetzer: I did find it interesting, the one note, that John Schulman, who we talked about earlier and who left to go to Anthropic, was heavily involved in the building of this in early 2023. He outlined the pros and cons of the tool in an internally shared Google doc, and that's when OpenAI executives decided they would seek input from a range of people before acting further.

[01:11:22] Paul Roetzer: So yeah, this has been going on for a while. I don't know. I'm not sure we're going to get to a point where there's some... you know, I've said before, we need a universal standard. We don't need just a watermarking tool for ChatGPT or just a watermarking tool for Google Gemini. We need an industry-standard tool if we're going to do it, and then we've got to do it the right way.
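
The Journal reporting doesn't describe OpenAI's method, so as a rough mental model only, here is a minimal sketch of the kind of "green list" statistical text watermarking that has been published in the research literature (for example, Kirchenbauer et al., 2023). The key, the hashing scheme, and the threshold logic are illustrative assumptions, not OpenAI's detector.

```python
# Illustrative sketch of "green list" text watermarking from the research
# literature. OpenAI has not published how its ChatGPT detector works, so
# treat every detail here as an assumption.

import hashlib
import math


def is_green(prev_token: str, token: str, key: str = "secret-key") -> bool:
    """Pseudorandomly assign roughly half the vocabulary to a 'green list'
    that depends on the previous token and a secret key. During generation,
    the model would be nudged to prefer green tokens."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(tokens: list[str], key: str = "secret-key") -> float:
    """Fraction of tokens that land on the green list given their predecessor."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)


def z_score(tokens: list[str], key: str = "secret-key") -> float:
    """How far the observed green fraction sits above the 0.5 expected for
    unwatermarked text, in standard deviations. Large positive values
    suggest watermarked output."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    p = green_fraction(tokens, key)
    return (p - 0.5) * math.sqrt(n) / 0.5


if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog".split()
    print(round(z_score(text), 2))  # unwatermarked text: around 0 on average
```

Because the signal is purely statistical, accumulated over many token choices, round-tripping the text through another language or heavily paraphrasing it can wash the watermark out, which is exactly the weakness Paul mentions above.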

[01:11:40] New AI Image Generator Flux

[01:11:40] Mike Kaput: So in some other news, a new AI image generator is getting a ton of attention online. It's called Flux. Technically Flux.1 is the model, but everyone kind of refers to it as Flux. And it's getting a ton of buzz because it's generating really, really high quality results and it is open source.

[01:11:59] Mike Kaput: [01:12:00] So Flux was developed by Black Forest Labs, whose founders previously worked at Stability AI.

[01:12:05] Mike Kaput: And Flux is kind of seen as a potential successor to Stable Diffusion. What sets this apart is that it has these smaller models that can run on reasonably good hardware, including high-performance laptops. So you can basically, as a hobbyist, developer, or small business, run this really sophisticated image model that people are sharing lots of examples of, not only these stunning hyper-realistic or artistic results like Midjourney would produce, but it's also doing things like getting text right in the images. So it really does seem to be pretty powerful, and it appears to be open source, which means you can go access it yourself through things like Poe, Hugging Face, and other hubs of open source AI models, and kind of run it on your own and customize the code however you would like.

[01:12:54] Mike Kaput: So, Paul, I've seen some pretty cool demos of this. This seems like the real deal, and it's interesting to have this capability open sourced, which we've talked about could be a potential problem as people generate deepfakes and other problematic types of images. What did you make of this?

[01:13:12] Paul Roetzer: Yeah, the wildest demos I've seen are taking Flux images and then animating them with Gen-3 from Runway, turning them into 10-second videos that are just crazy. You know, it's not easily accessible, which is always just so interesting about how these things are released. There's no app to go get; you have to download something, I think, to be able to use it.

[01:13:34] Paul Roetzer: So I haven't tested it myself, I've just been checking it out. But yeah, the continued rate of improvement of these image and video models is really hard to comprehend, and it just seems like there's no end in sight for how realistic the outputs are becoming.

[01:13:51] SB-1047

[01:13:51] Mike Kaput: All right, now our last news topic today. California's proposed AI legislation, which we've talked about before, known as [01:14:00] SB-1047, is facing criticism from a prominent figure in AI.

[01:14:05] Mike Kaput: And that figure is Dr. Fei-Fei Li, who's often referred to as the godmother of AI. She's a researcher who has voiced strong concerns about the potential negative impacts of the bill. So SB 1047 is short for the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aims to essentially regulate large AI models in California.

[01:14:31] Mike Kaput: Dr. Li, however, argues that the bill could end up harming the entire U.S. AI ecosystem. She outlines three big problems with how the bill is written today. First, it unduly punishes developers and potentially stifles innovation, because it starts to hold people liable for any misuse of their AI models, not just misuse by them but by other people.

[01:14:58] Mike Kaput: Second, there is a [01:15:00] mandated kill switch for AI programs that could, she says, devastate the open source community. And third, the bill could cripple public sector and academic AI research by limiting access to a lot of the models and data necessary to do that work. So, Paul, this is yet another prominent AI voice raising objections to this bill.

[01:15:21] Mike Kaput: We've talked about Andrew Ng, who has published quite extensively on X recently about the bill. However, other people like Geoff Hinton support it. Do you support it? Do you see this as potentially problematic for AI innovation? How are you looking at this?

[01:15:37] Paul Roetzer: Yeah, I do think it would impact innovation, certainly, and it would definitely impact open source. I don't know. I mean, the more time we spend in this space and the more I think about these things, the more I think we need something. I don't know if this is the right thing, but I think by 2025 we're going to enter an arena where it's very important [01:16:00] that there are more guardrails in place than currently exist for these models. I don't know what the solution is, but we need something, and I think we need it sooner rather than later. And so I think it's good that conversations like these are happening. I get that there are going to be people on both sides of this, like any important topic.

[01:16:22] Paul Roetzer: I don't feel strongly one way or the other at the moment, but I feel like something needs to be done. We cannot wait until mid to late next year to have these conversations. So I hope something happens sooner rather than later.

[01:16:38] Mike Kaput: All right, Paul, that's a wild week. Lots of tie-ins, lots of related topics. Thanks for connecting all the dots for us this week. Just a quick reminder to everyone: if you have not checked out our newsletter yet at marketingainstitute.com/newsletter, it's called This Week in AI.

[01:16:53] Mike Kaput: It covers a ton of other stories that we didn't get to in this [01:17:00] episode, and it does so every week, so you have a really nice, comprehensive brief as to what's going on in the industry, all curated for you each and every week. And also, if your podcast platform or tool of choice allows you to leave a review, we would very much appreciate it if you could do that for us.

[01:17:17] Mike Kaput: Every review helps us improve the show, helps us get it into the hands of more people, and just helps us generally create a better product for you. So if you haven't done that, it's the most important thing you can do for us. Please go ahead and drop us a review. Paul, thanks so much.

[01:17:36] Paul Roetzer: Yeah, thanks everyone for joining us again. A reminder, get those MAICON tickets, just M A I C O N dot A I, and keep an eye on the strawberry fields this week. It might be an interesting week in AI.

[01:17:48] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute. com to continue your AI learning journey and join more than 60, 000 professionals and business leaders who [01:18:00] have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in person events, taken our online AI courses, and engaged in the Slack community.

[01:18:11] Paul Roetzer: Until next time, stay curious and explore AI.
