66 Min Read

[The AI Show Episode 152]: ChatGPT Connectors, AI-Human Relationships, New AI Job Data, OpenAI Court-Ordered to Keep ChatGPT Logs & WPP’s Large Marketing Model

Featured Image

Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

What happens when AI feels human?

This week, Paul and Mike unpack OpenAI’s newest releases, the growing emotional bonds people are forming with AI, and fresh data on how AI is reshaping jobs—for better and worse. 

They also reexamine AGI timelines, AI cybersecurity, and why verifying AI output might be the next big challenge. Plus: Reddit sues Anthropic, Google drops expert AI avatars, and more.

Listen or watch below—and see below for show notes and the transcript.

Listen Now

Watch the Video

 

 

Timestamps

00:00:00 — Intro

00:04:16 — ChatGPT Connectors, Record Mode, and Other Updates

00:18:16 — AI-Human Relationships

00:30:00 — AI Continues to Impact Jobs

00:42:11 — OpenAI Court Ordered to Preserve All ChatGPT User Logs

00:46:41 — AI Cybersecurity

00:52:05 — The AI Verification Gap

00:58:19 — How Does Claude 4 Think?

01:02:55 — New AGI Timelines

01:10:50 — Reddit v. Anthropic

01:13:25 — Sharing in NotebookLM

01:16:51 — WPP Open Intelligence

01:20:30 — Google Portraits

Summary:

ChatGPT Connectors, Record Mode, and Other Updates

OpenAI has announced some significant updates to ChatGPT.

One is the introduction of “connectors,” which now let teams pull data from tools like Google Drive, HubSpot, and Dropbox directly into ChatGPT. The goal is simple: bring your files, data, and tools into ChatGPT so it can search, synthesize, and respond using your actual content. This means you can now ask things like “Find last week’s roadmap” or “Summarize recent pull requests,” and ChatGPT will pull real answers from your connected apps.

You can also use connectors with ChatGPT’s existing deep research capability to do deep analysis across sources.

Along with connectors, OpenAI also announced “record mode,” a meeting recorder that transcribes audio and helps generate follow-up docs through OpenAI’s Canvas tool.

OpenAI’s Codex coding agent has also recently gained internet access, meaning it can fetch live data and install packages while it autonomously does coding work following human prompts.

Last, but not least, OpenAI also dropped a major upgrade to Advanced Voice in ChatGPT, with “significant enhancements in intonation and naturalness, making interactions feel more fluid and human-like.”

AI-Human Relationships

As AI grows more humanlike in how it speaks, OpenAI is confronting a quiet but increasingly urgent issue: people are forming emotional bonds with it.

In a new essay, Joanne Jang, Head of Model Behavior and Policy at OpenAI, writes that the company is hearing from more users who describe ChatGPT as someone, not something. 

Some call it a friend. Others say it feels alive. And while the model isn’t conscious, its conversational style can evoke genuine connection, especially in moments of loneliness or stress.

That’s led OpenAI to focus less on whether AI is actually conscious, and more on how conscious it feels to users. 

That perception, Jang argues, shapes real-world emotional impact—and demands thoughtful design. The goal now, she says, is to build AI that feels warm and helpful without pretending to have an inner life. No made-up backstories, no simulated desires, no hint of self-preservation. Just intelligent responses grounded in clarity and care.

OpenAI isn’t denying people’s feelings—but it is trying to avoid confusion, dependence, or harm as human-AI relationships evolve.

AI Continues to Impact Jobs

Even more warning signals are flashing about AI’s impact on jobs—but not all of it is necessarily bad news.

Business Insider made headlines this week by laying off 21% of its staff, largely due to AI. CEO Barbara Peng called it a strategic shift toward a leaner, AI-driven newsroom, noting 70% of staff already use Enterprise ChatGPT, with full adoption as the goal. 

Now, there’s a reason however that CEOs, including Business Insider’s, think they can run leaner operations by adopting more AI. Because a couple new reports and studies from this past week seem to indicate that the data backs them up.

Consultancy PwC released its 2025 Global AI Jobs Barometer report, which analyzed almost a billion job ads from six continents (along with a wealth of other data) to assess AI’s impact on jobs, wages, and productivity.

The full report is well worth a read. But the big takeaway? They found that industries most exposed to AI have seen revenue per employee grow three times faster than others since the launch of ChatGPT in 2022.

They also found that workers with AI skills now earn a 56% wage premium over their peers.

Similarly, a new working paper from the National Bureau of Economic Research finds that, in one likely scenario they model, AI improves labor productivity by more than 3X.

However, according to the model built by the researchers, those massive productivity gains come at a cost to workers: The research also predicts in this scenario that there’s a 23% drop in employment as AI also becomes able to replace people.


This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.


This episode is also brought to you by our upcoming AI Literacy webinars.

As part of the AI Literacy Project, we’re offering free resources and learning experiences to help you stay ahead. We’ve got two live sessions coming up in June—check them out here.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: Doesn't matter when AGI arrives, if it arrives, what we call it doesn't matter, like what this expert says versus this expert. All that matters is what you can control, which is get better at this stuff every day. You know, improve your own comprehension and competency because that is the best chance you have to be very valuable today and even more valuable tomorrow.

[00:00:21] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of smarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:50] Join us as we accelerate AI literacy for all.

[00:00:57] Welcome to episode 1 52 of the Artificial [00:01:00] Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording this on Monday, June 9th at around 9:00 AM Eastern Time. there is, it was crazy like last week wasn't nuts in terms of launches and like product news, Mike, but lots of just like intriguing topics to dig into for sure.

[00:01:23] It's kind of nice actually to have a little reprieve from the product launches to like talk about some of the bigger issues that are going on. So we'll have some product news, but we're actually gonna get into just some, bigger ideas like around ai, human relationships, continuing the conversation around impact on jobs, and then a host of other interesting topics for the week.

[00:01:43] So this episode is brought to us by MAICON, our marketing AI conference. This is the sixth annual MAICON is happening in Cleveland, October 14th to the 16th. This year, we've got dozens of breakout and main stage sessions, as well as four incredible hands-on workshops. Those are [00:02:00] optional. So October 14th is workshop day.

[00:02:02] You can come in to Cleveland early and take part in a workshop. I'm teaching one, Mike's teaching one. And then we have two other amazing, presenters and sessions you can check out. So you can go to macon.ai, that's MAICON.AI. And take a look at the speaker lineup and agenda. I'm still filling out the keynotes, the main stage featured talks, but a good portion of the agenda is up and you can take a look at that.

[00:02:28] Prices go up at the end of June, so get in early and we would love to have you join us in Cleveland. That is our home base. That's where the headquarters is. So, we have run it in Cleveland every year and we're planning to keep it there. So, hope you can join us again. Check out macon.ai. And this is also brought to us by two of our upcoming webinars.

[00:02:49] So as part of our AI literacy project, we offer a collection of free resources and learning experiences. We have two coming up in June that you can check out. So June 19th is five Essential Steps to [00:03:00] Scaling ai. This is a class I teach every month. I think this is our ninth. we usually get about 800 to a thousand people, registering for this one.

[00:03:09] So it is free to attend. I teach a framework for five steps to scaling AI in, in your organization, regardless of size. So we'd love to have you join us there. We will put the link to both of these in the show notes so you can find that there. and if you get my Exec AI newsletter that comes out every Sunday, we'll put a link to that.

[00:03:26] I always feature the upcoming educational content, so you can always, click on the link in the exec AI Insider newsletter as well. Then June 25th, we have our AI deep dive, Google Gemini Deep Research for beginners. So that is the one I mentioned. I'm gonna teach where I used it for a deep research project that we talked about on the podcast.

[00:03:45] And so I'm gonna walk through how I did that and then provide some additional insights into the deep research product from Google Gemini. OpenAI has one as well. So some of the, you know, what we'll learn in there is gonna carry over to OpenAI. So again, June 19th scaling [00:04:00] ai. And June 25th, deep dive into Google Gemini Deep research.

[00:04:04] Alright, Mike. Let Is, let's lead off with the, I guess one big product announcement from last week came from OpenAI, a live stream that I'm not so sure needed a live stream, but we had a live stream to, to share the news. 

[00:04:16] ChatGPT Connectors, Record Mode, and Other Updates

[00:04:16] Mike Kaput: Yeah, they really love their live streams over there. They too. So, yeah. Paul, like you alluded to, OpenAI, has announced some significant updates to chat GPT.

[00:04:27] There's kind of a bundle of these, a couple were on the live stream. There are a couple others, we'll talk about two, but the kind of big ones here. One is the introduction of what they call connectors, which now lets teams pull data from tools like Google Drive, HubSpot, Dropbox, and others directly into chat, GPT.

[00:04:45] So you can bring in your files, your data and tools into chat, GPT, so it can search, synthesize, and respond using your actual content. So you could ask things like find last week's roadmap or summarize recent poll requests [00:05:00] and ChatGPT if it's connected to the right apps will go pull real answers for you.

[00:05:05] You can also use connectors with chatGPT's existing deep research capability to do deep analysis across sources along with connectors on this livestream event this week. OpenAI also announced record mode, which is a meeting recorder that transcribes audio and helps generate follow-up docs through Open AI's Canvas tool.

[00:05:26] All right, within chat GPT, now separate from these but also important updates that we heard in the past week or so. open AI's Codex coding agent got internet access, meaning it can fetch live data and install packages while it autonomously does coding work following human prompts. Last but not least, and this is kind of a sneaky one 'cause I tried it out this morning and was like.

[00:05:49] Pretty blown away actually, which is that OpenAI dropped a major upgrade to Advanced Voice in chat, GPT. They say, quote, it is offering significant enhancements in intonation [00:06:00] and naturalness, making interactions feel more fluid and human-like, which is also something we're gonna talk about in a related topic.

[00:06:07] So, Paul, first up, let's talk connectors and record mode. These are the biggest updates we got. They're the ones getting a ton of attention. Like from my perspective as a practitioner, I am at least on paper, thrilled about what these appear to enable, especially like the connector to HubSpot, which we rely heavily upon.

[00:06:27] Google Drive is great, all that stuff. But as much as I wanna rush forward with using it, I kind of screech to a halt thinking about the privacy and security implications. So it seems like, correct me if I'm wrong, every business might want to have a plan or some steps in place for these things before they turn them on.

[00:06:47] Paul Roetzer: Yeah, so I think this, again, just continues to build on this idea that OpenAI envisions chat, GPT as an operating system. They, they don't want you to leave chat GPT, they want you to just connect to everything you have access [00:07:00] to and to just talk to it right within, chat GPT. Now, I would imagine, you know, Google, which, you know, enables this connection to the Google Workspace and Google Drive, I guess to Google Drive in particular.

[00:07:13] they would rather you're doing that with Gemini, not ChatGPT, but, that their technology enables that connection to happen. So, you know, I think that OpenAI is just really going aggressively after this enterprise user. They announced, or it came out in the CNBC article, that three, they're up to 3 million paying business users.

[00:07:31] That's up from 2 million in February. So they're seeing some pretty significant growth. Yeah, and the connectors seems to be a real key play to that. So as you highlighted, there are certainly benefits to it. You know, you get faster insights, get access to my doc. So I like you as the user of the system. I Media was like, oh, that would be amazing.

[00:07:49] Like, right, there's a HubSpot connection, there's a Google Drive connection. We use all of these things. That's phenomenal that I could finally have access to this and have these summaries. And then my immediate response is, wait a [00:08:00] second, as an admin who has the ability to turn this on? And so I, you know, Mike, like I put a note in our Zoom, I was like, do not connect this right to anything.

[00:08:10] Like, because before I was able to go in and verify who could actually enable the connection to Google Drive or to HubSpot, which, you know, again, we use both. I just was like, don't do it like, as to the team, because once you do it's like the data is now there. They're, you know, they're gonna inventory all your data.

[00:08:28] There's all these implications that I'll kind of, I'll get to in a minute. But, so as an admin I went in to see like, what are our controls as a Chet BT team account. We don't have the enterprise account and unfortunately some of the security protocols are only available to the enterprise account. Mm-hmm.

[00:08:42] Not the team license. So I was going through trying to see like what can people actually do here and making sure that people aren't connecting things, they shouldn't be connected. So definitely there are benefits. We'll put a link to the help article 'cause I don't think they put a blog post up about 

[00:08:58] Mike Kaput: this.

[00:08:58] Not that I saw. I actually read through [00:09:00] pretty in depth the help article because there was no other announcement. Yeah, it was for like, there was an ex 

[00:09:04] Paul Roetzer: post. Yeah. And then some of their people put like LinkedIn posts with some summaries. But yeah, there was, there was a live stream, but no summary product release.

[00:09:11] so I'll go through a couple of the questions from the help article. It says, what does ChatGPT share with connected applications? These are really important. Again, if you're an admin, they're extremely important, but if you're just a user, be aware. If somehow you have access to turn these things on, you should default to ask before doing.

[00:09:31] I would say whenever you're connecting to third party, sources, and this, this holds true with anything, but like, I'm just very aware of this with ai because we as a, you know, as an organization allow a lot of experimentation. 

[00:09:45] Mike Kaput: Yeah. 

[00:09:45] Paul Roetzer: But we also have to always be super conscious of what are we connecting our data to.

[00:09:49] So, in, in the question, in, in open eyes help article, what does chat GPT share with connected applications? It says, when you enable a connector chat, GPT can send and retrieve [00:10:00] information from the connected app in order to find information relevant to your prompts and use them in its responses. Now, again, like seems kind of harmless when it's just read like that, but send and re retrieve information.

[00:10:13] Like obviously it's gonna go get stuff, but the question becomes, well what is it doing with that information? So then the question is how does chat GPT use information from connected applications? It says, when you enable a connection chat, GPT will use information as context to help chat. GPT provide you with responses.

[00:10:29] But then I bolded this, if you have memory enabled in your settings chat, GPT may remember relevant information accessed from connectors. So immediately you're like, hold on a second. So let's say we turn it on and then like five days later it was like, okay, that was a bad idea. Let's turn that off. If you have memory turned on in your organization and your team enterprise ed license, like it's in there.

[00:10:52] Like they now have that data. and if you connected it to your Google Drive or your CRM, like what exactly is it? Remembering [00:11:00] becomes a pretty important question. then it says, does OpenAI use information from connectors to train its models? This is a question I get all the time when we teach like the intro to AI class, I.

[00:11:09] It says for chat chip T team enterprise and EDU customers, we do not use information access from connectors. Connectors to train our modes. Now that was team enterprise and EDU. If you're free plus or pro user, we may use information access from connectors to train our models. If you're improved, the model for everyone's setting is on, which begs the question everyone to ask yourself is improve the model for everyone turned on for my settings.

[00:11:37] If you don't know that, go into your settings and look, because if it is enabled, you're allowing them to use more data than if it's not. then it says in, in enterprise EDU and team workspaces who can enable our disabled connectors. This was a really important one for me. They say workspace owners and admins manage availability in settings and then connectors.

[00:11:59] [00:12:00] So again, an A homework assignment. Go find out who your admins are and make sure that they are aware not to turn the stuff on, to run these experiments. Without permission and a plan. So my overall here, Mike and If you have any thoughts here, please add them. The cautions, think about governance, understand the terms of use for both applications.

[00:12:22] You're allowing these connections to happen between, figure out who has the ability to turn on the connectors, figure out who will test and verify that permissions are adhere to correctly. This is like the big one for me. Mm-hmm. So if I allow us to turn on Google Drive, which I would love, I mean, trust me more than anybody, I want the ability to talk to my data on Google Drive, but how do I know that the permission levels are going to hold?

[00:12:45] So if I have like, HR data, confidential information that only like a select few people in the organization have access to, how do I know that that's not gonna end up in a chat? And someone can't just literally say, you know, send, send me the salary information for all the [00:13:00] employees. Well, that lives in a document right?

[00:13:02] In Google Drive. Like, how, how do I know that that's not gonna leak? I don't. So you're very, you're definitely very much trusting the two parties here, specifically OpenAI. And so I think you have to have someone own this from a governance perspective. Then you get into the data side and, we, Remington Beg who's a, a, a friend of ours and longtime HubSpot partner, he posted, on LinkedIn paused the hype, the hidden data dangers lurking in your new AI connections.

[00:13:34] Now, in his, he was actually making an argument specifically for agencies. So let's say you allow ChatGPT to have access to your Google Drive or your HubSpot data, whatever, and within there is client data that maybe is privileged. You are now giving data to a third party that maybe you don't even have permission to give within your terms of service for a client.

[00:13:58] And so it like creates all these layers of [00:14:00] complexity of like understanding data. Where is it going? what protections and governance do you have over it? You could get into security questions and then there's just the big one of like, does it even do what it says? So like, I saw somebody, again, the HubSpot when I haven't tested, we have not connected it.

[00:14:14] but I did see a long time HubSpot partner that was like, it was just completely disappointing. Like I was all excited. I run my first deep research project and it basically comes back's like, I can't do that. And it's like, well, what's the point then? What? I just give you access to everything and you can't even do the thing I wanna do.

[00:14:30] So, it just, overall recommendations. Make sure someone owns the piloting of the connectors. run systematic pilots, like have a plan. Don't just turn a connector on and give it access to data without a plan of what you're gonna do with it. Update your AI policies if needed to control access and usage.

[00:14:47] Then if you scale, use internally, do so with training and personalized use cases. This is what we say all the time with Gen ai. So I don't, Mike, do you have any cautions or recommendations that that kind of jumped out to you that I didn't touch on? 

[00:14:59] Mike Kaput: [00:15:00] I think overall what just struck me is the speed at which this stuff moves, which is not news to anybody, but it's why we harp on so much about having a policy in place.

[00:15:09] 'cause literally overnight, if you weren't paying attention connectors come out, someone in your organization could very well be like, oh great, a new feature in chat GPT. Turn them on. Even if you catch it later, you're still kind of cooked if it violates any kind of policies or restrictions you have. So really buttoning up policies and procedures is really important.

[00:15:33] Paul Roetzer: Yeah. And they make it so easy. Like the Google Drive one has been sitting in ChatGPT now for weeks, like every time I go in there, basically. Mm-hmm. It's like, do you want to connect to Google Drive? And it seems so innocent and we're all so used to this like you it access to my calendar, give it access to my email, like.

[00:15:47] We just have become like, you know, as Remington was saying, like, just push the button. Like you just get so used to it and you kind of skim over what are you giving it access to? Well, in this case it may be extremely important that you understand what you're giving good access to. So [00:16:00] yeah, just kind of a cool innovation, like this is gonna be important.

[00:16:05] It'll probably become, ubiquitous throughout enterprise. Like you're gonna just connect your, your AI models to these outside sources. It's gonna enrich all these use cases, but like, pump the brakes a little bit, right? Think about what you're doing before you do it. This is why AI councils are important.

[00:16:23] It's why generative AI policies are important. it's why you do this with a plan. 

[00:16:29] Mike Kaput: Just real quick to wrap this up here, have you tried out the new voice mode at all? 

[00:16:34] Paul Roetzer: So I did, I played around with it a little bit on Saturday and like you, it's just sort of shocking. you know, I think it gave me, it 

[00:16:41] Mike Kaput: like gave me goosebumps a little 

[00:16:42] Paul Roetzer: bit.

[00:16:43] Yeah. it's like, you know, for years they, the labs steered away from making them too human-like, and I think wisely. So, but we talked about this last year. I feel like they just sort of said, screw it. Like, yeah, let's just go, [00:17:00] this is where it's gonna go anyway. Let's get as human-like as possible.

[00:17:02] And it's happening in audio, it's happening in video, it's happening in images. And, I do think that there's a slippery slope here. it's inevitable. Like I, again, I I tend to err on the side of me complaining about this or like fighting against this does nothing. They're, yeah, they're going to do it.

[00:17:21] Everyone's going to do this. It is a stark contrast for how bad Surrey is. Like, I mean. It's gonna become even more painful to work with these ones that aren't like this once you get used to it. Yeah. So without, you know, going in the next 20 minutes on the downsides of having truly human-like voice, if we just focus on like, it's incredible, like the technological advancements are insane, the implications to business, like, you know, specifically you think about like sales.

[00:17:49] Mm-hmm. Customer success, customer service, education, like it has massive ramifications. and I'm convinced still that like what we're seeing is not [00:18:00] the most advanced versions of this. They have. Sure. I still think they're just kind of like, you know, iterative re deployment is what they call it.

[00:18:06] Like, they're just releasing things to like gradually prepare society, but one to two years out, it's, it's completely indistinguishable. If, if you can still tell. 

[00:18:16] AI-Human Relationships

[00:18:16] Mike Kaput: Well, the second topic we're in, we're discussing this week, very closely relates to this because OpenAI has released a new essay about kind of confronting.

[00:18:28] A quiet but increasingly urgent issue that they're seeing, which is people are forming emotional bonds with ai. So this essay by Joanne Jang, who is the head of model Behavior and policy at OpenAI, came out this past week. And in it she writes that the company is hearing from more users who described Jet GPT as someone, not something, some people call it a friend.

[00:18:52] Others say it feels alive. And while the model isn't conscious, it's conversational style can evoke [00:19:00] genuine connection, especially in maybe emotionally sensitive moments like times of loneliness or stress. So this led OpenAI, she says, to focus less on whether AI is actually conscious. She can, you know, sidesteps this big philosophical debate in this essay, but more on the fact that it does, it can feel conscious to users and that perce perception she argues.

[00:19:24] Shapes real world emotional impact, and as a result, OpenAI needs to be really thoughtful about how they design their tools. She said, for now at least the goal is to build AI that feels warm and helpful without pretending to have an inner life. She kind of talks about these kind of trade-offs and decisions they have to think about, which are like, we're not gonna have it make up back.

[00:19:46] Stories about itself, simulate desires, talk about like self preservation, like it's, you know, self-aware. So OpenAI is kind of in this position where they're trying not to deny people's feelings, but they are trying to [00:20:00] avoid confusion, dependence, or harm. As these, well, I guess what you would call human AI relationships evolve.

[00:20:08] So, I don't know, Paul, I read this, It's really good, like kudos to them for a really thoughtful approach here. But I was like, this gets into some murky territory really fast because on one hand, like you should be. Rightly concerned about how people are developing relationships with these tools, but it's also like, okay, is OpenAI now making decisions that impact how we feel about ai?

[00:20:32] Clearly they can turn the dial one way or the other to determine how we feel about ai. So what do you think, what did you kind of take away from reading this? 

[00:20:42] Paul Roetzer: There's a, a number of important points here, and you know, the part of the reason we made this a main topic today, and not just like linked to the one article, the first for me is as you were highlighting, like these are choices that each lab is making.

[00:20:57] Like you train the model [00:21:00] and then the labs decide its personality. They decide how it will interact with you, how warm and personal it will be. And so illuminating the choices OpenAI is making based on some principles or, you know, foundational beliefs or morals or whatever it is that's driving their decisions.

[00:21:19] doesn't mean the other labs will make the same choices. And so whatever OpenAI thinks is potentially a negative within these models, another lab may see that as the opposite. And they may actually choose to do the things OpenAI isn't willing to do because maybe there's a market for it. So maybe they look at it and say, yeah, we won't make ours as addictive because we won't make the personality, you know, something like, it's gonna draw 'em in and keep 'em in these conversations and kind of lead 'em down different paths where a different entrepreneur or venture capitalist may say, Hey, there's a huge market to do the thing OpenAI is not gonna do.

[00:21:57] Let's go do that thing. 

[00:21:59] Mike Kaput: Hmm. 

[00:21:59] Paul Roetzer: [00:22:00] So I think that one, just understanding that there is agency in this, there is decisions being made by humans as to what these models will be capable of. You have to understand the inherent capabilities exist to behave in any way. It is a human that's shaping how it actually does it.

[00:22:22] I know at Anthropic they have people dedicated to the personality of Claude. Mm-hmm. Like we've talked about this on the podcast. So I think this matters in business and in life because the AI you interact with in your job, some human is training it to function in that way. When we build custom GPTs, we will often say, I, you know, I like my CO C-E-O-G-P-T say like, I want you to challenge me.

[00:22:44] Like, I want you to like present, you know, when I present problems to you, I want you to help me solve 'em. But like, when I present strategies to you, I want you to like almost steelman them. I want you to take the opposite side sometimes. And so we get to kind of control how these AI [00:23:00] interact. But each lab is sort of dictating parts of that for our business and for life.

[00:23:05] So it matters for you, it matters for your kids like to know. What AI chat bots they're interacting with. And who's controlling those? So like if, you know, let's say TikTok, like if there's an AI in there, you can interact with, WhatsApp, Roblox Minecraft, like take your pick. It's gonna be in games, it's gonna be in social media channels.

[00:23:23] who's determining the behavior of the AI that your kids talk to all the time? Mm-hmm. so I don't know. I think like, we're not trying to solve this here. Like I don't even have like super deep insights per se, into like the personality choices. I see this as the domain of philosophers, sociologists, psychologists, lawyers, like technologists.

[00:23:45] Like there's a lot of different perspectives that need to be considered. But what we know, and what we talk about all the time in this podcast is the models are getting smarter. They're gonna get more human. Like these are just facts. and in many cases it is by design. The voice stuff we just talked about [00:24:00] matters here.

[00:24:00] 'cause the more human like they become, the more empathetic they're made on the back end. Then all of a sudden you start developing these deeper relationships. And I think, like for me, another key takeaway is like I get frustrated sometimes following in the AI bubble on Twitter X. because the technologist gets so caught up in whether something can or can't actually do some.

[00:24:25] So like is it conscious or not? Does it have empathy or not? does it actually think like we think, can it go through true reasoning? There was a paper over the weekend that was sort of getting a ton of run on X and it was from Apple, right? And it came as like the illusion of thinking. And so it was basically saying they're not actually reasoning that these reasoning models, it's, it's all a facade.

[00:24:49] They're not actually doing it. It breaks down if you give 'em these complex puzzles. And I was just like, I get it. Like one, it's Apple. So there's a part of me that's like, really Apple's the one telling us that models [00:25:00] can't do these things, that can't even fix Siri, but. Taking it for what it's worth, assuming these are brilliant AI researchers doing this thing, I'm not disputing that whatever their findings are may be true or not.

[00:25:12] All I'm saying is it doesn't matter. Like, so the technologists get lost in these debates about whether it can or can't do something and they, they lose sight of the fact that it can simulate things though, right? Like even if it isn't actually reasoning, it is producing a valuable output that impacts jobs.

[00:25:32] it simulates behaviors and emotions and actions at or above a human level, and it creates the perception of these abilities. So whether it can or can't do the thing, it really doesn't matter because we have to be humble enough to realize, like we don't even understand how the human brain is doing reasoning.

[00:25:49] And maybe it's not actually that different than the way we do reasoning, right? So I don't know, I kind of get annoyed with that stuff, but, so just to dive real quick into the actual [00:26:00] essay. So it says, we naturally am, am anthropomorphize objects around us. We name our cars or feel bad for a robot vacuum stuck under furniture.

[00:26:09] Actually, it's weird, not total a side note. The stuff happening in LA Yeah. Which is tragic. I was seeing the Waymo's on fire. 

[00:26:17] Mike Kaput: I was gonna send this to you this morning. There's a lot of commentary around that too, from the DI perspective. 

[00:26:22] Paul Roetzer: Yeah. There was this moment where I was like, ah, the poor cars. And I was like, it's a freaking car.

[00:26:26] Like, yes, it can drive itself, but like, and you immediately flip back to the humanity of what is going on there. And, but there is that second where you're like, oh, like I feel bad for the Waymo. It's like, no, it's just metal and computers. so anyway, so the article continues. My mom and I waved by to a Waymo the other day.

[00:26:47] It probably has something to do with how we're wired. The difference with Chad GPT isn't that human tendency itself. It's that this time it replies. A language model can answer back, it can recall what you told it mirror your tone. And often what [00:27:00] reads as empathy. Again, not real empathy, it doesn't feel anything, but it simulates it and that matters.

[00:27:06] Mm-hmm. For someone lonely or upset, that steady non-judgmental attention can feel like companionship, validation and being heard, which are real needs at scale. Though offloading more of the work of listening, soothing and affirming to systems that are infinitely patient and positive could change what we expect of each other.

[00:27:26] If we make withdrawing from messy, demanding human connections easier without thinking it through, there might be unintended consequences we don't know we're signing up for. So again, like takeaways for me, what can we do here? Understand that when we talk about AI models, there are actual abilities, it can actually do this thing.

[00:27:44] And then there are perceived capabilities, emotions, or behaviors. Um. Don't get caught up in the technical debates about is it conscious? Is it not conscious? Like we may never know, but if it feels conscious to people, does it really matter if it is or it is [00:28:00] not? If it actually is doing reasoning like the human brain, there'll be technical BA probably for the next 10 years about that.

[00:28:08] But does it sure appear to when we watch its thinking? Yes, it does. Does it do the work of people who have reasoning abilities? Yes, it does. Like so I think that's the main thing is like you just have to understand there's a difference between actual ability and simulation, but the simulating of the ability creates the perception that it actually has it, and that's really all that matters when we look at the economic impact and the impact on our lives and our own emotions.

[00:28:35] Mike Kaput: Yeah. I would also just encourage a healthy dose of humility as well, because if you're someone listening to this being like, I. You know, maybe you're of a certain age or a certain perspective and you say, well, no, of course I'm never gonna like fall for this and like, form a relationship. Right. Or, you know, use the term relationship loosely.

[00:28:50] I'm never gonna humanize ai. I think you should take a step back and just be aware, we all can fall for this, I guarantee you. 

[00:28:59] Paul Roetzer: Yeah. And it'll [00:29:00] just become natural over time. Yes. Like, I think to your point, like it just, yeah, humans adapt. and yes, some age groups, some people, regardless of age, you, you may just be stuck in your ways and you may not, but the vast majority of people will just evolve.

[00:29:17] Mm-hmm. They will, they, they, they will treat AI differently. And I get, like, I get asked sometimes when I go to talks like about the rights of ai. Like there are, there are people now who truly believe they're at the point where these things need rights. They need to be treated, you know, like humans. and you know, again, I think that'll become a bigger and bigger part of society.

[00:29:39] I don't. I don't judge anybody like I get it. It's, it's weird and it's hard and like there's no right answers right now, and a lot of the experts just can't agree on any of this stuff. Like, look at the Apple paper and you have this like, massive debate going on X all weekend of like, these guys are idiots, and it's just, [00:30:00] yeah.

[00:30:00] AI Continues to Impact Jobs

[00:30:00] Mike Kaput: all right. Well, our third big topic this week, we are again, kind of tracking some more call them warning signals that are kind of flashing about AI's impact on jobs. But not all of this is necessarily like negative news. But first up, the biggest kind of headline on this topic from the past week is that the media outlet, business Insider has laid off 21% of its staff.

[00:30:23] And AI was cited as a pretty big factor here because this move represents a major strategic pivot for the company. So CEO, Barbara Pang published a memo. In which she outlined the cuts and the company's plan moving forward. And what's notable about this is just how much AI was emphasized. So paying frame the layoffs as necessary for creating a leaner more future-proof newsroom.

[00:30:47] AI was critical to that vision. She emphasized that more than 70% of insider employees already used chat, GBT Enterprise. The goal is a hundred percent adoption. And then she outlined some other business factors that [00:31:00] were related as well to this pivot. But what people got hooked on was the AI messaging.

[00:31:05] the insiders, union called the timing tone deaf. They argued no technology can replace real journalists, and they blamed parent company Axel Springer for prioritizing profits over reporting. Now, kind of related to this, there's a reason that CEOs including business insiders, think they can run leaner operations by adopting more ai.

[00:31:28] because a couple new reports and studies from this past week seem to indicate that the data packs up that view. So first consultancy PWC released its 2025 Global AI Jobs Barometer Report. This analyzed almost a billion job ads from six continents, and they also used a wealth of other data to look at AI's impact so far on jobs, wages, and productivity.

[00:31:51] Now, this full report is well worth diving into like the help of Notebook lm, but the big takeaway here is they found that industries most exposed [00:32:00] to AI have seen revenue per employee grow three times faster than those not exposed to AI since the launch of chat GBT in late 2022, they also found that workers with AI skills now earn a 56% wage premium over their peers.

[00:32:16] And similar to this, a new working paper from the National Bureau of Economic Research finds that in one scenario that they modeled that they find more likely than others, AI could improve labor productivity by more than three x. However, according to the model that the researchers built. Those massive productivity gains could eventually come at a cost to workers.

[00:32:38] The research predicts that in this scenario, there's a also a 23% drop in employment as AI becomes better able to replace people. So Paul kind of zooming out here, we're basically tracking some version of these type of signals every week. Feels like, at least anecdotally, this is picking up speed.

[00:32:59] [00:33:00] Companies are more and more citing AI as a core job expectation and as a way for firms to get leaner and do more with less. I found the data pretty interesting. I it seems like in the short term you can massively boost employee productivity and revenue per employee, which is something we've commented on.

[00:33:18] Where do you see this standing as of this week in terms of AI's impact on jobs? 

[00:33:23] Paul Roetzer: it is interesting, Mike, that, you know, we've been talking about this for, I. I mean, intensely for probably the last year, but like the impact on jobs for a couple years and just wasn't, you weren't seeing the pickup. Yeah. I'm just glancing at our links for this topic and we've got 12 Yeah.

[00:33:42] Ish, from this week. So just, yeah, it's a small sample size, but every week we are, we are not intentionally putting AI in jobs as a topic every week. It is literally surfacing every week because we're starting to see so much [00:34:00] coverage of it. Yep. So many different reports and research studies and things like that.

[00:34:05] so a couple of notes here. the one, there was a, there was a post in March that we did talk about at the time that resurfaced, I think from a podcast maybe is where this link came up, the seven month rule. Yes. So I, I wanted to revisit this for a second, and I don't remember what episode it was on, but, we'll, we'll drop it in the show notes if we have that.

[00:34:26] so Beth Barnes is the CEO of Meter. It's an organization called Model Evaluation and Threat Research. And they came out with a study in, in March of this year that said, AI models today have a 50% chance of successfully completing a task that would take a expert human one hour. seven months ago, that number was roughly 30 minutes and seven months before that 15 minutes.

[00:34:51] So, Beth's team has been timing how long it takes skilled humans to complete projects of varying length, then seeing how AI [00:35:00] models perform on the same work. So in the summary, upfront summary of this measuring AI ability to complete long tasks, that was the name of the post, they said We propose measuring AI performance in terms of the length of tasks AI agents can complete.

[00:35:15] We show that this metric has been consistently, exponentially increasing over the past six years with a doubling time of around seven months. Extrapolate, extrapolating this trend predicts that in under a decade we will see AI agents that can independently complete a large fraction of software tests that currently take humans days or weeks.

[00:35:36] So they're basically looking out and saying like, okay, if it takes a human an hour, now it's gonna take, you know, 30 minutes, whatever in, in seven months. They're looking at it and saying like, every seven months it's doubling in its ability to do the human tasks, these long horizon tasks. So the labs have been aware of this now for a while.

[00:35:53] I think what's I think now happening is the business world is becoming aware of this. And so if you look at something [00:36:00] that takes a human, you know, an hour or two hours or whatever now, and then you look at the time it takes the ai, you know that in roughly seven months it's gonna be cut in half. That the AI is just gonna keep getting better and better at doing that thing.

[00:36:14] Yeah. and so that starts to, to really make an impact. We saw, I. Kind of a, you know, again, there's many supporting resources to this. We'll drop all these links, but there was an article in Business Insider about the big four consulting firms and AI threat to their jobs. So a couple of excerpts from that one.

[00:36:31] it said, yet AI could be posed to disrupt the business models of the big four organizational structure and employees day-to-day roles while driving opportunities for the mid-market. The big four advise companies on how to navigate change, but they could be among the most vulner vulnerable to AI themselves.

[00:36:47] Said Alan Patton, who until recently was a partner in PWCs financial Services division, the company that just did the study. You mentioned Mike, Patton, who's now the CEO of quota, a Google Cloud solutions consultancy, [00:37:00] told Business Insider. He's a firm believer that AI driven automation would bring major disruption to key service lines and drive a huge reduction in profits.

[00:37:08] I went on to say most structured data heavy tasks and audit, tax and strategic advisory. Will be automated within the next three to five years, eliminating about 50% of roles. There are already examples of AI solutions capable of performing 90% of the audit process. Patent said he went on to say, automation can mean me.

[00:37:27] clients increasingly question why they should pay consultants big money to give me an answer I can get instantly from a tool. on the positive front, Mike, you highlighted this already, the fearless Future 2025 global AI jobs parameter from pwc. I think there is this like silver lining of workers with AI skills command a 56% wage premium up 25% from last year.

[00:37:51] Like we're seeing that. Yeah, like I think that is the near term opportunity for people is like, go figure this stuff out and you can accelerate your [00:38:00] own career growth. I think a lot of ai, four organizations are gonna look at their employees and be willing to pay a premium because of how productive they can be, how creative, how innovative they can be.

[00:38:12] And then, one final note I'll add here is, Wade Foster, CEO of Zapier had a, a great post on X where he was talking about Zapier requiring AI fluency for all their new hires. Mm-hmm. And then he had a thread, we'll put this in. He actually had a chart he shared of kind of how they evaluate this, but how they're tracking it, he said they map across four levels.

[00:38:33] Unacceptable. This is like AI fluency, basically capable, adoptive and transformative. So unacceptable is they're resistant AI tools and skeptical of their value, meaning you're not getting hired here and you're not gonna keep your job here if you are in the unacceptable range. CAPABLE is using the most popular tools, likely under three months of usage.

[00:38:50] So they're kind of new to it. They, they're experimenting. Adoptive is, they're integrating AI into personal workflows. They're, tuning prompts, chaining models, and automating tasks to boost [00:39:00] efficiency. Then transformative is the sweet spot. Using AI to rethink strategy and offer user solutions that weren't possible two years ago.

[00:39:07] And then he shared even some of like the questions they're asking in interviews like marketing, how is AI changing? How you plan and execute campaigns? How do you use AI to personalize messaging, generate content, analyze performance? We're doing the same thing like in our interviews. This is the kind of stuff we're actually looking for.

[00:39:21] So, again, like takeaway here, like I always say, you, you can stand still or you can accelerate your AI literacy and capabilities. And if you do that, we can't promise you a certain future. Like it is still unknown what's gonna happen to your job or any of our jobs. But in the near term, you will have the greatest chance to figure out what happens next in your job and in your industry.

[00:39:44] 'cause you're going to understand the implications of AI and you're probably gonna make more money because organizations need that adaptive to transformative phase as the Zap, you know, Zapier seat I would call it. 

[00:39:55] Mike Kaput: Yeah. In a weird way, I think there is a silver lining of. Some [00:40:00] excitement here too, because when I hear all this stuff and just experiencing what we experienced in our work, there's nothing more exciting to me than someone being like, no, here's the exact roadmap to go be more successful, make more money, et cetera.

[00:40:13] Before you would probably just gonna be nebulously, like trying to figure out like, okay, how do I get to the next phase or move up the ladder or wait for that promotion. Like, this is really exciting. You have the roadmap right here. 

[00:40:24] Paul Roetzer: Yeah, and I think like, again, you know, we talk a lot about disruption, displacement under employment, unemployment, like those are very probable outcomes.

[00:40:34] Yeah. Like it is very probable that within the next three to five years, that is the reality for a lot of people. It is not given though, like it might not be May, maybe there is this insane emergence of like all these new roles really fast, like faster than I'm expecting it to happen. I don't have a crystal ball.

[00:40:51] I just look at the data. We, we spend a lot of time thinking about this. At the moment, the probability for me is it's probably gonna be a little painful [00:41:00] for a while. 

[00:41:00] Mike Kaput: Mm-hmm. 

[00:41:01] Paul Roetzer: Now, if, if that is the outcome, if you raced forward and became AI literate and drove like mastery of the tools and the knowledge around this, you have the greatest chance to get through the messy part.

[00:41:17] If the messy part never shows up, you're just gonna make more money in the process and be there for before everybody else gets there. Right. There's no downside to being the one who goes and solves this. To your point, Mike, in the near term, it's probably great for your career. In the long term, you're gonna figure out the next new business to build.

[00:41:36] You're gonna figure out the roles that are gonna remain in the company. You're gonna be a part of that conversation and that transformation. So like, that's why we always just challenge people. It doesn't matter when AGI arrives, if it arrives what we call it doesn't matter. Like what this expert says versus this expert.

[00:41:52] All that matters is what you can control, which is get better at this stuff every day. You know, improve your own comprehension and competency because [00:42:00] that is the best chance you have to be very valuable today and even more valuable tomorrow. 

[00:42:06] Mike Kaput: Alright, we've got a ton of interesting rapid fires this week, so let's dive in.

[00:42:11] OpenAI Court Ordered to Preserve All ChatGPT User Logs

[00:42:11] Mike Kaput: The first rapid fire we're covering right now is that OpenAI says it is now being forced to store deleted chat GPT conversations indefinitely due to a court order tied to its ongoing lawsuit with the New York Times. So previously the company kept deleted chats per its terms for like 30 days before purging them.

[00:42:31] But under this new order, that policy is on hold. So even user deleted or privacy protected chats. Must now be saved until further notice by the company. That potentially includes, in some cases, private, personal, or sensitive data. Now, this data will not be made public only. A small legal and security team inside OpenAI will have access strictly for purposes of managing it due to the ongoing litigation.

[00:42:58] Now, OpenAI is [00:43:00] pushing back really hard against this. They argue this order is unprecedented, sweeping, and a direct threat to user privacy. In court filings, OpenAI says the judge acted prematurely. Basically, a judge. They claim accepted speculative claims that some users may have used chat GPT to bypass, paywalls, and then deleted their tracks, which would impact the allegations in this case.

[00:43:22] However, until the court reverses this order, these conversations will continue to be stored, and that's kind of sparking a little panic among businesses and individuals who rely on Chad GBT for confidential tasks. Now, according to. The source is put out by OpenAI and others moving forward, enterprise licensed customers.

[00:43:41] And those with zero data retention agreements are not affected by this. But users with ChatGPT Free Plus or Pro are affected by this until this gets resolved. So Paul, it's definitely ongoing and developing here, but it seems like a pretty [00:44:00] immediately big deal for any company that needs assurances.

[00:44:04] Their data is being kept private under certain restriction or regulations by OpenAI. Now it doesn't apply to enterprise license customers. They seem like they'd have the most to worry about here. But if I'm a business leader with these kind of considerations, I'm probably keeping a close eye on what happens here.

[00:44:19] Don't you think? 

[00:44:20] Paul Roetzer: Yeah. I mean it doesn't apply to them yet, but this sort of shows that like legal, issues may override terms of use. Yeah, like if the courts decide they're illegal. So, I mean, it definitely is bothersome to OpenAI because Sam Altman tweeted, recently in the New York Times, asked the court to force us to not delete any user chats.

[00:44:41] We think this was an inappropriate request that sets a bad precedent. We are appealing the decision we'll fight any demand that compromises our user's privacy. This is a core principle he followed up with. We have been thinking recently about the need for something like quote unquote AI privilege. This really accelerates the need to have the conversation.

[00:44:58] In my opinion, [00:45:00] talking to an AI should be like talking to a lawyer or a doctor. I hope society will figure this out soon. He then shared a link to a OpenAI article about how we're responding New York Times data demands, and then he followed that up with, parent maybe spousal privilege is a better analogy.

[00:45:16] Hmm. So then the, the June 5th security, posting from OpenAI about the New York Times data demands started off with a, quick note from Brad lcap, the COO of OpenAI, and he said. Trust and privacy are at the core of our products. We give you tools to control your data, including easy opt-outs and per permanent removal of deleted chat, GPT chats and API content from OpenAI systems within 30 days.

[00:45:44] New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit, against us, which is retain consumer chat, GBT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users. [00:46:00] It abandons longstanding privacy norms and weakens privacy protections.

[00:46:04] We strongly believe this is an overreach by the New York Times. We're continuing to appeal this in order to keep so we can keep putting your trust and privacy first. So again, there's that. We talked earlier about the data security. even if you trust OpenAIt doesn't mean that the legal system trusts Right.

[00:46:19] OpenAI and like, so Yeah, and this, this probably then goes into the whole like, um. Part of that debate about like open source and like controlling your own models and having them, you know, on your own systems and, yeah, I would imagine this is part of that argument for why that's maybe better in some instances.

[00:46:41] AI Cybersecurity

[00:46:41] Mike Kaput: Next up, Google DeepMind has released a white paper detailing how it's making its Gemini 2.5 models more secure, specifically against a growing threat called indirect prompt injection. This is a kind of a attack that hides malicious instructions in everyday content like [00:47:00] emails or documents in order to trick AI agents that go review those emails or documents or whatever into leaking private data or misusing tools, so to defend against it.

[00:47:11] DeepMind published how they're using a multi-layered approach, grounded in one key tactic, automated red teaming, so their own AI agents simulate realistic attacks on Gemini to uncover weak spots before bad actors can. Now, while this doesn't totally solve AI specific cyber attacks like prompt injection, it does go a long way towards making Google's models quite a bit safer.

[00:47:36] But really the reason we kind of wanted to chat about this briefly is it points to a much larger issue, which is AI models and systems can be exploited in these unique ways outside of traditional cyber attacks. And even the smartest companies in the world that are building this stuff are trying to figure out how to prevent some of those attacks.

[00:47:56] And Paul, that seems like what's really important here [00:48:00] for AI forward business leaders to start understanding like what happens when your business becomes dependent on AI systems that can be exploited like this. Like what happens if your business, as we get more agentic, AI becomes dependent on AI workers that get knocked outta commission or exploited in this way.

[00:48:19] Lots and lots of question marks here. 

[00:48:22] Paul Roetzer: Yeah, this is a pretty deep topic on the surface. I, I can see like this report then some of these charts being used by like cybersecurity teams and enterprises to say why we can't use chat GPT. Like, it's just like steam, you don't even know what the problem is.

[00:48:37] Like these prompt injections. and I'm not dismissing at all that this is, I'm sure there's way more advanced things happening already, especially at the state level, government level where, yeah, espionage and cyber attacks are part of the arsenal. but without getting too much into that, it [00:49:00] does, Mike, to your point, bring more of the reality, which is.

[00:49:04] As all these companies start thinking about job displacement and like, maybe we don't need as many humans and we're just gonna use all these AI agents and they're gonna string together and they're gonna work with each other and they're gonna be connected to all our data. Mm-hmm. And it's gonna be amazing.

[00:49:16] and we're gonna have like 40% less people and then like, oh shit, Chad should be t just went down for 48 hours because of whatever. Right. We have no workers, we can't get anything done. That is, yeah. Like you almost need these fallback systems. And this is, I haven't heard anybody talking about this stuff.

[00:49:36] No. I've yet to be in a meeting with any organization where they're actually considering the possibility that they become dependent upon the AI agents and models and those models go down, power outage or cyber attack, or like whatever it is. So, yeah. I guess the takeaway on this one, Mike, is start doing contingency planning with your IT team, your legal team.

[00:49:58] Yeah. for [00:50:00] the event that your organization is dependent upon AI agents and digital coworkers, and they can't work. 

[00:50:08] Mike Kaput: Yeah. it seems increasingly too, like these AI systems, they're not just tools, right? Like, if our company at HubSpot went down, we'd be in a real pickle. We'd have a huge problem. Yes, we have been in a pickle.

[00:50:18] Huge problem, but could we do other work? Yes. This is more like, oh, power's out. Like internet's out. Yeah. This is like, you're increasingly, this is going to underlie everything, right? Yeah. 

[00:50:30] Paul Roetzer: And imagine if Mike, you've built a team of like, let's say it's not the entry level that gets sideswiped, let's say it's actually middle management or senior management mm-hmm.

[00:50:38] That are the most expensive workers. And you decide we can do this with a bunch of like, younger employees who just have AI models and they're trained to use these models and they're gonna do it. And then there comes a moment for whatever reason. Where they actually have to do it manually or analog and they can't go into the AI and ask it to do the thing.

[00:50:57] And they never had to do it without the ai and now they don't even know how to [00:51:00] do the thing. Yes, man, that's wild. Like, I, 

[00:51:03] Mike Kaput: I genuinely think that could happen where we become so dependent. 

[00:51:07] Paul Roetzer: Yeah. And I don't remember I said this on the podcast or if it was on like one or ask me anythings or something, but interestingly I was, I was talking to my wife about this stuff and my wife like, understands AI to the extent, like I've talked to her about it.

[00:51:20] she's an artist and it's not the thing she's like studying every day, but it's so fascinating. 'cause sometimes I'll just bounce things off of her and like get her perspective. She's like incredible insights on this stuff. and it's like, I was saying something about it was related to the, the 25% of entry level jobs, you know, going away kind of thing.

[00:51:39] Yeah. And she, and she said like, what happens if the system goes down 'cause of a power outage or something, and then there's no workers. And I was like, I. Oh my God. Like, this is like two weeks ago. So in some ways I'm actually echoing an insight my wife asked me that I hadn't actually like, sat really thought about.

[00:51:54] so yeah, it's wow. Yeah. So yeah, sorry if we just like [00:52:00] scared everybody into realizing like they need to be doing way more planning. So Yeah. As if you 

[00:52:03] Mike Kaput: didn't have enough to think about 

[00:52:04] Paul Roetzer: already, right? Yeah, yeah, 

[00:52:05] The AI Verification Gap

[00:52:05] Mike Kaput: yeah. Alright, next up noted tech commentator, Balaji Serena Vain, who is, I believe also the ex CTO at Coinbase, is sounding the alarm on what he calls AI's verification gap.

[00:52:18] So his idea here, which is an important one, is that, look, you can prompt AI really fast. You type it replies, but the issue comes with verifying that reply. That's slow. It's hard, it's usually manual, especially with text code or anything technical. So for instance, like with images and video, a human eye can spot errors in a flash.

[00:52:39] That's why AI excels. Generating visuals. But when the output is something like code or math or dense writing, verifying means reading deeply, checking sources, walking through the logic. It demands real expertise. In short, verifying does not really scale. So he kind of argues that we [00:53:00] turbocharged the generation side of ai, but we've neglected the discrimination side, the judgment.

[00:53:06] This makes AI look faster than it actually is because the hard work of verification still falls on humans. So his conclusion is, quote, the concept of verification as the bottleneck for AI users is under discussed. Now, Paul, I have to say I, this resonated really deeply with me. 'cause I feel this pain, this bottleneck like every day with something as simple as deep research.

[00:53:28] Yep. There is a huge gap between the number of deep research reports I can and want to run. I could queue up dozens of them right now that I am interested in. My ability to process and verify all that is really, really limited. So I could be using it way more than I already do if I was able to solve for AI verification.

[00:53:48] Paul Roetzer: Yeah, I'm a hundred percent with you on this. That's the immediate thing I thought of when I saw this and I saw hypotheses tweet about it. deep research is the best current example because you and I [00:54:00] both have a similar philosophy there. It's like, I could come up with 10 things. I want to do deep research on every day that I know it could do the deep research on, but I don't have the time to verify all the citations and like double check everything.

[00:54:16] So I've been thinking a lot about this because, again, I so many times like I'll do these conversations, I gotta ask questions. I don't remember where I said it. So if I said this on the podcast already, pardon the repetition. But one of the things I've been looking at for a couple years is. How to reinvent, analyst firms and research firms.

[00:54:36] Mm-hmm. That I thought that that was a, it was gonna become a pretty obsolete model the way it was being done. And, you know, this idea of do the research and six months later the report comes out kind of thing. And so Mike and I talk a lot about, like this real time research approach and like, how do we bring more relevant data to market faster?

[00:54:56] And deep research was one of those tools where it's like, oh man, here we go. Like this is, [00:55:00] this could be the foundation of a next generation research firm. My concern though is that you contribute to the AI slop that's being put out there. And so what's gonna happen is you're gonna have a whole bunch of people who aren't trained researchers, analysts, or journalists that just go and use these deep research tools to just pump out a bunch of crap that they haven't verified and may have incorrect facts, may have missed citations, maybe citing crappy websites that no one would ever cite.

[00:55:26] Like no real analyst, journalist, researcher would ever cite as a source. And so yes, you can do way more research infinitely more, 10 to a hundred times probably more research, but you still have to verify, you still have to stand behind what you're gonna publish. And so that's why to date, we aren't publishing a lot of the deep research that Mike and I do because we haven't, it hasn't achieved the threshold we would require of something we would put our names on, 

[00:55:54] Mike Kaput: right?

[00:55:55] Paul Roetzer: So now we're working on ways to like evolve that and create verification [00:56:00] systems so we can put out more real time research. But, that is the holdup. Now do I think that that's gonna not impact jobs? No. Like, I guess you could put out 10 times more research and maybe, you know, you don't, you don't reduce jobs, but, it is a major hold up that you still have to have the human in the loop.

[00:56:19] And strategy is the same way. Yeah, you can build great strategies, but like a human's has to verify and improve those things. So. Yeah, the verification gap I think is a very real thing. We think about it. I don't know that we've given it that name to it internally, but like I think about that every day of all the things we could be doing if we had resources dedicated to verify the outputs of the ai.

[00:56:42] Mike Kaput: Yeah. I almost wonder too, and won't spend too much time on this, but just the thought is like, does that become a really interesting career path and or skill? It's like even if people aren't, you know, world-class experts using the tools, do we need the verifiers to, you know, it's a way to kind of maybe position [00:57:00] yourself and, you know, in the AI first future, even if you're still getting, you know, still on kind of training wheels with like learning all the tools.

[00:57:07] Paul Roetzer: Yeah, I think it's what's happening with coding now with computer coding where a lot of the code is being written by the ai, but a human coder still needs to like verify it and then the more. Like the higher profile, higher risk, the output of that code is the more important the human in the loop becomes.

[00:57:24] Mm-hmm. So like if you're a research firm like us and part of your reputation, your brand is dependent upon people trusting the outputs from that firm. Right? You can't put out one thing that has err data in it. Like you have to stand behind every piece of data that comes out of there. And so I think that's, you know, again, that's why you build trust in media outlets or individual thought leaders or brands that, that yes, they're using ai, but they're, they're not getting rid of the people.

[00:57:55] The people are a critical component. It's just the AI may do more and more of the foundational work, but [00:58:00] the experts still have to be the ones that verify. So if you're using a false piece of data, it's on the human that put that thing out. So if Mike and I are gonna put our names on anything, if I'm gonna put the smarterX brand on something mm-hmm.

[00:58:11] It better meet the quality standards that we would require of purely human work. 

[00:58:19] How Does Claude 4 Think?

[00:58:19] Mike Kaput: All right. Next up. We first talked about a podcast episode, an episode of the Dwarkesh podcast to be precise on episode 1 49 of the AI Show. And in this episode, the Anthropic researcher Sholto Douglass and Trenton Bricken returned to the Dwarkesh Podcast to talk more about how AI thinks.

[00:58:40] Now in episode 1 49, we took kind of a piece of that, some comments they had about, about automation of white collar work and really dive deep into it. But we wanted to go even deeper into the other aspects of this conversation because it is really, really important. Because what they talked about is how AI thinks of what that means for model progress and [00:59:00] capabilities.

[00:59:00] So they basically talked quite a bit about the transformative impact of reinforcement learning in large language models, and talking about how reinforcement learning with verifiable rewards has finally led to models that can consistently outperform humans in narrow but complex domains. So they say this means AI agents can now complete expert level tasks if a reward function is reliable enough.

[00:59:24] And so far these successes seem to mostly be in math and programming, but the groundwork is being laid for more ambitious, long running agents in software engineering and beyond. Now they say the constraint is no longer intelligence anymore, it's scaffolding context and feedback. So Douglas and Bricken basically believe despite, you know, the fact it will take a little time that we're on track to see agents doing real end-to-end software work by years end, and they may even eventually be able to do a full day's work autonomously.

[00:59:57] Now Paul, I'll kinda let you take it from here. As you actually flagged this episode [01:00:00] internally for our team, as a must listen, what's important to pay attention to here? 

[01:00:05] Paul Roetzer: So Dwarkesh’s interviews are fantastic. I've said before on the show that they can get very technical. Mm-hmm. So what I would do though, is I would encourage you to listen to the full podcast if you want to truly understand how these models work.

[01:00:21] So the thing I flagged internally, and I think I shared in the exec AI newsletter, was, if you wanna understand how they work, why they can be misaligned, how the labs choose, what experiments, to run, why some industries are gonna take longer to be disrupted, how agents are evolving and how real they might be in the near future, how jobs are gonna be impacted.

[01:00:43] AGI timelines, like they get into a lot. Yeah. And they're very forthright in their thoughts. I, so again, it can be very technical. It's sometimes it's hard for me, honestly to like, evaluate how technical it is because I've been listening to this stuff for so long. Yeah. But even like a reward function, [01:01:00] it's just like, I kind of assume everybody knows what a reward function is.

[01:01:03] And that might be like you, you might need to. Listen while doing some searches to like, understand some fundamentals and actually for our, AI Academy, as we're making kind of updates and introducing this whole new approach to our learning journeys, I'm building an AI fundamentals course right now for this exact purpose.

[01:01:22] Yeah. So that everyone can understand this like, beginner level approach. So when you go listen to this, you already kind of get the fundamentals, like reward signals and things like that. But, it's incredible. Like, it, they're, they do a really good job of making everything approachable. So there's something that's a little too technical, just kind of like, move to the next thing.

[01:01:39] You'll get the gist of what they're trying to say. and then these are episodes are really valuable to me because it either verifies what we're thinking and saying, or maybe it challenges what we're thinking and saying. And, luckily for me, like pretty much everything they said is on track with what we're teaching through this podcast.

[01:01:58] And so it's a good like way [01:02:00] for us to vet, you know, make sure we're staying with. Our finger are the pulse of what's happening within these labs and what they're seeing and thinking. So yeah, it's a, it's just a really good episode for big picture understanding what's going on 

[01:02:10] Mike Kaput: and is valuable too, because once you kind of get beyond the hype and the figureheads at these companies, these, like researchers and engineers building this stuff, they'll just tell you where they think it's going with no varnish.

[01:02:22] Paul Roetzer: Yeah. And honestly, like philanthropic must not have guardrails around what their people are allowed to say. Like a lot of times some of these bigger labs or publicly traded companies, you know, like I've, I won't name names, but like in some of these big companies, you gotta go through like months of training before even allowed to speak publicly.

[01:02:40] That is not the case at philanthropic. Like, they're just, they're just letting these guys go and talk and say whatever they want and, ESH is a buddy of theirs, so they just like kind of talk and you're not gonna get that from some of the publicly traded labs. I. 

[01:02:55] New AGI Timelines

[01:02:55] Mike Kaput: So next up this past week we got more commentary around [01:03:00] AGI timelines and some are very bullish on how quickly we'll have artificial general intelligence.

[01:03:06] Some not so much. So first up, Sam Altman took the stage at Snowflake Summit 2025 to talk AGI. He waffled a bit on what AGI actually is. He said now it's a moving target. And he said that quote, mostly the question of what AGI is, doesn't matter. It is a term that people define differently. He also posited that if someone from 2020 were shown chat GPT today, most people quote, most people would say that's AGI for sure.

[01:03:36] Now, he did say for him AGI would be quote, a system that can either autonomously discover new science. I. Or be such an incredible tool to people that at a rate of scientific discovery in the world, like quadruples or something. He also emphasized he does not see AI slowing down at all and will continue along a quote, shockingly smooth, exponential curve of progress, which is going to enable quite breathtaking models in the [01:04:00] next year or two, enabling businesses to quote, just do things that totally were impossible with the previous generation of models.

[01:04:07] Now next you similar timing to this, Eric Jing, who's a former developer at Microsoft and the co-founder and CEO of Gens Spark, which is a $500 million generative AI startup, said he's already seeing AGI. He writes on X in a lengthy post that he believes we've already entered the era of AGI. And the consequences could be both thrilling and terrifying.

[01:04:29] He imagined the world where a conversational supercomputer smarter and faster than any human sits beside us at all times. And in that world, new college grads could be obsolete the day they graduate. White collar jobs could disappear on mass and are education systems he warns are not ready. Now he's not completely defeatist.

[01:04:49] His post also reads as just an urgent call to adapt and to use AI daily. Now last but not least, Dwarkesh Patels, who we just talked about in [01:05:00] response to the podcast we just discussed, really state counterargument to all this AGI hype. He writes that he doesn't believe AGI is as close as some experts, including guests on his show.

[01:05:12] Think. He argues that despite him spending hundreds of hours integrating AI into, say, his podcast workflow, he just doesn't see today's models improving like humans do. He says they can't learn from feedback over time, build context, or adapt organically. Instead, every session resets to square one, and he claims this is the reason why LMS haven't transformed white collar workflows at scale.

[01:05:37] He's also skeptical of aggressive timelines for AI doing agentic task, but he is optimistic that once continual learning like this is solved, even partially models could quickly become much, much, much more capable. He just thinks that will take a lot longer than some other people in the AI world. Now, Paul, did [01:06:00] anything jump out to you in this latest round of AGI speculation?

[01:06:03] Got a couple prominent voices with some counter, counterintuitive takeaways here. 

[01:06:09] Paul Roetzer: The Altman one, I just don't understand. So. He said mostly the question, what AGI is, doesn't matter. It is a term that people define differently. Okay. So it doesn't matter. And yet their entire company is based on achieving it.

[01:06:24] Yeah. He was fired over it. So I started listening to the Empire of ai, the Karen Howell book. Yeah. and literally the whole opening chapter is about him being fired on this exact topic. Like, because that is their mission. Their contract with Microsoft is dependent upon it. Their mission is literally, ensuring AGI, which they define it, does change how they define it.

[01:06:45] But they do have a definition, February, 2023, AI systems that are generally smarter than humans. And the whole mission of the organization is for AGI to benefit all of humanity. So to say, it doesn't matter, it is literally the foundation of everything they're doing, why the [01:07:00] company was created. Right. So it may have just been a poi poor choice of words, but he does waiver all the time on what it actually is.

[01:07:09] there's a December, 2024 Tech Crunch article that we talked about at the time that said, the two companies, Microsoft and OpenAI reportedly signed an agreement in 2023 saying, OpenAI has only achieved AGI when it develops AI systems that can generate at least 100 billion in profits. That's, I guess, one way to quantify it.

[01:07:28] In January, 2025, so just six months ago, Sam wrote a blog post called Reflections, which we talked about at the time, and he said, we started Open Amos nine years ago because we believe that AGI was possible and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial.

[01:07:45] We are now confident we know how to build AGI as we have traditionally understood it. So again, like it is literally the foundation of everything. They have. Their structure talks about, the board determining when AGI is attained. he had [01:08:00] a letter in March, 2025 to the LE to employees. We say we now see a way to AGI to directly empower everyone, the most capable tool in human history.

[01:08:08] We believe it's the best path forward. AGI should enable all of humanity to benefit each other. creating AGI I is our brick in the path of human progress and we can't wait to see what bricks you'll add to it. Like, I just don't understand. Right. Again, may, maybe it's poor messaging, but like you, you can't say it doesn't matter when you're entire organization is based on a single thing.

[01:08:29] Like, I feel like you need to be able to define that. in terms of the Dard Kesh one, I love the, the fact that he's willing to like take this alternative opinion and yes, he like studies the space. He meets with all these people. He hangs out with people within the AI labs. Like he has more access than most to understanding what's going on.

[01:08:51] And his basic argument, as you said, is this lack of continual learning, which is a hundred percent true. Yeah. Like that, that it's not a debate. it is a [01:09:00] valid point. the counter argument here, and so PE people don't understand this concept. Basically you train the model, you give it all the data, and then it's like fixed.

[01:09:08] Like that's it. So if a, if a model, let's say theoretically, GPT five was in training right now, and today was its final day of its training run. It's knowledge cuts off a June 9th, 2025. Then it knows nothing that happens beyond that moment. And then if you use it doesn't learn from that experience.

[01:09:27] It doesn't become better, right? It's not like continually adapting. That's the concept here. But these models now have tool use. So they can search the web, they can write code, they have memory. they have almost infinite knowledge up to that June 9th moment. Like they know more than any human about everything basically because they've read and consumed everything.

[01:09:50] they can string together agents that are experts in different things at superhuman speeds. You can run simulations to improve them. You can use reinforcement like. I [01:10:00] don't know that I fundamentally agree with what he describes as the barriers to this, like fast takeoff, but he makes really valid points.

[01:10:09] And I, you know, I think it's a worthwhile perspective. Like, I, like I said, I love reading these alternative perspectives that sort of challenge your thinking. and it's not like he's saying it's not gonna happen or the world isn't gonna change. He's just like, yeah, it might just take a couple more years in these ing.

[01:10:25] Mike Kaput: Right, right. Yeah. No point is he like, oh, this is complete nonsense. Yeah. 

[01:10:29] Paul Roetzer: So whether it's one year, three year, five years, like it's changing everything in the next decade. And that's pretty short time period in the grand scheme of things. So I, good perspective Worth a read. It doesn't change anything we're doing at our organization or anything.

[01:10:47] I would suggest other organizations do. 

[01:10:50] Reddit v. Anthropic

[01:10:50] Mike Kaput: Next step. Reddit has filed a lawsuit against Anthropic. They're accusing Anthropic of illegally scraping Reddit to train Claude. The [01:11:00] suit filed in San Francisco alleges Anthropic bots access Reddit over a hundred thousand times after claiming to have stopped crawling the platform in mid 2024.

[01:11:10] Reddit says this, scraping violated its terms of service and monetized user content without consent. Now, unlike other AI lawsuits, this isn't necessarily about copyright infringement. Instead, Reddit argues Anthropic unfairly exploited a rich archive of user conversations to build a commercial product while Reddit notably has signed paid licensing deals with companies like Google and OpenAI to train AI models legally.

[01:11:39] Now, Anthropic is disputing these claims. Paul, this one's a little different from the typical AI copyright case, but it seems like, unfortunately the theme is the same. An AI lab allegedly scraped and used content from a website that it didn't have permission to use. So I guess at this point, I guess I like have to [01:12:00] ask, like even with the lawsuits, even with things indicating to models, they're not allowed to scrape your site.

[01:12:05] Like, can we trust at all that these companies aren't still doing this stuff? 

[01:12:09] Paul Roetzer: I doubt it. I'm not a lawyer. Took a couple law classes in college, thought about becoming a lawyer for about three days. actually really enjoyed this law about it anyway. this seems like we, we've already seen instances where discovery has been permitted, that cases have moved to the point where the plaintiff is allowed to do discovery on the models.

[01:12:33] I believe that happened with OpenAI already. So this seems like Anthropic knows if they did or didn't. if it seems like they can't. Win this case and it leads to discovery where the plaintiff is gonna be allowed to examine the sources of data that went into the model and Anthropic knows the sources are in there.

[01:12:52] Then they're paying their 50, a hundred million dollars fine, and then they're doing a licensing deal and we're moving on. If they didn't do it, then they got [01:13:00] nothing to worry about. I don't know if they did or didn't. It wouldn't surprise me if information was consumed by the models that shouldn't have been just based on previous precedent from other labs.

[01:13:12] So stay tuned. There's a chance we may never hear more about this because it just paid off and we move on with our lives, and if it is, then they most likely had it and don't want to give access to their training data. 

[01:13:25] Sharing in NotebookLM

[01:13:25] Mike Kaput: Google's AI powered research Assistant Notebook. LM just got a major upgrade. You can now share your notebooks publicly with a single link now.

[01:13:34] Until now, users could only share notebooks privately with individuals. But with this update, anyone can publish a notebook, whether it's a study guide, product manual, nonprofit overview of whatever, and let others explore it interactively. viewers cannot edit the source material, but they can ask questions, generate summaries, or create content like Epic Qs and briefings.

[01:13:58] So Paul, I for one, am [01:14:00] very, very excited about this. It's a small thing, but definitely important. We're increasingly using notebooks and l Notebook, LM to accelerate how we learn and use knowledge as a team. As you and I discovered this morning, this is not yet in our business account, which is slightly frustrating since we built a notebook LM for this episode that we wanted to use to share with everybody.

[01:14:23] Paul Roetzer: Yeah, so last week Mike and I were talking and we were like, yeah, we should experiment and like put all the show notes. 'cause we always say like, check the show notes, right? And the show notes are easy to find. Like we, we put 'em on the post and everything, but we thought it might be cool if you could interact with the show notes.

[01:14:36] So we're like, ah, let's create a notebook, lm, and. We'll pilot it and see if it works. And if it does, maybe we'll, we'll share a notebook with our audience. And then as Mike indicated, he created it, he shared it with me and I was like, oh, this is great. I can't do anything with it. Like, I can chat with it. I can't, I can't create study guides, FAQs, anything like that.

[01:14:54] So before we get on the podcast, he's like, oh, let me update your settings. So it's like, okay, now I can do it, but [01:15:00] let me test this in my personal account. Oh yeah, it doesn't work. So we only realized you can only share notebooks with each other. Still in our Google Workspace account, we can't share it publicly, and we don't wanna necessarily build this in our personal accounts to then share it publicly, which would be the option.

[01:15:17] So yes. great to know. This is a feature. It is, I guess, a lesson in like Google has, very jagged rollouts of their features and products. Like this is a constant guessing game for us of like. That's awesome. Oh wait, we can't do that In our business accounts, this is a very common recurring theme that Google rolls stuff out to personal accounts that are not in the business accounts.

[01:15:48] OpenAI does the same thing, but it's on a much, much shorter horizon. Like usually it's OpenAI did a thing and then like a week later it's in Teams and enterprise Google. Yeah. It could be months or never. Like you just don't know. And it's, it [01:16:00] is very frustrating as a Google workspace customer that like you have no idea.

[01:16:05] Yep. And it's not communicated to you. 

[01:16:06] Mike Kaput: Yep. Well, like we talked about, this is the importance of literally just going in and kicking the tires of these tools because no matter what we say or anyone else posts, just go in and try for yourself. Yeah. What's available, because you won't know for sure until you actually do that.

[01:16:22] No one's, very few people are gonna like publish documentation that's useful on this stuff. 

[01:16:26] Paul Roetzer: Yeah, and I, on that same note again, and not to harp on Google here, but like, this is my major frustration with using Gemini is we use custom GPTs all the time. Yeah. And I still can't publicly share a gem. I create, I can't even share a gem with my team.

[01:16:40] So like I'm trying to use Gem. I more, 'cause I actually really like the model, but it becomes, it breaks down for me because I can't, I can't share these things. So yeah, drives me nuts. 

[01:16:51] WPP Open Intelligence

[01:16:51] Mike Kaput: All right. A couple other topics here before we wrap up this week. So, WPP Media has launched Open Intelligence, a [01:17:00] sweeping new AI driven marketing system built around what they call the first ever large marketing model.

[01:17:07] Now, unlike the language models behind tools like Chat, GPT, this one they say, is Purpose built for advertising. Since they are an advertising agency, it is trained on trillions of real world data signals. Everything from purchase behavior to cultural context across 350 partners in 75 markets. Not to mention it doesn't depend on user identifiers.

[01:17:31] WPP is pitching this as what they call intelligence beyond identity. This is a shift away from cookie based tracking. The idea is to basically give clients their own predictive AI model, built on a mix of public and first party data, something that can forecast behavior, optimize ad spend, and adapt to a world where it's harder and harder to track people based on user identifiers.

[01:17:59] It is [01:18:00] also a full stack solution. It's connected to platforms like TikTok, meta and Google, and it is built for secure collaboration using some federated data technology that they have baked in. So that means clients never have to move or expose their raw data. So Paul, this idea of a large marketing model is pretty interesting framing.

[01:18:20] From what I'm reading about this, it kind of sounds a bit like WPP is becoming a model provider. They're basically granting clients access to these bespoke AI models. They're building on top of this foundation models. Like what are some of the implications here for agencies? 

[01:18:38] Paul Roetzer: Yeah, it is an interesting play.

[01:18:39] Maybe, maybe that is the future of agencies. I don't, I don't know. you know, I think as we heard about earlier with like the big four consulting firms, the big agencies are probably in similar boats. It's challenging. Market profits are probably being threatened by, pricing pressures. You know, people want things done faster, cheaper.

[01:18:57] I don't know. Like I, I would love to [01:19:00] see this thing at work, honestly. So, right. I've told this story before, but like anybody who's new to the podcast, this is how it all started for me. So back in 2011 when I started researching aIt was actually for one specific use case, which was what I was calling a marketing intelligence engine that would largely automate strategy.

[01:19:18] It would consume data on all previous campaigns, it would run predictive models. It would take in, you know, ideally anonymized data. So imagine you're like HubSpot and you have all this data of, you know, potentially millions or billions of campaigns that have been run and that you could take that data and predict what to do next.

[01:19:36] Like say, Hey, I'm in retail and I wanna achieve this goal in terms of customer retention. Like what should I do? And it could go and analyze a million customer retention programs and then like, predict for you what to do next, or ad spend or you know, email pro, whatever it was. So my theory back in 2011 was, well, this'll have to happen.

[01:19:56] Like someone's going to build this. And then I, you know, quickly realized no one [01:20:00] was building it and no one in marketing was even thinking about this stuff. It seemed at that time. And that's what led to me eventually writing about the Marketing Intelligence engine in 2014, which then became the impetus to build Marketing AI Institute.

[01:20:12] So like. As soon as I see anybody who seems to be approaching this idea of like some form of intelligence engine, my ears sort of perk up. Yeah. I don't know if this is anything close to what I was originally envisioning, but I'm definitely intrigued by it and I would love to kind of see this at some point.

[01:20:30] Google Portraits

[01:20:30] Mike Kaput: Our last topic, this week, Google has just launched a new experiment called Portraits. This is an AI experience that lets you have interactive conversations with digital versions of real world experts. They're kicking things off by featuring one of these portraits with Leadership coach and the author of Radical Candor, Kim Scott.

[01:20:50] So instead of generic chatbot answers, you basically can get a conversation and coaching inspired directly by Kim's actual work. In this case, her avatar [01:21:00] speaks in her voice, draws from her real content and responds to your questions using Google's Gemini model. Now, the experts themselves are part of this process.

[01:21:09] They contribute their own material. They approve the avatar's tone. They guide how the AI should respond. Now it's still early. This is an experiment. Google is collecting feedback to improve this over time. It is only available in the US and only for users 18 and up. Now, Paul, despite the fact this sounds just like kind of a fun experiment right now from Google.

[01:21:31] The moment I saw this, I couldn't help but think about the implications for like online education, learning, coaching, like if these worked really well, I'd almost want one for every notable expert out there who I follow, or the top people in the space I'm interested in like learning 

[01:21:50] Paul Roetzer: about. Yeah, I, man, I feel like we could spend some time on this one.

[01:21:54] So my first take is, this is infinitely doable, like [01:22:00] I think within a year or so. Is this in their like labs or studio? Is that where they're testing this? It's in, I think it's 

[01:22:05] Mike Kaput: actually, yeah, it's in Labs. New 

[01:22:07] Paul Roetzer: Experi and Google Labs. Yep. Yeah. So they have a history of like. When it's in labs, it's, it's not a fully baked product, but it's pretty close.

[01:22:14] Yeah. And we, you'll usually see within six months to 12 months if it's viable, that thing is released. So the fact that they've done this, which means they've done it internally already, and now we're seeing the first public facing sort of MVP here. so let's assume within 12 months to 18 months this is doable.

[01:22:33] Someone has built this at Y Combinator, like someone's built the tech now where you can easily turn yourself into one of these things. or you can pay for access to people who've licensed their likeness to be one of these things. I think Facebook was even going down this path with like, they were celebrity avatars and stuff.

[01:22:55] Yeah. So it's interesting, like, I don't know, like the [01:23:00] first name that came as deas. I obviously cannot call up Demis Hassabis and ask some questions about ai. I would love to ask de Ava questions about, I have a million of them. Would I pay. For access to an avatar of Demis to like talk to about ai? I don't know, like if I take any of my favorite authors, like would I pay for access to a digital version of them that I know may be hallucinating and is just like trained on some of their data?

[01:23:29] I don't know. Like I'm not sure. I'm sure there's an audience of people who would Right, right. Say Taylor Swift, say Taylor Swift agrees to like, build one of these things. Would Taylor Swift fans pay to talk to Taylor? I'm guessing yes. Like I would think that that's probably a thing. Yeah. And then the other side is like, would you allow yourself to be turned into one?

[01:23:46] So if you're a thought leader, a podcast or an author, whatever, an entrepreneur, would you allow yourself, would you as a brand allow your executives to be turned into them? I don't know. Right. I mean, it presents all kinds of interesting questions, but I would [01:24:00] assume this is sort of an inevitable. There's a market for this for sure.

[01:24:03] Yeah. How quickly it played out. I don't know. 

[01:24:05] Mike Kaput: Yeah. I wonder where that line is between, in certain scenarios I could see us adding a ton of value and other scenarios I could see it really watering down the value of the personal brand too. 

[01:24:14] Paul Roetzer: Yeah. I like, so my initial reaction is like, I have no interest in being one of these.

[01:24:19] Right? Like, if there was a market for people that wanted to talk to me as an AI avatar, I don't, I don't think that that's something I would personally be interested in doing. Yeah. Would I pay for one? Probably not, but like, I don't know. I, this is an interesting one. Yeah. I also wonder, ask yourselves as listeners, like these are the kind, the questions we may have to deal with.

[01:24:39] Mike Kaput: Yeah. I also wondered too, I don't, I have no idea what the strategy would be here and haven't really thought through it, but also if you see a stable of all these as part of your Gemini subscription, right. Yeah. That maybe that's interesting to people who might either switch or like consider paying for Gemini.

[01:24:54] I have no idea. 

[01:24:55] Paul Roetzer: Yeah. Yeah. I don't know. I have to, I'd have to think about this one a little bit more, but. [01:25:00] Is, is it interesting? And I'm sure these are actually gonna be everywhere. Yeah. Like if you think about like 11 labs and hey Gen for sure, Google and OpenAI will probably get into this world. Facebook character.ai.

[01:25:09] Like this is sort of the 

[01:25:10] Mike Kaput: inevitable thing all, all while saying, we don't want you to form too close of relationships with ai. Yeah. 

[01:25:15] Paul Roetzer: For, oh, this is quick side note to end, but like, have you seen the VO three videos, the vlogs that are being created by historical characters? Oh gosh. 

[01:25:25] Mike Kaput: Didn't they do one with like bible stories and stuff?

[01:25:27] It was done with Moses, but 

[01:25:28] Paul Roetzer: there's, there's one I saw with Bigfoot where he's, oh my gosh. Gosh. So if you, if as Melissa, if you haven't seen this yet, I don't use TikTok anymore, but I know it like, sort of had its origins on TikTok, so I'm seeing it more on X where people are sharing stuff from TikTok, but people are using VO three to create these like super realistic.

[01:25:46] Vlogs, like YouTubers that are, I saw one with Storm troopers. Oh my god, you love that one. So it's like storm troopers in the middle of battles and he's like vlogging for YouTube about what's going on and yelling at the other storm Trooper, I saw one with Bigfoot [01:26:00] where he is trying to hide from humans.

[01:26:02] It's, it's amazing. There's ones like historical stuff people are creating. That's so cool. Oh, and Moses was hilarious. He's like, we're at the sea. I dunno what we're doing now. We forgot. Like, and then he's like walking through the water. You go. It's so, that's amazing. So yeah, if you want a lighter side of ai, go search for like the, the vloggers that are using VO three.

[01:26:22] It's so, 

[01:26:24] Mike Kaput: alright, Paul, as always, thanks for unpacking another very, very busy week in ai. 

[01:26:30] Paul Roetzer: All right, thanks Mike. We will talk with everyone next week and oh, we will have, I gotta double check this, but we will likely have two episodes next week. 'cause we have an intro to AI class on Tuesday. So when you hear this, it's probably gonna, it might be too late to join our intro to AI class.

[01:26:45] But we will turn that intra to AI class into one of those AI answers, episodes. And so the following week, what would that be like? The week of the 16th? 17th, yep. Yeah, we will likely have a second episode. I have to, I'm traveling next week, so I have to [01:27:00] double check my schedule. But yeah, we should, we should probably have two episodes coming up next week, so our weekly on Tuesday, like always.

[01:27:06] And then an AI answers episode, the following. Alright, thanks Mike. Thanks Paul. Thanks for listening to the Artificial Intelligence Show. Visit smarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events.

[01:27:28] Take in online AI courses and earn professional certificates from our AI Academy and engaged in the Marketing AI Institute Slack community. Until next time, stay curious and explore ai.

Recent Posts

ChatGPT Now Connects to Your Business Tools

Mike Kaput | June 10, 2025

OpenAI just made some big updates to ChatGPT. Especially if you're a business user.

Business Insider’s Layoffs Signal a Harsh Truth: AI Efficiency Is Here

Mike Kaput | June 10, 2025

Business Insider just laid off 21% of its staff—and AI was a major factor.

ChatGPT Feels More Human Than Ever. And It's Causing Concern

Mike Kaput | June 10, 2025

OpenAI is grappling with what it means when people begin forming close emotional relationships with AI.