
[The AI Show Episode 155]: The New Jobs AI Will Create, Amazon CEO: AI Will Cut Jobs, Your Brain on ChatGPT, Possible OpenAI-Microsoft Breakup & Veo 3 IP Issues


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


AI is rewriting the org chart—just ask Amazon’s CEO.

This week, Paul and Mike unpack the New York Times’ list of 22 upcoming roles that AI will create (from “AI auditors” to “personality directors”), weigh Andy Jassy’s memo that generative AI will mean leaner teams, and dissect the viral MIT study about what ChatGPT might be doing to your brain. Rapid-fire hits include Meta’s billion-dollar talent raid, Apple’s rumored Perplexity bid, and fresh OpenAI-Microsoft friction. Listen or watch below, and grab the full show notes and transcript.


Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:05:41 — The New Jobs AI Could Create

00:26:11 — Amazon CEO on AI Job Disruption and AI Underemployment

00:39:28 — Your Brain on ChatGPT

00:52:22 — Fallout from the Meta / Scale AI Deal

00:55:27 — Meta and Apple AI Talent and Acquisition Search

01:05:59 — The OpenAI / Microsoft Relationship Is Getting Tense

01:08:53 — Veo 3’s IP Issues

01:12:09 — HubSpot CEO Weighs In on AI’s SEO Impact

01:15:29 — The Pope Takes on AI

01:18:39 — AI Product and Funding Updates

Summary:

The New Jobs AI Could Create

We are finally starting to see the beginnings of some serious work being done to determine which jobs (and skills) AI will actually create, not just destroy or devalue.

The New York Times has just published an in-depth report from a former editorial director of Wired magazine called “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.”

In it, Robert Capps lays out three major arenas where humans will stay essential: trust, integration, and taste.

Trust is about accountability. That’s where new roles like AI auditors, ethics officers, and “trust directors” come in—professionals who can explain, verify, and take responsibility for what the machine does.

Integration is technical. It includes AI plumbers, trainers, and assessors: people who understand both the tech and the business. These folks decide which models to use, fine-tune them with company data, and even shape the AI’s personality.

Then there’s taste. In a world where AI can generate anything, what really matters is knowing what’s good. Expect more “designers” in unexpected fields, where they're not just making things, but choosing wisely from infinite options.

At the same time, the nonprofit 80,000 Hours has published a guide called “How not to lose your job to AI,” which deep dives into the most future-proof skills you can cultivate in the age of AI.

The most future-proof skills fall into four categories: things AI can’t easily do, like long-term planning or physical tasks; skills needed to deploy and manage AI systems; outputs society needs much more of, like healthcare and infrastructure; and rare expertise that’s hard to replicate.

The takeaway? Don’t avoid AI, but rather ride the wave. Use AI to learn faster, scale your impact, and build skills AI makes more valuable. And maybe skip that decade-long training program unless you’re sure it’ll keep pace with the tech.

Amazon CEO on AI Job Disruption and AI Underemployment

Amazon is now joining the chorus of companies saying the quiet part out loud: AI is going to cut jobs. 

In a memo to employees, CEO Andy Jassy confirmed that as the company rolls out more AI tools and agents, it expects to need “fewer people doing some of the jobs that are being done today.” 

The shift is framed as an efficiency gain—not a mass layoff, but a rebalancing toward different kinds of roles. He writes:

“Today, we have over 1,000 Generative AI services and applications in progress or built, but at our scale, that’s a small fraction of what we will ultimately build. We’re going to lean in further in the coming months. We’re going to make it much easier to build agents, and then build (or partner) on several new agents across all of our business units and G&A areas.

As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.”

He encourages employees to “get more done with scrappier teams” and to become “conversant in AI” if they want to stay relevant.

Your Brain on ChatGPT

A major new study from MIT has taken a hard look at what ChatGPT might be doing to your brain.

Researchers compared three groups: one using ChatGPT to write essays, one using search engines, and one using only their own memory. They tracked brain activity and analyzed the essays with AI and human judges.

The main finding? Using ChatGPT led to the lowest cognitive engagement. Brain scans showed that participants relying on AI had significantly weaker neural connectivity across key areas responsible for focus, memory, and decision-making. 

Their essays were also more uniform and less original—and participants were far less likely to remember or quote what they wrote just minutes earlier.

When those same participants were later asked to write without AI, their brain activity didn’t bounce back fully.

Meanwhile, those who started without AI and later switched to using it showed more active, engaged brains — suggesting it’s better to learn first, then augment.


This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.


This episode is also brought to you by our upcoming AI Literacy webinars.

As part of the AI Literacy Project, we’re offering free resources and learning experiences to help you stay ahead. We’ve got one more live session coming up in June—check it out here.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: The majority of business professionals and leaders don't understand what AI is capable of today. So it becomes very abstract for them to envision roles, skills, and traits that will be difficult for the AI to do in the future. So this base premise that, like, well, we just gotta figure out what the AI can't do? Well, most people aren't capable of doing that.

[00:00:18] Like, yeah, we think about this stuff all the time, and sometimes I struggle to think about what it can't do. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host.

[00:00:39] Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. [00:01:00]

[00:01:02] Welcome to episode 155 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host as always, Mike Kaput. We are recording Monday, June 23rd, 10:30 AM Eastern time. There was a lot to talk about last week related to jobs and CEO memos and acquisition attempts. Like, it, it was kind of like a soap opera-esque week in AI last week.

[00:01:32] So, no major model news, I don't think, Mike, last week, but we have a lot to cover when it comes to, like, what is just becoming a, a pretty crazy period in AI, with the efforts by all these labs to drive acquisitions of talent and of companies. It, it's just kind of crazy. So we're gonna do our best to unpack all that.

[00:01:58] Give you a little background [00:02:00] on some of the people that are now in the AI news that maybe you haven't heard of before, or maybe some names that we haven't talked about too much on the podcast, but we'll do our best to provide some perspective because I think a lot of these, these people matter. The companies that these labs are going after matter.

[00:02:17] And we'll try and help you understand what is going on. I know when I was preparing this morning, I was like, geez. Oh man. Like, just digging back into like, trying to explain who these people are and why they're significant and the different relationships they have going back over the last 15 years, and who knows who.

[00:02:34] It's pretty wild. Okay, so with all that, this episode is brought to us by MAICON 2025. This is our flagship in-person event. This is part of Marketing AI Institute's event portfolio. So this is happening October 14th to the 16th in Cleveland. Again, this is the Marketing AI Conference. I started this event in 2019.

[00:02:54] So Marketing AI Institute, I created in 2016. And then, [00:03:00] the Marketing AI Conference, or MAICON, was our first big flagship event that we launched in 2019. So it is back for, its what, my sixth year? Sixth, sixth annual. Yep. Minus one year in the middle there for COVID. But we are back. We'll be in Cleveland at the Cleveland Convention Center, right across from the Rock & Roll Hall of Fame and Lake Erie and Cleveland Browns Stadium, at least for the time being.

[00:03:23] We'll see if the Browns' stadium gets moved in the next couple years. But you can check it out at MAICON.ai. It's M-A-I-C-O-N dot ai. It is a beautiful time to be in Cleveland. I've said this before, I think we were talking about this: my absolute favorite time in Cleveland is fall. So if you haven't been to Cleveland during the fall, it's an amazing time to come and visit.

[00:03:44] so you can go learn about the agenda, the speaker lineup. There's a good portion of it already live. There's still some big announcements to be made about some keynotes and other featured main stage talks, so there's more to come. You can go check that out. Rates go up at the end of each month, so now's a great time [00:04:00] to get in before the next rate increase.

[00:04:02] So again, go to MAICON.ai, that is M-A-I-C-O-N dot AI. Join me and Mike and the rest of our SmarterX and Marketing AI Institute team in Cleveland, along with about 1,500 or so of your peers. Also, this episode is brought to us by our AI Literacy Project, which is a collection of resources and learning experiences where we're trying to accelerate AI literacy. And a couple free upcoming events to note related to the literacy project.

[00:04:29] We've got the AI deep dive webinar that I'm hosting on, I guess this is coming up on Wednesday, June 25th. So this is Google Gemini Deep Research for Beginners. I'm gonna walk through a research project that I actually did for the podcast and show how it worked, show some of the features of deep research.

[00:04:47] So if you haven't done a deep research project yet, this is a great kind of intro for that. And then our next Intro to AI class, which we do every month, is coming up on July 9th. That will be the 49th edition of [00:05:00] Intro to AI. We've had over 35,000 people register for that series since 2021. Hard to believe.

[00:05:05] We've been doing that for almost four years now. But that's coming up on July 9th. So you can find links to both of these in the show notes. So again, we've got the AI deep dive on June 25th, and then on July 9th we've got Intro to AI. And then the next Scaling AI class is gonna be in August. We'll, we'll share that date on a future episode.

[00:05:25] All right, Mike, let's, let's get started with the job stuff. And this is actually, I think we're gonna start on a positive note. There's a great New York Times article that we're gonna walk through that I think really helps to set the stage for some of the things that might be possible. 

[00:05:41] The New Jobs AI Could Create

[00:05:41] Mike Kaput: Yeah, for once Paul, we've got positive, not negative job news.

[00:05:46] To kick things off, we're finally starting to see the beginnings of some serious work being done to determine which jobs and skills, you know, AI will actually create or that will be valuable in the age of ai. Not [00:06:00] just which jobs and skills will be destroyed or devalued. So like you mentioned, first up.

[00:06:05] is an in-depth report in the New York Times this past week from a former editorial director of Wired Magazine, and it's called "A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You." So in it, the author Robert Capps lays out three major arenas where humans will remain essential in the age of AI.

[00:06:25] And these three areas are trust, integration, and taste. So first, trust is, in his words, about accountability. As AI starts doing things like writing legal contracts or corporate reports, someone basically has to be responsible for what's inside those, and that's where these new roles could come in. He names things like AI auditors, ethics officers, and even something called a, quote, trust director.

[00:06:48] These are basically professionals who can explain, verify, and take responsibility for what a machine does. Now the second category, integration, is technical. This basically [00:07:00] includes, what do you call it, AI plumbers, trainers, assessors, people who understand both the technology and the business in which it's being used.

[00:07:08] So these folks would decide which models to use. They'd fine-tune them with company data, and they might even shape the AI's personality. And then finally, there's taste. So in a world where AI can generate anything, what really matters is actually knowing what's good. So you can expect to see more designers in kind of unexpected fields.

[00:07:30] So they're not just making things, but they're helping brands or companies in a variety of fields to choose wisely from infinite AI-generated options. Now, at the same time, the nonprofit 80,000 Hours, which we've mentioned several times on this podcast, has published a guide called "How Not to Lose Your Job to AI," which deep dives into the most future-proof skills you can cultivate.

[00:07:54] Amidst kind of the AI disruption that's coming. And so the way they categorize this is [00:08:00] these future-proof skills fall into kinda four big buckets. So there's things AI can't easily do, like long-term planning or physical tasks. There's skills needed to deploy and manage AI systems. There's outputs that society needs much more of like healthcare and infrastructure.

[00:08:18] And then there's rare expertise that's hard to replicate. So specific high-leverage skills they suggest focusing on include AI deployment, leadership, judgment, communications, and hands-on technical trades like data center construction. The takeaway is basically don't avoid AI, but rather ride the wave.

[00:08:39] Use it to learn faster, scale your impact, and build skills that AI actually makes more valuable. So that whole report's worth a read as well. Paul, kind of to kick things off here, I found it refreshing that we're getting some very real conversation about this. I mean, so many people say that AI will create new jobs, but [00:09:00] like we've talked about, there's very few that are giving in-depth answers about what these jobs could actually look like.

[00:09:06] what did you think of some of the roles and skills they're predicting in these two pieces?

[00:09:12] Paul Roetzer: I was surprised actually how much I enjoyed the New York Times article when I first saw it. And I think when we first put it in the sandbox for a topic for this week in, in the subject line, like 22 new jobs, I would, I just kind of like didn't blow it off, but I just sort of set it aside to read it later.

[00:09:29] and then when you put the, you know, curation together of like recommended main topics and things I look at on Sunday night and I was like, I don't know. And then I dug into that article and I was like, oh, this is actually really good. Yeah, so I, I, I'll kind of unpack a little bit and go through some of these roles that you highlighted, Mike, and share a little perspective.

[00:09:49] 'Cause I think this is super helpful for people as they start to envision how this is gonna impact them and start to maybe think about how their own roles may evolve. So in [00:10:00] the article, you know, it starts off with, it's already clear that AI is more than capable of handling many human tasks. But in the real world, our jobs are about much more than the sum of our tasks.

[00:10:09] They're about contributing our labor to a group of other humans, our bosses, colleagues, who can understand us, interact with us, and hold us accountable in ways that don't easily transfer to machines. So I thought that was like a really nice, broad perspective to start off.

[00:10:23] Mike Kaput: Yeah. 

[00:10:24] Paul Roetzer: And then the author said, it's not just a question of where humans want ai, but also where does AI want humans?

[00:10:29] And then the areas you had highlighted: trust, integration, and taste. Now I will say that the article leans very, very heavily on a professor at New York University's Stern School of Business who studies the economic consequences of AI, named Robert Seamans. So there's lots of citations throughout the article for Seamans.

[00:10:51] So the first, the trust one, this gets to, in episode 152 we talked about the, this idea of an [00:11:00] AI verification gap. And so the article leads off with this story about how the author tried first to write this article using ChatGPT's deep research, and that the deep research product produced a pretty good output, something that might actually be enjoyable for a reader to read, and proposed some potential new jobs that could be created.

[00:11:23] But then the author wrote, quote, and this is why he didn't use it, basically he said: you're being paid to be responsible for them. The facts, the concepts, the fairness, the phrasing. This article is running with my byline, which means that I personally stand behind what you're reading. By the same token, my editor's responsible for hiring me, and so on, a type of responsibility that inherently can't be delegated to a machine.

[00:11:48] So this goes to what we talked about on, I think it was 152. We said, like, if you are going to publish something under your name, under your company's name, you have to be able to stand behind that. You have to take [00:12:00] responsibility for everything within it. And so that becomes foundational to this idea of trust.

[00:12:05] The author went on to say, everyone who tries to use AI professionally will face a version of the problem. The technology can provide astonishing amounts of output in an instant, but how much are we supposed to trust what it's giving us, and how can we know? So under the trust umbrella, he writes that there's a whole new breed of fact checkers, compliance officers for legal documents, reports, product specifications.

[00:12:25] I would add analytics reports, research reports, contracts. All of these are going to be written or supported by ai, but humans have to verify them. So you identified a couple of these, Mike, but some of the jobs specifically related to trust and I, I, there wasn't a single job that the author put in here that I didn't see the potential for.

[00:12:45] Like, I think that's important to say, right? And it's, and I, again, I think you look at it through the lens of what your profession is. So you may look at these as sales, customer service, marketing, executive, whatever it is, but they actually apply to everybody. I think, [00:13:00] like they're not like so specific that you couldn't imagine some element of this.

[00:13:03] So AI auditors, or people who dig into the AI to understand what it's doing and why, and then can document it for technical, explanatory, and liability purposes. An AI translator, someone who understands AI well enough to explain its mechanics. A trust authenticator, trust director, an AI ethicist, who build chains of defensible logic that can be used to support decisions.

[00:13:25] So the more we rely on these things for decision making, someone can verify why we made the decision we did and how AI supported that decision. A legal guarantor, I think this is gonna be critical, especially in, like, you know, highly regulated industries, legal industries, things like that. Someone who provides the culpability that the AI cannot. A consistency coordinator.

[00:13:47] So, the author writes, AI is good at many things, but being consistent isn't one of them. So you have to kind of oversee that consistency. And then an escalation officer, where, the author writes, these preferences will almost certainly [00:14:00] also require someone to step in when AI just feels inhuman, which I actually really like.

[00:14:04] It's the idea of, you know, if you're relying on these things from a customer service perspective to interact with your customers and the AI isn't providing the level of empathy or understanding that's needed, somebody's gotta step in. And so these might not be the actual titles, but you can start to see the importance of these things.

[00:14:22] On the integration side, the author writes, given the complexity of AI, many of the new jobs will be technical in nature. There will be a need for people who deeply understand AI and can map that knowledge into business needs. This is a hundred percent something we're seeing. It's something I've been actually looking for for our own company, that technical expertise that can kind of like take that lens across all aspects of the company, every department.

[00:14:44] So in this one, the author, talks about AI integrators, experts who find, how to use the best AI in the company and then implement it. Concept of ai, plumbers definitely not a title that I see in many organizational charts, but you get the premise here is something goes wrong. [00:15:00] Someone has to be able to figure out why the AI did what it did and how to fix it.

[00:15:04] And this is gonna become very problematic with agentic systems where you have agents working with other agents and like someone's gotta figure out what's going on and why you have AI assessors where they evaluate the latest and greatest models and figure out how to impact operations, product services.

[00:15:20] And again, you can start to see, this may be like a head of AI, a chief AI officer, and these may actually be part of their job description, to like fill these specific roles, not individuals necessarily doing each of them. An AI trainer that, you know, finds the best models and figures out how to integrate data into it.

[00:15:41] A personality director, I think this one's actually kind of interesting on the marketing and customer service side in particular, where you're gonna have ais that interact with customers, prospects, partners. What personality does that AI take on? Is it friendly? Is it sarcastic? Is it helpful? is it [00:16:00] very professional and formal?

[00:16:01] Someone's gotta decide these things because you can steer the AI to behave in certain ways. And then an AI-human evaluation specialist, someone who determines where AI performs best, where humans are either better or simply needed, and where a hybrid team might be optimal. Now on the integration front, one interesting thing from over the weekend was.

[00:16:22] Adam D'Angelo, who's the co-founder, and CEO of Quora, actually tweeted something along these lines where he was hiring an AI automation engineer. So I think this, I'll, I'll play this out for a minute, Mike, because I think this is kind of interesting to show where this goes. So this tweet got a lot of attention from some of the AI people that I follow closely on X.

[00:16:42] And so I was like digging into it over the weekend. So, Adam D'Angelo, and as I led off this podcast, we're gonna throw some names at you that may not be super familiar, but the context on all these people is important. So, Adam D'Angelo joined the OpenAI board in 2018 and voted for Sam to be [00:17:00] ousted as the CEO in 2023.

[00:17:02] And then, remarkably, was actually the only surviving board member after Sam Altman returned to OpenAI. So he sits on the board for Asana, which is run by Facebook co-founder Dustin Moskovitz, who is a friend of his. D'Angelo is a high school friend of Mark Zuckerberg who actually joined Facebook shortly after it was founded in 2004.

[00:17:24] So February 2004, thefacebook.com launched. D'Angelo joined in June 2004. He went on to become the CTO of Facebook for a couple years, from 2006 to 2008, and then he founded Quora in 2009. So this is a major player in Silicon Valley, heavily involved in lots of the AI components that are going on.

[00:17:48] And so he shared the job posting, and I think this is a posting you're going to see a lot of, you're probably gonna see these people hired in your company. So he said, we are opening up, this is his tweet. We'll, again, we'll put this in the show notes. We are opening up a [00:18:00] new role at Quora, a single engineer who will use AI to automate manual work across the company and increase employee productivity.

[00:18:06] I will work closely with this person, he's saying, as the CEO. About the team and role, if you go to the link, it says: we're hiring our first AI automation engineer to lead how we apply AI internally across the company. This is a unique opportunity to shape how LLMs become embedded in our daily operations.

[00:18:24] Your goal will be to automate as much work as possible, increasing our productivity, and improving the quality of products, decision making, and internal processes. You'll work closely with teams across the organization to identify high impact problems and solve them continually assessing new potential as frontier model capabilities instantly improve.

[00:18:46] Also says, this role is ideal for an engineer who's curious, pragmatic, and motivated by real-world impact, not just research. You will lay the groundwork for how we approach internal applications with a focus on utility, trust, and [00:19:00] constant adaptation. Then it goes into talking about how they're gonna collaborate with the different teams and integrate this stuff and act as a high-trust owner of systems.

[00:19:07] Stay updated on the latest models and tools. So the way this actually caught my attention, I don't get alerts from D'Angelo, I don't, I don't think. Aaron Levie of Box was the first time I saw it, and he replied to that post and said, companies going AI-first should dedicate some talent that knows what AI is capable of to be in the trenches to design next-gen workflows.

[00:19:30] AI moves fast, it's hard to decentralize this knowledge yet. but people are gonna jump on this. And then I actually replied to Aaron and he replied to me where I was like, Hey, this is great, but we can't just centralize this on individuals. This has to be, we have to empower leaders and professionals through education and training.

[00:19:47] plus change management is essential. And Aaron actually replied and said, yeah, you know, a hundred percent right. So that's the integration side sort of played out, and I think that's a role that you're gonna see. And then the final one, Mike, was taste, and this is something [00:20:00] you and I just talked about.

[00:20:01] I, I think it was last week we were talking about this idea of taste. And so the author says It will remain a human's job, of course, to tell the AI what to do. But telling AI what to do requires having a vision for exactly what you want. In a future where most of us have access to the same generative tools, taste will become incredibly important.

[00:20:20] Says when creative options are nearly limitless, people with the ability to make bold stylistic choices will be in demand. Knowing what you want and having a sense of what will resonate with customers will be a core human role in developing products. And then they relate it to, like, designers and people who have to marshal creative choices to desired outcomes.

[00:20:39] And then he talks about this idea of designers for products, articles, the world models, HR, and the role it'll play in creative decision making. They talk about a differentiation designer: when everybody has access to the same tools, how do we execute it differently? And it says designer may not end up being the preferred [00:21:00] nomenclature.

[00:21:01] But it's useful, it signifies the shift. More and more people will be tasked with making creative and taste decisions, steering the AI where they want it to go. And then a couple quick thoughts on the how-not-to-lose-your-job thing. As you said, like what I, what I really like, Mike, is that we're starting to see people being proactive now.

[00:21:22] Yeah. About trying to figure out what comes next. So this is why, like, in our JobsGPT tool, I built in the forecast new jobs function. And if you're not familiar with that, we'll drop the link in, but it's just SmarterX.ai slash JobsGPT. And so the whole premise is to try and actually project out where this goes.

[00:21:44] So in, in this article, they talk about how AI drives down the value of skills the AI can do, but it drives up the value of skills it can't, because they become the bottlenecks for further automation. Now, the note I had when looking at this one, Mike, is: the majority of people, [00:22:00] the majority of business professionals and leaders, don't understand what AI is capable of today.

[00:22:04] So it becomes very abstract for them to envision roles, skills, and traits that will be difficult for the AI to do in the future. So this base premise that like, well, you just gotta figure out what the AI can't do. Well, most people aren't capable of doing that. Yeah. Like we think about this stuff all the time, and sometimes I struggle to think about what it can't do.

[00:22:24] So the few skills that they listed that I thought were universal here: deploying AI, so AI makes people who can direct it more powerful, and the messier parts that AI can't do become the bottlenecks. Leadership skills, management, strategy, and research taste are messy tasks AI struggles with, but AI gives leaders more influence than before.

[00:22:44] Communications and taste. Again, taste is like gonna be like the word of 2025, I'm starting to feel like. They talk about how content creation gets automated, but discernment and trusting relationships with your audience become more valuable. So like Mike and I could literally just run a GPT or a [00:23:00] weekly search and say, what are the 20 things we should talk about this week on the podcast?

[00:23:03] Pick those things and then have AI write summaries on it. Like, right, this is the example. I'm, I'll give it super practical and I can promise you there are podcasts right now that are probably doing quite well that do that exact thing. Guaranteed. They literally just have AI tell them what to talk about.

[00:23:18] We do not do that. This is literally me combing through 250 sources a week. My taste of like, here's the 50 things I think we might want to talk about Mike's taste of here's the three things I think are the main topic and the seven to 10 rapid fire items, and then what context we provide to those things.

[00:23:36] Like it is completely human curated stuff. 

[00:23:39] Mike Kaput: Mm-hmm. 

[00:23:39] Paul Roetzer: And so that ability becomes more and more important when everyone has access to the same technologies. And then complex physical skills are another area. So, overall, like I think the articles are, are both really good. Like these are really good things to get you thinking about [00:24:00] where this goes and what some things might be relevant to your job, your company, your industry.

[00:24:05] But it also shows like you can't wait for someone else to show up and figure this out. Like you've gotta deeply understand what AI is, what it's capable of today, where it's going in the next couple years. You have to experiment with the new models as they come out. Play around with deep research, you know, test a reasoning model.

[00:24:22] If you haven't, build a GPT, build a NotebookLM. Like you've gotta do these things and challenge yourself to keep learning and growing so that in your profession, in your company, you're at the frontier of figuring out what comes next, and ideally maybe like creating your own path that brings enormous value to the company you're at.

[00:24:43] Or you leave and you do your own thing. But this is, I think as we started off, Mike. The idea that people are now more proactively writing about this and thinking about it across different industries, I think is fundamental to us being proactive as a [00:25:00] society and a business community to like moving toward the best possible outcome here and is exactly what we've been like calling for, for the last couple years.

[00:25:08] And I just, I love to see it, and we'll definitely do our job to try and spotlight this kind of thinking and hopefully stimulate and inspire people's minds to, you know, figure out what's next in their career.

[00:25:21] Mike Kaput: Yeah, absolutely. And I loved just how in depth and how detailed both these articles are.

[00:25:26] You can start solving this for yourself right now. I would go drop both of them into something like o3 with context about: here's my role, here's what I'm thinking about my job, here's what my skill sets are. And I bet you could pretty quickly start triangulating on which of these skills might be

[00:25:45] complementary to what I already do, what might I be really good at in kind of these AI forward skills and jobs, and start building out your own kind of roadmap.

[00:25:53] Paul Roetzer: Yeah, I agree. I think it's a great point. You can just take the 22 jobs from the New York Times piece with the, yeah, you know, little [00:26:00] descriptions and say, I'm a marketer, what could this mean to me?

[00:26:04] I'm a CEO. How should I be thinking about building out my staff and org chart? Like, yeah, this is the kind of stuff that's really helpful.

[00:26:11] Amazon CEO on AI Job Disruption and AI Underemployment

[00:26:11] Mike Kaput: And you don't need, I would argue, to like nail it perfectly. Maybe these job titles, we get 'em wrong or something, or it looks a lot different than we're talking about now, but you can be directionally correct with, I think, a lot of the material in these articles alone.

[00:26:24] Yep. All right, Paul, that's enough positivity here. So, getting back to jobs: our, our second topic, yeah, is also related to jobs, but is a bit more in the vein of the negative news we've seen recently, because Amazon is now joining the chorus of companies saying the quiet part out loud: AI is going to cut jobs. In a memo to employees, Amazon CEO Andy Jassy confirmed that as the company rolls out more AI tools and agents, it expects to need, quote, fewer people doing some of the jobs that are being done [00:27:00] today.

[00:27:01] Now, this is being framed as an efficiency gain. They're not announcing, as of right now, mass layoffs due to this, but they are talking about kind of rebalancing towards different kinds of roles. So he writes in this memo: today, we have over a thousand generative AI services and applications in progress or built, but at our scale, that's a small fraction of what we will ultimately build.

[00:27:22] We're going to lean in further in the coming months. We're going to make it much easier to build agents, and then build or partner on several new agents across all our business units and G&A areas. As we roll out more generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today.

[00:27:41] More people doing other types of jobs. It's hard to know exactly where this nets out over time. But in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company. And then he encourages employees to get more done with scrappier [00:28:00] teams and to become conversant in AI if they want to stay relevant.

[00:28:06] So Paul, this is just the latest in this trend we are talking about more and more. It seems like sometimes we're a little bit of a broken record on this, but it is just so critical to talk about these warning signals that continue to flash among leading companies. Because it seems like, I mean, do you agree that we're seeing more and more of these indicators?

[00:28:26] Paul Roetzer: Oh yeah. Yeah. We're, we're at the leading edge now of the world waking up to this, for sure. So I'll kind of end my thoughts here on the Jassy memo, but I'll start with, the day after the Jassy memo, the Wall Street Journal published an article titled "The Biggest Companies Across America Are Cutting Their Workforces."

[00:28:47] In the article, it says it isn't just Amazon. There's a growing belief that having too many employees will slow a company down and that anyone still on the payroll could be working harder. Corporate America is convinced fewer [00:29:00] employees means faster growth. US public companies have reduced their white collar workforces by a collective 3.5% over the past three years.

[00:29:09] The workforce cuts in recent years coincide with a surge in sales and profits, heralding a more fundamental shift in the way leaders evaluate their workforces. The cuts go beyond typical cost trimming and speak to a broader shift in philosophy. Adding talent, once a sign of surging sales and confidence in the future, now means leaders must be doing something wrong.

[00:29:30] New technologies like generative AI are allowing companies to do more with less. But there's more to this moment. From Amazon in Seattle to Bank of America in Charlotte, North Carolina, and at companies big and small everywhere in between, there's a growing belief that having too many employees is itself an impediment.

[00:29:46] The message from many bosses: anyone still on the payroll could be working harder. Then it shared examples. Procter & Gamble said this month they would cut 7,000 jobs, or 15% of its non-manufacturing workforce, [00:30:00] to create broader roles in smaller teams. It cited Estée Lauder and dating app operator Match Group, which recently said they had each jettisoned around 20% of their managers.

[00:30:11] Microsoft, meanwhile, plans to lay off thousands of employees in its sales department and other teams in coming weeks as it looks to thin out its ranks. And they quoted tech advisor and former Adobe executive Jason Lemkin, who said on a venture capital podcast last month: everyone with 500 employees and up that I talked to off the record, including public companies, says, I don't need 30 to 40% of my team.

[00:30:40] That's a pretty significant number. Big number. Workers are contending with much bigger workloads, more responsibilities, and a nagging fear about their job security and future prospects. Quick pause here, Mike, before I finish on this. This is why I keep stressing: if you just talk about AI, [00:31:00] people are already afraid for their jobs.

[00:31:02] Mm-hmm. Like, you can't just say we're going to do AI without doing change management and being transparent as a leader about what's coming, when people, we already know, are seeing these headlines. Okay, back to the article. It says managers have been an especially ripe target for cutting, though

[00:31:22] Live Data Technologies data show public companies have pared back their non-managerial ranks recently too. The number of managers dropped 6.1% between May 2022 and May 2025. Executive level roles fell 4.6%. So on episode, what number are we on now, 155? I mean, 154. This came up 'cause somebody asked a question about who was gonna be most impacted by AI, I think it was in our AI Answers episode that I did with Cathy, and I said sort of off the cuff, I thought managers were screwed.

and I hadn't actually, like, deeply thought about [00:32:00] this yet. But the more I started thinking about it, I was like, well, managers don't have taste yet, right? Like oftentimes at the manager level, you've progressed through, but you're not like director level and above, which I generally think of as like someone who can really own strategy and has like

[00:32:20] deeper experience and expertise that can evaluate the quality of the outputs of these models, that can give better direction for what they do. Mm. And so, actually, I'd be really interested to get your take on this. My, my instinct, and this has shifted, this was something that started shifting me mentally last week, was maybe entry level's gonna have a little bit better time in the near term, because they can work with the models to do the outputs, but they need someone with taste and expertise to tell them what to have the models do.

[00:32:52] Yeah. And then you need someone who can assess the output, which needs to be someone with taste and expertise. Yep. And so who gets squeezed in that is like [00:33:00] the middle manager who maybe doesn't have that yet. Like, do you have any reaction to this, Mike? Like who, who do you think might be most impacted?

[00:33:06] Mike Kaput: That, that makes perfect sense to me.

[00:33:08] I tend to think of it, at least in the near term, as almost a barbell, right? You know, those entry level people on one end, who, with the caveat that they're actually mastering AI and bringing that to the table, it's just inherently cheaper to have them do all the stuff with AI that we'd wanna enable.

[00:33:24] And then at the other end of the barbell, yeah, there's people that have the intangible, the taste, the strategic outlook that can, that can be the AI verification people, right? For what's being produced. I think it makes perfect sense to me. I think the middle gets squeezed very, very hard. 

[00:33:41] Paul Roetzer: I mean, maybe, and again, I'm completely thinking out loud here.

[00:33:44] So here's an example. We did a deep research project, actually the one that I'm gonna demonstrate during the upcoming webinar. And it output, I think it was a 30 to 40 page deep research product that on first [00:34:00] glance looked phenomenal, looked great, but it had dozens of sources and I didn't have time to vet them.

[00:34:06] So I actually gave that project to an intern who knows how to vet sources. She is a sophomore in college. And I said, I just want you to go through and verify the legitimacy of the sources that are in here. You've been trained to do that through writing classes. You can go through and do that and leave comments.

[00:34:27] So we had her do that. Mm-hmm. Then I turned it over to Mike and I said, Hey, we wanna build a research arm. We want to do more real time research. You now need to go through this document and you need to vet it the way we would vet it as if someone else on the team wrote it. Yep. I couldn't give that second part of that workflow to a manager.

[00:34:50] It had to be me or Mike. We were the only two people that could verify and then stand behind it and be confident in the output. Hmm. [00:35:00] And, I don't know, now that I'm thinking about that, that might be a perfect example of how anybody can do the first part, as long as they're trained to do some basic verification, but the expertise has gotta come from somebody on high.

[00:35:13] Mike Kaput: Yeah. Yeah, I think that's exactly an example of kind of what I'm getting at, that that low end and high end are going to be, almost in tandem, pretty important here, I think.

[00:35:24] Paul Roetzer: and maybe there's just, maybe the management arm is largely just literally the management of the AI agents when it isn't a high risk, high liability mm-hmm.

[00:35:32] Environment where it's really just managing workflows and, I don't know, workflow 

[00:35:37] Mike Kaput: management. Yeah. In a lot 

[00:35:38] Paul Roetzer: of cases, yeah. I'd almost have to go back to that New York Times thing we started with and re-look at that. 'Cause I almost wonder if management isn't more of like those kinds of roles where they don't have the final say and can't maybe approve the final output, but they're there to sort of keep things flowing.

[00:35:56] Question is just like, do you need as many of those people? I don't know. Right. 

[00:35:59] Mike Kaput: [00:36:00] Right. And how much of those, I wonder too, how much of the verification or trust related skills just get baked into every job. Yeah. 

[00:36:08] Paul Roetzer: Right. Yeah. It's just literally a part of your job description. Bam. Okay. Well, so then one other one I'll throw out, Mike, that caught my attention last week is Vista Equity Partners CEO

[00:36:20] Robert Smith, who said last week that 60% of the 5,500 attendees at the SuperReturn conference will be out of work next year. He said, quote, we think that next year 40% of the people at this conference will have an AI agent and the remaining 60% will be looking for work. Now, I don't, I don't know Robert Smith, I don't know his deep understanding of AI.

[00:36:46] That quote on its own sort of makes me question it slightly. He might just be broadly applying AI agent to mean something bigger. But like, to boil it down to, you'll have an AI agent, Mike, and so you're not gonna need to teach, that's not how this [00:37:00] plays out. But maybe let's assume, in the spirit of this conversation, he understands. He probably means a network of AI agents and something much more, versus just a provocative headline to, you know, stir up the audience.

[00:37:12] But, okay. So, we think that next year 40% of people at this conference will have an AI agent and the remaining 60% will be looking for work. He emphasized in his remarks at the event that all of the jobs, quote unquote, currently carried out by 1 billion knowledge workers today would change due to AI. That is a global number.

[00:37:30] Mm. In the US there's about a hundred million knowledge workers, so I assume he's referring to some larger global number. He then said, quote, I'm not saying they will all go away, referring to the billion knowledge worker jobs, but they will change. You will have hyper-productive people in organizations and you will have people who will need to find other things to do.

[00:37:48] Now, why would we share this article and Robert Smith's opinion? Well, Vista is one of the largest private equity firms in the world, with over 100 billion in assets under management. [00:38:00] And what have I said time and time again? If it is a publicly traded company, if it is a venture backed company, or it is a private equity owned company, efficiency and productivity is what they seek.

[00:38:11] It is how you get higher margins and you provide returns to your stakeholders, your shareholders. It is required. They have a fiduciary responsibility to do exactly what he's saying. So that brings us back to the Jassy memo. I applaud Andy Jassy and Amazon for doing this. I think we have to have way more transparency. But what was missing from it, and what I hope we get more of, is a commitment from Amazon around AI education and training, re-skilling and up-skilling workforces, and change management.

[00:38:47] Otherwise, all that memo is, is a PR move to soften the blow when they announce a 20% layoff, mm-hmm, in the next 12 months, with the, I told you it was coming. And so I want to [00:39:00] see more of these memos. I do think by the end of this year we will see a flood of CEO memos with, here's our, you know, vision for what's gonna happen in the future of work and the future of the workforce.

[00:39:10] But if those memos don't come with a plan to prepare the workforce for that future, then it's nothing more than PR, and not great PR at that.

[00:39:20] Mike Kaput: Hmm. Yeah. We'll have to keep an eye on whether Amazon makes any announcements in the next six to 12 months on that front. Yeah.

[00:39:28] Your Brain on ChatGPT

[00:39:28] Mike Kaput: Alright, so our third big topic this week: a new study from MIT is getting a lot of attention because it has taken a look at what ChatGPT might be doing to your brain.

[00:39:42] In this paper, researchers compared three groups: one using ChatGPT to write essays, one using search engines to write an essay, and one using only their own memory. They tracked brain activity during this and analyzed the essays with AI and human judges. Their main finding, they [00:40:00] claim, is that using ChatGPT led to the lowest cognitive engagement.

[00:40:04] Brain scans showed that participants relying on AI had significantly weaker neural connectivity across key areas responsible for focus, memory, and decision making. Their essays were also more uniform and less original, and participants were far less likely to remember or quote what they wrote just minutes earlier.

[00:40:22] When those same participants were later asked to write without AI, their brain activity didn't bounce back fully. Meanwhile, those who started without AI and later switched to using it showed more active and engaged brains, suggesting it's better to learn first and then augment with AI. Now, Paul, the reason we wanted to mention this: this study's getting a ton of attention.

[00:40:45] A lot of people are jumping on it as proof of whatever their kind of perspective is on AI. A lot of people are pointing to it saying, of course AI is harmful. But it's important to note there's some criticism of this study and [00:41:00] how people are interpreting it. So Ethan Mollick actually wrote about this, saying: this new working paper out of the MIT Media Lab is being massively misinterpreted as AI hurts your brain.

[00:41:11] It is a study of college students that finds that those who were told to write an essay with LLM help were, unsurprisingly, less engaged with the essay they wrote, and thus were less engaged when they were asked to do similar work months later. Now, he says, the misinterpretation isn't helped by the fact that this line from the abstract is very misleading.

[00:41:31] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. Mollick then says: but the study does not test LLM users over four months. It tests literally nine or so people who had an LLM help write an essay in an experiment, writing a similar essay four months later.

[00:41:52] So basically, he goes on to say, this is not a defense of blindly using AI across education, but it doesn't mean that [00:42:00] LLMs rot your brain. So Paul, what did you make of this? I feel like this got tons of attention, but there is a little more going on when you start scratching beneath the surface.

[00:42:10] Paul Roetzer: Yeah, this is one of those that just seemed to like catch fire on, on X and, mm-hmm, LinkedIn. My, my initial reaction is: overall, good research direction, but people were definitely just running with a provocative headline without taking the time to understand the data. I guess there's a couple good things that can come out of this.

[00:42:29] It's a good example of why you need to be very critical of the people you follow and listen to in the AI field. So if there were AI experts, quote unquote, that were portraying this as some groundbreaking study, that's probably a good indication that they don't vet the stuff they're sharing online very closely.

[00:42:51] Because anyone could have looked at this very quickly and said, well, yeah, of course. [00:43:00] It's like saying, hey, we gave a control group calculators who didn't know how to do math, and we found that the people that relied on the calculator to do math didn't actually learn math. It's like, okay, if you have the LLM do the work, of course it's going to impact your learning of the material, your near term memory of the material.

Like, it's just one of the most obvious hypotheses of a research study, one that you didn't need the research to tell you. Yeah. Yeah. So the first thing that could come good out of this is that people learn to be more critical of the people they follow online. The second is, for people who weren't aware that we need to teach AI as a learning tool and as an assistant, maybe this was an impetus for them to realize the importance, in schools and in business, of teaching responsible use of these things to accelerate learning and comprehension, not to replace critical thinking.[00:44:00] 

[00:44:00] So this led me to a couple thoughts that you and I haven't even talked about yet. This sort of formed last week while I was traveling, running a workshop and doing some other thinking, and then some things we were experiencing within our own company. So we've talked about this idea of an AI verification gap, where someone needs to validate and edit AI content for accuracy.

[00:44:24] And when I started thinking, and this is pretty raw thinking, I haven't fully developed this and these are maybe terrible names, but I realized there's actually a few gaps happening that are starting to emerge. So one is the verification gap. The other I was calling, Mike, the AI thinking gap.

[00:44:41] Mm-hmm. It's the capacity to apply critical thinking to AI outputs. And this actually goes back to the example I just gave of that deep research project. So anyone on the team at any level can create endless strategies, papers, research reports, articles, social shares, and copy. They can create anything, but we are [00:45:00] still limited by our human capacity of time and brain power to assess them.

[00:45:05] And so this thinking gap exists where we just don't have enough time and brain power as leaders to think through everything that it's outputting. Yep. And then the third one, and maybe the most important, and this is I think what this research report gets to, is what I was calling an AI confidence gap.

[00:45:22] Which is the ability to confidently comprehend and present the material contained in the AI outputs. So I have personally experienced this numerous times in the last month, where I use AI to create something, a strategy outline, a research document, and then I share it with the team as a starting point.

[00:45:44] Like, hey, I don't have the time to fully verify this, to apply a full layer of critical thinking, but here's a starting point. Now, Mike, if you came to me the next day and said, hey, I want to drill into the thing you shared with the team yesterday and I wanna [00:46:00] push on a couple of items here, I don't actually have the confidence to have that conversation and answer your critical questions.

[00:46:08] 'Cause I didn't actually do the hard work, right? I just output the thing with a prompt or two, got the output. So I started realizing, we can use these tools to create these strategies, these research reports, whatever. I may, as someone with some domain expertise, read it and realize this is really good.

[00:46:26] And like I have now kind of verified it's legitimate, but because I didn't do the hard work, it's basically like reading the CliffsNotes of something, right? And so you don't have or retain that same level of confidence in the material. And the same thing for me, I've found, happens with meeting notes.

[00:46:44] Like I know people love these meeting note takers. Everybody's got their AI note taker. I actually don't use them. Mm-hmm. Like we, we have them for our internal purposes. They take notes. I find that I still type out everything in every meeting I go to. And [00:47:00] the reason I do it is because I actually remember it.

[00:47:03] Once I type it. If I just have the note taker take the notes and then do the action items, there's less cognitive load. But that cognitive load is actually what embeds it in my memory, and makes it so a year from now I can be like, hey Mike, you and I, that one time we were in that meeting, we were talking about that thing.

[00:47:21] It's because I actually wrote it down that my brain processed it. And so, on the podcast example, it's also why I actually read, listen to, or watch every single thing we talk about, right? Because if I just said, hey, here's an article about what this dude from, what was he from, Vista said, or whatever, throw the thing in, say, hey ChatGPT, gimme some talking points on this.

[00:47:46] And then I just sit here and regurgitate the talking points, I have no retention of that information. It's just gone after we talk about it. So I actually still read everything. I copy and paste excerpts. I look at those episodes. I bold face key components to make sure I [00:48:00] say them on the podcast. Because the retention of the information, my ability to connect the dots on the related data, is near zero if I don't actually do the work.

[00:48:08] And so AI verification, AI thinking, and AI confidence gaps start to become these fundamental things that actually impact me working with the AI, this like human plus AI concept. So I don't, I don't know, again, I'm just sharing this out loud, but I don't know if you have any take on that, Mike, or like disagree, or anything else.

[00:48:30] Mike Kaput: I love, I love this framework a lot. And I think you can apply it, it goes really well with what we were talking about. If you're one of those people that are in that manager category that we're suspecting could be in real trouble here, I'd pay really close attention to this. Because even this study alone is a microcosm of it.

[00:48:50] Like, if you were the intern, you might be really good at using AI to give me a brief about this paper, which is 200 pages long, and [00:49:00] do some basic verification. But it's my job to then say, well, no, I'm gonna go read the methodology, and it turns out this study is based on 54 people, and like, that's okay.

[00:49:13] But using the pattern matching and the taste, or critical thinking, whatever we wanna call it, that I've developed having read through dozens, if not hundreds, of these studies and had to parse through them at different times in my career, I can then bring that to the table and say, well, okay, let's take some things with a grain of salt.

[00:49:30] Let's realize the AI influencers are using this for headlines and clicks and engagement, and let's take a step back and integrate some more perspectives here. Now, the manager, that manager level, has no role in that process. Yeah. At the moment. So I think with your ways of looking at these gaps, you can almost line up these gaps with that manager class of roles and say, okay, you need to figure out how to fit into this process.

[00:49:56] Paul Roetzer: Yep. Yeah. And I would say, like, from a work environment, something to think [00:50:00] about as a leader is: if you're getting AI generated strategies and documents presented to you by your team, tell 'em to put the screen away and have a 10 minute conversation about it. Okay? Like, I wanna know you critically thought through everything you're recommending to me right now, and I want you to be able to stand behind that the same way we would've before generative AI.

[00:50:22] And so it's just something to think about. It's like, we want our employees using these tools, a hundred percent. Like we want the speed and we want the, you know, the outputs. But more important to me is that I actually have employees who can do the critical thinking without the AI, because I know they're gonna be better at using the AI if that's the case, and they're at that stage where I can trust them, where I can have a level of confidence in them.

[00:50:43] And I can know that they're kind of filling that AI thinking gap. But if all I'm ever getting is AI generated outputs, I don't, I don't know that's the case. And the same is gonna apply in schools. Like, you have to test for actual critical thinking ability by knowing they have confidence in the material they've presented.[00:51:00] 

[00:51:00] So, I don't know. I mean, it's just, yeah, I don't know. I might build on that at some point with, like, an upcoming course for Academy or something.

[00:51:06] Mike Kaput: I think there's really something to it. I think that's worth revisiting. And, you know, one final note here, I don't know if it's helpful, but just thinking through this, it kind of hits on some of the frustrations I've had, not with our employees, but with people who have clearly given me some type of deliverable that is like AI generated research or strategy, right?

[00:51:24] Which is all really good, but it's like, guys, I could have done this myself. You just gave me 12 pages that I'm not really inclined to read through. Like, your job is to pick the right one and tell me, right, what we should be doing here, you know?

[00:51:37] Paul Roetzer: Right. Like if I go to you, Mike, on a, on a Monday morning and say, Mike, why did you pick these three main topics?

[00:51:42] Well, and you say, well, because ChatGPT told me to? I'm sorry, I'm finding a new co-host. Like, for sure, that's not your value here. Your value is that you can critically assess these things, and you can stack them and order them in a process that makes a ton of sense. Yeah. That I have confidence in your ability [00:52:00] to do that better than anybody.

[00:52:01] And like that's the value that can't be replaced by AI. Yeah.

[00:52:06] Mike Kaput: Yeah. No, I think there's a whole framework or system here to kind of evaluate jobs through this lens that is really useful to consider moving forward.

[00:52:15] Paul Roetzer: Alright, cool. 

[00:52:16] Mike Kaput: We'll keep going. I won't bury it. Alright, let's dive into rapid fire topics this week.

[00:52:22] Fallout from the Meta / Scale AI Deal

[00:52:22] Mike Kaput: So first up, we reported on episode 153 that Meta bought a 49% stake in AI data labeling company Scale AI. They hired away, essentially, its CEO, Alexandr Wang, to head up their new superintelligence lab. And now there appears to be some fallout from that deal. So, according to some new reports from Reuters and Bloomberg, Google, Scale's biggest customer, is starting to cut ties with them.

[00:52:49] Microsoft, OpenAI, and xAI are also now pulling back from their relationships with the company. And the reason may be because of Scale's business: it supplies [00:53:00] highly specialized human labeled data that companies use to train their most advanced AI models. So that means this company gets deep visibility into what AI labs are working on.

[00:53:11] And with Meta now basically owning half the company, it seems like competitors feared their research pipelines could be in some way exposed here. Now OpenAI, for one, says its split was already in motion, but Meta's deal has kind of sealed it. Meanwhile, at the same time, there's a big surge in demand from customers for Scale's competitors, companies like Surge AI, Labelbox, and Handshake.

[00:53:36] So Paul, I can't say this surprises me. This can't be surprising to either Meta or Scale, I would imagine, either. I can't help but wonder here, like, did Scale's CEO just basically exit his company? Because it seems like their customers aren't gonna be wild to work with them after this.

[00:53:55] Paul Roetzer: Yeah. There's no way that Meta and Scale didn't know [00:54:00] the other labs would leave.

[00:54:01] Yeah. That is, again, kind of one of the most obvious things you could possibly connect the dots on here. My question is, what is the $29 billion valuation for if you knew all those companies were leaving? 

[00:54:13] Mike Kaput: Yeah. If all your revenue is gone eventually. 

[00:54:15] Paul Roetzer: Yeah. So if you had a $15 billion valuation, you throw in the $14 billion investment from Meta, that gets you the $29 billion valuation. But what is gonna be left of the company when all the major labs are gone?

[00:54:29] So I have no idea. It's just bizarre at that point. So, I don't know what Meta was buying other than just the CEO and some of the other top leaders. And again, my guess here is, like, they just couldn't do the acquisition, yep, because of, you know, regulations and oversight from the government.

[00:54:48] And so they were willing to just basically run the company into the ground and acquire the top talent for $14 billion. The other thing I'll add here is I'm still, I think, on like chapter 12 or [00:55:00] 13 of Karen Hao's Empire of AI, which just gets better with every chapter. And if you wanna understand Scale AI's business model and how these models are trained, Karen has an entire chapter dedicated to it.

[00:55:15] It's extremely enlightening if you're unaware of how this all works and what their business model is. So, I would just highly recommend Empire of AI if you want to go deeper on this stuff. 

[00:55:27] Meta and Apple AI Talent and Acquisition Search

[00:55:27] Mike Kaput: All right. Next up, some more Meta-related news, but not just Meta. We've got Meta making some big AI talent and acquisition moves beyond Scale AI, and Apple is considering a big move as well.

[00:55:40] So first, Meta. According to a recent interview with Sam Altman, Altman said Meta had been trying to lure OpenAI's top talent with offers that went up to $100 million signing bonuses. He said, so far, none of the company's best people have taken the bait. Meta is also reportedly in advanced talks to hire Nat [00:56:00] Friedman and Daniel Gross, two of the more respected investors in AI, as part of a deal which could likely be over a billion dollars.

[00:56:08] Meta would also buy out a chunk of their venture fund, which holds stakes in some of the most valuable AI startups in the world. Gross would leave his post as CEO of Safe Superintelligence, the startup he co-founded with former OpenAI chief scientist Ilya Sutskever. And interestingly, it has also come out that Meta tried to acquire Safe Superintelligence outright and failed.

[00:56:30] Now Apple. Apple is also exploring a bold move here. They are considering, according to some reports, buying Perplexity, which is the AI-powered search engine. According to Bloomberg, top Apple execs have discussed making a bid, but it's still early days and no offer has been made yet. So here's one big reason for their interest.

[00:56:51] Apple's $20 billion a year deal with Google, which makes Google the default search engine on iPhones, is under threat from a US antitrust case. [00:57:00] If that falls apart, Apple needs a backup plan. Buying Perplexity could give Apple not just new AI talent, but also a shot at building its own AI search engine.

[00:57:10] Apple's also floated the possibility of just a partnership, which would integrate Perplexity directly into Siri. Now, apparently Meta tried to buy Perplexity earlier this year and ended up investing in Scale AI instead. Samsung is reportedly close to a deal of its own with Perplexity. So there's definitely some moving pieces here.

[00:57:31] So Paul, first, what do you think of Meta's attempts to go so far as to spend all this money to acquire AI talent? And then what about Apple and the others trying to buy Perplexity? 

[00:57:41] Paul Roetzer: I mean, it doesn't speak very well to the existing talent within Meta and Zuckerberg's previous confidence in their ability to be a major player.

[00:57:48] So I think, I mean, it just looks like desperation: we're just gonna spend whatever we spend to get the right people. And yeah, who knows if that works. Like in professional sports, it usually doesn't work that you [00:58:00] just go get like the four highest-paid guys and throw 'em together and hope they figure out how to work together as a team.

[00:58:06] These are huge, huge egos. There's a lot going on here, reporting to one of the biggest egos in Silicon Valley in Zuckerberg. Like, I don't know. So there's a lot of questions just around their overall strategy. At the start of the podcast I alluded to the AI soap opera, so let's dissect that for a minute. Get your scratch pad out here if you wanna follow along at home.

[00:58:28] So, CNBC reported, as you said, that Meta recently tried to acquire Safe Superintelligence, the AI startup launched by OpenAI co-founder Ilya Sutskever. According to sources familiar with the matter, when Sutskever rebuffed the offer (which, by the way, I can never imagine Sutskever working for Zuckerberg), Zuckerberg moved to recruit the startup's CEO and co-founder Daniel Gross.

[00:58:49] Instead, Meta now plans to hire Gross and former GitHub CEO Nat Friedman, as you said, and take a stake in their venture fund to beef up the [00:59:00] company's AI talent. Okay, so who is Daniel Gross? Let's start there. In 2010, Gross was accepted into the Y Combinator program. At the time, he was the youngest founder ever accepted.

[00:59:12] Just for a little background here, Sam Altman became the president of Y Combinator in 2014, but already had a relationship with Y Combinator back in 2010. So there's some crossover there. Gross launched a company called Greplin, a search engine, along with a guy named Robbie Walker. Greplin was designed to allow users to search their online accounts from one location without checking each one individually.

[00:59:37] In 2012, Greplin was rebranded as Cue and launched additional predictive search features. Now, this is an important note. In 2013, Apple acquired Cue for an undisclosed amount of money, reported to be between $40 and $60 million. They then shut Cue down, [01:00:00] and shortly after, Gross joined Apple as a director focused on machine learning.

[01:00:05] So now we have: Gross creates Cue, sells it to Apple, becomes a director at Apple focused on machine learning. In 2017, Gross joined Y Combinator as a partner, where he focused on AI. So 2017 is the year the transformer was invented by the Google Brain team, which became the basis for the generative pre-trained transformer, GPT-1, at OpenAI.

[01:00:31] Altman was running Y Combinator at that time. So in 2017, OpenAI was two years old, but Sam was still functioning as the president of Y Combinator. He had not yet had his blowup with Elon Musk that led to him becoming the CEO of OpenAI. In 2021, Gross and Nat Friedman started making significant investments in the AI space, as well as running a program to build AI-native companies called AI Grant.

[01:00:57] And then in June 2024, he co-founded Safe Superintelligence [01:01:00] with Ilya. So that's Gross. Who is Nat Friedman? In 2011, Friedman co-founded Xamarin, where he became the CEO. In 2016, that company was acquired by Microsoft. Then in June 2018, Microsoft announced a $7.5 billion acquisition of GitHub.

[01:01:23] The company simultaneously announced that Friedman, then a Microsoft corporate VP, would become GitHub's CEO. So these are two major players over the last 15 years in the AI space, with connections to Apple, OpenAI, Microsoft, and Meta. So then The Information reports that Friedman has been involved with Meta's AI efforts for at least the past year.

[01:01:46] In May 2024, he joined an advisory board to consult with Meta's leaders about the company's AI technology and products, after earlier running GitHub from 2018 to 2021. Earlier this year, Zuckerberg asked Friedman to lead Meta's AI efforts [01:02:00] altogether. Someone disclosed to The Information that he declined, but helped brainstorm other candidates, including Alexandr Wang.

[01:02:08] While Zuckerberg was skeptical Wang would leave Scale, Friedman convinced him a deal was possible. So they obviously know each other and back-channeled some stuff. So he is currently expected to report to Wang. So here we have Friedman now reporting to Alexandr Wang, who's only in his twenties, if I'm not mistaken.

[01:02:25] I think he may actually be 28. Right. Okay. Yeah, he's super young. So Wang is 20 years Friedman's junior. Mm. Both men will be part of a group of Meta leaders that Zuckerberg refers to as his management team, or M team. Friedman and Gross have invested in some of the biggest AI startups, including Perplexity.

[01:02:46] So that leads us to Apple. So Apple, it came out in Bloomberg, may be in the market for an acquisition as well. I've said many times I thought Apple had to make an acquisition. It's just not working with only Apple's [01:03:00] homegrown technology. So this article reports that Apple and Meta have been waging a broader fight for talent.

[01:03:06] Meta recently engaged in discussions to hire Daniel Gross, the co-founder of Safe Superintelligence. While discussions between Meta and Gross are advanced, Apple has attempted to persuade him to join it instead. Mm-hmm. So with Gross, who sold his company to Apple in 2013, Apple is trying to recruit against Zuckerberg.

[01:03:25] So in 2013 he sold Cue, but when he joined Apple, that purchase of Cue helped form the basis for the early AI features in iOS, the operating system of the iPhone. And then his co-founder, Robbie Walker, who we talked about earlier, actually oversaw the Siri voice assistant until this year, when he was, I think, pushed aside.

Just wild. So, and then again, there was one other article we'll drop a link to. And again, I wanna keep this rapid-fire-ish, but just so you understand the background on [01:04:00] Apple: they historically don't make big acquisitions. Their biggest acquisition ever was $3 billion for Beats, Dr. Dre and Jimmy Iovine.

Right? Jimmy Iovine, yeah. Apple has only made three transactions totaling $1 billion or more in its entire history. And as we know, these AI startups aren't going for a little bit of cash. But who has money? Apple does: $130 billion in cash. Actually, the article in Bloomberg says they don't think Anthropic or OpenAI are logical targets, just given their valuations.

[01:04:32] Yeah. Plus, Anthropic is deep with Amazon and Google. But Perplexity, this is why it might make more sense. And then the other one that I actually flagged, I dunno if I said this last week or not, but Cohere might make a ton of sense. Cohere was founded by, and its CEO is, Aidan Gomez, who is one of the authors of the Google paper

[01:04:52] "Attention Is All You Need" that created the transformer. Hmm. Mistral is another potential target. And then the name I would watch for, I don't [01:05:00] understand why we're not hearing more about him, but Andrej Karpathy. Like, I don't see his name being talked about anywhere in these acquisitions, but I have to imagine he's one of the people having a bunch of money thrown at him.

[01:05:10] So he led AI at Tesla, he was at OpenAI for two different stints, and he is relatively a free agent right now. He's got his own thing he's doing, but he's not connected to any major labs. And then the other name that I would keep an eye on is Noam Brown at OpenAI, who I believe is one of the people who got the hundred-million-dollar offer to go back to Meta, which is where he was before OpenAI.

[01:05:31] So there's like 10 to 20 major AI researchers, and everybody's up for grabs right now, basically. Or they're trying to throw as much money as possible at these people. It's wild. And then it came out that Apple actually tried to go after Mira Murati's startup Thinking Machines Lab, which just raised $2 billion.

[01:05:52] Like it has truly become a soap opera and it is hard to keep track of all the players. 

[01:05:59] The OpenAI / Microsoft Relationship Is Getting Tense

[01:05:59] Mike Kaput: Well, [01:06:00] speaking of soap operas, in another topic this week, the OpenAI-Microsoft partnership is in some tension, it seems, at the moment. So OpenAI is deep in negotiations with Microsoft, its biggest investor, as it prepares to restructure and raise up to $40 billion.

[01:06:17] But things are getting a little complicated, so there's some conflict around who controls what. Microsoft has sweeping rights to OpenAI's IP, preferred access to its models, and the exclusive right to sell them via Azure. OpenAI instead wants to diversify its cloud partners and keep Microsoft from getting access to tech it views as strategically sensitive.

[01:06:36] IT views as strategically sensitive. So one high profile example of this is there's kind of a battle over the code and models and IP behind OpenAI planned acquisition of Winder. OpenAI wants Microsoft to trade its share of future profits that are in place at the moment for a 33% equity stake in its new non for its new for profit entity.

[01:06:59] It wants to [01:07:00] cut Microsoft's cloud exclusivity, renegotiate their revenue split, and exempt completely this possible $3 billion acquisition of Windsurf from IP sharing. Now, Microsoft does not necessarily want all these things. It wants access to OpenAI's tech even after AGI arrives, and they cannot even agree on what AGI means in the first place, because under their deal, Microsoft's rights end when OpenAI reaches AGI.

[01:07:28] But it seems like there's some confusion or some misalignment on what that term actually means. Now, what's kind of crazy here is tensions over these negotiations have grown so bad that OpenAI reportedly considered accusing Microsoft of antitrust violations, potentially going public with claims of anti-competitive behavior tied to their exclusive contract.

[01:07:52] So Paul, that last bit seems particularly extreme. Are we headed for a messy OpenAI Microsoft breakup? [01:08:00] 

[01:08:00] Paul Roetzer: It definitely does not appear to be what Sam and Satya presented when they were together. Part of this, so interestingly, on the Windsurf one, just to go back to the previous conversation: the friction there is that the Windsurf acquisition competes directly with Microsoft's GitHub Copilot. And Nat Friedman was the CEO of GitHub.

[01:08:21] Yeah, I mean, we could probably spend a bunch of time on this one. I won't right now. But again, I'm not getting paid to plug this book, but Empire of AI actually has a whole bunch of information related to the Microsoft-OpenAI deal and relationship that I had never heard before. And so if you wanna understand the friction happening between those two companies today, I would go read the origin story of how that relationship came to be and some of the challenges they've been facing.

It does a really good job of reporting on it. 

[01:08:53] Veo 3’s IP Issues

[01:08:53] Mike Kaput: Definitely worth checking out. So next up: Google's Veo 3 video generation model is stunning the [01:09:00] world with its ability to create hyperrealistic AI-generated videos, but it is also waking up many YouTube creators to a jarring realization: their content may have helped train it, and they had no idea.

[01:09:12] CNBC reports that Google has quietly been using its massive YouTube video library to train models like Veo 3. Google says it's only using a subset of videos and honors agreements with creators and media companies. But there's also no way for individual uploaders to opt out of this. And the issue, at least according to CNBC, is that creators never really got a heads-up here.

[01:09:35] Many experts think this could trigger a major IP backlash, because the platform's terms of service do give YouTube broad rights to use uploaded content, but clearly the communication here was not very clear at all. And creators, at least a lot of them, did not expect that to mean that Google was going to train AI that can ultimately compete with them.

[01:09:58] So Paul, we've already started to [01:10:00] see the effects of this play out. Veo 3 is absolutely able and willing to produce content that's a clear violation of IP, at least as of today. For instance, we were talking about this offline: venture capitalist Olivia Moore posted a ton of examples of Veo 3

[01:10:18] producing well-known characters from Disney properties. And we talked on episode 153 about Disney suing Midjourney for doing that exact same thing. I mean, it's certainly possible YouTube has all the rights to use the YouTube content, but that doesn't mean they can just reproduce IP like this.

[01:10:37] Right? 

[01:10:37] Paul Roetzer: Yeah, I don't understand what's going on here. I think I said this on the last episode: I thought that they were trying to make an example out of Midjourney 'cause it was an easier target initially. Yeah. But I haven't seen any comments from either side. Like, I haven't heard Disney comment about Veo 3's capabilities.

[01:10:55] I haven't heard Google address the fact that they're able to do these things. It's [01:11:00] quite bizarre, honestly. Like, I follow a number of IP attorneys online, and everyone just basically has the same take of, like, yeah, this seems totally illegal, but Google's just doing it and nobody's stopping them, and

[01:11:16] I don't know. It's so bizarre. But I assume as this year goes on, we'll start to get a little bit more clarity into what's going on here. I'm sure there's a bunch of legal stuff happening behind the scenes. Maybe there's licensing deals being hammered out, and nobody's gonna talk about it until they just knock out a licensing deal.

[01:11:33] I don't know. I mean, it is a fascinating topic, but we don't have any crazy insights right now beyond, you know, what you can kind of read online. We're observing it like everybody else. 

[01:11:44] Mike Kaput: Yeah. If someone has any more info there, I'd love to hear it, because, you know, in my research so far, I've not been able to find how they're allowed to do this, slash how they're not getting sued for it.

[01:11:55] Paul Roetzer: Yeah, and the answers you get from the leadership in public are [01:12:00] just, like, non-answers. Yeah. They're just these PR talking points where they talk around the question. It's very kind of political in nature, how they answer these things. 

[01:12:09] HubSpot CEO Weighs In on AI’s SEO Impact

[01:12:09] Mike Kaput: All right. Next up, HubSpot CEO Yamini Rangan has published a really great post on LinkedIn about AI's impact on search.

[01:12:16] Now, it's not very long, but I think she hits on some interesting points here. She said stuff like: website traffic was a valuable metric correlated to growth; now, it may be a vanity metric. Search has been disrupted. Visits to your website are declining. She cites how AI Overviews appear in 43% of Google searches, and when they do, organic CTR drops by nearly 35%. AI Mode from Google and audio AI Overviews, those are coming.

[01:12:42] They will cause clicks to collapse further, and more buyers are using LLMs to find information. So she basically sets up this argument and then gives advice to marketers on what to do about it, including things like be everywhere and diversify your channels, and be specific with context, which means making your content [01:13:00] deeply relevant and personalized to buyers.

[01:13:02] And starting to optimize for conversions, not clicks, which means focusing on how to convert more people and not focusing as much on how to get a ton of traffic. So definitely go read the whole post. It's only a few paragraphs. But Paul, I thought this was pretty sound advice. I think it's refreshing to see more leaders talking about this, because I know it's a hot topic, but not everyone wants to admit that traditional search is in terminal decline.

[01:13:29] Paul Roetzer: Yeah, and I mean, they obviously have a ton of data. Like, this is the key: people that have access to lots of anonymized data from customers can start to truly see the impact. And for a company that has built itself around the idea of inbound traffic to a website and then, you know, converting that traffic, for her to come forward and say this is kind of where it's going,

[01:13:49] I think it's important that people are listening. And, you know, people that work with brands, people that work at agencies: as you start really moving into late 2025 and into [01:14:00] 2026 planning, you need to deal with this reality. Mm-hmm. And you start evolving your strategies as a result of it. Diversify your channels; wherever your audiences go, go there.

[01:14:08] You can't just have everything at the home base anymore and assume people are gonna find you, or that you're gonna be able to drive them there through organic traffic and paid search. So, yeah, I mean, for us, it sort of serendipitously happened that we fell into the podcast as our primary platform.

[01:14:25] 'Cause we just wanted to talk about it, and apparently, you know, other people eventually wanted to listen to it and talk about it too. And so the podcast became our fastest-growing audience by far. So yeah, I've said it on past podcasts: I'm not even really focused on organic traffic now.

[01:14:42] I kind of gave Mike the directive of, like, I don't even care. Like, we should track it for sure. Yeah. And watch the trend. But let's just assume it goes to zero and let's, yeah, accommodate, you know, from there. So I think that's an important thing for people to kind of start to accept. 

[01:14:57] Mike Kaput: Yeah, I definitely sympathize with brands, where this [01:15:00] is a huge shift to navigate. And like, it's not gonna happen overnight, and you might not want to even admit it's happening.

[01:15:06] But I do like her advice, because what do you have to lose by focusing more on conversions? Like, you don't have to overhaul everything overnight. I would say, folks, start there. I mean, that's not gonna hurt overall, and that's gonna be very relevant to your bottom line. So, you know, that's maybe a good baby step to start righting the ship, I guess, in this respect.

[01:15:29] The Pope Takes on AI

[01:15:29] Mike Kaput: All right, next up. The newly appointed Pope Leo the 14th is making AI a moral issue at the center of his papacy. So just days after being elected, the American-born pontiff stood before the College of Cardinals and drew a historic parallel to his namesake, Leo the 13th, who defended workers during the Gilded Age.

[01:15:50] Pope Leo says this is a new industrial revolution, driven by AI, that demands a firm response to protect human dignity, justice, and [01:16:00] labor. Now, for years, tech giants like Google and Microsoft have courted the Vatican, hoping to align their ambitions with the church's moral authority. But now it sounds like this Pope is calling for a binding international treaty to regulate AI, which is a move that many in the tech world believe could stifle innovation.

[01:16:19] So Paul, you mentioned to me offline that this topic could be an indicator of a potential societal backlash coming against AI. Could you maybe unpack that thought? 

[01:16:29] Paul Roetzer: Yeah. So I've talked about this a little bit. when we talked about the namesake and like why he picked the name he did.

[01:16:34] Yeah. And the church's relationship with, you know, Silicon Valley, to generally connect to the technology world. And my assumption here is, as I've said, I think AI becomes a very political issue going into the midterm elections in the United States next year. You know, so probably like spring of 2026 it starts to become a very real issue, potentially sooner [01:17:00] if the negative effects of AI start to take hold.

[01:17:03] I could see that happening sooner. We may see it play out through things like how people are reacting to Waymos and, mm-hmm, Tesla robotaxis. You know, it might happen in more prominent technologies at first. But you also start to see it in terms of the impact these data centers have on different communities, and the impact on the environment, all that stuff.

[01:17:24] So I think it matters to know what's happening at the Catholic Church. The Catholic Church accounts for 1.4 billion people. Mm. Like, there's 1.4 billion Catholics in the world, and the largest portion is within the Americas: 47% of the world's Catholics belong in the Americas. 27% reside in South America, then 6.6% in North America and 13.8% in Central America.

[01:17:53] So those are from the Vatican, like, their actual data. So when we think about the ability to influence how [01:18:00] society feels about a topic, that's 1.4 billion people that can be influenced by what the Pope says about AI. And so then if you mix that with the political side, we're heading into the next 12 months where we may actually see shifts in public perception and sentiment around AI being driven by politics and religion. It's a very real possibility.

[01:18:20] So yeah, we don't wanna go deep on this right now, but I think, again, it's just important for people to realize this is a much bigger topic, and it's now at the levels where religious leaders and government leaders are going to make it a fundamental part of their own platforms. Right. 

[01:18:38] Mike Kaput: All right, Paul.

[01:18:39] AI Product and Funding Updates

[01:18:39] Mike Kaput: So in our final topic, we've got some AI product and funding updates I'm gonna run through, and feel free to chime in on anything here. But first up, you had alluded to this before: six months after launching Thinking Machines Lab, ex-OpenAI CTO Mira Murati has secured a jaw-dropping $2 billion seed round that catapults the company to [01:19:00] a $10 billion valuation, even though it has not released a product or a revenue plan.

[01:19:04] Some people believe the company may be pursuing AGI, but her team remains strategizing behind closed doors. AI video generator HeyGen has launched a new feature called Product Placement. With Product Placement, you upload your product photo, choose one of their AI avatars, drop in your script, and it turns it all automatically into a user-generated-content ad.

[01:19:29] This feature is now available to everyone in HeyGen. A new type of AI company in the legal space just came out of stealth. It's called Crosby. And what's interesting about it is it combines custom AI software with human lawyers to deliver their product and service offering, which is contract review in under an hour, and sometimes in minutes.

[01:19:51] The idea here is they own the whole legal workflow, from software to service delivery, and they say that allows them to actually reimagine from the ground up how [01:20:00] legal work gets done. The co-founders, Ryan Daniels and John Han, have roots in both law and tech. Daniels practiced at a law firm and ran legal ops at fast-scaling startups.

[01:20:11] Han helped build the tech startup Ramp's engineering team. ChatGPT's new record mode feature is now available for Pro, Enterprise, and EDU users in the macOS desktop app specifically. It was previously launched a few weeks ago for Team users in that app. Record mode captures meetings, brainstorming, voice notes, whatever, you know, vocal material you are interacting with.

[01:20:36] And it'll do that right within ChatGPT. So then you can use that material with ChatGPT in any way you want to prompt it.

[01:20:43] Paul Roetzer: Mike, on that one? I thought I saw, too, that it was like totally rolled out, but I still don't have it in our team account. Like, I don't... 

[01:20:49] Mike Kaput: Well, it's gonna... are you in the Mac OS app?

[01:20:54] Oh, that's because, and that's the critical thing here: it's getting a lot of attention, but I think people sometimes [01:21:00] underreport that it is just in that app at the moment. I assume it's coming. Use the app, I just told you. 

[01:21:05] Paul Roetzer: Use the app. I use the website. Honestly, I didn't even know there was a Mac OS app. Yeah, I confess, I do not use the Mac OS app.

[01:21:11] Mike Kaput: I would imagine, though, this is rolling out to other accounts or other platforms. Huh? Okay. Then last but not least: Google's Gemini models just took a big leap into enterprise territory. The 2.5 versions of Gemini Flash and Gemini Pro are now officially production-ready on Vertex AI.

[01:21:34] And there's a new ultra-efficient Flash-Lite version in public preview, designed for high-volume, cost-sensitive tasks. There's also a new API for real-time audio, and supervised fine-tuning is now generally available for Flash, which means businesses can adapt the model to their own data and domain with less effort and more precision.

[01:21:55] Paul Roetzer: Alright, one final note on the episodes. We've got a second episode this week. [01:22:00] So episode 156 is gonna be an AI Answers episode, as a follow-up to our Scaling AI class that we did last week. I think we had like 600 or 700 people registered for that one. So if you haven't heard AI Answers before, it's a new series we're doing where, after we do our Intro to AI and Scaling AI classes each month,

[01:22:19] we then do an AI Answers episode where we go through all the unanswered questions. We usually get dozens of questions, and we try and answer as many as we can. So Cathy McPhillips and I will be back with you for episode 156 on June 26th, and then Mike and I will be back for episode 157 on Tuesday, July 1st.

[01:22:37] That will be our regular weekly episode.

[01:22:40] Mike Kaput: Great, Paul. Thanks as always for breaking everything down for us. 

[01:22:43] Paul Roetzer: Yeah, thanks, Mike. And I hope everybody enjoyed the AI soap opera. We'll be back with another edition next week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey, and join more than [01:23:00] 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.

[01:23:16] Until next time, stay curious and explore ai.
