Marketing AI Institute | Blog

[The AI Show Episode 147]: OpenAI Abandons For-Profit Plan, AI College Cheating Epidemic, Apple Says AI Will Replace Search Engines & HubSpot’s AI-First Scorecard

Written by Claire Prudhomme | May 13, 2025 12:15:00 PM

Search is changing, college students are cheating, and OpenAI just hired a new CEO of Applications. This week, Paul and Mike dissect OpenAI’s latest moves, discuss how AI tools are fueling a cheating crisis in education, and explain why our relationship with search is headed for a hard reset. Rapid-fire hits cover AI-first CEO memos, new product launches, new funding and more.

Listen or watch below—and see below for show notes and the transcript.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:05:52 — OpenAI Abandons Plan to Become For-Profit Company

00:14:39 — AI Is Causing a Cheating Epidemic in Schools

00:30:21 — Apple Says AI Will Replace Search Engines

00:41:06 — OpenAI Hires CEO of Applications

00:46:33 — Sam Altman Testifies Before US Senate

00:53:53 — Fiverr CEO’s Blunt AI-First Memo and More Quiet AI Layoffs

00:56:34 — AI-First Scorecards

01:00:51 — The AI Diffusion Rule Is Dead

01:04:22 — AI Product and Funding Updates

01:07:40 — Listener Question

  • How much hands-on technical experience do you need to build your own custom GPTs?

Summary:

OpenAI Abandons Plan to Become For-Profit Company

OpenAI says it is hitting pause on its plan to become a traditional for-profit company, a dramatic reversal that keeps its original nonprofit in control.

This decision comes after pressure from civic leaders and legal scrutiny from the attorneys general of California and Delaware, who oversee nonprofit compliance. The move also hands a partial win to Elon Musk, who’s still suing OpenAI for allegedly straying from its nonprofit roots.

Instead of going full for-profit, OpenAI will convert its commercial arm into a Public Benefit Corporation, a legal structure designed to balance profit with purpose, similar to what Anthropic and xAI use. Investors will now hold standard equity with no cap on returns, ending the previous 100x limit imposed on returns.

CEO Sam Altman, who still doesn’t hold equity in the company, says the old structure made sense when there was only one big AGI bet—but not in today’s world, where multiple labs are chasing the same goal. He now says OpenAI will need “hundreds of billions, maybe trillions” of dollars to bring its vision of AGI to life.

Despite the structural shift, OpenAI’s nonprofit will stay in charge. It will also hold equity in the new PBC, allowing it to grow its resources.

AI Is Causing a Cheating Epidemic in Schools

The AI cheating crisis in higher education has officially hit a breaking point.

A powerful new exposé in New York Magazine reveals just how deeply generative AI has upended college life. Students across universities—elite, public, and community—are now using ChatGPT and other AI tools to handle everything: note-taking, studying, data analysis, and especially writing. For many, it’s not just a shortcut. It’s the default.

One Columbia student admitted AI wrote 80% of his coursework. Another launched a startup to help others cheat on coding interviews—and got suspended. A freshman who opposes cheating still uses ChatGPT for essay outlines “every time.” The irony? Her latest paper was about how education helps us think critically.

Educators are scrambling. Some try AI detectors, others plant “Trojan horse” phrases like “mention Dua Lipa” in prompts. But, according to the report, nothing seems to stick. Detection is unreliable, policy is murky, and enforcement is often discouraged. One TA was told to grade AI-written work “as if it were a real paper.”

The result? A growing sense of despair. Professors are quitting. Writing is viewed as obsolete. And a generation of students is gliding through college without ever fully engaging in the learning process.

The consequences may not hit until these students graduate—ill-equipped, uninspired, and easily replaceable in an AI-driven workforce. But by then, according to at least some of the educators interviewed here, it may be too late.

Apple Says AI Will Replace Search Engines

Apple may be preparing to end one of the most lucrative partnerships in tech—its $20 billion-a-year search deal with Google—as it eyes a future powered by AI.

During testimony in the DOJ’s antitrust case against Google, Apple services chief Eddy Cue revealed that Apple is “actively looking at” integrating AI search engines like ChatGPT, Perplexity, Anthropic, and even Elon Musk’s Grok into Safari. While Google may remain the default for now, Cue made clear: the era of traditional search is ending. AI is the new frontier.

Safari search traffic just declined for the first time ever, which Cue attributes to users turning to AI instead. That data point, small as it seems, could signal a massive behavioral shift. Cue also hinted that we might not even use iPhones in ten years—the next tech shift, he said, is already underway.

The implications are huge. Google’s dominance, and its ad revenue, rest heavily on being the default search engine. Alphabet shares tumbled over 7% after Cue’s remarks. Apple shares dipped too, signaling the financial hit both could take if their deal collapses.

This episode is brought to you by our 2025 State of Marketing AI Report Findings Webinar. 

Join us this Wednesday, May 14th at 12 PM ET, as we unveil the findings of our 2025 State of Marketing AI Report. This is our fifth-annual report, and it’s our most in-depth look yet at how marketers and business leaders are adopting AI.

Register for live and on-demand access, plus an ungated copy of this year’s report, at www.stateofmarketingai.com.

This episode is also brought to you by the AI for B2B Marketers Summit. Join us on Thursday, June 5th at 12 PM ET, and learn real-world strategies on how to use AI to grow better, create smarter content, build stronger customer relationships, and much more.

Thanks to our sponsors, there’s even a free ticket option. See the full lineup and register now at www.b2bsummit.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: I don't think traditional search exists in the near future. Like I don't know why I would ever go to a traditional search engine. Now, welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host.

[00:00:24] Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.

[00:00:47] Welcome to episode 147 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording this at an unusual time. It is Friday, May 9th at 3:40 PM Eastern time. We usually record these things on Monday mornings, but I have been traveling all week. I just got back home about a half hour ago.

[00:01:10] Paul Roetzer: I prepped for this on the plane ride. Luckily the United wifi was working. But the reason I'm not here on Monday is I'm actually doing something that is somewhat crazy that has become an annual thing for me. So my friends Joe and Pam Pulizzi created something called the Orange Effect Foundation back in 2014, I think it officially started.

[00:01:28] So the Orange Effect Foundation makes sure children with speech disorders receive the speech therapy and technology equipment they need, especially for families that don't have the financial means to otherwise provide for it. So a few years back, Joe, in his brilliant, innovative, entrepreneurial mind, okay, decided to create a hundred-hole golf marathon, where about 30 or 40 of us who are crazy enough to do this every year play 100 holes of golf [00:02:00] in a single day.

[00:02:01] So we tee off at 7:00 AM. It usually takes until, I'm normally done about 7:38 PM, but I know there are some people who, you know, race the sunset and just take their time and enjoy a few drinks around the course. But we play a hundred holes in the day to raise money. So it's the golf-for-autism outing. So for the past 17 years, they've been doing a variation of these things.

[00:02:23] They've raised over $600,000, delivering services to 419 children in 39 states through these programs. So just an incredible organization. If you know Joe and Pam, you know they're just the best people, and they've done an incredible job with this cause and this foundation for years. So that's where I'll be on Monday and why we cannot record then. I will be in traction on Tuesday morning, I'm sure.

[00:02:47] 'cause I have not swung a golf club since October of last year. So it's always just like a survival-of-the-fittest kind of day. So, if you do have any interest in this organization, you can go to theorangeeffect.org and learn more about it. We'll drop that link in the show notes, and then if anybody's interested in supporting it, I will drop my personal link, zero obligation.

[00:03:06] But if anybody wants to support the cause, if this is something that's near and dear to your heart, we'll put a link in there for my page for the event as well, and you can take a look at that. Alright, so this episode is brought to us by two upcoming events this week. So if you're listening to this prior to Wednesday, May 14th, we're gonna kick off the week with a state of the industry webinar.

[00:03:32] So we just completed our 2025 state of marketing AI research. We are going to be releasing that. Mike and I will be hosting a webinar. We're actually gonna go through the key findings of that report on Wednesday, May 14th. You can go to stateofmarketingai.com and click on the webinar link at the top and you'll register for it.

[00:03:51] It is free. We're gonna be releasing the report that day. So not only can you come and hear about the findings, we will actually make the report available for free [00:04:00] download. That webinar is gonna be at noon Eastern time on Wednesday, and it'll be provided on demand to anyone who registers for it.

[00:04:07] So this is our fifth annual report. We had what, over 1800, 1900 people this year? 

[00:04:12] Mike Kaput: Yeah, about 1900. It's the most we've ever had. 

[00:04:15] Paul Roetzer: Yeah, and it's, it's awesome. Like I read it, about two weeks ago. I went through and saw the draft of everything. There's some really cool insights. So check that out. Again, that is Wednesday, May 14th.

[00:04:25] If you're listening to this after the 14th, don't worry. You can go to the page, still download the report, and register to get the on-demand webinar. And then also this week we have another free session. This is our Scaling AI monthly class that I teach. So this I think is the eighth month in a row.

[00:04:42] I've been doing this one. I do it live every month. It's through Zoom webinar. I think we're in Zoom, right? Yeah, that one's on Zoom. So you can go to scalingai.com. That page has information about our Scaling AI course series, but right at the top there is a register-for-the-webinar link, and that'll [00:05:00] actually take you to the free webinar that is called Five Essential Steps to Scaling AI in Your Organization, where I will walk through the five core steps that we teach every enterprise we talk to, regardless of size, on what steps they should go through.

[00:05:12] So that is taking place Thursday, May 15th, at noon. That also will be available on demand to anyone who registers. So again, you can go to scalingai.com and click on the webinar link at the top of the page, and that will take you to register for free. And like I said, while you're there, you can learn about the Scaling AI Certification course series as well.

[00:05:33] Okay, so we got a big week of education coming up. We have continuing craziness in the AI world, and so even though we're on a short week here, doing this on Friday, we had to move like 15, 20 things to the newsletter because there was just a lot of big stuff going on. So I will let Mike take it away.

[00:05:52] OpenAI Abandons Plan to Become For-Profit Company

[00:05:52] Mike Kaput: Thank you, Paul. So, first up, some big news in the sense that OpenAI says it is [00:06:00] hitting pause on its plan to become a for-profit company. This is a bit of a dramatic reversal that will, if it goes through, keep the original nonprofit in control. This decision comes after pressure from civic leaders and legal scrutiny from the attorneys general of California and Delaware, who oversee nonprofit compliance.

[00:06:22] This move also, at least in the public narrative, hands a partial win to Elon Musk, who is still suing OpenAI for allegedly straying from its nonprofit roots. So instead of going full for-profit, as of right now, OpenAI says it will convert its commercial arm into a public benefit corporation, which is a legal structure designed to balance profit with purpose.

[00:06:45] It's similar to how Anthropic and xAI themselves are structured. Investors will now hold standard equity with no cap on their returns. Previously, they had a 100x limit on whatever they had [00:07:00] invested as their returns. CEO Sam Altman, who still doesn't hold equity in the company, says the old structure made sense when there was only one big AGI bet.

[00:07:10] But not in today's world, where multiple labs are chasing the same goal. He now says OpenAI will need hundreds of billions, maybe trillions of dollars to bring its vision of AGI to life, and this is now the best way to do that. So OpenAI's nonprofit, at least as they have this framed, will stay in charge and will hold equity in the new public benefit corporation, allowing it to also grow its resources.

[00:07:36] So Paul, there's a lot to unpack here, and I think first, you know, regardless of what Sam has written about why they're doing this, about the details of it, like, why is there this sudden about-face? Because I don't think this was the original plan.

[00:07:53] Paul Roetzer: Yeah. I mean they may have certainly like learned that it was just a better structure through their research, or they might have just been told by the attorney [00:08:00] generals that they were never going to get approval to do this.

[00:08:03] so they obviously just learned there was some barrier that just wasn't gonna make this worthwhile or there was just a better alternative path. I think the key thing for me is it's not a done deal per se. Like they still need approval from Microsoft. They still need, it sounds like the blessings from the Attorney Generals of California and Delaware.

[00:08:23] So Bloomberg, you know, had an article about this, which we'll put in the show notes, that Microsoft, which has invested $13.75 billion in OpenAI, remains the biggest holdout among investors as the ChatGPT maker tries to restructure. The software giant wants to make sure that any changes to the structure adequately protect Microsoft's investment.

[00:08:42] Microsoft is still actively negotiating details of the proposal. And then it also said that Microsoft isn't the only party OpenAI needs buy-in from; the state attorneys general of California and Delaware are responsible for overseeing the conversion. So OpenAI needs to do a fair market valuation on the nonprofit [00:09:00] stake in the future for-profit entity and is asking the state attorneys general for input.

[00:09:05] Then The Information said that Delaware Attorney General Kathy Jennings said in a statement that she had expressed concerns to OpenAI about its earlier reorganization plan and would review the new plan. So they're moving in this direction. Now, I was aware of public benefit corporations. I've heard the term plenty of times.

[00:09:23] I'm aware that Anthropic was one, but I honestly, like, don't know a hell of a lot about them, enough to be able to explain to people what actually makes 'em different. So I just went and had this conversation with ChatGPT about it, and I thought it gave a pretty good synopsis. So the PBC, as you had talked about, Mike, is a type of for-profit, but it's legally required to consider both financial profits and a broader social or environmental mission.

[00:09:48] So their legal duty is they have to balance profit and public mission. So when I looked at it, I was like, okay, but like, who monitors that? How's it governed? How is it measured? Right? Like, I just have more questions about this. And [00:10:00] so it said, in essence, there's internal oversight: the board of directors sort of takes responsibility for this.

[00:10:07] There's benefit officers or committees, which are optional from a governance perspective. There's incorporation documents that sort of lay this out. There's fiduciary duty for the directors and officers, who are legally required to balance the financial interest and the stated public benefit.

[00:10:23] So yeah, kind of in short, it's monitored by leadership and sometimes dedicated staff. It's governed by legal obligations, and then it's measured through regular reporting, sometimes using independent standards. So just kind of background for people. When I look at, like, why are they doing this?

[00:10:40] You know, I think that it cleans things up. The $30 billion investment from SoftBank was contingent on them converting to this for-profit organization by the end of the year. And they may have realized that wasn't gonna happen due to the lawsuits with Elon Musk or the attorneys general pushing back, and that their only path to [00:11:00] do this was to go with this structure.

[00:11:03] And there it was, I think it was the Bloomberg article. Yeah, Bloomberg said that SoftBank has basically already given their blessing to this, like that this will satisfy their desires. Yeah. And the 30 billion will, you know, be cleared. I think it also probably makes it a lot quicker for them to get to an IPO, which I assume they're heading toward.

[00:11:27] So I think that they just need to accelerate this. They need to get the structure right, but they just need to move things along. And I think if they kept on the path they were going on, it was gonna get really messy, and it may be years before they could actually do this, whereas maybe they had a cleaner path through the Public Benefit Corporation, and maybe they just thought it was a better structure.

[00:11:48] Maybe they learned more about it and decided this was the way to go. 

[00:11:51] Mike Kaput: Hmm. So let's say this does move forward as outlined. Is this going to really change [00:12:00] anything about how large or successful they're able to become? Will it impact the path to AGI? I mean, will it change anything about how we experience OpenAI products?

[00:12:09] Paul Roetzer: I think it'll just clear the pathway for them to accelerate what they're envisioning. And I think it'll accelerate the building of the nonprofit into maybe the most powerful nonprofit in the world, the most well-funded nonprofit in human history. Right. So then what do they do with that? You know, then you start getting into, and again, I didn't really think about this until this second, but they understand that what they're going to build, what they assume they're going to build with AGI, is going to change society.

[00:12:39] And the economic structure. And the educational system, not just of America, but of the world. And they have a responsibility to be doing more to prepare for that. You know, Sam is, they did a UBI study, Universal Basic Income, what, seven years ago? They started that study, I think. So I think they have to, their nonprofit would most [00:13:00] likely start getting far more involved in thinking about things like that.

[00:13:03] I could actually see a scenario where their nonprofit maybe plays a role in providing that UBI. I wouldn't actually be shocked at all if they did envision a world where that nonprofit was a trillion dollar nonprofit, and that trillion dollar nonprofit kicked off X amount a year, basically, to provide income to people. Like, my guess is they're thinking that big, that they need to actually solve for the impact of AGI on society, which comes with education, financial ramifications, lots of other things.

[00:13:34] And that's why they're basically gonna say this nonprofit is gonna be the most well-funded thing in human history, because what it needs to do is going to be massive.

[00:13:43] Mike Kaput: So, just as a funny aside, if you think of all the sci-fi predictions in books and movies, it's usually in the far future, like it's some huge corporation that's the most powerful, important thing.

[00:13:55] And what if it ends up just being a nonprofit with 

[00:13:58] Paul Roetzer: In 20 years, a trillion dollar [00:14:00] corporation? Yeah. Again, we don't know much, but I would actually be more surprised if that wasn't what they were thinking. If they weren't operating under the assumption that AGI is here and that within

[00:14:13] five to 10 years, it has taken hold throughout society and it's truly just changing everything. That nonprofit needs to be planning today for what that looks like. And it actually would make a little more sense of some of the hires and initiatives they've done recently around, you know, AI literacy and the studies around UBI.

[00:14:32] Like, I think they've probably been laying the groundwork. When we zoom out, you can probably actually see the groundwork being laid for those sorts of things.

[00:14:39] AI Is Causing a Cheating Epidemic in Schools

[00:14:39] Mike Kaput: Our second big topic this week is about the AI cheating crisis in higher education. So there is a powerful new report in New York Magazine that's getting a ton of buzz that shows just how deeply generative AI has started to upend college life. Students across universities, [00:15:00] elite, public, and community colleges, are now using ChatGPT and other AI tools to handle everything: note taking, studying, data analysis, and especially writing, as the report finds.

[00:15:11] It's not just a shortcut, it is the default. One Columbia student admitted AI wrote 80% of his coursework. Another launched a startup to help others cheat on coding interviews for jobs, and got suspended from his school and blacklisted from a bunch of others. A freshman who opposes cheating, according to the report, still uses ChatGPT for essay outlines every time she writes.

[00:15:36] And the irony was, they featured one of her papers that was about how education helps us think critically. So as a result of all these anecdotes reported here, educators are scrambling. Some have been trying AI detectors; others put Trojan horse phrases in their assignments that the AI picks up on, so they know they were [00:16:00]

[00:16:01] But according to New York Magazine, nothing seems to stick. Detection is, as we know, unreliable and enforcement of any type of policies is sometimes discouraged. One teaching assistant was told to actually grade AI written work as if it was a real paper. So they kind of weaved together all these different anecdotes and unfortunately they kind of paint this bigger picture among.

[00:16:25] Higher ed professionals have a growing sense of despair. Professors are quitting. Writing is increasingly being viewed by some of them as obsolete, and a generation of students is gliding through college without ever fully engaging in the learning process. these consequences may not really hit until students graduate, but some of the educators interviewed think it may just be too late and they may be ill-equipped and just uninspired and easily replaceable in the AI driven workforce.

[00:16:55] So Paul, first up, whether you, you know, [00:17:00] agree with this story or not, I think everyone should go read it. We'll obviously link to it in the show notes. It paints a really dire picture, and it's not so much them complaining that students are using AI, but more the ways in which it's basically being used, in the stories they relate, to completely hack the education system faster than that system can adapt.

[00:17:21] So, as we're looking at this, and we've talked about the importance of AI in schools, how big of a problem is this actually, when you go beyond the headlines?

[00:17:31] Paul Roetzer: Yeah, I think it's way bigger than most people realize. Most parents, I think most teachers and professors, like, they're seeing it firsthand now, but I've spent time with deans and provosts, and I'm not sure that the totality is being comprehended right now, so I flagged this one for us.

[00:17:53] This was kind of blowing up at the beginning of the week, or over the weekend; I forget when it first came out. It was all over Twitter. [00:18:00] And I read it on the plane ride home, and it's like a 30-minute read. Like, it's a really long article. Yeah. And like you, Mike, I would highly recommend people go read this.

[00:18:10] Like, you really need to spend some time with this. So, the way I wanna do this is, you know, I tried to summarize it in like three points and I just can't, so I'm just gonna read some excerpts, and then if there's anything, Mike, that you wanna react to here, like, jump in. But I think that the writing is so good.

[00:18:29] Like, the storytelling was so good that it would do it an injustice to not just take the excerpts and react to them. So I'm just gonna go through some here. So, this is straight from the article: generative AI chatbots, ChatGPT, but also Google's Gemini, Anthropic's Claude, Microsoft's Copilot, and others, take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays.

[00:18:56] STEM students are using AI to automate their research and data [00:19:00] analysis and to sail through dense coding and debugging assignments. Quote, college is just how well I can use ChatGPT at this point, a student in Utah recently captioned a video of herself copying and pasting a chapter from her genocide and mass atrocity textbook into ChatGPT.

[00:19:17] So that starts to give us a little context of the scope. This is the one that I was just laughing at. I didn't know what else to do. I was just laughing. So they were telling the story of how a philosophy professor across the country at the University of Arkansas at Little Rock caught students in her ethics and technology class, an ethics and technology class, using AI to respond to the prompt, quote, briefly introduce yourself and say what you're hoping to get out of this class. In an ethics class.

[00:19:50] They needed ChatGPT's help to introduce themselves and say what they wanted to get out of the class. Yeah, that is representative of how dependent people [00:20:00] are on these things. It's such a shortcut that literally the easiest thing, the thing you should be able to do without thinking, you can't do. Another example, and this one I think sums it up really well.

[00:20:12] Quote, this is right from the article: It isn't as if cheating is new, but now, as one student put it, the ceiling has been blown off. Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. Quote, massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.

[00:20:42] He said, both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else's. That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads [00:21:00] have never experienced college without easy access to generative AI.

[00:21:04] Think about that for a minute. If a student has been in college in 2023 or since, which is at least half of all college students, undergrads, they don't know a world where generative AI didn't exist. And then he said, we're talking about an entire generation of learning, perhaps significantly undermined here.

[00:21:25] It's short-circuiting the learning process, and it's happening fast. In another excerpt, school administrators were stymied; there would be no way to enforce an all-out ChatGPT ban. So they're talking about, like, what do we do about this? So most adopted an ad hoc approach, leaving it up to professors to decide whether to allow students to use AI.

[00:21:45] Some universities welcomed it, partnering with developers, rolling out their own chatbots to help students register for classes, or launching new classes, certificate programs, and majors focused on gen AI. But regulation remained difficult. How much AI help was acceptable? Should [00:22:00] students be able to have a dialogue with AI to get ideas but not ask it to write?

[00:22:03] Actual sentences? It goes on to say, these days professors will often state their policy on their syllabi: allowing AI, for example, as long as students cite it as if it was any other source, or permitting it for conceptual help only, or requiring students to provide receipts of their dialogue with the chatbot.

[00:22:20] Students often interpret those instructions as guidelines rather than hard rules. Another one that's super illustrative and funny, so I just excerpted this. Okay: the writer asked Wendy if he could read the paper she turned in. So he was talking to a student, and the student had used AI to do a paper, and the writer says, can I see the paper that you turned in?

[00:22:43] When I opened the document, the writer says, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines, and listen closely, the philosophy examines the influence of [00:23:00] social and political forces on learning and classroom dynamics.

[00:23:04] Her opening line, quote: to what extent is schooling hindering students' cognitive ability to think critically? Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy, but one that argues learning is what makes us truly human. She wasn't sure what to make of the question.

[00:23:27] She replied, I use AI a lot, like every day. And I do believe it could take away that critical thinking part, but it's just, now that we rely on it, we really can't imagine life without it. My gosh. The article, like, everything as I kept going on, it was more and more just like, oh my God. Then, yeah, some of these ones you talked about were like, these teachers are basically just giving up.

[00:23:49] Like, every time I talk to a colleague, this comes up. Retirement. When can I retire? When can I get out of this? Hmm. There was one that said it'll be years before we can fully account for what all of this [00:24:00] is doing to students' brains. Some early research shows that when students offload cognitive duties onto the chatbots, their capacity for memory, problem solving, and creativity could suffer.

[00:24:11] Multiple studies published within the past year have linked AI usage with a deterioration in critical thinking skills. One found the effect to be more pronounced in younger participants. So again, go read the whole thing. It's probably, what, like 15,000 words? It's a long one, but you can listen to it too.

[00:24:29] So I wanted to real quick frame this. So, implications to business: you are hiring this next generation right now. If you have interns, if you have people coming out of undergrad, if you've got someone who's been in their MBA program for the last two years, this is who you're hiring.

[00:24:45] They have had access to these tools. So you need to test for, or you should in your HR process start looking for, prompting abilities and the ability to work with these machines. But you also actually have to figure out, how do we test for critical thinking [00:25:00] skills with no devices? Because if you're conducting interviews over a computer, there's a reasonable chance that these students are using AI while you're talking to them to answer you.

[00:25:11] There's technology out there you can get that listens to your questions and tells them what to say to you. Everything they write probably was with the support of ChatGPT. So how do you assess somebody's abilities when AI's just there all the time? They might not consider it cheating; it's just what they do.

[00:25:28] So in-person interviews become... I was having this conversation at the event I was just speaking at: like, AI glasses have to be outlawed in interview processes. They can't be wearing Meta glasses, 'cause who knows what that thing's doing. So you have to now start thinking, what questions do we ask to show critical thinking and reasoning without AI? You have to get clear on your generative AI policies, including the fact that they're all gonna go around whatever internal blockades you put up.

[00:25:53] So if you tell them they can't use ChatGPT, they're just gonna do it on their phones, on their personal account. Like, they don't know how to do [00:26:00] it without it. Right. And I'm generalizing here; not every kid is like this. You also have to be aware of what it means for your kids. So if you have teenagers, or in my case I have 12- and 13-year-olds, you have to understand that they have to be taught how to still think critically and be creative without always using it as a crutch.

[00:26:18] It needs to be there as an augmenting tool, not as a replacement for these things. That's gonna have to be intentional. This is gonna be like social media to them, where if you give them Instagram, you gotta be really, really careful they don't get sucked in and stuck on that thing like five hours a day, and chatbots may be the same way.

[00:26:36] You give 'em that tool and it's like, man, everything's just easy. You gotta think about education. You gotta think about the impact on higher education, what's going on with higher education. You gotta think about the impact on your kids' education. Man, I don't even know if I wanna go down this path.

[00:26:52] So I was at an event this week where a leading economist, who is a top advisor to the current [00:27:00] administration, was talking about the value of a college degree. And I will just say he wasn't overly supportive of the value of higher education. He didn't necessarily see the value of, like, a sociology degree.

[00:27:17] For example, a journalism degree; he doesn't view those as additive to society, in essence. And that was before AI. When they look at how much it's costing to go to college, this is kind of the view of the administration. You know, they're going after Harvard right now for their $60 billion endowment fund.

[00:27:34] 'Cause they want it to pay tax on that money, among other things. And so they start to question even the value of these institutions. So higher ed's already having its challenges, and when you mix in the fact that this stuff's going on, it's just like, whoa. But for most people who are listening to this show, and I know we have people in the education space who listen, this is your employee base.

[00:27:56] This is your workforce of the future. They're gonna come in having used [00:28:00] all these tools, and you have to, like, understand that and prepare. And I haven't talked to many corporations that are prepared for this, that are training their HR teams how to even test for this stuff. 

[00:28:09] Mike Kaput: Right? My God, when I read some of these anecdotes too, I couldn't help but think, if you are hiring anyone from this generation that has gone through college like this, you have to have a real clear AI policy from the second they walk in the door.

[00:28:28] Even if you solve for everything else, it wouldn't even occur, I think, to some of these people that there are ways you shouldn't be using AI. 

[00:28:39] Paul Roetzer: Totally. It has to be trained in day one. Like, it's gotta be one of the first things. And then, we've talked about this before, you're gonna get to the point where these students are going to ask what your generative AI policy is. Like, will I have access to ChatGPT?

[00:28:51] And it's like, no, you're gonna have access to, like, a version of Copilot. And it's like, mm, I really like my ChatGPT. So again, I've seen no research on this yet, on this incoming generation of the workforce and how companies are gonna deal with this.

[00:29:13] So again, I always tell people part of our role here is to set the stage and provide this general knowledge base so that people can take it and go figure things out in their field. So, like, what is the role of this in higher education? If that's your thing, if you're an educator or an administrator at a higher education institution, go. We need people to go think about these things.

[00:29:40] We often on the show just pose questions 'cause we don't have the answers. But my hope is that people get inspired to go solve for this. And if you do, let us know. Like, I love to hear from our listeners and our viewers when you're working on cool things, so shoot me a note on LinkedIn.

[00:29:57] I try my best to look at all that stuff. [00:30:00] So I'd love to hear what people are doing in this space. 

[00:30:03] Mike Kaput: Yeah. And as a silver lining to this, there's never been a more exciting time, if you do have an interest in this, to go solve it, because nothing is solved and you're the person that has to do it. OpenAI is not going to go solve this.

[00:30:16] Paul Roetzer: Right. And if you want job security, be the one that's figuring all this out. Yeah. 

[00:30:21] Apple Says AI Will Replace Search Engines

[00:30:21] Mike Kaput: Right. Our third big topic this week: Apple may be preparing to end, at some point, one of the more lucrative partnerships in tech, its $20 billion a year search deal with Google, as it eyes a future powered by AI. During testimony in the Department of Justice's antitrust case against Google, Apple executive Eddy Cue revealed that Apple is actively looking at integrating AI search engines like ChatGPT, Perplexity, Anthropic, and even Elon Musk's Grok into Safari.

[00:30:53] Now, while Google is not going anywhere for now, Cue did make it clear that, in his opinion, [00:31:00] the era of traditional search is ending and AI is the new frontier. As a proof point, Safari search traffic just declined for the first time ever, which he attributes to users turning to AI instead. And that could signal a massive behavioral shift, even though it is a single data point.

[00:31:20] And just to show you how much a single data point matters: Google's dominance in ad revenue obviously rests heavily on being the default search engine, and Alphabet shares actually tumbled 7% after Cue's remarks. So Paul, this is just one anecdote, and it's getting a lot of attention. It's not the first time we've seen signals that AI may be disrupting search, but it is a pretty stark one that had some real financial implications for Google.

[00:31:50] So I kinda wanted to get a sense of what you're seeing right now when it comes to AI in search. How serious is this, that Apple's kind of even coming out and saying this? What's [00:32:00] going on here? 

[00:32:01] Paul Roetzer: I do think that this is gonna blow up really fast. So this was all happening while I was in Tampa this week, and this was Thursday morning, I guess.

[00:32:14] So my talk was Thursday morning, and the economist I referenced earlier was right before me, with about one session between us. And so I went to the economist's talk, and he was brilliant, by the way. It was an amazing session. I learned a ton, and I greatly appreciated the perspective and the insights as to why the administration's doing what they're doing and why they are approaching the economy this way.

[00:32:37] But one of the audience members asked the economist a question about concerns around AI's impact on jobs. And I've mentioned this on the show before, but I have personally had conversations with two of the leading economists in the world on this exact topic, and, I [00:33:00] wouldn't say I got blown off, but one literally told me job loss from AI is not even in the top 10 things he thinks about or cares about.

[00:33:11] And that was last year, last fall. And he's an influential economist. And so now this economist, who is heavily influential in the current administration's economic policy, he just kind of diverted the question and basically said, listen, as an economist, I'm more concerned with deficits of talent.

[00:33:34] Like, not enough nurses, not enough accountants. You can go through a number of industries and you see where they just don't have enough people. And so this is the fourth economist now that I have personally heard say that they're far more concerned about the talent deficit than they are about the impact AI's gonna have on jobs.

[00:33:54] So I go back to my room in between his session and my session, and I'm like, I [00:34:00] don't know what to say. Like, either I'm crazy here or I'm just not making sense; my thought process here doesn't make sense. So I go into o3, and I'm like, listen, I've now talked with different economists.

[00:34:13] They always come back with this: the talent gap is a bigger problem than the impact of AI. What am I missing? Like, how could I have a conversation with them? Maybe I'm just wrong. And how do I have an educated conversation with a leading economist on this topic, like an honest debate?

[00:34:31] Again, I'm happy to be wrong here. And so o3 starts. If you've used o3 or deep research, you know what I'm talking about: it shows you its chain of thought, like it shows you its work. So I ask the question and I'm sitting there, and for two minutes it's thinking, but it's showing you what it's thinking.

[00:34:48] It's showing you the websites it's going to. And as I'm watching it do this, I'm looking at the sites it's going to, and you can see, like, 2005 Bureau of Labor Statistics studies. Like, oh, I didn't even know they did a study on [00:35:00] this. Oh my gosh. And so I'm watching these different web pages kind of flying by of what it's doing.

[00:35:06] And in that moment I had this realization of, oh my God, it's better at search than I am. And I put it on LinkedIn. I was like, maybe everybody else has already thought of this, but look, to me this was this profound moment where I realized in two minutes it found a bunch of websites that I probably would've actually not found, or taken the time to get to, in Google search results.

[00:35:26] And for the first time, I realized I don't think traditional search exists in the near future. Like, I don't know why I would ever go to a traditional search engine now. Maybe Google Maps I'll still use, and stuff like that. But the point I made on LinkedIn, which ended up getting quite a bit of traction and engagement, was I think all search in the future just happens through your app of choice, like your assistant of choice.

[00:35:53] And maybe it's ChatGPT, maybe it's Gemini. Like, I still think Google can win here. But it is not gonna be going to [00:36:00] Google, like, the traditional search engine. It'll take a while to diffuse through society until other people start to realize what happens and how the search works. But I just started having this moment where I realized, I don't think search looks anything like it does today.

[00:36:13] What does that mean for SEO? What does it mean for publishing? What does it mean for content creation and marketing? I have no idea, but I think it's something we're gonna have to grapple with way faster than maybe I was originally thinking, because you can now see what it's doing and you realize it's better, and it's like a hundred to a thousand times faster than you at doing this.

[00:36:34] And that's gonna start to just change consumer behavior of how we seek information. So I don't know; it's funny that this kind of came out at the same time as I was sort of having this realization. And then I started thinking, when have I gone into Google search lately? And [00:36:52] maybe AI Mode's gonna be awesome and AI Overviews could be it, but I just think I'm gonna do it in the app I'm already in. I think it's gonna be Gemini or ChatGPT, or, I know some people still use Perplexity. [00:37:00] I know Apple's talking with Perplexity. I just think the future of search lives within the apps themselves, and I don't think people eventually go to a page and look at links. Like, I don't know, how about you?

[00:37:12] Like, I don't know, are you seeing similar? Like, what are you doing? 

[00:37:15] Mike Kaput: I don't know if I'm an outlier or what, but I can't remember the last time I've touched a Google search, to be perfectly honest with you. Maybe, I would assume, for like the hours of a restaurant or something. And I realize there are different types of searches, and I'm probably doing many more that are heavily oriented toward what LLMs are really good at.

[00:37:38] But I've even gotten to the point with deep research where I would run reports on local vendors or businesses, or how to get stuff done, and obviously verify the information, but Google is not the first touch for me anymore, for the most part. 

[00:37:53] Paul Roetzer: Yeah. And I think the other thing that started jumping out to me is, I hope OpenAI doesn't screw this up with ads.

Yeah. [00:38:00] Whatever their algorithm is to surface the best answers... when you think about, say, ChatGPT being better at search than me, you can imagine a scenario where OpenAI not only builds the initial search, but then it has an AI that evaluates the strength and value of the links that are being used.

[00:38:22] It's its own critic of the value of the search conducted and the sites it found. And it just keeps grinding until it finds exactly what it needs, in an agentic way, basically, until it can then give you the brief based on the absolute best sources it can find, without being hindered by who the ad is from and is it sponsored and all these things. It's this pure,

[00:38:44] true, I just want the best answer. I want the best output. I don't want links, I don't want ads. I don't want any of that. I just want an answer. And right now, because it's not ad supported, I feel like that's what I'm getting there. [00:39:00] Now, we'll talk in a little bit about the new CEO at OpenAI, and maybe that's not gonna be that way for long, but at the moment I like that it's pure and it's not ad supported.
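As an aside for readers, the agentic search-and-critique loop Paul is describing (search, have an AI critic score the sources found, refine the query, repeat until the sources are good enough) can be sketched in a few lines of Python. This is a toy illustration only; the `search` and `critic_score` functions and their canned results are hypothetical stand-ins, not a real OpenAI or search API.

```python
# Toy sketch of an agentic search loop: run a search, score the
# sources with a "critic," and refine the query until the sources
# are good enough. All data here is fake and for illustration only.

def search(query):
    # Stand-in for a web search call; returns (url, relevance) pairs.
    fake_index = {
        "talent gap economics": [
            ("bls.gov/study-2005", 0.9), ("blog.example", 0.4)],
        "talent gap economics site:gov": [
            ("bls.gov/study-2005", 0.9), ("census.gov/report", 0.8)],
    }
    return fake_index.get(query, [])

def critic_score(sources):
    # The "critic": here just the average relevance of sources found.
    # A real system would use a model to judge source quality.
    return sum(s for _, s in sources) / len(sources) if sources else 0.0

def agentic_search(query, threshold=0.75, max_rounds=3):
    sources = []
    for _ in range(max_rounds):
        sources = search(query)
        if critic_score(sources) >= threshold:
            return sources  # good enough: build the brief from these
        query += " site:gov"  # refine the query and try again
    return sources
```

The real systems swap in live web search and a model-based judge of source quality, but the control flow is the same: search, score, refine, stop at good enough.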

[00:39:09] Now, it's not great for their business model, but personally, I like it. And we may be outliers here, I don't know, but based on the responses to that post I put up on LinkedIn, I don't think we are. 

[00:39:20] Mike Kaput: Right. 

[00:39:20] Paul Roetzer: Right. 

[00:39:22] Mike Kaput: Well, I mean, just two quick points there. Even if we are outliers now, related to the college story, we're not the only people.

There's a whole generation growing up that is not operating in the traditional manner of information consumption. So even if we are outliers now, it's only a matter of time. But also, I just wonder, when you were saying that, I was thinking about this today, this afternoon, while preparing too: it's almost like the next step for deep research for me is, don't give me the whole brief. Gimme the one piece of information, or gimme an option to get the one answer after you've gone through a hundred links and read them and vetted [00:40:00] them based on criteria I give you.

[00:40:02] Give me the one thing I need to know from the best source. I think that's not that far away. 

[00:40:07] Paul Roetzer: Yeah. And, you know, I mean, Google knows this. Like, in NotebookLM, which traditionally you gave the sources, I think it now will go find sources it can recommend.

[00:40:19] Mike Kaput: So it's, yeah, 

[00:40:20] Paul Roetzer: It knows that it's probably better at search than you. But again, the challenge for Google is that search and ads is their business. Yeah, that's the dominant component. And so how do you make the shift that you probably know you need to make? The injection of AI Mode [00:40:36] and AI Overviews is a logical thing, but they're probably gonna have to win with Gemini. Like, it's gonna have to probably live in the app, I would think, or in these individual products like NotebookLM, or it's just embedded within, you know, Google Workspace and Gmail and the other platforms.

[00:40:57] They already have the distribution. But yeah, I just [00:41:00] don't know that going to google.com is a thing the next generation's ever going to do. Hmm. 

[00:41:06] OpenAI Hires CEO of Applications

[00:41:06] Mike Kaput: Alright, let's dive into some rapid fire. And the first topic does have quite a bit of relation to this because OpenAI just made a pretty serious leadership move.

[00:41:16] By hiring Instacart CEO Fidji Simo to run its applications division. This is a new role reporting directly to Sam Altman. So Simo will be CEO of Applications and will lead key parts of OpenAI's growing business, including product, operations, and monetization. Now, Altman remains OpenAI's overall CEO, and says that Simo's appointment will let him focus more on research, safety, and infrastructure, which are the core pillars of building superintelligence.

[00:41:47] Now, this kind of shows how OpenAI has evolved, in tandem with our first topic here. It's now part research lab, part product company, part infrastructure provider, and possibly soon [00:42:00] the world's largest nonprofit. Simo has been on OpenAI's board since early 2024 and brings deep experience from Instacart and Meta.

[00:42:09] Her background, interestingly, is in scaling businesses and launching ads, and that's especially relevant as OpenAI eyes $25 billion in hoped-for future revenue from products beyond ChatGPT. So Paul, maybe give us kind of an overview of why Simo was chosen for this role. What does this mean, or what can we tell, if anything, about OpenAI's product strategy?

[00:42:33] Like are we about to see ads everywhere in ChatGPT? 

[00:42:37] Paul Roetzer: Yeah, it's interesting. So the event I was at was the Global Retail Marketing Association event, and this was top executives from a lot of the top retailers and restaurants. And so it was an interesting time to be there, because there were a number of people I talked to who know her, worked with her at Facebook, or worked with Instacart closely as part of [00:43:00] their operations.

[00:43:01] And so I was able to get a little bit of a pulse, 'cause this happened Thursday morning. Oddly enough, it was like 2:00 AM that they announced it. Like, the tweet from Sam was at 2:00 AM Eastern time or something like that. I woke up to this at 6:00 AM, and I'm like, when did this get announced? And I was like, oh, four hours ago.

[00:43:15] That's interesting timing. I assume, and I may be completely off on this, that some media outlet had the story and was gonna run with it at 8:00 AM, and they had to just get out ahead of it, because it's a publicly traded company and you can't have your CEO being announced as leaving when, you know,

[00:43:33] right, no one knows about it. So the only thing I can do is go back and look and see, okay, what is her background, as you mentioned. So: CEO and chair of Instacart since August 2021. Board member at OpenAI since March 2024, so over a year now. Board member at Shopify since December 2021. And if you recall, last month Shopify and OpenAI partnered to inject e-commerce into ChatGPT, so [00:44:00] that's an interesting connection there.

[00:44:02] At Facebook, she was the head of the Facebook app for two and a half years, from March 2019 to July 2021. She was VP of video games and monetization at Facebook from 2017 to 2019, and director of product management for three years prior to that. So, deep experience at Facebook and Meta. As you mentioned, monetization is a potentially big play, as is e-commerce.

[00:44:28] She came up initially as a strategy manager at eBay for four years; at least that's as far back as her LinkedIn profile goes, and I'm sure there's some other stuff before that. So yeah, it's really intriguing to see where this goes. I think there's a splitting of responsibilities where Sam is gonna have a major focus now, and we'll talk a little bit more about this in a couple minutes, on the infrastructure, the compute, the data centers, the funding, things like that.

[00:44:55] And it sounds like she's gonna focus more on the product, the app, the monetization [00:45:00] strategy behind it. So definitely intriguing, something to kind of keep an eye on. Oh, and by the way, the people that I spoke with said she's insanely impressive.

[00:45:11] Like, there was a lady I was talking with who worked with her at Meta, and she said she's literally just the kind of person that walks in the room and commands attention immediately, and just has the immense respect of her team. So I don't personally know her, but the few people I talked to said she's a superstar.

[00:45:31] So yeah, it sounds like a great get for them. And then there's the weird transition of her also being the CEO of Instacart for the time being, but I assume that's until they find a replacement. My only immediate thought was, I don't know if Sam's gonna stay the CEO. Like, it's almost like this is to stabilize things.

[00:45:50] But if I was putting some odds on this, I would guess this transitions her [00:46:00] into the role, and Sam gets them through this conversion into the public benefit corporation, and then he has some optionality at that point, you know, six months down the road. And maybe he doesn't want to be CEO anymore.

[00:46:13] I don't know; having two CEOs is weird. Yeah, it's just a weird choice of title. So yeah, I wouldn't be surprised if there's some additional information later this year about some other potential adjustments to how everything's structured there. 

[00:46:33] Sam Altman Testifies Before US Senate

[00:46:33] Mike Kaput: Next up, the United States Senate just held a big hearing on AI, and the message from lawmakers and tech leaders was pretty loud and pretty clear.

[00:46:43] If America doesn't move fast on AI infrastructure and regulation, China will take the lead. Now, this included testimony from Sam Altman, from Lisa Su of AMD, the chip company, and from executives at Microsoft and CoreWeave, among other AI-focused [00:47:00] companies, and they testified before senators from both parties.

[00:47:04] Now, Altman was kind of the, you know, spotlight here given his prominence, and he warned that America's edge in AI is not a huge amount of time ahead of China and could slip if heavy-handed regulation slows innovation. Senator Ted Cruz blasted what he called the Biden administration's regulate-first approach, comparing it to the EU's policies, which he says strangled European tech.

[00:47:29] Cruz and others argued for a light-touch framework, like the one that helped the US dominate the early internet era. And it was pretty interesting when you juxtapose this with Altman's testimony a couple years ago. The lawmakers largely embraced Altman during this testimony, but two years ago, Altman emphasized AI safety dozens of times to the Senate, and this year it was barely mentioned.

Paul, can you maybe walk [00:48:00] us through what jumped out to you in Altman's comments, or in general from this hearing? 

[00:48:05] Paul Roetzer: Yeah, so he was with AMD CEO Lisa Su, CoreWeave CEO Michael Intrator, and Microsoft President Brad Smith. So it wasn't just Sam, but Sam was obviously the headliner in this thing.

[00:48:17] It was definitely different than 2023, as you highlighted, where he was calling for regulation. He said that with AGI, the future can be almost unimaginably bright, but only if we take concrete steps to ensure an American-led version of AI, built on democratic values like freedom and transparency, prevails over an authoritarian one. Which, again, is this US-versus-China thing

[00:48:45] that has been kind of the talking point. And then he talked about this requiring more chips, training data, energy, and supercomputers, which is what his evolved CEO role is all about now: him focusing on infrastructure. He literally said infrastructure is [00:49:00] destiny and we need a lot more of it.

[00:49:02] And then he talked about the restructuring, saying the for-profit arm as a public benefit corporation with the same mission would make it possible for us to raise the capital needed to deliver these tools and services at the quality and availability levels that people want, but still stick to our mission.

[00:49:18] I will again add a little bit of context here from the economist that I heard speak. Stephen Moore is the economist's name, by the way; I didn't mention it initially, but he's an advisor to the administration. And so he was talking about regulation within his talk. And, you know, I've said this before, and this isn't verbatim from him, but this administration, and this is as non-political as I can possibly make it, just factually, they hate regulation. Like, they want to get rid of as much regulation as possible. Only the things that are essential stay, and they don't care about the environment. [00:50:00] I will not repeat how it was phrased in the session, but what I believe to be true, I can a hundred percent confirm.

[00:50:10] On net zero emissions, and I'll bring this up 'cause I get asked all the time now about what is happening with AI's impact on the environment: what I tell people is the current administration doesn't care. This is all about winning. And to build the infrastructure they need to build, nuclear's not gonna be ready till 2050.

[00:50:29] Like, just this morning there was talk about how they want to deregulate so they can accelerate the building of nuclear facilities. But that's between now and 2050; you can't just throw up a nuclear facility and have it online in 2027. So they're racing to build this, but the race is decades long if they're competing with China on this.

[00:50:50] So the way they think they compete is through coal. Yeah. And so they plan to rapidly scale up the use of coal and the investment in [00:51:00] coal, which the prior administration was pushing against. So net zero emissions, they literally just laugh at; they think it's a dumb idea. And that was what all these tech companies were racing toward: net zero.

[00:51:14] And the current administration couldn't care less about net zero. So those are just the facts: they're gonna try and race forward on infrastructure and data centers, and the byproduct will be, if you personally care about the environment, I don't know what you're gonna do about it, but they don't share that concern.

[00:51:35] And we'll put a Politico article in the show notes about this. There was an article that just came out at the beginning of the week titled, How Come I Can't Breathe? And it was actually talking about the massive data center that Elon Musk's xAI built in Memphis, and how they were running gas generators, gas turbines, with zero oversight.

[00:51:57] Like, there was nothing to stop it from getting into the [00:52:00] environment. And it's like, well, who's gonna stop it? They had no Clean Air Act permits, nothing. So that's where we're at when it comes to this stuff. They're gonna race forward, they're gonna build all this stuff, and the environment is gonna be very, very secondary, I would say.

[00:52:18] Mike Kaput: So, infrastructure at all costs.

[00:52:20] Paul Roetzer: At all costs. And literally, I mean, I've seen the charts that have compared China's investment in coal and infrastructure to the US, and I know that those influence the decisions and policies that are being made. They have to compete, in their minds, with what China's doing.

[00:52:43] Now, I'll also say, for Elon Musk, go to his Twitter feed from May 9th; it's like seven tweets about solar energy. So Elon doesn't share this view. And again, this is not a main topic, but [00:53:00] Elon, I think in some senses, is still very true to his initial vision. Like, Elon is not a tariff guy.

[00:53:05] Like, he hates tariffs. Elon is not a coal guy. Elon believes in the environment and doing these good things. So Elon's all about, solar is absolutely the greatest thing we can do, and the current administration does not agree. So there's lots of things where Elon takes a lot of hits for what he's doing, and people think he's just a yes man doing whatever.

[00:53:29] He's still very public about these things that he disagrees with, and I oftentimes am probably a little hard on him. So I just also want to give credit where it's due: he is voicing his opinion on things where he does not agree with the administration. So yeah, it's fascinating to watch, but it's very complicated, and the more you listen to the people on the inside, the more you realize what's going on. And yeah, it's wild.

[00:53:53] Fiverr CEO’s Blunt AI-First Memo and More Quiet AI Layoffs

[00:53:53] Mike Kaput: In our next topic, the CEO of Fiverr, Micha Kaufman, just delivered one [00:54:00] of the bluntest internal CEO memos yet about AI, and it's gotten some attention for a reason. So in a note to his team, Kaufman skips the pleasantries, just saying, straight up, AI is coming for your jobs. Heck, it's coming for mine too.

[00:54:16] He doesn't say that everyone's doomed, but that those who don't adapt fast definitely are. And he warns that the boundaries between easy, hard, and impossible tasks are collapsing, because AI is pushing expectations higher at a pace that most teams are just not ready for. To survive, he says, himself and the team, everyone, has to become exceptional, which means mastering AI tools, working faster, and stepping up your game

[00:54:44] When it comes to prompting, I. So he lays out kind of a to-do list, again, like the other CEO memos, just a few pages. things like learn aggressively, stop waiting for opportunities, help reinvent how the organization works with ai. [00:55:00] He literally has pretty raw tone. He says stuff like, you think I'm full of shit and some other choice comments be my guest and disregard this message.

[00:55:09] But his intent is pretty clear: he wants Fiverr to be on the winning side of AI. So Paul, this is obviously another of what we would call AI-first memos, even though we want, you know, to maybe change the conversation around that term. This one's even more blunt than the others. I genuinely wonder if we'll start to see competition for who can be the most blunt and drop the most swear words, I guess, in each one.

[00:55:34] Whether you love or hate the tone, though, if you go read this (it's very short; we link to it in the show notes), I can't say I disagree with a lot of it.

[00:55:42] Paul Roetzer: Yeah, it is just a very direct approach. I can't really disagree with it either. It's just how you present it.

[00:55:53] I would actually probably almost rather have the honesty and transparency than what a lot of people are doing, which is just [00:56:00] pretending like it's not happening. I'd like it if there was more of the "but here's what we're gonna do to help you," the human side. But yeah, I mean, this one came out April 8th.

[00:56:13] It was like when we did the AI-first thing last week, right? I hadn't caught this one yet. And then I saw this one. I don't remember if somebody commented about it, but, yeah, I think I just saw somebody tweeted it on X. But yeah, again, they're gonna keep coming. I feel like probably every week we'll have a couple new ones we could add to the list.

[00:56:32] Right. 

[00:56:34] AI-First Scorecards

[00:56:34] Mike Kaput: All right. Next up, HubSpot CEO Yamini Rangan has laid out a pretty sharp framework for measuring whether a company is truly becoming, again, what she calls AI first. And her message is that it's not about how many AI tools you buy; it's about how deeply they change the way work gets done. So she outlined five metrics that every company should be tracking to figure out how much closer they are to AI first, and the [00:57:00] first and ultimate one is revenue per employee.

[00:57:02] If AI is working, output per person should be rising. Another is customer satisfaction. Obviously, if customer satisfaction drops, your AI, even if it's being used, isn't helping where it matters. Now, she says these two are lagging indicators, and the final three are leading ones. So third is the percentage of teams with access to AI tools.

[00:57:24] Fourth is how often they're actually used, daily or weekly, not just occasionally. Most interesting and forward-looking is her fifth and final metric: the percentage of work being done by AI agents. So she's saying that when bots handle things like drafting content, scheduling sales calls, and resolving support tickets, humans can focus on higher-order thinking.
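As a rough illustration, not HubSpot's actual methodology, the five metrics above could be captured in something as simple as a small scorecard. The field names, example numbers, and structure here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AIFirstScorecard:
    """A sketch of the five AI-first metrics Yamini Rangan describes."""
    # Lagging indicators
    revenue_per_employee: float       # annual revenue / headcount
    customer_satisfaction: float      # e.g., CSAT on a 0-100 scale
    # Leading indicators
    pct_teams_with_ai_access: float   # 0-100
    pct_regular_usage: float          # 0-100, daily/weekly use, not occasional
    pct_work_done_by_agents: float    # 0-100

def revenue_per_employee(annual_revenue: float, headcount: int) -> float:
    """The 'first and ultimate' metric: output per person."""
    return annual_revenue / headcount

# Example with made-up numbers
card = AIFirstScorecard(
    revenue_per_employee=revenue_per_employee(250_000_000, 500),
    customer_satisfaction=88.0,
    pct_teams_with_ai_access=95.0,
    pct_regular_usage=60.0,
    pct_work_done_by_agents=12.0,
)
print(card.revenue_per_employee)  # 500000.0
```

Tracked quarter over quarter, the three leading indicators should eventually show up in the two lagging ones; flat revenue per employee despite high usage would suggest the tools aren't actually changing how work gets done.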

[00:57:48] So Paul, I personally found it really helpful to see a leading CEO like Yamini getting specific about AI transformation. What did you think of the metrics she chose to focus on? [00:58:00]

[00:58:00] Paul Roetzer: Yeah, so Yamini's an amazing CEO. I've had the privilege of meeting with her virtually at least once. I personally know the founders much better.

[00:58:12] So, Brian Halligan, Dharmesh Shah. Again, if people are new to the show, my agency that I sold in 2021 was HubSpot's first partner back in 2007. So I've worked with HubSpot for the better part of 18 years and known them, and we worked very closely for a long time. So I love HubSpot. I have great admiration for the founding team, for everybody who's there now.

[00:58:35] I personally wouldn't have built what I've built today without HubSpot, and we keep working closely with them. We actually have a great partnership with them; it powers SmarterX and Marketing AI Institute. So we're, you know, customers. So, my take on this: the end of Yamini's LinkedIn post said, like, what are we missing?

[00:58:52] So I'll offer my objective opinion here of what's missing. Her post ends with the [00:59:00] bottom line: becoming AI first isn't about buying tools, it's about changing how work gets done. When you combine these five metrics, you'll get a clear picture of progress and the compounding path forward: higher productivity, better outcomes, and real transformation.

[00:59:10] My question is, where's the human part of this? Hmm. So this is exactly my argument last week for why AI first might not be the right term. HubSpot is the great company it is today because they invested in their people and their culture over the years. So as a partner, I saw it firsthand, as someone who has dozens, if not hundreds, of friends that have gone through HubSpot over the last, you know, 15-plus years.

That company is who it is because of its people, and there wasn't a single mention of them in this. And so that would be my argument: this is why AI forward might be the better term. If you want to be an AI-first company, where is the AI literacy part of this? Where's the education and training for your people?

[00:59:54] Where is the responsible part of this? How are we doing this responsibly? So I feel like [01:00:00] this is the AI-first technology and financial scorecard. Like, we covered those bases. This is all good for the tech side and the financial side, but where's the customer focus? Where's the human focus, on how it benefits our people?

[01:00:17] And so that would be my only critique of it: I think that's what's missing. And I get it, she threw this together based on, like, a couple questions, so it's, you know, kind of a quick thought, and I actually respect that. It's just, let's get the conversation going. So that's what I would challenge HubSpot and others to think about: as you're trying to measure what AI first is, and quantify it if we can, I think that's great.

[01:00:39] Let's layer in the human components of this, because otherwise, you know, people can feel devalued in this whole thing and realize it's just all about the numbers.

[01:00:51] The AI Diffusion Rule Is Dead

[01:00:51] Mike Kaput: Next up, the Trump administration is reversing course on a Biden-era AI policy. It plans on [01:01:00] scrapping the so-called AI diffusion rule, which is a set of chip export restrictions aimed at curbing the spread of advanced AI hardware.

[01:01:10] This was set to take effect May 15th, and the rule would've categorized countries into tiers with varying levels of access to cutting-edge AI chips from companies like Nvidia, AMD, and Intel. Chipmakers obviously pushed back hard, arguing this would stifle innovation and hand a long-term advantage to foreign competitors.

[01:01:31] Now, Nvidia is very happy about the proposed rollback of this, calling it a once-in-a-generation opportunity to cement US leadership in AI and boost domestic jobs and infrastructure. So the Biden-era rule was described by Trump officials as overly complex and bureaucratic. They now plan to replace it with a simpler framework that favors American dominance in AI technology.

[01:01:57] So Paul, this definitely hits on the notes you were mentioning [01:02:00] about where we're at in terms of the new administration. Walk me through why this matters. And maybe, do you have any thoughts on the question that comes to my mind: this rule hasn't gone away yet, and it's likely to go away, but what goes in its place?

[01:02:17] Paul Roetzer: Yeah, this is a tricky one. It's a hot-button issue. There are people I respect greatly, and very passionately, on both sides of this. And it's one of those where it's hard to even land on your own opinion of what is actually the right choice. I don't know that I'm there. I don't know that I could sit here and objectively tell our audience, you know, I think this is a bad decision. There are a lot of topics related to this stuff where I really just wanna sit back and listen more to both sides and try and comprehend it.

[01:02:53] I could easily make a pro and con list for both sides right now; that's kind of where I'm at with this. I [01:03:00] think it goes back to this thing we talked about last week, of the finite versus infinite game. And at some point, you assume these other countries, and China in particular, are going to get access to the chips they need, and they're gonna be able to do the thing.

[01:03:13] They're already keeping up. They're already, you know, based on some estimates, probably three to six months behind the leading models here, maybe six to 12. And so then you're harming an American company in Nvidia by restricting their ability to sell these things. And so, I don't know, I would love to see the pitch deck or the talking points that Jensen used to convince them to make this pivot.

'cause Jensen can be a very convincing guy. And there's this part of me that's like, if I have to force-rank the CEOs I have the most respect for today, Jensen's at the top of that list. Yeah. And so there's this part of me that wants to side with him and say, well, if Jensen believes this is for the best of America, the best of the economy, then I have a hard [01:04:00] time disagreeing with Jensen.

[01:04:01] He's infinitely more knowledgeable on this stuff than I am. So, I don't know, I struggle with this one. I think, again, it's one of those topics where, if this is intriguing to you as a listener, go do some digging here, 'cause I think it's gonna be a really important decision. I just have a hard time framing it right now as to how it'll play out over the next couple years.

[01:04:22] AI Product and Funding Updates

[01:04:22] Mike Kaput: All right, Paul, so I'm gonna dive into some quick AI product and funding updates, and then turn things over to you for our final segment about listener questions, if that works for you. Okay. So a few updates this week: OpenAI is reaching an agreement to buy Windsurf, the AI coding assistant, for $3 billion.

Its biggest acquisition yet. Now, this deal is not yet closed, but if it does close, it will probably further escalate the AI coding wars, as OpenAI looks to challenge Anthropic, GitHub, and companies like Cursor in the race to build better coding assistants. Now, speaking of [01:05:00] Cursor, its maker, Anysphere, just raised a staggering $900 million at a $9 billion valuation, with a round led by Thrive Capital and Andreessen Horowitz.

[01:05:14] Google's Gemini 2.5 Pro is also heating up the coding scene. A new preview version of Gemini 2.5 Pro just launched ahead of Google I/O, and it is boasting much stronger coding capabilities. It has better UI generation, better video understanding, and more reliable tool use. It now leads the WebDev Arena leaderboard, beating its predecessor by 147 Elo points, which, by that leaderboard's standards, is a massive leap in performance.

[01:05:45] Meanwhile, Andrew Ng's AI Fund raised $190 million for its new fund to continue co-founding new AI startups. And that was not a misstep in language; you heard that right, it's [01:06:00] co-found. They do not invest in existing businesses quite like a VC firm. They're more of a venture builder or venture studio that co-founds AI companies and helps to build them.

[01:06:13] And last but not least, in platform news, OpenAI's ChatGPT Team license now supports enhanced memory for organizations. So, like you can do in your personal account, you can now turn on memory that remembers details across your own chats.

[01:06:30] Paul Roetzer: A couple quick side notes, Mike. The Google I/O conference is coming up May 20th to 21st.

[01:06:35] So expect some model updates and some big NotebookLM news that day. I saw some potential leaks of some of the things that are coming on the NotebookLM side. I'm not sure of the validity of the source, so I won't get into them, but they're pretty interesting updates. They keep pushing on that product. And then the memory thing just came out, like, Wednesday, I think. I saw it and I tested it in ours.

[01:06:58] I said, like, what do you know about our [01:07:00] organization? And it knows some stuff. Yeah, but I don't know if it knows across all the chats of anyone who's on the ChatGPT Team plan.

[01:07:08] Mike Kaput: According to the FAQ that we'll link to in the show notes, it says, can I share memories from my account with other team members?

[01:07:16] It says they are tied to each individual account, not transferable to other users, even within the same team workspace. But interesting. I mean, I guess it's possible that's not being uniformly applied. It'd be good to test out further.

[01:07:31] Paul Roetzer: Yeah, yeah. I wonder, if you asked across the different users, like, what do you know about our company, [01:07:35] if everybody would get a similar thing. Interesting.

[01:07:40] Listener Question

[01:07:40] Mike Kaput: All right. So we're gonna wrap up this week's episode with our recurring segment, listener questions. Every week we answer a question from our audience that seems particularly relevant to either this week's stories and/or AI literacy overall.

[01:07:53] So Paul, this week's question sounds really straightforward, but honestly I actually get this quite a bit. 

[01:07:59] Paul Roetzer: Okay. 

Mike Kaput: And the [01:08:00] question is, how much hands-on technical experience do you need to build your own custom GPT?

Paul Roetzer: Almost none. Yeah, so it's funny. The event I was at, you know, I was having breakfast with a bunch of CMOs from these big brands.

[01:08:19] And I asked, like, who's built these things? And very few people had. And they're like, how do they work? And you're like, literally, you just talk to it. Like, if you're assigning a project to an associate and you wanted to explain it, it's the project brief you would write to them.

[01:08:32] It's your custom instructions. So, yeah, my feeling is a custom GPT is a really easy way to get going and get started, but most people just aren't familiar with how to do it. And so we have the CoCEO page, we'll drop the link in the show notes, where I actually walk through the building of CoCEO, which is one of the GPTs I created.

[01:08:59] [01:09:00] And on that page, you can click and try a demo of it. So there's a free demo of CoCEO. You can also click and watch the on-demand webinar, and in the webinar, I just walk through how I built it. Also on that page is the template of the system instructions that I used to build CoCEO.

[01:09:21] And so you can use those system instructions and build whatever you want. So that's an easy way to do it. I've probably built a dozen GPTs, and it's super simple. You don't need any coding ability at all. And it's really just trial and error: use some words, create it, see what it does.

[01:09:43] But you can spin these things up in minutes. I mean, you've built a bunch of GPTs too, Mike, right?

[01:09:48] Mike Kaput: I'm like, hundreds at this point. I don't know if they all work well, but I can't emphasize enough how fast you can get a minimum viable GPT to at least test out. You literally need absolutely [01:10:00] no experience doing it.

[01:10:00] Yeah. 

Paul Roetzer: Zero technical ability. It literally just pops up with the user interface: you drop in some instructions, you give it a name, you create an image for it. So yeah, if it's daunting to you, you can go watch the CoCEO one; you'll learn it in 30 minutes with confidence. And I'm sure there are tons of YouTube videos out there about how to build a GPT in five minutes.

[01:10:19] Like, yeah. And we're planning on launching, we haven't finalized how we're gonna do this, but we're probably gonna start doing, as part of our AI Academy membership, more regular live demonstrations of how to build these things, so that there's always a fresh, you know, live class to take about custom GPTs.

[01:10:40] So since Mike and I are both on board with the idea that these are often the best way to get started, the most value you can get right away, we're working on ways to start infusing more of our education with these very real, tangible things, where people can go in, and in 30 minutes, learn to build one and collaborate with some other people who are also in the class. [01:11:00]
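For readers who want to see Paul's "project brief" framing in concrete form: custom GPTs themselves require no code, but the brief you'd paste into the builder's instructions box can be sketched as a simple template. Everything below (the function, field names, and example text) is our own hypothetical illustration, not an OpenAI format:

```python
def build_instructions(role: str, tasks: list[str], tone: str,
                       constraints: list[str]) -> str:
    """Render a project-brief-style block of custom GPT instructions."""
    lines = [f"You are {role}.", "", "Your responsibilities:"]
    lines += [f"- {t}" for t in tasks]
    lines += ["", f"Tone: {tone}", "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# Example brief (hypothetical): the printed text is what you'd paste
# into the Instructions field of the GPT builder.
brief = build_instructions(
    role="a strategic advisor for a mid-sized marketing team",
    tasks=["summarize campaign briefs", "draft first-pass copy"],
    tone="direct, practical, jargon-free",
    constraints=["ask clarifying questions before giving long answers"],
)
print(brief)
```

From there it's the same trial-and-error loop Paul describes: paste it in, test the GPT, and revise the wording until the behavior matches what you'd expect from a well-briefed associate.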

[01:11:00] Mike Kaput: All right, Paul, that's a wrap on another busy week, even though it was a bit of a short one for us. I really appreciate you going through all the news this week and helping us demystify it. 

[01:11:10] Paul Roetzer: Yeah, good stuff. Thanks, everyone, for being back with us again. As a reminder, this was recorded on Friday, May 9th, so if we missed anything crazy, we'll catch it in the next one.

[01:11:20] So thanks again, and thanks, Mike, for organizing everything, as always. Thanks for listening to the Artificial Intelligence Show. Visit smarterx.ai to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters,

[01:11:39] downloaded AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community. Until next time, stay curious and explore AI.