What if the US built its future on AI factories? And what if AGI arrives just in time to run them?
In this episode, Paul and Mike break down the White House’s aggressive three-part Action Plan, including its call to build more data centers and ban “woke” AI. They unpack what staggering token usage tells us about the pace of AI development—and how that connects to the rumored, unified GPT-5 model that could reshape everything. Then it’s rapid fire: Nvidia CEO’s advice for college students, the first AI for therapy, AI’s impact on tech jobs and more.
Listen or watch below, and scroll down for the show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:06:23 — White House AI Action Plan
- AI Action Plan - AI Gov
- Winning the Race: America’s AI Action Plan - The White House
- White House Prepares Executive Order Targeting ‘Woke AI’ - The Wall Street Journal
- OpenAI CEO tells Federal Reserve confab that entire job categories will disappear due to AI - The Guardian
- Trump wanted to break up Nvidia — but then its CEO won him over - The Verge
- Executive Order Fact Sheet: President Donald J. Trump Accelerates Federal Permitting of Data Center Infrastructure - The White House
- NVIDIA CEO Envisions AI Infrastructure Industry Worth ‘Trillions of Dollars’ - Nvidia Blogs
- X Post from Demis Hassabis
- Fact Sheet: President Donald J. Trump Prevents Woke AI in the Federal Government - The White House
00:31:55 — How AI Could Upend the World Economy
- What if AI made the world’s economic growth explode? - The Economist
- Gross Domestic Product, 1st Quarter 2025 (Third Estimate), GDP by Industry, and Corporate Profits (Revised) - Bureau of Economic Analysis
- Situational Awareness - Leopold Aschenbrenner
00:39:37 — GPT-5 Rumors
- OpenAI prepares to launch GPT-5 in August - The Verge
- OpenAI’s GPT-5 Shines in Coding Tasks - The Information
00:47:52 — AI Is Impacting Tech Jobs
00:53:08 — Advice for College Students
00:59:44 — Instacart CEO About to Take Reins of Big Chunk of OpenAI
- Instacart’s CEO is about to take the reins of a big chunk of OpenAI - The Verge
- AI as the greatest source of empowerment for all - OpenAI
01:08:32 — The First AI for Therapy
- Introducing Ash: The First AI for Therapy - Ash
- Can A Chatbot Be Your Therapist? Casper’s Neil Parikh Launches A New $93 Million-Backed Startup To Try - Forbes
- X Post from Neil Parikh on Ash
- A.I. Is About to Solve Loneliness. That’s a Problem - The New Yorker
01:12:31 — AI’s Environmental Impact
- Our contribution to a global environmental standard for AI - Mistral AI
- X Post from Sophia Yang on Mistral Environmental Impact
01:17:04 — AI Search Summaries Result in Fewer Clicks
01:19:45 — AI Product and Funding Updates
- xAI Infrastructure Funding
- Anthropic Investments
- X Post from Perplexity CEO on Comet Update
Summary:
White House AI Action Plan
The White House has released its official AI Action Plan, a strategy document that frames artificial intelligence as a global race for "unquestioned and unchallenged" technological dominance.
The plan is built on three pillars. The first, Accelerating Innovation, calls for unleashing the private sector by removing "bureaucratic red tape" and "onerous regulation." It directs federal agencies to rescind the Biden administration's AI executive order and revise standards to ensure AI systems are free from what it calls "ideological bias." The plan also emphasizes supporting American workers with skills training for an AI-driven economy.
The second pillar, Building Infrastructure, is a massive domestic push under the mantra "Build, Baby, Build!" It aims to streamline environmental permitting for data centers, semiconductor factories, and energy projects, while explicitly rejecting "radical climate dogma" to expand the nation's power grid.
The third pillar, International Diplomacy and Security, focuses on exporting the full American AI tech stack to allies while strengthening export controls to deny adversaries access to advanced chips and manufacturing equipment.
The Action Plan is authored by Michael Kratsios, Assistant to the President for Science and Technology; David Sacks, Special Advisor for AI and Crypto; and Marco Rubio, Secretary of State.
How AI Could Upend the World Economy
What if artificial intelligence doesn’t just disrupt the economy, but detonates it? That’s the provocative question posed in a briefing in this week’s issue of The Economist.
Unlike past technologies, AGI could automate not just labor, but innovation itself, generating ideas, conducting scientific research, even improving its own design.
If that happens, the global economy wouldn’t just grow, it could explode, hitting 20 to 30 percent annual growth rates.
But growth at that scale doesn’t necessarily mean prosperity for all. As AI gets cheaper and more capable, wages could shrink, and workers might be priced out entirely.
Capital—not labor—would capture most of the value, meaning those who own AI or data centers could end up with a staggering share of future wealth. Yet even with these projections, markets aren’t behaving like explosive growth is around the corner.
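To make those growth rates concrete, here is a minimal Python sketch of how compounding works at those levels. The 20-30 percent rates come from the article's scenario; the 10-year horizon and the ~2 percent baseline are illustrative assumptions, not figures from the briefing.

```python
# Illustrative only: compare roughly historical growth (~2%/year)
# with the 20-30% "explosive" scenario discussed in the article.

def gdp_multiple(annual_rate: float, years: int) -> float:
    """Factor by which an economy grows after compounding for `years`."""
    return (1 + annual_rate) ** years

for rate in (0.02, 0.20, 0.30):
    print(f"{rate:.0%}/year for 10 years -> economy grows ~{gdp_multiple(rate, 10):.1f}x")
```

At roughly 2 percent, an economy grows about 1.2x in a decade; at 30 percent it grows nearly 14x, which is why these scenarios are qualitatively different rather than just faster versions of normal growth.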
GPT-5 Rumors
OpenAI is gearing up to launch GPT-5 as early as August, according to a new report in The Verge.
Sam Altman said recently on X that “we are releasing GPT-5 soon,” and he previewed GPT-5’s abilities in a recent podcast with comedian Theo Von.
He told the host that he let GPT-5 take a stab at a question he didn’t understand, and the model answered it perfectly. He called this a “here it is” moment, and said he “felt useless relative to the AI” because he felt like he should have been able to answer the question.
A post on X, right around the time the podcast was released, revealed that GPT-5 had been spotted briefly in the wild.
The Verge says the full rollout is expected to include three tiers: a flagship model with integrated o3 reasoning, a lightweight “mini,” and an API-only “nano.”
Critically, GPT-5 consolidates OpenAI’s fragmented model lineup into one unified system, which is a move toward the long-term goal of AGI.
If AGI is ever formally declared, it could shift OpenAI’s business relationship with Microsoft in profound ways, including revenue rights.
This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
This episode is also brought to you by our Academy 3.0 Launch Event.
Join Paul Roetzer, Mike Kaput and the SmarterX team on August 19 at 12pm ET for the launch of AI Academy 3.0 by SmarterX, your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Register here.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: People in power wanna stay in power, and if these models from the five companies that are building the frontier models control the power and the trillions of dollars of value, whoever is in power will abuse them. Like that is human nature. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:22] My name is Paul Roetzer. I'm the founder and CEO of Smarter X and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:44] Join us as we accelerate AI literacy for all.
[00:00:51] Welcome to episode 159 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording July [00:01:00] 28th, 11:00 AM Eastern Time, expecting maybe some announcements this week. So timestamp might be relevant here. This episode is brought to us by AI Academy by SmarterX.
[00:01:11] we have our 3.0 launch coming up. It, I think I mentioned this last week that there was an announcement, pending and it is gonna happen on August 19th. So we have spent the last nine months or so re-imagining our AI Academy and our AI Mastery membership program, and it is launching on August 19th, we're actually gonna launch with a collection of new on-demand course series and certifications, a new AI academy live with weekly experiences.
[00:01:41] There, a new Gen AI app series that Mike is taking the lead on creating, which is gonna be weekly, 15 to 20 minute product and or feature reviews. It is a complete reimagination and I, maybe it's something I'll tell the full story of kind of how we got here. I'll, I'll probably actually honestly tell it on the August 19th webinar.
[00:01:59] I still have to kind [00:02:00] of like build that presentation. I'm actually in the midst of finalizing a couple of the new course series, as we speak, taking an hour off in between doing that to do this podcast. So I'll probably tell the story of kind of how this came to be and, and what version one and two were.
[00:02:17] If you're not familiar with AI Academy, we actually launched our AI courses in 2020 in lieu of not being able to have a in-person conference that year. We, we launched online courses, so we've been doing this for five years and this is a complete reimagination of it, so I'm really excited to launch it.
[00:02:34] The team has been working incredibly hard behind the scenes. We've doubled our staff in the last like 45 to 60 days, in preparation for this launch. We expect to continue to grow that staff and the organization as a result of this. We're grateful for everyone who's been a part of AI Academy leading up till now.
[00:02:52] We have, I, I don't even know. There's been probably over 2,500 to 3,000 people go through AI Academy through [00:03:00] the years. We anticipate a pretty dramatic uptick in that number very soon, based on early demand for what we're launching. So yeah, join us August 19th to hear all about it, the vision, the roadmap, an inside look at everything that's launching that day.
[00:03:16] Any AI Academy members will have access that day too. a lot of the new stuff that's coming out. And then we'll share a little bit of the roadmap for where we're going from here. One of the big features is the new AI Academy will have business accounts, which previously there, that was not a, a feature of it was a lot of individuals.
[00:03:34] so join us August 19th. We'll put the link in the show notes. You can also go to smarterx.ai and click on education. And the AI Academy 3.0 launch event is right there. So again, just go to smarterx.ai. Maybe Mike will put that in the header too. The CTA, I think there's currently like a job openings header.
[00:03:54] Yeah, we'll maybe we'll swap that out and put that there so it's easy for everyone to find. Great. Alright, so that is AI [00:04:00] Academy launch again, August 19th at noon Eastern time. And then also MAICON, our annual in-person event. This is happening October 14th to the 16th. We've had incredible response to this so far.
[00:04:13] I think we are, I don't know the exact numbers. We had a big week last week. I wanna say we're trending somewhere between 40 and 50% ahead of ticket sales for 2024. So we had about 1100 people at the 2024 event in Cleveland, and we are definitely trending in the direction of 1500 plus. So thank you to everyone who has registered already.
[00:04:34] It's, it is like the best place if you are a marketer business leader to meet other forward thinking. Marketers and business leaders. again, it's happening in Cleveland, October 14th to the 16th. Majority of the agenda is published. I'm working on, finalizing the main stage general sessions as we speak as well.
[00:04:53] I actually, I think three of them we finalized. Three or four of them we actually finalized last week. We won't be [00:05:00] announcing them probably here for a couple weeks, but few more announcements coming, but you can get a general idea of the amazing speakers and sessions and the workshops. The pre-event workshops on October 15th.
[00:05:12] It's all live right now. Go to MAICON.ai. That is MAICON.ai and you can use the code POD100 for $100 off your ticket. So when you're going through the registration process, make sure to enter the promo code POD100 for a hundred dollars off. Okay, we had, um. What kind of seemed like a slower week.
[00:05:34] Honestly, at first, like as I was looking through all the links going into the weekend, Mike, it was like, yeah, okay. Nothing too crazy. And then honestly, like, you know, sometimes there's podcasts I prep for where I start to get really excited to talk about the topics. And there is like three, or, I mean they're all great this week, but there's like three or four that ended up becoming probably bigger things to [00:06:00] discuss than I, I initially thought at first glance when I, you know, first put 'em in the sandbox of things to go through this week.
[00:06:06] So we got a lot to talk about. starting with the White House AI action plan. Mike?
[00:06:12] Mike Kaput: Yeah, Paul, I felt the same way. I kind of was like, ah, okay. Might be a little bit of a slow week. And then once we started getting into them, I was like, wait a second. There's some really important things going on. And yeah, like you said, the
[00:06:23] White House AI Action Plan
[00:06:23] Mike Kaput: first one is that the White House has released its official AI action plan.
[00:06:30] This is a strategy document that frames AI as a global race for unquestioned and unchallenged technological dominance. And basically the way they describe this, is quote, this action plan sets forth clear policy goals for near term execution by the federal government. The action plan's objective is to articulate policy recommendations that this administration can deliver for the American people to achieve the president's vision of global AI dominance.
[00:06:58] The AI race is [00:07:00] America's to Win, and this action plan is our roadmap to victory. So with that in mind, keep, keep that at the forefront while we go through kind of the three policy pillars that they built into this plan. And by they, I mean this is an action plan authored by three kind of key people in the administration.
[00:07:17] Michael Kratsios, who's an assistant to the President for Science and Technology. David Sacks, who we've talked about before, a special advisor for AI and crypto. And Marco Rubio, Secretary of State. This plan is built on three pillars. The first, Accelerating Innovation, calls for unleashing the private sector by removing bureaucratic red tape and onerous regulation.
[00:07:40] It directs federal agencies to rescind the Biden administration's AI executive order and revise standards to ensure AI systems are free from what it calls ideological bias. The plan also emphasizes supporting American workers with skills training for an AI-driven economy. The second pillar, Building [00:08:00] Infrastructure, is a massive domestic push under a mantra,
[00:08:04] they literally have this in there, "Build, Baby, Build!" It aims to streamline environmental permitting for data centers, semiconductor factories, and energy projects, while explicitly rejecting what they call, quote, radical climate dogma to expand the nation's power grid. Now, the third pillar is international diplomacy and security.
[00:08:25] This focuses on exporting the full American AI tech stack to allies while strengthening export controls to deny adversaries access to advanced chips and manufacturing. Now, Paul, there's a ton to unpack in this. It's like a 28 page policy brief. A couple things that jumped out to me. I mean, we've talked about this a ton of times, but my gosh, like you really can't read this and expect any consideration for AI's environmental impact from this administration.
[00:08:54] I mean, literally they say their mantra is Build baby build. There's a ton of stuff in [00:09:00] there about basically streamlining, which is maybe code for getting rid of or ignoring certain environmental, environmental regulations. I also found some of the commentary around AI's impact on workers. Interesting.
[00:09:12] There's some measures to drive overall AI literacy. There's training for jobs in the trades to support all the data centers and infrastructure. And there's even some discretionary funding to potentially help rapidly retrain displaced workers. So what did you find noteworthy in here?
[00:09:30] Paul Roetzer: There was a lot. So the document, you, you can see the whole thing at ai.gov and, and, and view it.
[00:09:36] It basically what it does is it breaks down a bunch of areas and then provides like a one paragraph summary and then recommended policy actions. So I, I'll kind of go through some of the highlights and then a quick summary of the executive orders that were released to, with this AI action plan.
[00:09:56] So the, I guess the [00:10:00] prelude, it wasn't even the introduction, the prelude, it's signed by Donald Trump. So it says, today, a new frontier of scientific discovery lies before us, defined by transformative technologies such as AI. Breakthroughs in these fields have the potential to reshape the global balance of power, spark entirely new industries and revolutionize the way we live and work.
[00:10:20] As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance. To secure our future, we must harness the full power of American innovation. So my very, very high level take on all of this is it comes down to competition, mainly with China, and it's about national security, the economy and power.
[00:10:44] Now, if you go back to last year, you know, we were talking as a lead up to the election cycle last year that this is what America needed to do. So I'm, I'm kind of all for the fact that we are all in on having a plan for ai. [00:11:00] the devil is sort of in the details and the nuance of, as you were kind of alluding to Mike, what they mean by certain phrases.
[00:11:08] Mm-hmm. And, and if you don't pay close attention to politics, some of this may just sound all amazing and great and, and all where we should be all for. In reality, I think that we have to understand the nuance of, what this administration believes and, and what they're doing and, and kind of the direction they're going and what they've told us previously about their thoughts on some of these key issues.
[00:11:29] So, with all that being said, kind of break this down a little bit. So in the introduction it says, the United States is in a race to achieve global dominance in AI. Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race. Now that on its own, there would be some debating.
[00:11:51] This isn't a win or lose thing. This is like this perpetual advancement of a technology. There is no point where you say, okay, we, we won or we didn't [00:12:00] win. So, you know, again, some of the language you just have to kind of put into context here. It then says, winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people.
[00:12:13] AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy, an industrial revolution. AI will enable radically new forms of education, media and communications, an information revolution, and it will enable altogether new intellectual achievements, unraveling ancient scrolls once thought unreadable.
[00:12:35] That has actually happened. That's why they're alluding to it, making breakthroughs in scientific and mathematical theory that is happening right now. We just had, last week with the International Math Olympiad, OpenAI and Google both winning gold medals there, and creating new kinds of digital and physical art, a renaissance.
[00:12:49] So again, contextually, I don't disagree with any of this, like this is all what AI is going to enable, and it is nice to see the administration, acknowledging that [00:13:00] and understanding that then says several principles cut across each of these pillars. First, American workers are central to the administration's AI policy.
[00:13:08] The administration will ensure that our nation's workers and their families gain from the opportunities created in this technological revolution. I bold faced this part, the AI infrastructure build out will create high paying jobs for American workers. They're basically referencing the build out of energy and data centers there.
[00:13:24] And the breakthroughs in medicine, manufacturing, and many other fields that AI will make possible will increase the standard of living for all, all Americans. That is, this is commentary here that is not a given. That is, that is a hope and a vision, I would say, at this point. AI will improve the lives of Americans by complementing their work, not replacing it.
[00:13:44] That is a pipe dream. Mm-hmm. So the administration, and again, this is the context and this is as unpolitical as I can possibly make this, I, I don't care, Republican or Democrat or something in between, like me and Mike don't see our job to have a political view at all in any of [00:14:00] this. Like our job is literally just to report what is happening and what the current administration believes and is doing.
[00:14:06] No administration in the United States can admit that jobs are gonna be replaced. Like they, they can't do that. Like if, if the US government straight up comes out and says, yeah, it's actually just gonna replace millions of jobs, then they would have an uproar and they would lose the next election cycle.
[00:14:22] So nowhere is this administration going to admit millions of people are gonna be displaced or underemployed. They, they can't do it. So again, you have to take all of this within the context of who is publishing this and what their goals are for publishing it. And that's just one area to, you know, really understand.
[00:14:41] So then it gets into the action plan. I, I mentioned, so Mike, you had talked about the three pillars and the way the action plan is organized is within those three pillars. And then I'll just go through like the quick summary and then the highlights of what each of these areas are. So the first pillar accelerate AI innovation.
It says America [00:15:00] must have the most powerful AI systems in the world. We must also lead the world in creative and transformative application of those systems. Ultimately it is the uses of technology that create economic growth, new jobs and scientific advancements. America must invent and embrace productivity enhancing AI uses that the world wants to emulate.
[00:15:18] Achieving this requires the federal government to create the conditions where private sector led innovation can flourish. So then within that section, these are sort of, imagine these as the subheads, and then underneath each of these that I'm about to list in bullet point form are policy recommendations.
[00:15:35] So the plan itself doesn't mandate any of this happening. It is basically recommending how to achieve these desired outcomes. Okay. So, again, we, we are in the accelerate AI innovation. These are the subheads within that section. Remove red tape and onerous regulation. We've talked about how this administration hates regulation.
Ensure that [00:16:00] frontier AI protects free speech and American values. The definition in America of what is classified as free speech and American values has never been more polarized. So again, we have to understand who is saying this. What they define as free speech and American values matters, and not just this administration, the next administration.
[00:16:22] So everything within this, and when I talk about being as unpolitical as possible with this, whatever this administration decides, the next administration gets to build off of those principles. So if the next administration decides America has different values or free speech means something different, understand that that shifts the context of this conversation.
encourage open source and open weight AI. Enable AI adoption, empower American workers and support next generation manufacturing. Invest in AI-enabled science, build world-class scientific data sets. Advance the science of AI, [00:17:00] invest in AI interpretability, control and robustness. These are all things we talk about on the podcast all the time.
[00:17:06] Build an AI evaluations ecosystem, accelerate adoption in government, drive adoption of AI within the Department of Defense, protect commercial and government AI innovations, and combat synthetic media in the legal system. So a couple of these, Mike, I'll just unpack. So the enable AI adoption is a critical one.
[00:17:25] Their recommended policy action here, to give you an example of kind of the tone of this document. So what they recommend, one of them is establish regulatory sandboxes or AI centers of excellence around the country where researchers, startups and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data.
[00:17:47] So that's an example of a policy recommendation. Maybe the most important one, at least, Mike, based on the stuff you and I talk about on the pod all the time. Empower American workers in the age of ai. So what, what does that mean? so [00:18:00] here's a quick synopsis of some of the policy recommendations. Again, these are not things they're committed to doing.
[00:18:05] These are recommendations advance a priority set of actions to expand AI literacy and skills development. Continuously evaluate AI's impact on the labor market and pilot new innovations to rapidly retrain and help workers thrive in an AI driven economy. I couldn't agree more. That is like right fundamental to everything we talk about.
[00:18:24] So to see the US government saying that is, is good news. The next, prioritize AI skill development as a core objective of relevant education workforce funding streams. Agreed. Great. Issue guidance clarifying that many AI literacy and AI skill development programs may qualify as eligible educational assistance under section 132 of the IRS code, given AI's widespread impact reshaping the tasks and skill.
So in essence, the government should support this. They should provide funding, they should provide tax-free reimbursements for AI-related training. Awesome. [00:19:00] Like I, I hope that happens. Like it's, and I hope it happens tomorrow. Like I hope, you know, three months from now we're talking about the forward steps being taken in this one.
[00:19:09] Another one is study AI's impact on the labor market by using data they already collect on these topics. Specifically the Bureau of Labor Statistics, and the Bureau of Economic Analysis. and the Census Bureau. leverage available discretionary funding where appropriate to fund rapid retraining for individuals impacted by ai.
[00:19:27] AI related job displacement. A hundred percent. Like I've thought about doing that ourselves, where we would provide, low cost no cost AI education. We can't, as a private entity the size we are, do that reasonably. It would probably need to be underwritten in some way by sponsors or something like that.
[00:19:44] but I think you're gonna see this from the major AI labs and the nonprofits, like everybody's gonna kind of jump in on this and then pilot new approaches to workforce challenges created by AI, including retraining needs. The next one was build American infrastructure. This is all [00:20:00] about the grid, you know, increasing energy, building more manufacturing of semiconductors on site in the US, skilled workforce for the infrastructure, cybersecurity, those sorts of things.
[00:20:11] And then pillar three is, is, yeah, export AI to allies and partners, counter Chinese influence in international governance bodies, strengthen AI compute. So, again, those three pillars. I, I would recommend people go read this stuff and yeah, and understand it a little better, but also understand it.
[00:20:30] It is now just a, here's what we think we need to do. Now it comes down to actually putting this into, into action. And then a quick synopsis on the three executive orders, the best I could find there were three related to this. So the first is, export of American AI Technologies. What does this one mean?
[00:20:48] I, I won't get into like reading the whole thing. It means they don't want China to win. An interesting side note, Mike, I had sent you this one as sort of like a side note, not originally intended to be in the podcast, but it [00:21:00] fits so well. I, I figure we probably have to address this. So apparently, Donald Trump didn't know who Jensen Huang, or Nvidia, was up until recently.
[00:21:09] So Nvidia, if, if, if you're a listener and don't know, is the largest company in the world, they have a $4.2 trillion market cap. And Jensen Huang is the sixth richest person in the world. So I, I think that the Verge didn't give the timing of when exactly this happened, but it appears to be since Trump came into office the second time, so since January of this year.
[00:21:30] And so they wanted to go after some of the big companies and, and apparently Nvidia was on Trump's list of companies he wanted to break up. Hmm. So Trump told this story himself during the AI Action Plan launch event. So I'll, I'll just give a little context here because this matters relative to this idea of AI dominance, and, and the infrastructure side.
[00:21:51] So this is from Trump. Before I learned the facts of life, I said, we'll break him up. Trump recalled, during his speech about his new AI action plan, he [00:22:00] recounted what seemed to be a conversation between himself and an advisor who he didn't name, who told him it would be very hard to break up Nvidia. Trump said, why, what percentage of the market does he have, referring to Jensen Huang?
[00:22:13] And the advisor said, sir, he has 100%. And he said, who the hell is he? What's his name? His name is Jensen Huang of Nvidia, the advisor replied. Trump said, what the hell is Nvidia? I never heard of it before. He said, you don't know what it is. You don't want to know about it, sir. Trump said he backed away from breaking up Nvidia after he realized it would be counterproductive.
[00:22:34] This is a quote from Trump. I figured we go in and we would sort of break them up a little bit, get them a little competition, and I found out it's not easy in that business. So I said, suppose that we put together the greatest minds and they work hand in hand for a couple years. The advisor said no, it would take at least 10 years to catch him, referring to Huang, even if he ran Nvidia totally incompetently from now on.
[00:22:56] So Trump said, all right, let's go onto the next one, meaning let's go break somebody else up. [00:23:00] And then Jensen Huang got to know Trump, and Trump said, and then I got to know Jensen, and now I see why. So what happened was, in the last few months, Trump, who didn't know who Nvidia or Jensen Huang was apparently, according to his own, testimony here, realized the significance of Nvidia and that it's an American company.
[00:23:20] The previous administration had put export controls in to prevent the sale of Nvidia chips to China in the fear that China would catch up to us. And so Jensen Huang went, met with Trump and actually convinced him to remove that export control and allow them to sell chips, maybe not their most powerful chips, maybe a generation or two earlier.
[00:23:41] Mm. Sell those chips into China so that America could dominate and they could make the Chinese reliant on American technology. That's literally the goal there. So this entire part of the AI Action Plan, the entire executive order, is about creating reliance on American technology and accepting that Nvidia is at the frontier of all of [00:24:00] that, and that penalizing Nvidia would be a bad idea.
[00:24:03] This is why Nvidia's stock jumped back up in the last couple weeks. So that's an interesting executive order. There's another executive order on accelerating federal permitting of data center infrastructure. So this is, like you said, Mike, forget any impact on the environment. If it has to do with energy or data centers, we are building it and we are going to win in that space.
[00:24:23] The interesting thing here, and I'll put a link in the show notes for this, is from last fall. Jensen Huang was talking about data centers, and he says, AI is now infrastructure. And this infrastructure, just like the internet, just like electricity, needs factories. These factories are essentially what we build today.
[00:24:42] So he's talking about how Nvidia builds data centers, but he actually calls them AI factories. You apply energy to it and it produces something incredibly valuable, and these things are called tokens. So what he's saying is, we build energy, we build data centers, those data centers produce tokens, which basically are the foundation of intelligence.
[00:24:59] And then an [00:25:00] interesting related quote last week from Demis Hassabis of Google DeepMind, who tweeted: You know what's cool? A quadrillion tokens. We processed almost one quadrillion tokens last month, meaning June, more than double the amount from May. And that was in a reply to Logan Kilpatrick, who said, Google is processing 980 trillion plus monthly tokens across our products, up from 480 trillion in May.
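A quick back-of-the-envelope check on those token figures. Note the ~0.75 words-per-token ratio used below is a common rule of thumb for English text, not a figure Google published:

```python
# Sanity-check the token numbers cited in the episode.
# Assumption: ~0.75 words per token is a rough rule of thumb for
# English text; actual ratios vary by language and tokenizer.

may_tokens = 480e12   # ~480 trillion tokens processed in May
june_tokens = 980e12  # ~980 trillion tokens processed in June

growth = june_tokens / may_tokens  # month-over-month multiple
words = june_tokens * 0.75         # rough word-count equivalent

print(f"Month-over-month growth: {growth:.2f}x")              # ~2.04x
print(f"Word equivalent: {words / 1e12:.0f} trillion words")  # ~735 trillion
```

That ~735 trillion figure is in the same ballpark as the "about 750 trillion words" estimate given in the conversation.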
[00:25:26] So the number of tokens being output by these data centers is basically doubling every month, which means we, as business users and personal users of AI technology, are using it that much more. Even if you don't understand the concept of tokens, they're basically the equivalent of words. If it was a quadrillion, or let's say 980 trillion tokens, that's about, I don't know, like 750 trillion words.
[00:25:57] The equivalent of that would be roughly what we're [00:26:00] outputting within these models. And then the last one is probably the most subjectively biased, depending on your perspective here. This is literally the headline of the fact sheet: President Donald J. Trump Prevents Woke AI in the Federal Government.
[00:26:16] And so it says they are prioritizing truthfulness and ideological neutrality. They talk about unbiased AI principles. They say the large language models shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where reliable information is incomplete.
[00:26:35] They say they shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI, and that developers will not intentionally encode partisan or ideological judgments into LLMs. This is the most absurd of all of them, because they're on record saying they want them to output their ideals.
[00:26:56] So, to this administration, the idea [00:27:00] of neutrality is: our view of the world. This is what Elon Musk is doing with xAI. Like, he literally said it: we're gonna train these things to represent what we believe to be historical truths. So this goes back to the episode 158 conversation about who decides truth.
[00:27:15] And again, in a nonpolitical way: if you think that this administration knows what truth is and they present facts, then, okay, but that means you won't believe the next administration. Let's say the Democrats come back into power; then you will believe that the Democrats are being untruthful.
[00:27:37] And if the Democrats control what a large language model says? I mean, literally within this executive order, it says that LLM companies, AI model companies, will not be eligible for federal contracts if they don't adhere to the quote-unquote unbiased AI principles determined by a biased government.
[00:27:56] So this is the part I just don't [00:28:00] understand. And again, I go back to last episode's conversation: I don't care who you think knows truth and fact. The opposite administration will always come into power. It's inevitable in politics. And so we still arrive back at this idea that someone is the gatekeeper of this.
[00:28:20] Whether it's this administration and you like this administration or you don't, or it's the next administration and you like them or you don't, they will determine this. And this executive order mandates following the unbiased AI principles determined by a biased body of people. I just don't get it.
[00:28:39] And so this is where you then worry about the whole AI Action Plan and how much of it actually falls within the true principles that it says it will follow, which I believe in, almost all of them. The AI Action Plan is a fundamentally solid plan, right? It's just, [00:29:00] is it going to be pursued in an objective way or not?
[00:29:03] And I would have the same questions regardless of who is in power. Again, this is all about power and controlling these things. These things are believed to be able to drive trillions of dollars of economic impact; we'll talk about that in the next main topic. People in power wanna stay in power, and if these models from the five companies that are building the frontier models control the power and the trillions of dollars of value, whoever is in power will abuse them.
[00:29:28] Like, that is human nature. So I don't know what it means beyond that, Mike. I don't have a "here's how we're gonna make this better" kind of ending to this. I just want people to understand this is a very important plan. It is a sound plan. It's well written. Mm-hmm. People who know AI wrote this plan. Whether or not it is pursued to the true benefit of Americans
[00:29:51] at a small scale, and more broadly humanity and society, that's the to-be-determined part.
[00:29:58] Mike Kaput: Yeah, and I like your [00:30:00] point too, that it shows where this stuff is going. Whether or not these policies get enacted in the right way, we can make some very reasonably confident bets about the future, right?
[00:30:12] The environmental aspect is not going to be a priority, some type of AI literacy is on the table but it doesn't address displacement, and I would be betting pretty heavily on anyone that makes data centers moving forward.
[00:30:26] Paul Roetzer: Yeah, I think that's a good synopsis. It is everything we've been saying needed to happen or was happening; it just sort of validates a lot of that. And again, for me and Mike, we spent a lot of time researching this stuff, thinking about this stuff, synthesizing this stuff, and we always want to know that we're heading in the right direction, that we're not misleading our listeners in our pursuit of being as objective as we can be about this stuff.
[00:30:51] Then you get a plan from the government that's basically, literally, in print saying everything we've been saying. It's like, okay, good, we're on the right track. [00:31:00] We're interpreting correctly what is going on. And so, yeah, I think for us it's helpful to just see it said. And I do think, on AI literacy, they're aware of the jobs impact.
[00:31:11] They don't wanna acknowledge it, you know, directly, transparently, but they're pursuing ways to solve for it. They're betting on infrastructure. I don't know that it's the right play to think of it as a race that we have to beat China at. And, well, maybe next week we'll touch on this, but China came out with their own plan like 48 hours later, and they were trying to portray it more as, hey, let's all work together.
[00:31:33] And I think it was meant to be sort of the opposite of the US approach. But again, is it truthful? Is it actually what it was? Who knows? It's politics. Everybody lies, everybody pursues power, regardless of what side of the aisle they're on.
[00:31:51] Mike Kaput: All right. Our next big topic this week is about the following question.
[00:31:55] How AI Could Upend the World Economy
[00:31:55] Mike Kaput: What if artificial intelligence doesn't just disrupt the economy but [00:32:00] actually detonates it? That's the kind of provocative question posed in a briefing in this week's issue of The Economist. So in this briefing, they talk about the fact that, unlike past technologies, truly getting to AGI could end up automating not just labor but innovation itself, with AI generating ideas, conducting scientific research, and even improving its own design.
[00:32:23] If that kind of intelligence explosion happens, they posit, the economy wouldn't just grow, it would explode. You'd be hitting, in some projections, 20 to 30% annual growth rates, which are insane the longer they go on. But as The Economist kind of unpacks, growth at that scale doesn't necessarily mean prosperity for all.
[00:32:43] As AI gets cheaper and more capable, we could see wages shrink. Workers might be priced out of the labor market entirely. Capital, not labor, would capture most of this value, meaning those who own AI or data centers could end up with a staggering [00:33:00] share of the future wealth created. Yet, with these kinds of projections, if you start gaming this out, if that happened, markets are not behaving like explosive growth is around the corner.
[00:33:11] So The Economist kind of unpacks, well, why is that? On one hand, it's possible the forecasting models being used by some of the more optimistic AI labs and economists out there are just wrong. Or maybe, just like with AI's capabilities, everyone's underestimating how fast things are about to move.
[00:33:32] But as one economist they talked to put it in the report, once you start thinking about the impact of economic growth when it comes to AGI, it's hard to think about anything else. And I think, Paul, that last part really stood out to me here, because when you start thinking creatively about the possible effects of AGI, or even, you know, runaway, essentially superintelligence that is improving constantly,
[00:33:57] when you think about how that's going to affect the global economy, [00:34:00] it just becomes kind of a rabbit hole. And I guess my question for you is, are enough people thinking seriously enough about this?
[00:34:07] Paul Roetzer: I don't think they are. I mean, I was going back on how many times last year we talked about GDP and, mm-hmm,
[00:34:14] economic impact, and episode 122 jumped out in particular, when we talked about Situational Awareness from Leopold Aschenbrenner, from June 2024. And that was an episode where we kind of got into this a little bit, because one of the beliefs within Aschenbrenner's Situational Awareness essays was that we could see economic growth rates of 30% per year and beyond, quite possibly multiple doublings a year.
[00:34:39] That was just an asinine thing to most people, because, again, economists say it's never happened, you can't do that. If you look at historical context, it's just not something that occurs. And so it's a hard thing for people to wrap their minds around. And so, you know, it largely just kind of gets ignored, at least by the economists I've talked to; they don't even [00:35:00] acknowledge this as a possibility.
[00:35:01] So, quick backup: GDP, gross domestic product, is the total monetary value of all finished goods and services produced within a country's borders in a specific time period. It's usually measured quarterly or annually. I pulled this morning, and June 27th was the last update: the United States GDP decreased at an annual rate of 0.5% in the first quarter of 2025.
[00:35:26] So January through March, according to the third estimate released by the US Bureau of Economic Analysis, which is the authority on this. So the GDP is at about 29 trillion, give or take, you know, somewhere between 29 and 30 trillion currently, but it shrunk in the first quarter this year.
[00:35:45] So again, for someone to show up and say, yeah, it's gonna grow 20 to 30% annually, it's like, well, it just shrunk 0.5%. How could it possibly grow 20%? It's like a ridiculous thing to consider. So how does AI impact it? Well, it increases productivity; we can [00:36:00] do more in the same amount of time. It, in theory, drives innovation and new product development, which maybe creates demand for new products and services.
[00:36:08] It creates industry and sector growth, potentially. It boosts consumer demand through personalization of products and services. Now, the question is, will people be working and have the income to have that demand? That's an unknown. We can only create more products and services if there's money to be spent to purchase those products and services.
[00:36:28] So, yeah, I think that this is an example of why Zuckerberg is spending tens of billions acquiring top AI talent, why hyperscalers like Google and Microsoft have 80 to a hundred billion dollars in capital expenditures this year. Google just raised theirs in their earnings call last week; they said they were increasing their CapEx this year.
[00:36:48] Microsoft, I think, has stayed steady at their 80 billion. It's why OpenAI and xAI are pursuing trillions to build out data centers and energy infrastructure. And it's why we have an AI Action Plan from the US government [00:37:00] that prioritizes AI acceleration at the cost of everything else. Because even if the 20 to 30% numbers are unrealistic, even getting to like five to seven to 10% would be transformational for the government, right?
[00:37:14] So if you could do that in a consistent way, there's literally trillions of dollars to be unlocked here. And so for the companies that can be at the center of it, which largely are the AI model companies and the companies that produce the energy and the infrastructure to enable those things, that build the AI factories, like Nvidia, we're talking about trillions of dollars in market cap.
[00:37:35] And so spending tens of billions or hundreds of billions is nothing compared to the opportunity and the cost of missing it. We've talked about this on a past episode; I don't remember who we quoted on this, it might have been Zuckerberg, it might have been Satya Nadella, or it might have been Sam Altman, whatever.
[00:37:54] The whole idea is, it might not work. We might spend a trillion dollars building all [00:38:00] this out as an individual company and it might not work. But what's the alternative, right? We sit on the sidelines and do nothing, and we're not part of the conversation. So this is why Meta and Zuckerberg have to be a part of this conversation.
[00:38:11] It might not work, but the alternative is they do nothing and they're irrelevant in three to five years. So all of this opportunity, this possibility of massive growth, is in large part what is driving all of the investments, all of the actions that we talk about every week on this podcast.
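To make the growth-rate discussion above concrete, here is a minimal sketch of the compounding arithmetic, using the roughly $29 trillion US GDP figure cited in the episode; the growth rates are illustrative, not forecasts:

```python
# Sketch of why even "modest" AI-driven growth rates compound into
# enormous numbers over a decade. Starting value is the ~$29 trillion
# US GDP figure cited in the episode; the rates are illustrative.

def project_gdp(start, annual_rate, years):
    """Compound a starting GDP at a fixed annual growth rate."""
    return start * (1 + annual_rate) ** years

gdp_now = 29e12  # ~$29 trillion

# 2% ~ historical norm, 7% ~ "transformational",
# 30% ~ the Aschenbrenner-style scenario discussed above.
for rate in (0.02, 0.07, 0.30):
    result = project_gdp(gdp_now, rate, 10)
    print(f"{rate:.0%} for 10 years -> ${result / 1e12:.0f} trillion")
```

At 2% the economy grows to about $35 trillion in a decade; at 7% it roughly doubles to about $57 trillion; at 30% it balloons to about $400 trillion, which is why economists treat those projections as so far outside historical experience.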
[00:38:28] Mike Kaput: Yeah. And to that last point, if you are routinely scratching your head or scoffing at the fact that people are investing so much money in AI companies, some of whom do not turn a profit or, like, light cash on fire, this is why. It's a very logical move.
[00:38:43] It's not stupidity. It may be optimism or mania, but it is not ending anytime soon. Everyone has to do this.
[00:38:52] Paul Roetzer: Yep. Yeah, if you have the money. And this is why, like, last week I said there's basically five companies that can pursue the biggest models, 'cause we're [00:39:00] talking about hundreds of billions and, in the not-too-distant future, trillions.
[00:39:03] Like, Sam Altman kind of came out jokingly last year saying he was pursuing 7 trillion. Mm-hmm. I don't think it was a joke. I don't know that the number was 7 trillion, but they raised half a trillion already, or, you know, that's what Project Stargate is supposed to be. And I can promise you that was just one phase of the grander vision.
[00:39:21] So I am sure that they are at least discussing trillions as what it's gonna take over the next four to five years to build the infrastructure needed to build the models they envision. Mm-hmm. To unlock all this growth.
[00:39:35] Mike Kaput: Alright, our third big topic this week,
[00:39:37] GPT-5 Rumors
[00:39:37] Mike Kaput: OpenAI is gearing up to launch GPT-5 as early as August, according to a new report with some rumors in The Verge.
[00:39:46] Sam Altman said recently on X, as well, that, quote, we are releasing GPT-5 soon. And he previewed GPT-5's abilities in a recent podcast with the comedian Theo Von. He told the host on that podcast that he [00:40:00] let GPT-5 take a stab at a question he didn't understand, saying, quote, I put it in the model, this is GPT-5, and it answered it perfectly.
[00:40:09] He called this kind of a, quote, here it is moment, and said he, quote, felt useless relative to the AI, because he felt like he should have been able to answer this question. Right around the same time, a post on X revealed that GPT-5 had been spotted briefly in the wild. The Verge says the full rollout is expected to include three tiers:
[00:40:29] there's a flagship model with integrated o3 reasoning, a lightweight mini model, and an API-only nano model. It's assumed that GPT-5 could consolidate OpenAI's kind of fragmented model lineup into one unified system, and it's still kind of unclear what that looks like. But that could be a move towards this long-term goal OpenAI has of AGI.
[00:40:52] And obviously, if we declare AGI at any point, it could shift OpenAI's business relationship with Microsoft as well. [00:41:00] So, Paul, if the rumors are true, we're getting GPT-5 very soon. The unified system thing we've heard about, known about. I'm not sure if that means the system itself will determine which model to use for tasks.
[00:41:13] Like, what else is worth getting ready for here if you're kind of a business leader or a user getting ready for GPT-5?
[00:41:21] Paul Roetzer: Yeah, I think just paying attention to what, you know, OpenAI is talking about when it does come out, understanding the impact. It's hard to know until we know if it's a unified model or a router model.
[00:41:31] I don't know if that's gonna make a difference, but yeah, I think we discussed the distinction there: when you put the prompt in, it may be multiple models still (there may still be a chat model, a reasoning model, you know, an image model) and it just automatically decides which model to route it to, versus it's actually just a single model with all of those capabilities built in.
[00:41:50] Again, I don't know. As the user, there might be some latency issues; it might be a little slower if it's a router model, but I think it's still gonna do the same things, generally [00:42:00] speaking. The other anecdotal piece is there were some rumors that the models were being tested in the LM Arena
[00:42:07] under code names like Zenith, Summit, Lobster, Nectarine, Starfish, and o3 Alpha, which wouldn't be too much of a mystery; it's pretty obvious what that last one would be. Those have been pulled as of last night; I think they were no longer in the arena. I don't know how long they were active, but it appeared they were testing some new models that people had pretty positive responses to.
[00:42:28] My general feeling, as I've kind of mentioned a couple times recently, is I think we're kind of at AGI, roughly. Yeah, you know, I think OpenAI probably believes GPT-5 is or will be AGI; they're kind of alluding to that. It would explain part of their shift to the talk of superintelligence.
[00:42:46] So I don't think they're gonna call it that, per se. I think they'll probably do a lot of cutesy tweets, like feeling the AGI and things like that. But I just feel like if you take these models, and whatever GPT-5 [00:43:00] is gonna be, and you post-train them on some specific things or give them agentic ability to take actions, it likely would qualify for any reasonable historical definition of AGI.
[00:43:11] So again, I think it's just semantics at this point; whether it is or isn't is hard to really measure. A couple other things that Altman said on the Theo Von podcast that I thought were noteworthy: he said GPT-5 is the smartest thing, smarter than us in almost every way,
[00:43:30] meaning it's the smartest thing in the room, was kind of the perspective here. And yet here we are. So this is Sam Altman talking to Theo; it's so hard to read Sam's quotes sometimes. There's something about the way the world works, there's something about, and this doesn't mean it's true forever, but there's something about what humans can do today that is so different.
[00:43:51] There's also something about what humans care about today that is so different than AI, and I don't think the simplistic thing quite works. Now, again, by the [00:44:00] time it's a million times smarter than us, who knows? So he is basically saying GPT-5 is smarter than him, it's smarter than anybody else in the room, but yet he's still there as the CEO of OpenAI doing his job every day.
[00:44:10] You and I are still here doing the podcast. And so, like, there's something unique about what humans bring to the table. He can't put his finger on it, but humans still seem to be needed, even though this thing's probably AGI based on his own previous definitions of it. He just doesn't know if that holds true, you know, three years, five years from now.
[00:44:29] And then the other one that had me really thinking, I thought this was a really interesting analogy. I guess on Joe Rogan's podcast, Altman had mentioned something about eventually having an AI president. And so Theo Von asked him, like, hey, do you think that's actually a thing? And so Sam said, hadn't really taken my thinking to this extent.
[00:44:48] Everything that it takes to be a president, but I know what it takes, it takes a lot... man, I really struggled to read his quotes. So, okay, I'll just summarize this part, 'cause it doesn't [00:45:00] make any sense. He's basically saying: I know what it takes to be the CEO of OpenAI, and so I can better evaluate this on being a CEO versus being the president.
[00:45:09] Okay, so CEO, because I know what that job is like. Okay, that should be possible someday, maybe not even that far off. Like, I think the idea, to look at an organization, to make really good decisions: there's a lot of things that you can imagine an AI CEO of OpenAI could do that I can't do, meaning Sam Altman can't do.
[00:45:30] I can't talk to every person at OpenAI every day. I can't talk to every user of ChatGPT every day. I can't synthesize all that information, even if I could get it. But an AI CEO could do that, and it would have better information, more context. It could massively parallelize this. And I think that would lead to better decisions in many cases.
[00:45:52] So that just got me thinking. I was like, oh my God, he is right. Imagine if every morning you could do like a one-question [00:46:00] poll of your workforce and then get all that feedback back and synthesize it in five seconds. A human CEO could never do that. And imagine that with your employees, your customers, your board, your analytics data.
[00:46:15] Like, imagine having real-time intelligence and synthesis of that information on any data point you want as a CEO. And it's like, wow, okay. In that example, you can now start to see where a co-CEO that is an AI truly starts to take a greater role in the leading of companies. And then you could apply that to basically any role and say, well, what data do I need?
[00:46:40] Right? What are the KPIs I'm looking at every day? What's the data I would love to have that I don't have? What's the data I have that I can't possibly synthesize every day and find meaning in, find insights from, make decisions based on? And imagine a generative AI model had access to all of that and could synthesize it into three bullet [00:47:00] points at any given moment.
[00:47:01] It's like, whew. Yeah, I haven't really thought about it that way.
[00:47:04] Mike Kaput: Yeah. That would be quite the game changer. As we're talking about this, I also wonder, depending on how GPT-5 looks and how it uses different models, if it could be a wake-up moment for your average person. Because beyond it being smarter, I feel like right now people are not understanding the full capabilities of reasoning models, for instance, because half the time people aren't even picking the models they're supposed to be using.
[00:47:30] Yeah. Or picking incorrectly which models they should be using.
[00:47:34] Paul Roetzer: Yeah, I agree. Like, if you ask a harder question and it requires deeper thinking and you don't know to go to the o3 model, right, but then the new model does that for you, it's like, whoa, that's different.
[00:47:45] Mike Kaput: Yeah.
[00:47:45] Paul Roetzer: Yeah. I could see that happening quite a bit, if that's how it ends up working. Interesting.
[00:47:48] Mike Kaput: Agreed. Alright, let's dive into this week's rapid fire.
[00:47:52] AI Is Impacting Tech Jobs
[00:47:52] Mike Kaput: So, first up: you are not imagining it, says Forbes, AI is already taking tech jobs. They talk about how in recent months a wave of layoffs has swept across the industry, with CEOs, as we've talked about, growing more candid about AI's direct role in job cuts.
[00:48:09] So we've covered plenty of this before. Forbes mentioned the Fiverr CEO's AI memo, Klarna cutting 40% of its workforce citing automation and then actually walking parts of that decision back, Duolingo, IBM, Microsoft. Forbes details how essentially thousands of roles have started quietly disappearing.
[00:48:29] And AI is increasingly cited as the reason. Hmm. According to Forbes, the impact right now in tech is sharpest among entry-level developers. They say a Stanford study found employment for 18-to-25-year-old coders has dipped since ChatGPT launched. And they also talk about how companies are moving from mass hiring to kind of precision hiring, prioritizing top-tier talent and letting average performers go.
[00:48:56] But they also say there's a bit of a silver lining here. AI is also [00:49:00] creating new demand for engineers, specifically outside of tech, in finance, healthcare, and manufacturing. Now, Paul, this is just some more evidence that we're not the only ones seeing this and talking about this, and tech seems to be a bit of a canary in the coal mine here.
[00:49:15] Do you think that this speeds up and starts to go a bit beyond just tech?
[00:49:22] Paul Roetzer: Yeah, I think it already has started moving beyond tech. I think the most interesting part of this story is probably just the continued coverage from mainstream media. Yeah, it's expanding now, and this is stuff we've known and been talking about for two years.
[00:49:41] You know, I think earlier this year we finally started getting CEOs admitting to this, and now we're starting to see mainstream media pick it up. And I've said on a recent podcast episode, I still think this maybe becomes the most important issue of the midterm elections in the United States in 2026.
[00:49:57] Mm. And so going into this fall, I would expect [00:50:00] coverage of this to pick up. I would expect some pretty high-profile stories on it, and I would probably anticipate some increased negative reactions from society around this, because I think it's gonna become just more apparent where this is leading in the near term.
[00:50:18] And again, I'm an optimist when it comes to this; I think we'll figure it out. I do think it's gonna open up all kinds of incredible possibilities and career paths that we do struggle to define right now.
[00:50:30] Mike Kaput: Yeah.
[00:50:30] Paul Roetzer: I just don't think that's gonna happen fast enough to offset the negative impact it'll have in the near term.
[00:50:38] And so I would say I'm a realist when it comes to jobs in the next, you know, I don't know if it's like a one-to-three-year time period. I'm not even sure what that near-term timeframe is where I think we go through some really painful parts, but one to three probably seems pretty realistic. And then I think over time we figure it out.
[00:50:57] And now that all these major [00:51:00] labs and nonprofits and governments are accepting the impact that this is gonna have on jobs, we might get some really smart people together who figure out, how do we solve this? We weren't admitting that it was a problem, and now that we're sort of admitting it, maybe we can get to working on solutions.
[00:51:16] And that's been my biggest thing all along: if it does go bad for a little while on jobs, let's come up with some plans. And so at least I think we have people working on plans now, and that's a really good direction.
[00:51:32] Mike Kaput: I also like your point about just the overall narrative here being covered more by mainstream media.
The narrative piece of this matters because this is now going mainstream, and your employees, if you are a leader, are going to increasingly be reading these headlines or watching this on the news. You need to have some type of AI communication plan. You need to be talking about your perspective. We talked about this last week.
[00:51:59] Yeah. Your [00:52:00] vision, your perspective on AI. Because I guarantee you, if layoffs start happening at your company, even for, you know, necessary reasons, AI is increasingly going to be seen as a scapegoat here too. I think employees are going to assume the worst by default if all they're consuming are headlines like this.
[00:52:19] Paul Roetzer: Yeah. Most CEOs are going to, if they haven't already, connect the dots that AI equals efficiency and productivity gains, mm-hmm, which equals fewer people doing the same amount of work, which means your return-to-work policies, five days a week, are just veiled attempts to get 10% of the workforce to quit.
[00:52:37] So, you know, you're just gonna do these things, but at some point we're gonna run out of those things to do, the leverage points that the C-suite holds without saying it's because of AI. So yeah, I just think it's gonna be a reality, and I think we will adjust to that reality, and I think we will solve for it as a society.
We're resilient; we'll figure it out. I just [00:53:00] think we have to be honest about what's happening. It's the only way to then kind of move to the what-do-we-do-about-it phase, which is what I think is the most important part of it.
[00:53:08] Mike Kaput: All right.
[00:53:08] Advice for College Students
[00:53:08] Mike Kaput: Next up, Nvidia CEO Jensen Huang, who we mentioned earlier, has some interesting advice for today's students.
[00:53:15] He was asked about this and said if he was starting over, he wouldn't focus on software in his career. He'd study the physical sciences. He thinks the next great wave of AI is physical AI. He explains the industry has already moved through perception AI, where AI was recognizing images, and the current generative AI phase.
[00:53:32] The next frontier, in his view, involves teaching machines about the real world, concepts like physics, friction, and inertia. That understanding, he argues, is the foundation for true robotics, and he thinks that intelligent machines will be essential for running the automated factories of the future.
[00:53:52] Now, Paul, I mean, advice from Jensen Huang is always worth taking seriously, but really we wanted to highlight this topic because it's something [00:54:00] you get asked about a lot in your talks and discussions with business leaders: if you were in college, what would you be studying and why?
[00:54:09] Paul Roetzer: Yeah, and I thought,
[00:54:10] you know, what Jensen's giving is great advice. The reality is not everybody is equipped to go into the physical sciences. Yeah. Like, I was pre-med at the start of college. That lasted about four weeks. I failed out of Bio 170, the weed-out biology class. Now, I didn't really go to class, so it's kind of my own fault.
[00:54:29] But I wasn't equipped for the sciences. I love the sciences, but it wasn't gonna be my thing. And so just saying, yeah, there's gonna be tons of jobs in this phase, it's like, okay, but maybe 5% of people want to go into those jobs. So I think the bigger vision here is: what are the industries and career paths where continually more powerful, more advanced AI unlocks new areas of exploration, discovery, and [00:55:00] innovation?
[00:55:00] Like, where, no matter how smart the AI gets, is it actually gonna drive opportunity? And so the sciences are a perfect spot, because of all these unanswered questions in biology, cosmology, chemistry. It's gonna open up this incredible, like, golden age of discovery. Now, I was thinking about this over the weekend.
[00:55:20] I'm not even sure what drove this. What day is today? Monday. So I was with my family last week in Toronto. We were on a trip, and so I spent a lot of time with my kids, and just a lot of conversations come up. And somewhere in that trip I really started thinking about how I want to guide my children as much as I can to focus on entrepreneurship.
[00:55:41] Like, I want them to go to high school. I want them to go to college. I want them to get the degrees and have the life experiences that come with it. I don't know what career path I would personally make a bet on being viable, you know, by the time they get outta college in six and seven years. But entrepreneurship, [00:56:00] I think, is a whole nother realm.
[00:56:01] Like, I think all the barriers to entrepreneurship come down. And so they're 12 and 13, and I am proactively teaching them business fundamentals. And as of, like, this weekend, trying to think strategically of ways to ramp up my efforts to teach them about entrepreneurship. I didn't even know entrepreneurship was a thing until, let's see,
[00:56:22] when I was in eighth grade, going into ninth grade, I started caddying at a local country club. And that was the first time in my life I met entrepreneurs. Like, where I came from, we didn't know people who ran businesses. That wasn't what I grew up around. It was a very blue-collar town.
[00:56:38] And that's what we knew. And then I was like, oh, you can own a business? Like, I didn't even really realize that. I wasn't exposed to that. And then my mom started her cookie franchise my junior year at Ignatius, and that was the first time in my family, you know, I really saw entrepreneurship. So my thinking now is, I really want to expose my kids to the opportunity to be [00:57:00] entrepreneurs at an early age, because I think AI is gonna make that way more possible than it was when we were coming up.
[00:57:07] Hmm. And an example here: my daughter, she's very creative, you know, an artist, a creative writer. And she found this really cool website to help her visualize the stories she was writing. And it was a very specific niche product. And she said, can I get this? And I was like, okay, well, I'm gonna teach you how I would assess this.
[00:57:26] And so I actually showed her CB Insights, which Mike and I use to analyze tech companies. And I went through, I ran an analysis, I used their AI agent to write a brief on this company. And then I walked her through the brief. I was like, okay, here's what Mike and I would look at: funding. Let's look at what funding they have.
[00:57:42] Who are their investors? Let's look at the competitive landscape and what other companies we could be looking at. And so I felt like it was an opportunity to say, here's something she's interested in, so I'll explain competition and funding in the context of whether or not she can use this website to [00:58:00] do this.
[00:58:00] And so I thought that's a learning experience, and I'm gonna proactively look for those things and try to build on top of what their interests and passions are in life. Mm-hmm. And help create the opportunities for them if they choose to pursue an entrepreneurial path instead of thinking they have to go work for a corporation.
[00:58:18] So that's kind of the way I'm starting to think about this. And I did something recently with a family member of mine who's in college, connecting them to, like, a head of entrepreneurship. Mm. Because it's like, just get to know this person when you, you know, get to the campus. Go study whatever you want, get a business degree, whatever, but understand entrepreneurship while you're there.
[00:58:38] So I think that's a really important aspect of where this all goes.
[00:58:41] Mike Kaput: I love that. And I would also say, if you're at any age listening to this and you are an employee at a business or a corporation, this kind of thinking is critical. Like, thinking like an entrepreneur, or they might say intrapreneur, right?
[00:58:54] Yep. Someone who's an employee. I think that's such a differentiator, especially in the age of ai, because you're going to [00:59:00] proactively seek out opportunities to use the tools to create value, which is not going to end poorly for you if you do that.
[00:59:06] Paul Roetzer: Yeah, it's a great point, Mike. 'cause not everyone's cut out to be an entrepreneur.
[00:59:09] And I, I, I would say that like, while I think entrepreneurship is going to be fundamental,
[00:59:13] Mike Kaput: yeah.
[00:59:13] Paul Roetzer: it's hard as hell and it's lonely to be an entrepreneur. Like it's really difficult. and so some people just need that entrepreneurial spirit within a company that they're at, so they can raise their hand and say, Hey, what if we did this?
[00:59:26] And maybe the CEO says, Mike, it's a great idea, why don't you take the lead building that? Right. And just having that perception that you can build things, and then understanding the basics of business. It's like, okay, is the CEO gonna agree or disagree? Let me build a business case for this.
[00:59:40] So yeah, you can have that mindset without having to do your own thing.
[00:59:44] Instacart CEO About to Take Reins of Big Chunk of OpenAI
[00:59:44] Mike Kaput: All right. Next up, Instacart CEO Fidji Simo is about to start her new leadership role at OpenAI. On August 18th, she is starting as the company's CEO of Applications, as we've covered in the past. She reports directly to [01:00:00] Sam Altman, and her mission will basically be to lead at least a third of the company, focusing specifically on product, growth, and scaling the real-world use cases for OpenAI's technology.
[01:00:11] This is part of a broader reorganization that allows Altman to concentrate more on core research, compute, and safety systems. Simo, who also joined OpenAI's board in March 2024, has been vocal about the need for responsible development. And as we approach her start date, she shot off a recent memo to staff emphasizing that AI leaders must make choices that lead to broad empowerment rather than concentrating more wealth and power in the hands of a few.
[01:00:40] So Paul, what does her role, this new role, even mean for OpenAI moving forward? Are there big changes we should be expecting here?
[01:00:49] Paul Roetzer: I think when they first announced this, I said it seemed like a prelude to Sam stepping back. Yeah. And, and this memo does nothing to change my mind on that. This is a vision for the [01:01:00] company and this is a roadmap for what they're gonna build.
[01:01:02] That would have come from Sam previously. So I don't know if there's a formal plan in place or how this is all gonna play out, but it does seem very obvious that this is a prelude to Sam stepping back from these kinds of memos and her stepping forward. So, I know this is not meant to be a main topic, but there's some stuff in here we gotta talk about.
[01:01:26] Mm-hmm. So, you mentioned the power thing. she breaks it down into knowledge, health, creative expression, economic freedom, time, and support. And I want to unpack each of these real quickly because I think that they're extremely important to understand where OpenAI is going and where AI as a whole is going.
[01:01:42] So, on knowledge, she says: for the first time, AI has the power to truly democratize knowledge and the opportunity it brings. AI can compress thousands of hours of learning into personalized insights, delivered in plain language at the pace that suits us, responsive to our specific level of understanding.
[01:01:58] It doesn't just answer questions. [01:02:00] It teaches us to ask better ones, and it helps us develop confidence in areas that once felt opaque and intimidating growing both personally and professionally. In a 2024 OpenAI study, 90% of users said ChatGPT helped them understand complex ideas more easily. Once we put a personalized AI tutor on every topic at everyone's fingertips, AI will close the gap between people who have the resources to learn and people who have historically been left behind.
[01:02:27] So, this goes to that personal tutor, personal assistant idea. Again, every one of these, like, this is well written. This is very intentional in its writing, and you can see the ChatGPT and OpenAI roadmap emerge out of each of these descriptions. Mm. The next one is health. She says: I'm not alone. Nearly nine in 10 US adults struggle to understand and use health information, which leads to worse outcomes and more than 200 billion in avoidable healthcare costs every year.
[01:02:55] Patients often feel powerless in their own care and dependent on others to [01:03:00] explain what's happening in their bodies. AI can explain lab results, decode medical jargon, offer second opinions, and help patients understand their options in plain language. It won't replace doctors, but it can finally level the playing field for patients, putting them in the driver's seat of their own care.
[01:03:15] I have personally experienced this. I don't know, Mike, if you have, but yeah, I had a medical condition earlier this year. I was in the hospital. I didn't understand what was going on, and I was sitting there talking with ChatGPT the entire time. I was uploading lab results, like, explain this to me.
[01:03:30] No, I'm fine. Like everything worked out great. But there was this period for like 45 days where I didn't know what the hell was happening. Hmm. And I was trying to understand the condition. I used it for personal health planning, dietary things like protein, creatine, like trying to understand different things.
[01:03:46] At the age I'm at, it's like, okay, I want a great health span, not just a great lifespan. Like, I wanna enjoy life for a long period of time, and I'm in the active stage of trying to figure all that out. I do that with AI all the time. So the memo went on to say: AI can [01:04:00] also make sure health decisions don't just happen in the doctor's office.
[01:04:02] The biggest levers preventing disease and optimizing health outcomes, sleep, food, movement, stress management, connection, all depend on everyday habits. AI can help us build those habits through small, achievable daily steps with personalized, real-time nudges. I boldfaced that. That is the proactive personal assistant, where it's saying, hey, did you take this?
[01:04:22] Did you think about that? This is where Apple excels, by the way. Like, if anyone listening uses an Apple Watch, it's one of the greatest products ever. The amount of what they're doing with Health on the Apple Watch is incredible. The condition I mentioned earlier was related to my heart. I would've never had the data I had if I didn't have an Apple Watch and hadn't been wearing it for two years.
[01:04:43] Hmm. I had two years of data that I could share with the doctors and that I could interpret myself. So, personalized real-time nudges. Which actually leads me to: what products or what hardware is OpenAI gonna build? Because that might be an important indicator. Creative [01:05:00] expression is maybe the most controversial part of this vision.
[01:05:03] So, the problem is that our ability to express creativity is often limited by our skill sets. Not everyone has the resources, time, and training to paint, write, compose, or build. AI is collapsing the distance between imagination and execution. It's like giving everyone these abilities to create video, image, voice, audio, like it is everyone's human right to be able to express themselves.
[01:05:25] This leads into: yeah, but whose creative expression are you stealing to enable everybody else to have this creative ability? That's kind of the issue here. And then the economic freedom one ties back to the previous note about entrepreneurship. Most people aren't aware of this, so I'll throw this stat out.
[01:05:42] In the United States, 99.9% of all businesses are small businesses. There are 33 million small businesses in the United States, and only 6 million of those have employees. So the vast majority of companies that exist in the United [01:06:00] States are not the big enterprises that everybody works for. Small businesses employ 61.7 million Americans.
[01:06:07] About 46% of the workforce works for small businesses, things that are made possible through entrepreneurship. Mm-hmm. So what the memo said here is: when people can independently create and capture value, they gain power over their economic destiny. Starting a company isn't easy. The average cost in the US is around $30,000.
[01:06:25] I can personally attest that's a low number. An impossible threshold for most aspiring entrepreneurs. And until recently, building a product or launching a service required technical knowledge. AI gives people the power to turn ideas into income no matter their age, credentials, or zip code. A 2024 Shopify report showed AI-enabled solopreneurs launch businesses 70% faster than their peers without AI tools.
[01:06:48] Hmm. So this is that whole idea of, you know, entrepreneurship, the golden age, and AI unlocking it. She gets into time, and people having more time because of AI, and then support, which will lead us into the next [01:07:00] item. For many people, the biggest barriers to progress aren't lack of access or opportunity, but self-doubt, isolation, and burnout.
[01:07:06] Sometimes what's most empowering is support, someone or something that can help us reflect, feel seen, or simply move forward with clarity and confidence. People are already turning to ChatGPT for support when they're preparing for a tough conversation, facing a career setback, working through grief, or just trying to untangle a spiral of thoughts at the end of a day.
[01:07:25] Being able to put feelings into words without judgment and pressure can be profoundly helpful. At the core of philosophy and religion is the idea of self-knowledge. To become who we want to be, we have to understand who we are. If AI can help people truly understand themselves, it can be one of the biggest gifts we could ever receive.
[01:07:41] So again, a little more extended rapid fire, but I think it's really, really important and really telling as to where OpenAI goes. And a lot of the same things talked about here would play out in the Gemini models and Claude models. Like, a lot of these research directions and product directions are probably [01:08:00] gonna be running in parallel to what other labs are thinking about and building as well.
[01:08:04] Mike Kaput: Yeah. We've talked about this in the past, that OpenAI, to achieve the revenue and valuations that it's, you know, aspiring to, really does need to get into some very lucrative businesses. Yeah. To make money. And if you look at each of these as a market, I don't have any numbers in front of me, but I'd imagine each of these as a market is a massive opportunity. For sure.
[01:08:25] So we can actually get a little bit of insight into one of these markets, because in our next rapid fire topic,
[01:08:32] The First AI for Therapy
[01:08:32] Mike Kaput: we have a new AI startup in one of these areas. So Neil Parikh, the co-founder who turned Casper into a billion-dollar mattress brand, has a new venture backed by $93 million. His startup, Slingshot AI, is tackling the mental healthcare crisis with a chatbot named Ash, which has now officially launched after 18 months in development.
[01:08:55] Parikh was inspired by his own experience with therapy and the massive gap in [01:09:00] access to care. They estimate there's only one provider for every 10,000 people seeking help. Ash is his proposed solution. Unlike general AI like ChatGPT, Ash is trained specifically on behavioral health data and is designed to essentially provide therapy, even providing pushback rather than just agreeable answers.
[01:09:21] The AI has learned from various therapeutic styles, including CBT and DBT, and it's even developing its own perspective on what a user should work on next to keep them moving forward. Critics are raising safety concerns because this is an AI therapist. Slingshot says it has a clinical advisory board and protocols to redirect users in crisis to human professionals, and basically they wanna create a new modality of AI-powered care.
[01:09:48] So Paul, we've talked a lot about AI being used for relationships, companionship, and other deeply personal use cases. It seems like this is the next frontier, and it's [01:10:00] both interesting and controversial. Even Neil, the founder, posted on X about this company:
[01:10:07] they said it couldn't be done, they said it shouldn't be done, and we tried anyways. So what do you think of Ash here?
[01:10:12] Paul Roetzer: Yeah. I think this is an inevitable market that will be explored and built out. I also think as a society we're very, very early in understanding the impact of this and what it means. one of the things we're early in understanding is the legal impact of this.
[01:10:29] Yeah. So Sam Altman addressed this in his podcast with Theo Von that we referenced earlier, and TechCrunch covered this. It said ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support, because according to OpenAI CEO Sam Altman, the AI industry hasn't yet figured out how to protect user privacy when it comes to these more sensitive conversations.
[01:10:50] 'Cause there's no doctor-patient confidentiality when your doc is an AI. In response to a question about how AI works with today's legal system, Altman said one of the problems of [01:11:00] not yet having a legal or policy framework for AI is that there's no legal confidentiality for users' conversations. Quote: people talk about the most personal stuff in their lives to ChatGPT.
[01:11:11] People use it, young people especially use it, as a therapist, a life coach, having these relationship problems and asking, what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for that. There's doctor-patient confidentiality, there's legal confidentiality, and we haven't figured that out yet
[01:11:31] for when you talk to ChatGPT. This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today. Altman said, quote, I think that's very screwed up. I think we should have the same concept of privacy for our conversations with AI that we do with a therapist or whatever, and no one had to think about that even a year ago.
[01:11:55] So again, you know, it's early, and people are taking risks by [01:12:00] doing this sort of thing. And that's just on the legal side. Also consider the fact that there's nothing saying humans on the other end can't read all the stuff you're putting in here. Right. And maybe you don't care, and I get it. Like, a lot of people are just like, hey, the benefit's worth the risk.
[01:12:14] But there are people on the other side reading these things. Like, there's no obligation for them not to. They have to train these models. They have to understand how they're being used. Whatever you put in there, you can assume someone in an AI lab might be reading it and know it was you that put it in there.
[01:12:31] AI’s Environmental Impact
[01:12:31] Mike Kaput: All right. Next up: in a push for industry-wide transparency, Mistral AI has published a first-of-its-kind environmental report detailing the lifecycle impact of its models. They conducted this with sustainability consultants, and the study quantifies the cost of both training and using AI. The report reveals that training its Mistral Large 2 model generated 20.4 kilotons of CO2 equivalent and consumed 281,000 cubic [01:13:00] meters of water.
[01:13:01] In contrast, generating a single 400-token answer from its chatbot uses about 1.14 grams of CO2 and 45 milliliters of water. And the study found a strong correlation between a model's size and its environmental impact, highlighting the importance of choosing the right model for the right task. So Mistral is now advocating for a global standard where AI companies publish environmental impact reports for their models.
[01:13:28] Now, Paul, I know you in particular get a ton of questions about the environmental impact of AI. This seems like a positive step forward to at least get some clarity here, though I would've liked more about how much this actually is in energy terms. And I think you had found some stuff on that too.
[01:13:46] Paul Roetzer: Yeah, they weren't super clear about it. There was one thing I found; I think I ran it through Gemini, like, can you explain this? Yeah. Like, put this in context. And so the 20.4 kilotons of CO2 equivalent is roughly the same as the [01:14:00] annual emissions of 500 French households, was the one I got.
[01:14:03] Simon Willison, who we've quoted on the podcast numerous times, he did a blog post and he apparently tried the same thing I did.
[01:14:11] Mike Kaput: Yeah.
[01:14:11] Paul Roetzer: And in his analysis, he said, I'm not environmentally sophisticated enough to attempt to estimate this myself. I tried running it through o3. So he used OpenAI's reasoning model, which estimated approximately 100 London-to-New-York flights with 350 passengers,
[01:14:26] or 5,100 US households for a year. Okay. So again, yeah, we don't know. And then the water, the cubic meters of water, that one's probably a little closer, 'cause that's a straight equation: enough to fill about 112 Olympic-sized pools. Okay. But the thing I thought was interesting here, that I hadn't really thought about and I liked, was they tried to give the context of generating, like, one page of text.
[01:14:51] So this is straight from them. Yeah. And they said generating a single page of text, so this is about 400 tokens, so [01:15:00] that's what, about 300 words, 320 words, something like that, is the equivalent of watching online streaming for 10 seconds. It's like, okay, that's something you can wrap your brain around.
[01:15:09] So if you're watching hours of video, or if you're watching a bunch of, like, you know, Instagram Reels, basically you're probably doing more than you are using a ChatGPT-type model, something like that. But then, the thing I liked is they said, well, what can we do? So there's always the question of, as users, what can we do?
[01:15:26] And they gave some pretty solid responses. So one is, the AI companies themselves need to be more transparent about the environmental impact. Two users should be more mindful of their AI use, choosing the right size model and grouping queries to be more efficient. What they mean there is like, hey, if the mini version of something works, then use the mini version, right?
[01:15:44] Like, you don't need o3-pro just 'cause you have the license for o3-pro, 'cause that's definitely gonna have a greater impact on the environment over time. So use the smaller model when the smaller model is all you need. Which again goes to: we probably need the AI companies to push us to the smaller [01:16:00] models when that's sufficient,
[01:16:01] versus the user being expected to know that. And then, public institutions can drive the market by considering the environmental efficiency of AI models in their purchasing decisions. In theory, the government would play a role in this also, but at least in the United States, we know the government doesn't care about the environmental impacts.
[01:16:16] So they're not likely to, like, drive that. So then it might be more at the educational institution level, nonprofit level, corporation level, sort of demanding that stuff. But yeah, I thought that was interesting. And the other one I thought was interesting is it says: get better at prompting. Like, as the user, learn how to properly prompt your model so you get the thing you're looking for on the first prompt, instead of having to go through it five times to get it.
[01:16:41] So I was like, oh, okay. Prompting efficiency is actually a way to drive efficiency in the model. They're good takeaways.
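To put the per-answer figures quoted in this segment in perspective, here's a rough back-of-the-envelope sketch. It uses only the numbers cited from the Mistral report (1.14 grams of CO2 and 45 milliliters of water per 400-token answer); the scaling scenarios are our own illustrative assumptions, not figures from the report.

```python
# Back-of-the-envelope scaling of the per-answer figures from the Mistral
# report as quoted in this episode. Illustrative only, not official benchmarks.

CO2_PER_ANSWER_G = 1.14      # grams of CO2e per ~400-token (one-page) answer
WATER_PER_ANSWER_ML = 45.0   # milliliters of water per answer
TOKENS_PER_ANSWER = 400

def footprint(num_answers):
    """Scale the per-answer figures to num_answers one-page answers."""
    return {
        "co2_kg": num_answers * CO2_PER_ANSWER_G / 1000.0,
        "water_liters": num_answers * WATER_PER_ANSWER_ML / 1000.0,
    }

# Per-token figures implied by the report
co2_per_token_mg = CO2_PER_ANSWER_G * 1000.0 / TOKENS_PER_ANSWER  # ~2.85 mg
water_per_token_ml = WATER_PER_ANSWER_ML / TOKENS_PER_ANSWER      # ~0.11 mL

# A hypothetical million one-page answers works out to roughly
# 1.14 metric tons of CO2e and 45,000 liters of water.
print(footprint(1_000_000))
```

The point of the exercise is the same one Paul makes: individual queries are tiny, and the totals only become meaningful at scale, which is why per-model transparency matters.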
[01:16:46] Mike Kaput: Yeah, for sure. And I definitely couldn't help reading this in the context of the AI Action Plan with the US government, 'cause it bears noting that Mistral is a French company.
[01:16:57] They're kind of seen as, like, an EU AI champion, [01:17:00] my gosh. Very different perspective. Very,
[01:17:03] Paul Roetzer: very different.
[01:17:04] AI Search Summaries Result in Fewer Clicks
[01:17:04] Mike Kaput: Anyway. All right. Next up, a new study from the Pew Research Center confirms what many online publishers have feared: Google's AI-generated search summaries are significantly changing user behavior. The research provides some clear data showing that when an AI overview appears, users are far less likely to click on links to other websites.
[01:17:24] According to this study, users who saw an AI summary clicked on a traditional search link in just 8% of their visits. That's nearly half the rate of users who did not see a summary. Users who do not see a summary click on links 15% of the time on average. Furthermore, users rarely click on the sources cited within the AI summary itself.
[01:17:44] This happened in only 1% of visits. The data also shows that users are more likely to end their browsing session entirely after viewing a page with an AI summary. These summaries appeared in about one in five Google searches conducted in March [01:18:00] 2025 and were most often triggered by longer, question-based queries.
[01:18:05] So Paul, we've kind of long suspected this is the case. It seems like it's confirmed. Definitely is not in line with what Google has said about this. but that's pretty sobering data.
[01:18:18] Paul Roetzer: Yeah, I mean, it's certainly logical that this would be the outcome. I'll have to see if I can find it, and we can throw it in the show notes if I do, but there was research over the weekend, or the end of last week, that said, yeah, this is true, but we're seeing the quality of visits rise.
[01:18:36] Mm-hmm. Right? So yes, you're getting fewer people to your site, but the people who are coming are seemingly far more qualified than the ones who, you know, maybe have come just from a random click through search results. So yeah, I don't know. I think it's still gonna take time to play out.
[01:18:50] It's probably gonna be different by industry of like the impact and then. The other thing that's gonna, you know, really change this is how much of that traffic is AI agents six to 12 months from now? Oh [01:19:00] yeah. And I just feel like we're gonna be in this perpetual state of revisiting this data, you know, every three to six months of like, okay, well now what's the impact with AI agents having a higher adoption rate and things like that.
[01:19:10] So
[01:19:10] Mike Kaput: Yeah, and I think it's also important to think about context here, especially from a business perspective. I think Andy Crestodina talks a bit about this. He says, look, this is a real impact, but not every search is created equal, right? It's disproportionately going to be for those more informational searches, which may have a very real impact on your website traffic.
[01:19:30] But like you said, you may be getting better traffic that has more intent or more propensity to buy. So, you know, it's unclear at this stage, but there's a little more nuance to it than just AI is killing search. Right? Yeah, definitely.
[01:19:45] AI Product and Funding Updates
[01:19:45] Mike Kaput: Alright, Paul, so in our last topic, I'm just gonna run through some AI product and funding updates and kind of close this out here.
[01:19:53] So first up, just weeks after raising $10 billion, Elon Musk's AI startup xAI is working to [01:20:00] secure up to 12 billion more to fund its massive expansion plans. This new capital would be used to purchase a huge supply of advanced Nvidia chips, and it's got kind of a creative finance deal going on where those chips would be leased back to xAI to power a new jumbo-sized data center for its chatbot Grok. Second, Anthropic is reportedly drawing investor interest that could value the company at more than a hundred billion dollars.
[01:20:25] They're not formally fundraising yet, but investors have approached Anthropic with preemptive offers. The potential financing would mark a sharp increase from the 61.5 billion valuation Anthropic secured in a funding round earlier this year. According to a Bloomberg report, the company's annualized revenue has climbed from 3 billion to 4 billion in just the past month. In some other Anthropic news, in a leaked memo,
[01:20:51] Anthropic CEO Dario Amodei revealed the company is reversing its stance and now plans to seek investment from Gulf states like the United Arab Emirates and Qatar. [01:21:00] This marks a pretty big shift, because Anthropic previously said it was not gonna take money from Saudi Arabia back in 2024, citing national security concerns. In a candid message to staff,
[01:21:12] Amodei acknowledged that accepting the money would likely enrich dictators, but stated, unfortunately, I think "no bad person should ever benefit from our success" is a pretty difficult principle to run a business on. All right, and finally, Perplexity AI CEO Aravind Srinivas has outlined a new vision to transform the company's browser product, Comet, into a personalized operating system.
[01:21:36] Beginning next week, the company will roll out shortcuts for repetitive tasks. Soon after, users will be able to create their own custom scripts and workflows using natural language. The goal is for each user's browser to feel like a mini customized computer that they built for themselves, complete with their own apps, scripts, and dashboards.
[01:21:56] Perplexity's CEO stated that this roadmap is the [01:22:00] reason the company purchased the domain os.ai, which we talked about; they purchased it from Dharmesh Shah of HubSpot. Their long-term plan includes a hybrid approach to computing, with the ability to run AI models both on the server and locally on a user's device.
[01:22:16] Alright, Paul, that is a wrap on a very busy week in AI, going deep on some topics. Appreciate you demystifying everything for us.
[01:22:23] Paul Roetzer: Yeah, the one observation I had as you were going through the funding stuff: if you go through the five AI labs I highlighted last week, Meta, Google, xAI, OpenAI, and Anthropic, and I don't mean this in an overly negative way, but the only ones who don't have to sell their souls to achieve what they wanna pursue are Meta and Google.
[01:22:43] They're the only two of those five labs who can actually fund this. Yeah. Without doing what Dario Amodei is saying, like, hey, we're gonna take a bunch of money from people that we maybe don't think are the right people to align ourselves with, [01:23:00] but we need the money.
[01:23:01] Right. xAI has absolutely done that already. OpenAI is doing it. The only way they can get that kind of money is going outside of traditional vehicles of funding, whereas Meta and Google can fund it through the growth of their own companies. And yeah, that is maybe a completely overlooked advantage that those two have moving forward.
[01:23:27] Microsoft, again, if they weren't limited through their contract with OpenAI, Microsoft could be in that discussion sooner. And maybe that's actually the out for Microsoft: figure out a way to renegotiate this contract with OpenAI, like, what's the value to Microsoft of being able to build their own frontier models?
[01:23:43] Mm-hmm. And because they have the money to do it. And it's not gonna last that long. Like, you gotta get in there before all this, I guess, takes off. So, yeah, I don't know. Interesting. But yeah, good stuff, Mike, as always. More to think about for next week. Thanks [01:24:00] everyone for joining us. We will be back next week, same time, same place.
[01:24:04] Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:24:29] Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.