
[The AI Show Episode 164]: New MIT Study Says 95% of AI Pilots Fail, AI and Consciousness, Another Meta AI Reorg, Otter.ai Lawsuit & Sam Altman Talks Up GPT-6


Serious about learning how to use AI? Sign up for our AI Mastery Membership.


AI that feels conscious is coming faster than society is ready for…

In this episode of The Artificial Intelligence Show, Paul Roetzer and Mike Kaput unpack the viral MIT study, the brutal reality of companies forcing AI adoption, and Mustafa Suleyman’s warning about “seemingly conscious AI.” Alongside these deep dives, our rapid-fire section gives updates on Meta’s AI reorg, Otter.ai’s legal troubles, Google and Apple’s AI strategies, and the environmental impact of AI usage.

Listen or watch below, and scroll down for the show notes and transcript.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:05:52 — MIT Report on Gen AI Pilots

00:16:26 — AI’s Evolving Impact on Jobs

00:25:00 — AI and Consciousness

00:35:48 — Meta’s AI Reorg and Vision

00:40:59 — Otter.ai Legal Troubles

00:46:30 — Sam Altman on GPT-6 

00:51:14 — Google Gemini and Pixel 10

00:56:20 — Apple May Use Gemini for Siri 

00:59:49 — Lex Fridman Interviews Sundar Pichai 

01:05:38 — AI Environmental Impact

01:10:37 — AI Funding and Product Updates

Summary:

MIT Report on Generative AI Pilots

A new study from MIT NANDA has been getting a lot of attention online this past week for its seemingly explosive findings:

The study claims that 95% of generative AI pilots at companies are failing.

The authors of the study write:

“Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.”

To get to this finding, the researchers conducted “52 structured interviews across enterprise stakeholders, systematic analysis of 300+ public AI initiatives and announcements, and surveys with 153 leaders.”

In some circles online, the study was used as proof that AI is in a bubble and that the technology’s capabilities are currently overhyped.

AI’s Evolving Impact on Jobs

We just got a detailed case study of what AI transformation really looks like when an organization goes all-in on AI fast, and the details are both educational and messy, according to an in-depth profile by Fortune.

In 2023, Eric Vaughan, CEO of IgniteTech, made one of the most radical bets on AI we’ve seen. Convinced that generative AI was an existential shift, he told his global workforce that everything would now revolve around it. Mondays became “AI Mondays,” with no sales calls or budget meetings—only AI projects. The company poured 20% of its payroll into retraining.

But resistance was fierce. Some employees flat-out refused. Others quietly sabotaged projects. The biggest pushback came not from sales or marketing, but from technical staff who doubted AI’s usefulness.

Within a year, nearly 80% of the company was gone because they wouldn’t adapt fast enough, replaced with what Vaughan called “AI innovation specialists.”

The gamble paid off financially: IgniteTech kept nine-figure revenues, acquired another major firm, and launched AI products in days instead of months. 

Still, it raises a dilemma. Is it wiser to reskill, as Ikea has done, or to rebuild from scratch? Vaughan admits his approach was brutal but insists he’d do it again.

Though he does offer a caution at the end of the article, when asked about laying off 80% of his staff:

“I do not recommend that at all. That was not our goal. It was extremely difficult.”

AI and Consciousness

A new kind of AI is coming, says Microsoft’s Mustafa Suleyman. In a deeply reflective new essay, Suleyman, Microsoft’s AI CEO, warns that “Seemingly Conscious AI” is on the horizon.

Seemingly Conscious AI is AI that doesn’t just talk like a person, but feels like one. It’s not actually conscious, but convincing enough to make us believe it is. 

And that’s exactly the problem. People are already falling in love with their AIs, assigning them emotions, even asking if they’re conscious.

Suleyman says this makes him more and more concerned about what people are calling “AI psychosis risk,” where believing AI chatbots are conscious can distort a person’s reality.

It also makes him concerned that if enough people start believing (mistakenly) that these systems can suffer, there will be calls for AI rights, AI protection, even AI citizenship.

He says there is zero evidence that AI can actually become conscious in this way. But the social and psychological consequences of holding this belief are becoming more alarming.

In Suleyman’s view, we need to build AI that helps people, not AI that pretends to be a person, and we should avoid designs that suggest feelings or personhood. 


This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.


This week’s episode is also brought to you by our AI Literacy project events. We have several upcoming events worth putting on your radar, including our free Intro to AI class on September 18 and our free 5 Essential Steps to Scaling AI class on September 24, both presented with Google Cloud.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: I think it is an inevitable outcome that people will assign consciousness to machines. I think it will happen way sooner than people think it will, and I think we are far less prepared than people might think we are for the implications of that. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.

[00:00:22] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:43] Join us as we accelerate AI literacy for all.

[00:00:50] Welcome to episode 164 of the Artificial Intelligence Show. I'm your host Paul Roetzer, on with my co-host Mike Kaput, who is battling through, uh, a [00:01:00] scratchy throat this week. So Mike might talk a little quieter than normal to try and get, get us through this, but this is the dedication. We show up every week to record this thing no matter what.

[00:01:08] Yeah. As long as 

[00:01:09] Mike Kaput: this is not a deepfake voice or anything. This is just me getting over a little cold or something. 

[00:01:15] Paul Roetzer: All right. Well, we appreciate you powering through, Mike. All right. This episode is brought to us by MAICON. This is our annual conference happening in Cleveland. The sixth annual conference, happening in Cleveland, August, not August.

[00:01:26] Gosh, thank, thank goodness it's not August, October 14th to the 16th. It happens at the Huntington Convention Center, right across from the Rock and Roll Hall of Fame and Cleveland Browns Stadium, at least until 2028 when they're supposed to move. But right on the shores of Lake Erie. It's a beautiful place to be.

[00:01:41] October in Cleveland is my favorite time of year. We are based in Cleveland, if you don't know that. So we would love to see everyone there. We are trending way above last year. We actually, I don't even know if I'm supposed to say this, but I guess I'm the CEO, I can say it if I want. So we already surpassed last year's ticket [00:02:00] sales total.

[00:02:00] So we are what, seven weeks out, 50 days out, I think I saw Kathy post, and we have already surpassed last year's ticket sales total. So things are humming along. We are looking at a really good crowd in Cleveland in October. Lots of AI-forward marketers, business leaders, great place to network. Get to, you know, know your peers, collaborate, share ideas, hear from an amazing group of about 40 speakers.

[00:02:25] So we'd love to see you there. It's MAICON.AI. And you can use POD100, that is POD100, as a promo code, and that'll get you a hundred dollars off of your ticket. And it's also brought to us by, well, I guess our AI Literacy project, but most importantly, the new AI Academy by SmarterX 3.0, which launched last Tuesday.

[00:02:49] So we talked a little bit about this on episode 162, I guess, was our last weekly episode. we had an AI answers episode sandwiched in there, but Academy launched on Tuesday, [00:03:00] August 19th. It was amazing. We had nearly 2000 people registered for that launch webinar. We shared the vision and roadmap for academy, talked about all the new on demand courses, series and certifications.

[00:03:12] Introduced AI Academy Live, which is regularly scheduled, you know, weekly, biweekly live events. Previewed our new learning management system, which is coming later this year, which is gonna be amazing. Talked about business accounts, which is a new feature where you can buy five or more licenses and get access to not only deeply discounted pricing, but tons of new features.

[00:03:32] Um, we had a 30-minute Ask Me Anything session with me, Mike, and Kathy, so you can go back and see that. All of that is available on demand. You can go to the SmarterX website, SmarterX.ai. There's a link to that. We'll also put it in the show notes, and then you can just go to academy.SmarterX.ai and read about all of it.

[00:03:50] So we launched a brand new website on Tuesday also that includes all the details for individual plans, business accounts. We previewed AI Fundamentals, which is a new core series, [00:04:00] Piloting AI, Scaling AI, which I'm actually recording tomorrow and Wednesday. So that new series will drop on September 5th.

[00:04:07] Mike did AI for Professional Services, AI for Marketing. We introduced the AI Academy Live, as I mentioned, the Gen AI app series, which I'm really excited about. That's a new drop. Every Friday morning we're gonna drop a new product review, and Mike did GPT-5 and NotebookLM already. So those are already in there for mastery members.

[00:04:25] And then we'll have another one come up on Friday, which Mike is, what are we planning for? 

[00:04:29] Mike Kaput: ChatGPT Deep Research. And then the following Friday will be GPTs. 

[00:04:33] Paul Roetzer: There you go. So every Friday we're recording it. Mike's teaching a lot of these initial ones, but we, we we're lining up other instructors with expertise in a bunch of different tools and features of platforms.

[00:04:44] And so every Friday something new is gonna drop. And that's the most exciting thing to me about the new academy is it's no longer just some static courses and a quarterly session, you know, with trends and things. This is live weekly stuff, like realtime things going [00:05:00] on, which keeps everything fresh.

[00:05:01] So, check that out. Again, it's academy.SmarterX.ai. And then we also have ongoing free events under our AI Literacy project. So the next ones we've got going on are, September 18th, we'll have an Intro to AI that's presented by Google Cloud. That's a very popular series. We just did our 50th of those.

[00:05:19] We started that in November 2021. That's a monthly thing. And then we also have our monthly 5 Essential Steps to Scaling AI, and that one is also presented in partnership with Google Cloud. That one's coming up September 24th. So on the SmarterX website, you can actually just click on free classes.

[00:05:35] It'll take you right to these, but we'll put the links in the show notes as well. we'd love to have you join one of those free upcoming classes. Okay, Mike, let's see how your voice does as we dive into what became a viral sensation at the end of last week. Much to my dismay, 

[00:05:52] MIT Report on Gen AI Pilots

[00:05:52] Mike Kaput: well, yes, Paul. A new study from MIT has been getting a lot of attention because it is touting [00:06:00] some seemingly explosive findings.

[00:06:02] It claims that 95% of generative AI pilots at companies are failing. So the authors write: Despite 30 to $40 billion in enterprise investment into Gen AI, this report uncovers a surprising result in that 95% of organizations are getting zero return. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.

[00:06:34] Now, to get to this finding, the researchers conducted 52 structured interviews across enterprise stakeholders and did an analysis of 300 plus public AI initiatives and announcements, as well as surveys with 153 leaders. So some people are using this as proof that we are in an AI bubble, and the technology's capabilities are way [00:07:00] overhyped.

[00:07:00] So Paul, you've obviously got some feelings on this. Maybe take us beyond the headline here. 

[00:07:05] Paul Roetzer: Yeah, this, this definitely just blew up. I mean, by like Thursday and Friday, I got asked about this two or three times on live events on Thursday and Friday, like different, AMA sessions we, we did last week, and then it, it was just all over LinkedIn.

[00:07:18] Like, I couldn't open LinkedIn without someone commenting on this thing being at the top of my feed. So, you know, first and foremost I would say I'm a big advocate of, I love research. I love when people try and take different perspectives on where we are with AI adoption, what best practices look like.

[00:07:36] Um, I'm not a big fan of headlines for headline's sake. And, and so my initial reaction when I first saw this, I had not had time to dig into it. When I got asked initially about it last week and I said, listen, anytime you see a headline like that, you have to immediately step back and say, okay, that seems unrealistic.

[00:07:55] Like that, that instinct in you that's like, Hey, a little bit of a red flag maybe [00:08:00] about this research. So. My general policy on any of this stuff is, I won't share it anywhere on social media or talk about it on the podcast until we've actually looked at the methodology they use to arrive at their data.

[00:08:12] And so I didn't, I didn't share anything on social media about this. I didn't even comment on anybody. So I got tagged by like five different people to comment on this thing on LinkedIn, and I just left it alone for the time being. So then, Sunday morning I write my, or I guess it was Saturday morning, I write the executive AI newsletter that we send out through SmarterX.

[00:08:29] And so Saturday morning I finally sat down for like an hour and a half, went through the full research report, read the whole thing, looked at the methodology, and then I wrote an editorial for the newsletter. That was sort of my, my perspective on, on the research itself. So I'll just kind of like go through a quick synopsis.

[00:08:46] Anyone who reads the exec AI newsletter has, you know, kind of heard my thoughts a little bit on this, but I'll, I'll, I'll explain my thinking. So what I said in the, in the newsletter was like, I honestly would've never read past the executive summary if this hadn't [00:09:00] gone viral. Like it was, it was very, very apparent right away that this research wasn't super valid.

[00:09:06] Um, so my problem with it, Mike, you read it, was the first opening line says, despite 30 to 40 billion in enterprise investment in Gen AI, this report uncovers a surprising result in that 95% of organizations are getting zero return. Zero is an extremely bold statement to make in any form of research. And so that alone basically told me that everything else I was about to read probably wasn't super viable in terms of how they extracted that information.

[00:09:37] And so the first thing I did is actually then jumped ahead to their research methodology and limitations section, and they, because I wanna understand how are they defining the return? Like, what exactly are they considering the return in this situation? So they said success defined as deployment beyond pilot phase with measurable KPIs.

[00:09:56] ROI impact measured six months post-pilot, [00:10:00] adjusted for department size. So it's like, okay, so they're specifically, I think, now getting into, like, revenue, it seemed, maybe like revenue and profit, and only over a six-month period. And then they go on to explain the figures are directionally accurate based on individual interviews rather than official company reporting.

[00:10:16] So it's like, okay, so they only did 52 interviews and their feedback, that zero return from 95%, is based on 52 interviews that are quote unquote directionally accurate. So again, it's starting to kind of like fall apart a little bit in, in my mind, what's going on here. And then they offer a research note where they, quote unquote, define successfully implemented for task-specific gen AI tools as ones

[00:10:38] users or executives have remarked as causing a marked and sustained productivity and/or P&L impact. They did touch a little bit on the idea of individual productivity, but not overall productivity.

[00:10:57] Even that alone is like, well, how do you have individual [00:11:00] productivity when you combine it to not have collective productivity? So I wasn't really clear exactly how they were analyzing that. So they didn't really seem to get into efficiency gains, you know, reduction of cost, things like that. The productivity lift part, they didn't give any indication of how they were measuring that, if at all, within the results and then the overall performance.

[00:11:20] Like it wasn't considering customer churn reduction, lead conversion rate improvement, sales pipeline velocity, customer acquisition cost, like all that was just getting thrown out the window. And so if you're gonna say something has zero return, how can you do that without acknowledging all the other ways that AI can benefit?

[00:11:36] Um, so I don't know. So I did still read through the whole thing and there were elements of it that made sense, but I, my point was like, it wasn't because of the methodology. You could just sit back and say these things without doing any research of what's gonna make a pilot work and not work. And so I don't know that the methodology itself held up.

[00:11:53] And then my final challenge with the methodology overall was they, they touted this 300 plus [00:12:00] public AI initiatives and announcements that they researched and nowhere in the report does it explain anything about that research? Like what, how did they find them? What were they, how did they assess them?

[00:12:10] How did they synthesize that within the findings itself? So overall, I would just caution people, one, when you see, what, what is that saying, Mike? Like something like, great, profound claims require great, profound, yeah, like supporting material. I'm, like, butchering the quote itself. But the point is when you see something like that, 95%, 5% with no research, that's a very, very bold claim that needs to have very strong supporting evidence.

[00:12:43] And so my greatest takeaway from this is people need to be a little bit more critical of headlines. And they, rather than being the first one to jump on with breaking like 95%, like we all see it on X and LinkedIn. Everything starts with breaking all caps. Before we [00:13:00] jump to posting things like that, take three minutes and just read the methodology and how they got to these things.

[00:13:07] And you may find that it's maybe just fitting a data point and a headline to a narrative and that people just run with it on social media 'cause people love this stuff. So all this being said, again, I don't want to like, you know, belittle the research itself and the work that went into it, it's hard to do research really well.

[00:13:26] Um, I just think sometimes we maybe shouldn't publish things that aren't, like, don't stand up to the scrutiny of the headline that you, you yourself write into the lead paragraph. So all that being said. If nothing else, it gives us a reason to step back and say, okay, so what should we be doing to make sure our pilot projects work?

[00:13:46] I would keep this really simple. Have a plan for your pilot projects. Personalize use cases by individuals. Don't go get ChatGPT or, or Copilot or Gemini and just give it to people. Give them three to five use cases that help [00:14:00] them get value day one. Provide education and training on how to use the tools.

[00:14:04] You know, think about it as a change management thing, not a technology thing, and then know how you're gonna measure success. It isn't all six months out, did it impact the P&L? As a matter of fact, that's probably pretty rare. So the one thing I would say is within the fine-tuned criteria they were using to define success, maybe it's not that shocking of a headline. It's the zero return thing that just, like, immediately threw me off, as that is a ridiculous statement.

[00:14:32] So, I don't know. That's my soapbox take. It's like, please don't put any weight into this study. Please do not cite this study in some, you know, thing you're using for management to convince them about investing. This is not a viable, statistically valid thing, is, I guess, the overall point I would make here, 

[00:14:51] Mike Kaput: I mean, it's just such a critical reminder, especially with everyone trying to fit their narratives as well.

[00:14:57] You could tell there are a lot of people that have been [00:15:00] saying like, oh, I've been saying there's an AI bubble forever. Here's the proof. Everyone's trying to fit this, yes, into what they want to believe. 

[00:15:06] Paul Roetzer: And you can make data say anything. Like, we're, ev, you know, and again, like when I was building, and I know you do the same thing, Mike, when I was building the AI Academy courses, you, you do, like, as an instructor, like, look, I wanna, I believe this to be true.

[00:15:19] I'm very confident what I believe is true. Let me go see if I can find any data to support. Yeah. And then you go and do a search like. Heaven forbid you use like deep research to do these things. 'cause there's all these websites that are basically just curations of data sets and they pick like the one sentence out of a report and then they throw it in there, like 20 things to know about AI adoption.

[00:15:39] And they all sound amazing. Like, well, this would make for a great slide. And then you take a moment to go figure out where does, where are they getting this quote from? And then you find the original source and then you read the methodology like, this is from 2022. Like this is, and I just think there's, so there's not enough, critical thinking about the data points we [00:16:00] use.

[00:16:00] 'cause to your point, Mike, like it, it's, you want that supporting thing. You want the thing to validate what you believe to be true. And so it's easy to find a data point that supports you, but we need to be a little bit more honest with the things that we're, we use to make these cases. yeah. And it's not always easy.

[00:16:20] I get it. We want that easy data point and, and sometimes it's just not there. 

[00:16:26] AI’s Evolving Impact on Jobs

[00:16:26] Mike Kaput: Yeah, that's such a good reminder. And you know, our second big topic this week, I mean, somewhat related, just kind of actually shows how messy and all over the place AI transformation can be when you actually pull up the hood of an organization doing this.

[00:16:42] Because we did just get an in-depth case study of what this stuff is really looking like when an organization goes all in on AI really fast. We just got this in-depth profile from Fortune on a company called IgniteTech. And in 2023, [00:17:00] Eric Vaughan, the CEO of the company, made one of the more radical bets out there on AI.

[00:17:06] He was convinced that generative AI back then was an existential shift. He told his global workforce that everything at the company would now revolve around it. Mondays became AI Mondays. He literally prohibited people from working on sales calls, budget meetings, anything that wasn't AI. The company poured 20% of payroll into retraining, and then he experienced all sorts of resistance.

[00:17:31] Some employees flat out refused to use AI, others quietly sabotaged projects. And the biggest pushback actually came from technical staff who doubted AI's usefulness. Fortune interviews him at various points, and he actually said sales and marketing, for instance, were very excited about what was possible.

[00:17:50] Now, where this led is that within a year of these overhauled initiatives, 80% of his company was gone because they would not [00:18:00] adapt fast enough, and he replaced them with what he called AI innovation specialists. Now, in this scenario, this kind of gamble, this aggressive action paid off financially. They kept nine figure revenues.

[00:18:13] They acquired another firm. They started launching AI products in days instead of months, and it kind of just highlighted how strange and messy and chaotic this can all get. Because Vaughan, for his part, admits that his approach was pretty extreme, but says he would do it again. And he does caution at the end of the article, they ask him about laying off 80% of his staff because they wouldn't advance fast enough.

[00:18:41] And he said, I do not recommend that at all. That was not our goal. It was extremely difficult. So, you know, Paul, I appreciated the candor and the detail in this story, but this sounds like a truly brutal process of change management. Like what can we learn here both about what to do and not to do? 

[00:18:59] Paul Roetzer: [00:19:00] Yeah.

[00:19:00] It's, it is rare to see these kinds of, very honest stories out in the open. I mean, it's the thing we get asked a lot, like, where are the case studies? Who can we look at? And the reality is a lot of the companies that are doing it well aren't talking about it. And a lot of other companies are just struggling to do it.

[00:19:15] And also don't wanna admit how hard it is. So to see this level of transparency, in terms of the early actions, what went well, what did not go well? I think these are the kinds of stories we just need more of so that people realize they're not in this alone. I think one of the often overlooked elements of AI adoption and, successful AI adoption, getting to the point of return on investment, however you define it, is human friction.

[00:19:42] It can be over fear and anxiety. It can be, the idea that they, they just think that AI is gonna take their jobs. Why should they accelerate that? It can be, like with any technology, someone who's maybe a director or VP or C-suite didn't get there using AI, and [00:20:00] it's not the familiar thing to them. It's a bit out of their league.

[00:20:03] And then to have the vulnerability as a leader to allow people who maybe are more native to this stuff, to actually innovate with it and not feel threatened themselves. And, and to invest in re-skilling themselves and being more prepared to be a leader in the AI age, that's all hard. Like changing humans is very difficult.

[00:20:23] And that was the thing he said, is like, you can't compel people to change, especially if they don't believe. Like, as a CEO, you have to have a vision for where the company is going. And you have to have a team of people who believe in that vision and work as a team toward that vision and remain very positive in the way. Like, I say this often, Mike, you've heard me say it internally.

[00:20:46] I don't, I don't talk too much about my personal leadership style on the podcast or anything, but I hate negativity. Like, it is, it is, it is a disease within companies, negativity. Like I don't, I love [00:21:00] pushback. I love constructive criticism. I love challenging ideas. Like I want that, but I don't want problems presented without solutions, like preliminary ideas with solutions.

[00:21:10] And I don't want negative energy. As, as a CEO, when you're trying to do something extraordinary, when you're trying to, like, go into a market no one's gone into, when you're trying to build something no one else has been willing to build, the last thing you need is negativity around it. Like you have to maintain as a leader such a positive mindset, such an optimistic outlook.

[00:21:31] That mindset that you can achieve anything, and anything that deters from that, it is disastrous to cultures, honestly. So this is how I look at stuff like this. Like if, if you're gonna build an AI emergent company, which is what we're talking about here. So when we think about the future of all business, we always say AI native: built smarter from the ground up.

[00:21:51] AI emergent is you infuse AI into every aspect of the organization in a human-centered way, and you evolve as a company or you become obsolete. To, [00:22:00] to become AI emergent, at a company that has people that don't want to be a part of it, they gotta go. Like it is the hardest truth right now. And, and I've seen this done well and I've seen it communicated well within companies, that we will invest in you, we'll provide you education and training.

[00:22:17] We will give you access to these tools. You have to want it, though. And if you don't take advantage of these things, you will not be part of this company anymore. And I've, I've said before, I think when we were talking about the AI CEO memos, I think you should say that point blank in every memo. Like I think CEOs should be honest, straight up.

[00:22:33] We will provide the education and training, we will provide the tools, we will provide you the ability to innovate and experiment. If you choose not to do that, you will be working somewhere else. I truly believe that should be said by every CEO before the end of this year. 'cause you cannot build a company full of people who aren't bought into this.

[00:22:51] So, I don't know, again, like I'm, I don't really comment on this one in particular too much, but I think overall it's a good example of the [00:23:00] kind of conviction it's gonna take to move existing legacy companies. You can't move them without a level of conviction and transparency about where you're going.

[00:23:10] Mike Kaput: Yeah. And while this article or example is pretty extreme, you know, obviously because of the headline of, okay, 80% of the people were gotten rid of, it does kind of gloss over some of the more positive aspects. Like he said at one point, we're going to give a gift to each of you. And that gift is tremendous investment of time, tools, education, projects to give you a new skill.

[00:23:33] Like, sure, it's scary, this stuff is happening so quick. But that's an incredible opportunity if you're someone that leans into that. 

[00:23:40] Paul Roetzer: Yeah, and I've, I've sat in meetings where executives have told their teams, like, we, we don't know what, like 18 to 24 months out looks like. We can't promise you there won't be an impact on staffing here, but what we can control is we're gonna prepare you for the future of work.

[00:23:56] Hopefully it's here, but if it's not, you're gonna be [00:24:00] ready to be, to create value in any company you work for. And I, again, I feel like that's the right mentality. I think, honestly, no one can promise that. I'm, trust me, like I'm the biggest believer in a human-centered approach to this of anyone. And I don't know, like, 18, 24 months out, what it looks like.

[00:24:18] I don't think we would ever need to reduce staff. My goal is just keep growing, keep building the business and keep, you know, meeting demand with more people. Like, I want people in the company, but I have no idea what 24 months out looks like. But I can promise the team, I will put everything into you.

[00:24:33] I will invest everything into you becoming, you know, a next gen worker being ready for this age of AI tools, education, training, anything you need, we will, we will have you ready. And if it's here, awesome. Then we will benefit from that. And you'll create value here. If it ends up not being here for whatever reason, then you'll be ready to go create value somewhere else.

[00:24:53] And I think as a CEO, that's, that's all you can promise right now, is to have a vision and then, like, commit to your people to invest [00:25:00] in them. 

[00:25:00] AI and Consciousness

[00:25:00] Mike Kaput: I love that. That's awesome. So our big third topic this week is about a new kind of AI that's coming according to Microsoft's Mustafa Suleyman. So he just published a pretty reflective new essay.

[00:25:15] He is Microsoft's AI CEO, and he warns that seemingly conscious AI is on the horizon. This is a term he specifically uses, and seemingly conscious AI is AI that doesn't just talk like a person, but feels like one. It is not actually conscious, but convincing enough to make us believe it is. And his kind of argument is this is becoming more prevalent and it's a huge problem because people are falling in love with AI.

[00:25:44] They're developing relationships with AI, assigning them emotions, and in some cases people are making the argument that models are conscious. And Suleyman says this makes him more and more concerned about what people are calling quote [00:26:00] AI psychosis risk, where believing AI chatbots are conscious can kind of send you spiraling a bit in terms of your relationship with reality.

[00:26:10] It also makes him concerned. He says in the essay that if enough people start believing mistakenly that these systems can suffer, there will be calls for AI rights, AI protection, even AI citizenship, even though there's zero evidence that AI can actually become conscious in the way some people are arguing.

[00:26:29] So he ultimately ends this essay saying we need to build AI that helps people, not AI that pretends to be a person, and we should avoid designs that suggest feelings or personhood. So Paul, like, anecdotally, it just feels like the concept of AI psychosis, the overall idea that models could be conscious, it just feels like it's getting talked about quite a bit more, like Suleyman is talking about it.

[00:26:57] We've unfortunately seen some pretty depressing [00:27:00] headlines about people that are severely mentally impacted by how they're interacting with AI. We covered on a recent podcast, Sam Altman himself has acknowledged, in the drama around GPT-5, that a small percentage of users, he said, quote, can't keep a clear line between reality and fiction when using AI.

[00:27:23] So what do you think? Is this becoming more common? 

[00:27:26] Paul Roetzer: It's definitely gonna be a, a growing topic. And again, it, I don't know that it gets politicized. I don't know if it falls into the religious realm like this is, this is gonna be a hot button issue for sure. and probably when it falls into politics and religion is, is when it, you know, becomes more mainstream talked about within those circles.

[00:27:46] Uh, we, we've talked quite a bit about consciousness. We've talked about Demis in a recent episode, one of the podcasts he did where he was talking about it. We touched on it last week, Anthropic, and I'll, I'll mention that in a moment. And so, like, I [00:28:00] always have to go back and be, all right, like, let's, let's level set.

[00:28:02] What, what are we talking about when we're talking about consciousness? And Mustafa does cover it a little bit, and he talks a little about his work, when he co-founded Inflection and they built Pi, and how he was thinking about that AI assistant slash chatbot and its personality and the things it would do.

[00:28:20] So this is something Mustafa has thought deeply about and worked on for a while. So he touches on a definition, but the problem with consciousness is, is we just don't know what it is. Like there is no universally accepted definition. There is this belief that it, it is basically our awareness of our own thoughts and being, like that, that we know we exist, that we know we will die.

[00:28:43] That, you know, we have emotions and sensations and feelings and perceptions about the world and memories and awareness of our surroundings, and, like, that they're subjective to us. So, Mike, I know, I assume you are conscious. I don't, I don't know what it feels like to be you though, right? And, and that's [00:29:00] the, that's the point of consciousness, is like you are subjectively aware of all this.

[00:29:04] When I look at colors, I know what it feels like and looks like to me when I experience, you know, a warm summer day. Like I feel that, and I know I feel it. I don't know what Mike feels when he watches a sunset. I know what I feel. and so it's this awareness of those feelings and emotions is, is roughly what is kind of generally accepted as consciousness.

[00:29:24] So to assume that a machine is aware of itself, that's what we're talking about here, that it, it knows it was created from this training set. It knows it has weights that determine its behavior and its tone. And what they're implying, what Mustafa is implying, is, if it says, like, you know, I guess a real relevant example here would be when OpenAI sunset the 4o model mm-hmm.

[00:29:48] In favor of the GPT-5 model. The people who are starting to believe that maybe these things will have consciousness at some point. I haven't heard a true argument that they currently [00:30:00] do, but, like, we're on a path to them having it. They would say, well, you can't shut off 4o, it's aware of itself.

[00:30:07] Like you can't sunset it. You can't delete the weights. It's deleting something that has rights like it is aware of itself. That's, that's basically where we're heading here, is that you couldn't ever delete a model because you're actually killing it, is basically what they're saying. And so I share Mustafa's concern that this is a path we're on because to his point, he feels like it's kind of already possible.

[00:30:35] Mm. It's really a combination of things that already exist that could make it, it has language capability, it has an empathetic personality, has memory, it can claim subjective experience. So I mean, these things have definitely done that. You ask it, Hey, are you aware of yourself? And it was like, yeah, yeah.

[00:30:50] I'm, I'm GPT-4. Like I was created, blah, blah da. Like, okay, it seems like it's aware of itself. It has a bit of a sense of itself. It has intrinsic motivation because these [00:31:00] things are, are pursuing reward functions that are given to it, basically to do, fulfill the thing that's asked of them. It can do goal setting and planning, and it has levels of autonomy. Like that's the recipe, they think, for like a conscious AI or perceived seemingly conscious AI.

[00:31:16] So Mustafa's point is, all the ingredients are already there. Like we, we don't need major breakthroughs for people to think that they're talking to a being that is aware of itself. We've seen it. There was a New York Times article that Mike had pulled that I asked him not to get into because I wasn't emotionally, like, able to, to have the discussion myself.

[00:31:36] So we'll put that in the show notes. Like, people get deeply connected to these things. They, they alter people's behaviors and their emotional states and, like, their understanding or perception of reality. Like, this is real. And so I think that part of this essay is actually in response to the Anthropic thing we talked about last week, or it's just interesting timing.

[00:31:58] Mm-hmm. So Anthropic [00:32:00] just published Exploring Model Welfare. And in that essay, or in their blog post, it said, I can't help but think this, oh, this, that was my comment. It said, should we also be concerned about the potential consciousness and experience of the models themselves? Should we be concerned about model welfare too?

[00:32:17] And again, this is Anthropic. But now that models can communicate, relate, plan, problem-solve, and pursue goals, along with very many more characteristics we associate with people, we think it's time to address it. To that end, we recently started a research program to investigate and prepare to navigate model welfare.

[00:32:32] So here you have Mustafa saying, no, no, no, we should not be exploring model welfare. There, there is no such thing as model welfare. They are statistical machines, like. And you have Anthropic basically saying, we accept the future where we will need model welfare. So to me it seemed very interesting timing that Mustafa published this days after the Anthropic thing that was basically saying this, 'cause he was calling on other AI labs to stop [00:33:00] this.

[00:33:00] Do not talk about them as though they're conscious beings, 'cause if we, if we make it normal to say that, then we won't, there's no going back. Like once society thinks that that's a possibility, we got major problems. So I am, I'm kind of on Mustafa's side here. Like I really, really worry about a society where we assign consciousness to machines.

[00:33:28] Um, I also believe it to be inevitable. So I appreciate what Mustafa is doing. I do think it will be a fruitless effort. I don't think the labs will cooperate. It only takes one lab, takes Elon Musk getting bored over a weekend and making xAI just talk to you like it's conscious. This is uncontainable, in my opinion.

[00:33:50] So we will be in a future state, it could be two to three years, it could be sooner, where a faction of society believes these things are conscious and they, they [00:34:00] fight for the rights. This is inevitable, in my opinion. So the only thing I think we can do is education. I look at it like on Facebook right now, how many of your relatives think the images and videos they're seeing are, are real?

[00:34:12] Like how many images and videos that are appearing on Instagram and Facebook are actually real versus AI generated. And then what percentage of people can actually identify the difference anymore. And so I think that's just a prelude to consciousness. It's gonna be the same feeling. Like I think it's real.

[00:34:30] Like I look at this image, it feels real to me, and you're gonna have a conversation with the chap. I be like, sir, feels real. Tells me it's real. Talks to me better than humans. Talk to me. Like it's conscious to me. And I think that's kind of where we're gonna arrive at is people are just gonna have these opinions and these feelings and you can't change.

[00:34:47] Go back to the one about changing people's behaviors of the CEO memo. Like Right, you can't, changing people's opinions and behaviors is really, really hard. And generally speaking, I mean if you look at just [00:35:00] politics, like, you know, roughly 45 to 52% are gonna eventually probably think these things are conscious and the other percent are gonna think people are crazy for thinking it.

[00:35:08] And here we go, like, back into the downward spiral of society where we can't agree on anything. So I, again, I think it's a really important conversation. I think it is an inevitable outcome that people will assign consciousness to machines. I think it will happen way sooner than people think it will.

[00:35:24] And, and I think we are far less prepared than people might think we are for the implications of that. 

[00:35:30] Mike Kaput: Yeah. I feel like the emotional response to, like you had mentioned, GPT-4o being temporarily taken away, that should be an alarm bell for anyone, big time, about this. 

[00:35:44] Paul Roetzer: Yep. Yeah. That times a hundred. Like, 

[00:35:48] Meta’s AI Reorg and Vision

[00:35:48] Mike Kaput: all right, let's dive into this week's rapid fire.

[00:35:51] So first up, Zuckerberg is already making a big shakeup to Meta's new Superintelligence Labs division. This is according to the [00:36:00] New York Times. They reported this past week that the division will reorganize. And that reorganization splits their work into four pillars. There's research, training, products and infrastructure.

[00:36:14] Most division heads will now report directly to Alexandr Wang, who is the company's new chief AI officer, and that includes GitHub's former CEO Nat Friedman on products, a longtime Meta exec, Aparna Ramani, on infrastructure, and Shengjia Zhao, who is a ChatGPT co-creator, who is now at Meta as chief scientist.

[00:36:37] Uh, the research will be split between FAIR, which is Meta's long-standing academic-style lab, which is still being led by Yann LeCun and Rob Fergus, and there's a new elite unit called TBD Lab, tasked with scaling massive models and exploring something that Wang cryptically calls a quote omni model. At the same time, Meta is dissolving its [00:37:00] AGI Foundations team.

[00:37:02] So Paul, this seems like a pretty significant move for Meta. It comes as Wang also announced a partnership with Midjourney around the same time. So some big things are happening here. What do you think these actions signal about where they're headed? 

[00:37:17] Paul Roetzer: I kind of alluded to this on a previous episode.

[00:37:20] Like to me this just feels like a train wreck waiting to happen. Like, we're gonna watch this happen in slow motion. and, and the reason I feel that is like I just think from a, from an analogous, analogous perspective, gimme any sports team in history where you put like 10 superstars on one team and they coexisted like these are the best of the best.

[00:37:43] These, these are not people, these are a bunch of alphas who have to report to another alpha who Meta paid $15 billion for, who now internally is perceived as the most valuable of the alphas. So everyone else is like, I got my 200 million, but Wang got [00:38:00] 15 billion or whatever, like what, whatever.

[00:38:01] He ended up getting outta that deal with Scale. But they roughly paid 15 billion to get Wang and his team at Meta. And, and, and like now you have, I think, like, Friedman now has to report to Wang. And, and Yann LeCun, who created all this at Meta, who doesn't believe in large language models as a path to intelligence, who believes as purely as any researcher in open source being the path to the future, has to report to Wang.

[00:38:28] And, and now you have models where they're basically saying, yeah, we're probably gonna close the models. Like the open source that we built on for 12 years is pretty much gonna be done. I don't know. Like, will the labs innovate? Will they create incredible products? Probably. Like, it's not like it's gonna fail in three months, but the fact you're already having to do this reorg three months into all this is probably not a great sign.

[00:38:53] And so I just feel like, again, this is more of an opinion and kind of like looking from the outside in, [00:39:00] I feel like we're gonna be talking a lot on this podcast in the next 12 to 18 months about things going wrong within this Meta structure. I think this is not the last of the reorgs. It is certainly not the last of a lot of their top researchers leaving, which maybe they want, an attrition here of the top previous people who don't want to change and have their beliefs set.

[00:39:23] Um, I don't know. Again, I, the closest thing I can tie it to is just sports teams and, and when you put some superstars on the same team, you might win a championship here or there, but it's almost inevitable that there will be clashes and, and that it just kind of doesn't end up working well. I don't know.

[00:39:42] It's like, it's almost just throwing culture out the window and saying, we're just gonna brute force this with talent. And, and I just, I don't know that it's ever worked in business and I may not be thinking of the right example on the spot here. Brute forcing a bunch of top talent together without culture, [00:40:00] just usually doesn't work great.

[00:40:02] So I'll be fascinated to watch it and, you know, intrigued by what they create and how they innovate. But I don't know. 

[00:40:10] Mike Kaput: I felt like I thought like five different times to myself. Poor Yann Lecun, when I was reading through these. I can't believe he's still there. This is like the worst possible outcome for you in a few ways.

[00:40:21] Paul Roetzer: Yeah. I mean, he has to quit. Like I, yeah, if Yann LeCun is still at Meta by the end of this year, I don't even know what he would be there for. Like, I really don't. Like, he doesn't need it. He could obviously take his talents wherever he wants. If these people are getting 400 million, like, shit, Yann LeCun's 2 billion, 3 billion.

[00:40:41] Like, what are you paying for? Like a Nobel Prize winner, like Turing Award winner? Yeah. I don't, I don't know, like a godfather of modern AI. So, yeah, I just, I don't know. Maybe he has, doesn't have an ego at all and doesn't care and he just wants to do his thing. It's possible. I don't, I don't know him personally, so I don't know.

[00:40:59] Otter.ai Legal Troubles

[00:40:59] Mike Kaput: Alright, [00:41:00] next up, Otter.ai, the popular meeting transcription tool, is facing a federal class action lawsuit that accuses it of secretly recording private conversations. So the complaint was filed in California. It says Otter's AI deceptively and surreptitiously captures workplace meetings through its Otter Notetaker feature, sometimes without the knowledge or consent of participants.

[00:41:26] The plaintiff of this lawsuit, Justin Brewer, says his privacy was severely invaded when he discovered Otter had logged a confidential discussion, especially because it happened when he joined a Zoom meeting where Otter's note taker software was running. He himself does not have an Otter account. This was just another participant in the meeting who had it going. And

[00:41:49] Brewer says he had no idea the service would capture and store his data, or that the call would be used to train Otter's speech recognition and machine learning models. The lawsuit argues [00:42:00] this practice violates state and federal wiretap laws and accuses the company of exploiting recordings for financial gain.

[00:42:09] Otter's privacy policy does mention AI training, but only if users grant explicit permission. Now, lawyers allege many users are being misled, and critics point out that Otter can auto-join meetings via calendar integrations without informing all attendees. So Paul, I'm curious about your thoughts on this lawsuit specifically and the bigger implications.

[00:42:33] You and I have talked a bunch of times here about how uncomfortable we both are with it becoming increasingly common for AI note takers to auto-join meetings. Otter seems to be kind of putting the onus on the person using the note taker to get permission, which is clearly not happening. What did, what did you kind of take away from this?

[00:42:53] Paul Roetzer: Yeah. I'm not an attorney, took some law classes in college. this would [00:43:00] seem like a really strong case to me, just, on the outside looking in. so from a legal perspective, yeah, it seems like a problem. It seems like the things they've laid out as to why this is a problem make a ton of sense.

[00:43:13] Um, and then, yes, like I've voiced this before. You and I have talked about this on the podcast. I am not a fan when people's Fireflies or Otter just shows up in meetings. I'm not a fan when it's added to webinars. I'm not a fan when it, like, I don't, I don't like 'em. I don't like when it's assumed that the attendees are okay with someone else's AI recording things, transcribing those things, summarizing those things.

[00:43:40] Putting it into training data of things. I have no idea what the agreement you have is with Otter or Fireflies when I'm on a call with you. Right. I don't know where the conversation is going, what it's being used for, or how it might be hacked in some larger data leak that comes out of that company. And now the private things we talked about, confidential things,

[00:43:54] proprietary things, are in somebody's data set that's out on the web. Like I just [00:44:00] feel like, the tech became available, became capable of doing what it does, it sort of just happened that people just started throwing it into meetings all the time, and we never really agreed as a society that, that this was okay.

[00:44:16] And it's an awkward thing to be like, Hey, could you please turn off your note taker? Like, I don't know even what the vendor is you're using. Right. I've never heard of that one. so I feel like we need to have a bit more of a social contract here, where there is kind of that permission, like I'm, I'm agreeing to allow your note taker to take notes.

[00:44:38] Uh, or you're at least getting notified of, hey, their AI companion is here. Now, what I think, and I'd have to go back and, like, look at this, but I feel like if you're doing it in Google or Zoom or, you know, Microsoft Teams, when it's a native thing, you're at least alerted, like, hey, this is coming on.

[00:44:55] And you're like, okay, click checkbox. Like, okay, I'm being told. But when it's a third party [00:45:00] thing, like a Fireflies or Otter, I feel like it just shows up with no, yeah, you know, I've agreed to this or anything. So, yeah, I think, I think this is one of those things that maybe everybody needs to do a little inward check of themselves and say, am I, am I doing that?

[00:45:13] Like, you know, maybe, maybe it's, it's like bothering people that my note taker shows up all the time and sometimes even when I don't show up personally. Right. I love that one. The note taker shows up before the person and it's like, it's just you and staring at the note taker window and it's like, oh, hello, note taker.

Yeah. So I feel like maybe this needs a little more dialogue and we need to come to some better, better principles as a society of, like, what, what we think is acceptable. But it's gonna be a bigger problem with AI agents. It's gonna be a bigger, much, much bigger problem when everybody, you know, is wearing AirPods that are recording everything, and glasses, right.

[00:45:48] And whatever devices they're wearing around their neck and their fingers and whatever, like this is only gonna get worse. And tech's MO is just push it all forward, keep going [00:46:00] further and further across the edge, and then these lawsuits just eventually go away or get paid off, and then it becomes commonplace in society.

[00:46:06] I mean, that's, that's how Facebook normalized so many things that, you know, caused them to sit in front of the House and explain things over years, where, at the time, it was taboo, and then it just, people just got used to things. It's how tech does stuff. You just push the edges and, and then, you know, you pull back a little bit and then you push further.

[00:46:26] It's how politics does things. It's just how stuff works. 

[00:46:30] Sam Altman on GPT-6 

[00:46:30] Mike Kaput: So next up, Sam Altman has said that GPT-6 is coming sooner than people expect, and it's going to feel a lot more personal. He shared with journalists in recent weeks a vision for GPT-6, which centers around memory. So the ability for ChatGPT to remember who you are, your routines, your tone, your quirks, and then adapt around that.

[00:46:52] He was quoted by CNBC as saying, quote, people want memory, people want product features that require us to be [00:47:00] able to understand them. And he says this personalization extends to politics. He says future versions of ChatGPT should start neutral, but allow users to tune them, whether, he said, they want a super woke chatbot or a conservative one, for instance. He acknowledges there are privacy risks around memory and hinted that they might start being able to encrypt memories at some point.

[00:47:25] Beyond chat, he said he's already thinking about neural interfaces, or AI that responds to thoughts directly, but that's some ways down the line. For now, though, the goal is apparently to just make GPT-6 something that feels like it knows you. So Paul, it definitely seems like Sam wants to move on to the next hype cycle here after GPT-5, but this really does hit on some themes we've been talking about this episode. You predicted, as far back, I was looking, as episode 35 in 2023, February of 2023, we were talking about [00:48:00] how it seemed likely OpenAI would eventually give you the ability to control personality, politics, preferences, tone.

[00:48:08] Um, so it seems like we're potentially getting that in the next release. 

[00:48:12] Paul Roetzer: That was pre-GPT-4. That's right, it was, yeah, right before it. Yeah, so it's interesting, like, they've moved on so fast from the GPT-5 thing. Yeah. Like, once they rolled it out and it wasn't the leap forward, it was just like, hey, we don't have enough compute to deliver the model we wanted to deliver.

[00:48:29] Like, we have more powerful models already, but we can't deliver them yet. And then co-hosting this dinner two weeks ago where they're just straight up saying, yeah, GPT-6 is gonna do this and this and this. So yeah, I don't know. I think it's interesting that they're being very open about it. I gotta wonder about their own confidence level in these statements that people want this and they want that.

[00:48:51] It's like, you just crashed and burned on what you thought users wanted with GPT-5. Like, everything you premised it on, that they didn't [00:49:00] want 4o, that they didn't want to pick models, like all the things you assumed, caused some problems. And so I wonder if there's any internal, like, hey, do they really want personality?

[00:49:10] I don't know. Again, I feel this is inevitable. I think this is where the models probably all go, much more personal preferences. It seems like it's what they probably have to do, and it's the only way to stay politically neutral, which probably gets back into some of the issues we've talked about with these government contracts that they all want a piece of, and why they're all kind of giving everything away to government agencies.

[00:49:39] Like, you gotta play ball. And if your model is perceived to be too conservative or too liberal, then, depending on the administration that's in power, that kind of decides whether they like you or not. And so if you make a politically, religiously neutral model, or, [00:50:00] well, I should back up, you post-train it to be politically neutral, because it's not gonna come out of the oven one way or the other.

[00:50:08] It's gonna come outta the oven based on its training data. So you actually control it through your system prompts and your post-training to answer things in a certain way. That's gonna be a problem. So the way you solve that is by making it neutral and letting people say, hey, I prefer these sources, or, you know, I like to listen to these podcasts and these perspectives, and I tend to believe these people more.

[00:50:30] And you can almost imagine where these things actually audit you, like ask about your beliefs and your interests and things you're passionate about, where you get your information from. Like, you could tailor these things pretty fast to behave in specific ways, especially if it could auto-update its own system prompt specific to you.

[00:50:46] So imagine almost like everybody gets their own GPT, and the system prompt rewrites itself as it learns about your own beliefs and interests. Mm-hmm. And then basically there's just an [00:51:00] algorithm that personalizes it to you. That's in essence what it seems like they're all gonna have to do, either because they think it's what users want or because they think people in power are going to demand it.
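To make the idea of a system prompt that rewrites itself a bit more concrete, here is a minimal Python sketch. It is purely illustrative: the `UserProfile` fields, the `build_system_prompt` and `update_profile` helpers, and the regenerate-after-each-conversation loop are assumptions for the example, not a description of how OpenAI actually implements memory or personalization.

```python
# Minimal sketch of a per-user system prompt that updates as preferences are learned.
# Hypothetical structure only; not any vendor's actual memory design.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    tone: str = "neutral"                              # e.g. "casual", "formal"
    trusted_sources: list[str] = field(default_factory=list)
    interests: list[str] = field(default_factory=list)

def build_system_prompt(profile: UserProfile) -> str:
    """Regenerate the per-user system prompt from stored preferences."""
    lines = [
        "You are a personal assistant. Start from a politically neutral baseline.",
        f"Match the user's preferred tone: {profile.tone}.",
    ]
    if profile.trusted_sources:
        lines.append("When citing, prefer sources the user trusts: "
                     + ", ".join(profile.trusted_sources) + ".")
    if profile.interests:
        lines.append("The user cares about: " + ", ".join(profile.interests) + ".")
    return "\n".join(lines)

def update_profile(profile: UserProfile, learned: dict) -> UserProfile:
    """Fold preferences inferred from a conversation back into the profile."""
    profile.trusted_sources.extend(s for s in learned.get("sources", [])
                                   if s not in profile.trusted_sources)
    profile.interests.extend(i for i in learned.get("interests", [])
                             if i not in profile.interests)
    profile.tone = learned.get("tone", profile.tone)
    return profile

# Usage: after each chat, learned preferences update the profile and the
# system prompt is rebuilt before the next request.
profile = update_profile(UserProfile(), {"sources": ["this podcast"], "tone": "casual"})
print(build_system_prompt(profile))
```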

[00:51:14] Google Gemini and Pixel 10

[00:51:14] Mike Kaput: Yeah. That's one way to give everyone what they want, by letting them figure it out instead of trying to guess, in some ways. Yep. Alright, next up, Google has unveiled the Pixel 10 smartphone lineup, which is their biggest bet yet that AI can make people switch phones, because the new devices put the Gemini AI assistant at the center of everything.

[00:51:37] So there are features now like something called Magic Cue, which anticipates what you need before you ask. So if you dial, for instance, an airline, your flight details pop up automatically. There's a camera coach that critiques your photos in real time, suggesting better angles and lighting. And Gemini Live lets you chat with the phone about what's on screen.[00:52:00] 

[00:52:00] Thanks to Google's Project Astra vision systems. There are a number of models here; it starts at $799, there's a Pro XL version for about $1,200, and then a foldable model that's about $1,800 with the biggest inner display on the market. Now, each of the Pro phones actually comes with a year of Google's $19-a-month AI Pro subscription, which unlocks premium Gemini features.

[00:52:29] So Paul, what's interesting, this article states something I think is increasingly important to think about. It says that despite Google's unique smartphone offerings, there haven't been major signs that AI has yet become a key driver of smartphone sales, or that users are deciding to switch from Apple's platform to Android due to AI offerings.

[00:52:51] For me, that's something I think about often, which is, where is the tipping point here? Are we going to see, in the next [00:53:00] couple generations, people start to make the switch as they expect AI to kind of be in everything?

[00:53:04] Paul Roetzer: I don't know. That's an interesting one to think about. I still don't feel like society as a whole really understands AI enough to change their behavior as a result of it.

[00:53:16] Right. You know, if you think about how many people have iPhones versus Android devices, you know, the average iPhone user, I think about, you know, my parents, grandparents, even a lot of my own peers within my same age group, do they assess their device based on its AI capabilities, or even know what AI capabilities are baked into it?

[00:53:43] And unfortunately, like, if they have an iPhone, what is your experience with AI? Like, really, right, there is no life-changing thing in there where you're like, oh, so this is AI. Like, you can make some emojis and, you know, some other Apple Intelligence stuff that's, you know, fun [00:54:00] at parties to show, I guess.

[00:54:01] But overall, like, Siri is still useless, and it's just like your experience with AI isn't anything. So is it enough? I don't know. My guess is Google will probably hammer Apple in their ads and try and see, like, they're gonna test the market and gauge, would people switch for these different capabilities?

[00:54:17] Is that enough value? Is it enough, like, curiosity? I will say, personally, I've always had iPhones. This is the first time where I did go that night and I was like, nah, maybe I'll grab a Pixel. Like, maybe I'll test one, just to see. Now, I have Gemini on my iPhone, so that on its own isn't enough.

[00:54:36] I can talk to Gemini, just open up the app. But are all the other AI capabilities worth experimenting with? I don't know. Like, I probably will just get one and test the technology. The foldable one looks pretty cool. Yeah. But I also know Apple's having their event, probably September 9th is the current rumor.

[00:54:55] They usually wait till like 10 days before to announce the actual date, but they're [00:55:00] supposed to unveil a new lineup of iPhones and maybe preview what's coming. And so Bloomberg is reporting they have a foldable phone also, maybe coming to market in 2026, and then like a total reimagination of the iPhone in like 2027.

[00:55:18] So, you know, I'll probably stay with Apple. It's just, I love Apple products. It's what I've always had. So it'll be interesting to watch. But I do think that, I probably agree, like, I don't know that most people are ready to make that switch because of AI capabilities in their phone, because they probably don't really understand the AI capabilities that much.

[00:55:37] Even, like, I know one of the ones I always show people on my iPhone, and they're like, wait, what is that? When you take a picture of nature, like a leaf, a flower, a bug, a bird, it can tell you what it is. Like, if you just click the little i with the stars at the bottom, it'll pop up and, you know, tell you exactly what the flower is or the tree is, or whatever the type of stone is.

[00:55:57] Um, and people have no idea that that's [00:56:00] there. And it's probably one of like the coolest little AI features that has been on your iPhone for like two years. People don't even know for sure. So I don't know. It'll, it'll be interesting. I doubt that Google's gonna like, grab a bunch of market share here, but they're certainly making a way more intelligent device at the moment than Apple is.

[00:56:17] I don't think that's debatable.

[00:56:20] Apple May Use Gemini for Siri 

[00:56:20] Mike Kaput: And actually, related to that, our next topic is about what Apple is doing here, because they're apparently now weighing a surprising move, which is, according to Bloomberg, Apple has been in talks with Google about using Gemini to power a revamped version of Siri.

[00:56:37] The idea is to build a custom model that would run on Apple servers and finally bring Siri up to speed in generative AI. Now, Google is just the latest in a series of AI companies that Apple is talking to. We've talked about a couple others. They have explored deals with Anthropic and OpenAI to try to include Claude or ChatGPT as the foundation of Siri.[00:57:00] 

[00:57:00] According to this article, inside Apple, teams are running what they call kind of a bake-off to determine which is better: one version of Siri built on Apple's own models, or another that relies on outside tech. So obviously this comes after all the delays we've discussed, all the controversy at Apple about kind of being behind in AI.

[00:57:21] So, you know, it's not particularly surprising to me, Paul, that Apple's talking to another AI company about powering Siri, but the fact they keep having these conversations seems significant.

[00:57:36] Paul Roetzer: This is a, this is interesting. I think I've been a proponent on the podcast numerous times that I thought this is the approach they should take.

[00:57:44] They should stop trying to fix Siri themselves and accept that that's probably not their strong suit, and they're probably not gonna be able to recruit and keep the right people to compete long-term with ChatGPT and Gemini and stuff. And so maybe just doing a deal is better. It [00:58:00] wouldn't shock me at all if something like this occurs.

[00:58:02] I mean, Meta just did a $10 billion deal with Google Cloud, so competitors coexist and work together and partner all the time in this space.

[00:58:10] Mike Kaput: Yeah. 

[00:58:10] Paul Roetzer: You do have to keep in mind, like, Google Cloud functions as its own thing within Google, a massive growth business where they want to host the data, they wanna, you know, work with these competitors.

[00:58:24] And Google itself and Apple have a longstanding partnership, from, you know, Google Maps to Google Search. I mean, they pay Apple, what, $20 billion a year, at least in, I think, 2022 that was the number, to keep Google Search as, like, the primary on Apple devices. So it's not outta the question they would do that.

[00:58:42] And I think just based on how much trouble Apple has had catching up here, it almost seems like it would be, again, like, you don't have all the information obviously, but when you zoom out, you just say, well, that would make a ton of sense. Like, you can't compete there. That is not your business, your business is [00:59:00] devices.

[00:59:00] Like, just do the devices really well and make them as intelligent as possible, as quick as possible. Don't try and fix it yourself, or hope it comes out in spring 2026 and then have to delay it for another year again.

[00:59:11] Mike Kaput: Mm-hmm. 

[00:59:12] Paul Roetzer: So I feel like at some point you just have to accept this. And Google, you know, looks at it like, it's cool, like, we're probably never gonna overtake the iPhone.

[00:59:20] Like, you know, we sell tons of devices, it's great, but it's not, you know, necessarily our core business. Like, let's make the money on the inference, like serving up the intelligence, let's make it on our models. And so, I don't know, it almost seems like it just makes too much sense. And I would think that doing it with Google would be better than Anthropic, because there's just lots more complexities with the Anthropic situation.

[00:59:44] So I don't know, I, this would not surprise me at all if something like this came through. 

[00:59:49] Lex Fridman Interviews Sundar Pichai 

[00:59:49] Mike Kaput: So next up, Sundar Pichai, CEO of Google and Alphabet, sat down with Lex Fridman for a sweeping conversation that's worth examining if you want to [01:00:00] understand how one of AI's top leaders thinks about where we're headed.

[01:00:03] So it was a, you know, two-and-a-half-hour, three-hour discussion ranging from Pichai's childhood in India to the future of AI. On AI, he was very clear. He repeated his claim from several years ago that we've cited often, that it will be the most profound technology in history, greater than fire or electricity.

[01:00:24] He spoke about scaling laws, the trajectory towards AGI, and what he calls the AI package: an explosion of creativity, productivity, and new inventions that will ripple through society like agriculture or the industrial revolution once did. The two also explored Google's evolving role, the shift from classic search to AI-powered answers, the merger of DeepMind and Google Brain, advances in video generation with Veo, immersive communication through Beam and XR glasses, and the promise of robotics and self-driving cars.

[01:00:59] And [01:01:00] interestingly enough, for Pichai, these breakthroughs are kind of forming into a single trajectory, which is building a world model powerful enough to reshape how we learn, create, and connect. So Paul, this kind of comes on the heels of another Lex Fridman interview we covered on episode 162 with Demis Hassabis.

[01:01:20]   What was, what stood out to you about the conversation with Sundar and is the timing here a coincidence that we're getting all this insight from Google leaders? 

[01:01:30] Paul Roetzer: There was obviously like a PR push, because Sundar's dropped June 5th, I didn't get around to listening to it until last week, and then Demis's dropped like three episodes later.

[01:01:39] So obviously they had sort of coordinated that these were gonna come out at the time they did. The first thing that jumped out to me with this one is, Sundar is the CEO of, you know, the second or third most powerful company in the world. He has to be very polished in what he says and how he says it, and it's often very apparent that he's got PR talking points, like he's been given the [01:02:00] talking points, like, here's what we're gonna say.

[01:02:01] And when these different things come up, this interview felt a little bit more open. Like, he was a little bit more willing to share his points of view on things that maybe they don't traditionally talk about, like what the future is for AI Mode and search and ads and stuff like that. Like, I felt like they were just

[01:02:16] a little bit more honest answers that weren't as polished, like corporate messages, I would say. So a couple of things that jumped out at me. He did ask him about scaling laws. It's the, you know, common question that gets asked of all these major executives at these AI companies. And he held the line that we've heard from everybody else.

[01:02:34] Like, yeah, there's three different scaling laws, the pre-training, the post-training, and the test-time compute, the inference. And they're all kind of moving in a direction, and, you know, maybe the pre-training isn't moving as fast, but the other ones have sort of made up for it.

[01:02:47] So there's no slowdown there. He expressed a similar, I guess, fascination as Demis did in Veo 3's understanding of physics. Like, there's just this surprise that comes from these people that it just [01:03:00] seemed to do this better than we thought it would; you train it on a bunch of videos and it just sort of learns to understand the world and physics. He did ask about AGI and superintelligence, and I thought he gave a pretty diplomatic answer there of, like, the

[01:03:12] term just doesn't matter that much; they're gonna get more powerful, it's gonna have a massive impact on society, and we need to deal with that, is pretty much his point of view, whatever you want to call it. He talked about the future of search and AI Mode, which I thought was kind of intriguing. I don't know if you've experimented much with AI Mode lately.

[01:03:29] Mike, might be a good gen AI app review. Yeah. I've actually found I'm using it more again. Like, I had gone through a phase where I wasn't using Google Search at all, and I really like AI Mode. It's actually quite good. And he was saying, like, they have their best model, like, you're gonna have a great experience because we're putting our best stuff into AI Mode, like the most powerful current models, things like that.

[01:03:52] So if you haven't tried AI Mode yet, I would give it a try. And if you don't know how to get to it, one, it's in a tab in your search. But you can also, when you conduct a search and you [01:04:00] get an AI Overview at the top, it'll say something like explore more or dig deeper, I don't remember exactly what it says, but you click there and it takes you to AI Mode.

[01:04:07] He talked about ads, and Lex was pushing on, like, well, you know, as you kind of move people away from the 10 blue links, isn't your ad business gonna suffer? It was really interesting that he drew a parallel to YouTube. He said, we do a mixture of subscriptions and ads now. And it was almost like he was implying that's the model.

[01:04:24] Like, we'll find a balance, and maybe it'll be some subscription-based stuff and maybe it'll be some ads, things like that. And then he talked about how, right now, AI Mode is gonna stay separate, but it was very apparent that the intention is, that's the future of search, that eventually they will just do away with the 10 blue links, and what you've known as search will eventually morph into it as consumers become ready, basically.

[01:04:49] So it's kind of like an organic thing. Like, we push it here, now we put it here, watch behavior, now we push it here. And so you could definitely see, one, two years out, where search just looks nothing like [01:05:00] the 10 blue links. It's all AI Mode, basically. That was the one thing I took away there.

[01:05:05] Yeah, overall just a really good interview. I mean, again, it's like all Lex interviews, it's like two hours, two and a half hours long. But again, where are you gonna get these insights, right? I mean, to hear a CEO like Sundar, for two hours, 15 minutes, whatever, sit there and talk about his childhood, which was crazy fascinating.

[01:05:21] Like I've heard stories, but I'd never heard him tell it like that. So just where he came from and how he got where he is and his perspective on the world and technology is, is just cool. Like, it's, it's a privilege that we get to hear these interviews, I guess is kinda like how I said it with Demis last week.

[01:05:38] AI Environmental Impact

[01:05:38] Mike Kaput: So next up, Google actually did the math on how much energy is used and what environmental impact their AI has when it's being used. They published a deep dive into their AI energy usage and found that a typical Gemini text prompt consumes just 0.24 watt [01:06:00] hours of energy, releases 0.03 grams of carbon dioxide, and uses about five drops of water.

[01:06:07] To put that in perspective, it's like watching TV for less than nine seconds. That footprint is far smaller than many public estimates, and Google claims it is shrinking fast: in the past year alone, says Google, the energy used per prompt dropped 33-fold and the carbon footprint fell 44-fold, even as the quality of answers improved.
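As a quick back-of-the-envelope check on that nine-seconds-of-TV comparison, here is the arithmetic in a short Python sketch. The 0.24 watt-hour-per-prompt figure is Google's published number; the roughly 100-watt television is our own assumption, used only to illustrate the conversion.

```python
# Back-of-the-envelope check on the "nine seconds of TV" comparison.
# 0.24 Wh per prompt is Google's published median figure; the 100 W TV is an
# assumed wattage for a typical set, used only to illustrate the conversion.
ENERGY_PER_PROMPT_WH = 0.24   # watt-hours per median Gemini text prompt (Google)
TV_POWER_W = 100.0            # assumed television power draw in watts

seconds_of_tv = ENERGY_PER_PROMPT_WH / TV_POWER_W * 3600  # Wh / W = hours -> seconds
print(f"{seconds_of_tv:.1f} seconds of TV per prompt")    # ~8.6 s, "less than nine seconds"

# The claimed 33x efficiency gain implies last year's figure was roughly:
print(f"~{ENERGY_PER_PROMPT_WH * 33:.1f} Wh per prompt a year earlier")  # ~7.9 Wh
```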

[01:06:32] So the company credits years of efficiency gains for these energy savings. They've done everything from developing custom-built TPUs and new inference techniques to ultra-efficient data centers. It also stresses that its calculations include overlooked factors like idle chips, cooling systems, and water consumption.

[01:06:52] This makes the numbers more realistic than narrower estimates that only count active hardware. So [01:07:00] Paul, on episode 159, we talked about how it was nice to see the French AI company Mistral publish a breakdown of the environmental impact of its models. Google seems to be taking this much, much further with a very robust breakdown of the actual environmental impact here.

[01:07:17] So I know you get asked about this a lot. Can you break down how we need to be thinking about AI's environmental impact? 

[01:07:24] Paul Roetzer: It is nice to see them doing this reporting. It's an abstract thing, honestly. Like, you know, they're always trying to say it's equal to this many drops of water, or this many, you know, minutes of watching Netflix or something like that, or YouTube in their case.

[01:07:37] So you're always trying to like, give some perspective to people. They're, they're obviously, they're investing tremendously to make this more efficient and, and it does seem to pay off in the numbers and, and each year it's just gonna get more and more efficient. Google has a clear advantage here to be able to deliver intelligence efficiently at scale.

[01:07:57] We've talked many times about [01:08:00] Google's infrastructure advantage, from their chips to their data centers to, you know, the history of innovations in AI with Google Brain and Google DeepMind. This is their sweet spot. And so I would expect them to kind of really become a dominant leader in this space.

[01:08:20] Probably share more details because they're gonna have tremendous confidence that they're doing more than anybody else in this space. And they have the power to do that. So it is good to see this kind of data. It is a very, very common question. And the thing that people often want to know is like, well, what can I do?

[01:08:37] And I think I touched on this on the podcast recently, but like, there's two main things. I think it came up in the Mistral conversation, actually. Use more efficient models. So if you can get by with a lesser model, use that, 'cause it requires less compute to deliver the outputs to you, whether it's images or videos or text.

[01:08:53] Uh, the more efficient the model is, the less pull from an energy standpoint. And the other is get better at [01:09:00] prompting. Yeah. So the better you are at telling the machine what you want and getting it on the first or second result, and not giving a bad prompt that you just need to keep redoing, because every time you prompt, there's a cost, an energy cost.

[01:09:14] There's a, you know, an actual hard cost. And so use the more efficient model when you can, and get better at prompting, are like the two things you can do to actually make a difference. If you're in a leadership position, then you're making sure that, at scale across your company, you're using the most efficient models for the specific use cases.

[01:09:32] But, you know, allowing the deep thinking models, the reasoning models, when they're called for, like, that's gonna, you know, I'm thinking, saying this out loud, that's almost gonna be a job of the future. Yeah. Like, you may have people in IT potentially dedicated to this idea of, like, this mixture of models and being able to manage when to use which models.

[01:09:49] Yeah. There may be routers that help you figure that out, but overall, like, you're saying, okay, the marketing team, 90% of their uses are for copy generation and da da da. They don't need [01:10:00] the GPT-5 reasoning model to do that. They can get by with 4o or whatever it is, or an open source model. So I think there's gonna be a lot of that.

[01:10:08] Um, as we think about these overall strategies and how to diversify the model use in companies, I think you could see a lot more of that. 
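To make the mixture-of-models idea Paul describes a bit more concrete, here is a minimal sketch of a rule-based router in Python. The model names, task keywords, and routing rule are all hypothetical placeholders; a real router would more likely use a trained classifier or cost and latency budgets rather than keyword matching.

```python
# Minimal sketch of routing requests to an efficient model by default and
# escalating to a reasoning model only when a task seems to need it.
# Model names and routing rules are hypothetical placeholders for illustration.

LIGHTWEIGHT_MODEL = "efficient-small-model"    # cheap, low-energy default
REASONING_MODEL = "deep-reasoning-model"       # reserved for harder problems

# Keywords that suggest a task actually needs multi-step reasoning.
REASONING_HINTS = ("analyze", "plan", "prove", "debug", "multi-step", "strategy")

def route(task_description: str) -> str:
    """Pick a model: default to the efficient one, escalate only when needed."""
    text = task_description.lower()
    if any(hint in text for hint in REASONING_HINTS):
        return REASONING_MODEL
    return LIGHTWEIGHT_MODEL

# Routine copy generation stays on the efficient model,
# while a planning request escalates to the reasoning model.
print(route("Write three subject lines for the newsletter"))   # efficient-small-model
print(route("Analyze Q3 churn and plan a retention strategy"))  # deep-reasoning-model
```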

[01:10:14] Mike Kaput: Yeah. And we've talked so much about how, at least in the US, it's unlikely you're going to get any environmental regulation around this. So this could feel a bit like a ray of hope here if you are very concerned, because, you know, with the companies spending tens of billions on CapEx, they have a vested incentive and interest in making things, like you said, as cheap as...

[01:10:35] Paul Roetzer: Yep. 

[01:10:37] AI Funding and Product Updates

[01:10:37] Mike Kaput: Alright Paul, so we are almost done here, but I wanna round up some AI funding and product updates as we kind of close out the episode. 

[01:10:45] Paul Roetzer: Sounds good. 

[01:10:45] Mike Kaput: All right. So first up, Databricks is raising a Series K round at a valuation north of a hundred billion dollars. They are raising funding as they double down on AI.

[01:10:55] Earlier this summer, the company unveiled Agent Bricks, a system for building [01:11:00] production-ready AI agents tailored to a company's own data, and Lakebase, a new type of database designed specifically for AI workloads. Next up, Anthropic is in advanced talks to raise as much as $10 billion, double what was expected just weeks ago.

[01:11:17] This jump in the capital raise is driven by what they call surging demand from backers. Plenty of people see Anthropic as one of the few credible challengers to OpenAI and xAI and other top labs. For context, Anthropic was valued at $61 billion earlier this year after raising three and a half billion dollars.

[01:11:37] This new round could push its valuation well past $170 billion. Grammarly has rolled out a new suite of AI agents designed to change how students and teachers interact with writing. There's an AI grader now that they've rolled out that doesn't just check grammar, but actually will predict what grade a paper could get.

[01:11:59] It does this by [01:12:00] drawing on course details and public info about an instructor. Alongside that, there's a reader reaction agent that anticipates questions a paper might raise, a paraphraser that adapts tone and style, and a citation finder that automatically builds properly formatted references. And for educators, they're launching two new AI tools.

[01:12:20] On the other side of this equation, there's an AI detector to flag machine-written text, and a plagiarism detector that scans massive databases.

[01:12:31] Paul Roetzer: Mike, I would just add a quick note. Anyone who's ever written a book, that citation finder that does it automatically, just, oh my God. Literally, I've written three books.

[01:12:40] The most arduous and unpleasant part of writing the three books is, a hundred percent, having to do all the citations in the proper format, and then having your publisher correct every one of them, and then you've gotta go through 70 citations and change the format. Oh my God. Citations are [01:13:00] brutal, but essential in any research or publishing.

[01:13:03] Yes. 

[01:13:04] Mike Kaput: Yeah. I'd have to imagine there are some academic researchers that might be excited about that. My gosh. Alright, and last but not least, the company Unity, which is a leading software company known for the Unity game engine, which is used heavily in the video game industry. They are going all in on generative AI with their latest update, Unity 6.2.

[01:13:26] This release introduces a suite of new tools that are collectively branded as Unity AI. They've got a built-in copilot, powered by GPT models from Azure OpenAI and by Meta's Llama, that basically answers questions, generates code, and places objects in scenes as you're building out a game design and world.

[01:13:45] They also add Generators, which is a set of tools for creating textures, animations, sounds, and other assets. And interestingly, some of these models that are all bundled up in this run guardrails to block prompts that are likely [01:14:00] to produce infringing content. So if you're saying, hey, make me an asset for my game that is too close to something copyrighted, it's supposed to block that.

[01:14:06] But Unity makes clear that developers are ultimately responsible for ensuring their generated assets don't violate copyright. So they've, like, put the burden on the user, not on their models generating this stuff.

[01:14:20] Paul Roetzer: Yeah, I think that's a key thing. Like, and we'll kind of end here, but I feel like this is absolutely going to be the common practice.

[01:14:28] So in the Unity AI guiding principles, it says, importantly, you are responsible for ensuring your use of Unity AI and any generated assets do not infringe on third-party rights and are appropriate for your use. As with any asset used in a Unity project, it remains your responsibility to ensure you have the rights to use content in your final build.

[01:14:45] The reason this is really relevant is that this applies to anything with image generation, video generation, audio. All of them either have this in their terms of use, I'm guessing, or will have it in there.

[01:14:57] Mike Kaput: Yeah. 

[01:14:57] Paul Roetzer: And the reason you need it is [01:15:00] the models inherently are capable of producing copyrighted material, because they're trained on copyrighted material.

[01:15:06] The only way that they don't do that is through guardrails that are put in place by humans saying, don't output this if it's asked for this celebrity, this politician, this, you know, cartoon character. So they have the ability, and they want to do what the human asks them to do, but the guardrails keep it in check. What they're basically saying is, ah, screw it.

[01:15:24] We can't police it all. It's on you. Like, yeah, if you use it to output something that infringes on a copyright, you're the responsible party, not us. They're passing it off to the user. And I assume, or we kind of alluded to, something similar with Veo. Like, you and I talked about, like, how is it doing stormtroopers?

[01:15:40] Like, why is Google's stuff all of a sudden able to create copyrighted images and videos? And I think the answer probably lies somewhere within this realm, where the creators are just gonna try and legally pass the burden onto the user. So the near term is user beware.

[01:15:57] Like, if you think you're allowed to put up a [01:16:00] meme that is using someone's copyrighted material because everybody's doing it, don't be surprised if Disney comes knocking on your door, and you may be stuck if that's the case. So as individuals, but also as brands, like, you have to have this in your generative AI guidelines, in your policies for your people, that they're not allowed to produce copyrighted stuff just because the machine lets them do it.

[01:16:24] It's, it's really, really important you have those conversations. 
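As a rough illustration of the kind of human-defined guardrail discussed above, here is a toy Python denylist filter. The blocked terms and the messages are invented placeholders; production guardrails rely on trained classifiers and policy layers, not simple keyword matching.

```python
# Toy sketch of a prompt guardrail: block generation requests that name
# protected characters or public figures. The denylist is invented for
# illustration; real guardrails use trained classifiers, not keyword lists.

BLOCKED_TERMS = {
    "example cartoon mouse",      # placeholder for a copyrighted character
    "example franchise trooper",  # placeholder for a franchise character
    "example politician",         # placeholder for a public figure
}

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message) for a generation request."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"Request blocked: '{term}' is on the denylist."
    return True, "Request allowed; the user remains responsible for the output."

allowed, message = check_prompt("Make me a game asset of the example cartoon mouse")
print(allowed, message)  # False, blocked by the guardrail
```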

[01:16:28] Mike Kaput: All right, Paul, that's a wrap on another busy week. I appreciate you breaking everything down for us. 

[01:16:33] Paul Roetzer: Yeah. Thanks for fighting through, the voice held up, man. Yes, it held steady the whole time. I'm glad. Yeah, I made it through without even having to stop.

[01:16:38] So thanks, everyone. We will be back with you next week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken [01:17:00] online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.

[01:17:07] Until next time, stay curious and explore ai.

 
