Marketing AI Institute | Blog

[The AI Show Episode 157]: Anthropic Wins Key Copyright Lawsuit, AI Impact on Hiring, OpenAI Now Does Consulting, Intel Outsources Marketing to AI & Meta Poaches OpenAI Researchers

Written by Claire Prudhomme | Jul 1, 2025 12:15:00 PM

AI is reshaping hiring, law, and business strategy.

Join Mike and Paul as they unpack Anthropic’s major legal win over authors suing for AI training data use, explore the tsunami of AI-generated resumes flooding recruiters, and analyze why OpenAI is now doing high-ticket consulting. They also weigh Salesforce’s claim that AI does half its work, Meta’s billion-dollar talent raids, and OpenAI’s mysterious hardware rebrand drama. 

Listen or watch below, and scroll down for the show notes and transcript.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:05:00 — Anthropic Wins Key Lawsuit Against Authors

00:19:37 — AI’s Impact on Hiring and HR

00:31:34 — OpenAI is Now Doing Consulting

00:39:28 — OpenAI - Jony Ive Drama

00:43:08 — OpenAI’s Microsoft Office Rival

00:47:53 — Intel Outsources Marketing to Accenture and AI

00:53:31 — Salesforce CEO: 30% of Internal Work Done by AI

01:01:53 — More Meta AI Recruitment Efforts

01:07:15 — AI First Book Release

01:12:20 — AI Product and Funding Updates

Summary:

Anthropic Wins Key Lawsuit Against Authors

A federal judge just handed Anthropic a win in a high-stakes copyright case that could shape the future of AI.

The court ruled that Anthropic’s use of copyrighted books to train its language model Claude qualifies as “fair use.” Judge William Alsup called it “quintessentially transformative,” likening Claude to a writer learning from other authors—not copying them, but using their work to create something new.

That’s a big deal for AI companies, which argue that their systems depend on vast training data to generate original outputs, and that they have a right to use data online as part of “fair use.”

This is the first court to explicitly endorse fair use as a defense for what AI companies are doing to train models. But the win isn’t complete.

The judge also found that Anthropic went too far by downloading over 7 million pirated books from shadow libraries. That, he said, was copyright infringement—and a trial in December will decide how much Anthropic owes.

AI’s Impact on Hiring and HR

A new report in The New York Times highlights a growing AI-related problem:

Job seekers are unleashing a wave of AI-generated résumés—and recruiters are drowning in them.

On LinkedIn alone, job applications have jumped over 45% in a year, with users submitting about 11,000 every minute. Tools like ChatGPT can instantly customize résumés to match any job posting, while more advanced AI agents now automate the entire process—scanning job boards, filling out applications, and even answering screening questions.

The result? What recruiters are calling an “applicant tsunami.” Many résumés look nearly identical, and it’s getting harder to tell who’s actually qualified or even real. Some candidates are faking identities. Others are using AI to cheat in automated interviews.

To keep up, employers are fighting AI with AI—using automated interviews, game-based assessments, and even bots like Chipotle’s “Ava Cado” to screen and schedule faster.

But that raises its own risks: AI hiring tools have faced lawsuits over bias, and regulators in the EU are already labeling them high-risk.

OpenAI Is Now Doing Consulting

OpenAI is getting into high-touch consulting, mimicking the model popularized by defense tech company Palantir.

OpenAI is now offering fine-tuned, enterprise-grade AI solutions built by its own engineers, but only to clients willing to spend at least $10 million. 

These custom services involve tweaking models like GPT-4o using a company’s proprietary data, then building apps—often chatbots—tailored to specific business needs.

This puts OpenAI in direct competition with consulting giants like Accenture and software firms like Palantir, whose "forward deployed engineers" it’s quietly been hiring to build out its own consulting team.

Clients already include the Pentagon—which signed a $200 million deal—and Southeast Asia’s Grab, which used OpenAI to map roadways using street-level imagery. 

OpenAI says these partnerships are about solving harder, billion-dollar problems—and giving customers insight into what’s next, including future enterprise uses for the AI-powered device it’s co-developing with former Apple designer Jony Ive.

This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.

This episode is also brought to you by our upcoming AI Literacy webinars.

As part of the AI Literacy Project, we’re offering free resources and learning experiences to help you stay ahead. We’ve got a few sessions coming up in July—check them out here.


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: That's what they're leaning into is this idea of like, you're just gonna be able to cheat on everything, and why not do it, and we'll help you do it. And it's like, oh my God. This is the antithesis of what we should be striving for with AI. It's like, let's save the world and cure diseases. Oh no.

[00:00:14] Let's just, like, teach people to cheat on everything. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput.

[00:00:38] As we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.

[00:00:54] Welcome to episode 157 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my [00:01:00] co-host Mike Kaput. We are back for another weekly edition of the Artificial Intelligence Show. We are recording on Monday, June 30th, about 11:00 AM. I don't think anything crazy is gonna happen today. OpenAI is about to be, like, taking a week off.

[00:01:14] So I don't think they're gonna do anything nuts. We'll talk a little bit about that in a minute. Mike and I are actually gonna be taking a couple weeks off, so I'll get that outta the way upfront. There's no plan currently to do a July 8th or 15th weekly episode because, one, I will be on vacation for one of those weeks.

[00:01:34] And two, Mike and I are about to lock down to create and record the new courses for our AI Academy and the AI Mastery Membership Program. So over the next two weeks, we'll kind of be in the lab creating all the new content for AI Mastery membership. And so, yeah, like, I need every waking second between now and when they're [00:02:00] finished for what I'm working on right now, and probably then some.

[00:02:03] So, we could share a little bit more about that another time. But you can go to smarterx.ai/ai-mastery. We'll put that link in the show notes as well, to learn more about the Academy program. We've got some pretty exciting changes coming up, starting in August: new courses, new certification programs, new live experiences, new business accounts. Like, everything is kind of changing.

[00:02:29] So this is kind of our Academy 3.0. The first one was launched in 2020, then 2.0 was, I guess, probably like January, February of 2024. We kind of reimagined it a little bit, introduced some new stuff. And this is a complete re-imagining of the AI Academy by SmarterX program. So again, more to come on that, but for the next two weeks, barring any crazy stuff happening in the AI world, we are not planning to have a weekly episode on those dates.[00:03:00] 

[00:03:00] Alright, so today's episode is brought to us by MAICON, MAICON 2025. This is our flagship in-person event, our sixth annual Marketing AI Conference, happening October 14th to the 16th. We are trending way ahead of last year's numbers. I won't get into specific data at the moment, but it is significantly ahead of last year's.

[00:03:19] So we're grateful for everyone who's already registered for that event. Last year it was around 1,100, and we're trending way above 1,100 at the moment. So we'd love to have as many people as possible join us in Cleveland for Marketing AI Conference 2025. You can go on the site, learn more about it.

[00:03:39] What was the code, Mike? Is it POD100? Was that the, I'm pretty sure we had, yeah, yeah, yeah. POD100 will get you a hundred dollars off. Prices go up at the end of every month, so the sooner you get in, the more money you can save. Go to MAICON.AI, that is M-A-I-C-O-N.AI, to learn more. And then the second part is a free option.

[00:03:59] We [00:04:00] have the AI Literacy Project that we've talked about many times. This is kind of our initiative to drive and accelerate AI literacy, not just in the business world, but really across society. And one of the key initiatives as part of that is our Intro to AI class that I've been teaching since fall of 2021, the 49th edition of the Intro to AI class, which is a free webinar.

[00:04:22] It's about 30 minutes of presentation, 30 minutes of Q&A. That is coming up July 9th. So I'm gonna take a brief break from creating new courses for AI Academy to run a live Intro to AI class, so you can join me and Cathy McPhillips, our Chief Growth Officer, on July 9th for the 49th edition of Intro to AI.

[00:04:42] Alright, Mike. Lots of, I don't know, like, just some bigger stuff going on here, like impacts on hiring and HR. We've got a big lawsuit win for Anthropic, got OpenAI moving aggressively into consulting. I don't know, there's some fascinating topics prepping for this one. So let's dive [00:05:00] in.

[00:05:00] Anthropic Wins Key Lawsuit Against Authors

[00:05:00] Mike Kaput: Alright, Paul, so first up, a federal judge has just handed Anthropic a pretty significant win in a high-stakes copyright case that could have some implications for the future of AI. So the court ruled that Anthropic's use of copyrighted books to train its language model Claude qualifies as, quote, fair use.

[00:05:22] We'll talk more about fair use in a second here. Judge William Alsup called it, quote, quintessentially transformative, likening Claude to a writer learning from other authors, not copying them, but using their work to create something new. This is a key distinction we'll talk about here in a second, because this is a pretty big deal for AI companies.

[00:05:43] They argue that their systems depend on vast training data to generate their outputs, and that they have a right to use certain types of data online as part of fair use. So this is the first court to explicitly endorse the fair use defense [00:06:00] for what AI companies have been doing and what many of them have been sued for.

[00:06:05] This win for Anthropic is just an isolated win for them. It's not a broader commentary necessarily on fair use doctrine, and it's not totally complete either, because the judge also found Anthropic did go too far by downloading over 7 million pirated books from shadow libraries online. He said that was copyright infringement, and a trial in December

[00:06:30] will decide how much Anthropic owes for doing that. Now Paul, again, with kind of the caveats here that this is not a blanket ruling and it's likely going to be appealed, it still seems like this is a pretty big deal. It sounds like at least one federal judge thinks it's okay for AI companies to train models on copyrighted material, like they've been saying they've been allowed to do.

[00:06:56] Paul Roetzer: Yeah. So anytime we talk about this stuff, we always caveat: we're not [00:07:00] attorneys. Talk to your IP attorneys, you know, if this stuff affects you in any way. If you wanna dig deeper into this, you know, follow some experts online who are, you know, experts in IP law. What we're gonna try and do is break down what exactly it is.

[00:07:16] So when I saw this, you know, my first questions, they're almost the same every time. It's like, okay, so what is fair use, or a reminder there, what is a transformative use? The facts of the case: what did we learn? What didn't we learn? What does it mean from a legal perspective? What does it mean moving forward? And the creator, the IP rights holder?

[00:07:33] Like, what is the perspective for them versus, you know, thinking about it from a lab perspective? So I'll do my best to just break this down for a few minutes here to try and put this in context of how significant this ruling is. So fair use, according to the US Copyright Office, is a legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances.

[00:07:58] Section 107 [00:08:00] of the Copyright Act provides the statutory framework for determining whether something is fair use and identifies certain types of uses, such as criticism, comment, news reporting, teaching, scholarship, and research, as examples of activities that may qualify as fair use. Search engines are another one that comes up, and we'll get into that in a second.

[00:08:23] Section 107 calls for consideration of the following four factors in evaluating a question of fair use. So again, this is coming right from the US Copyright Office. The first factor that's evaluated, so again, this is a case-by-case basis, that is how this has to be determined. So in this case, Anthropic is sued over, you know, using the copyrighted material to train their model.

[00:08:45] And what the judge has to look at is, you know, where across these four factors does this fall? And is it fair use or not? So the first is purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit [00:09:00] educational purposes. Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and non-commercial uses are fair.

[00:09:11] So in this case, that is not the case. Like, it is obviously a for-profit thing that they're doing, so it doesn't fall into that, you know, educational, non-commercial use. It actually is for commercial purposes. The second factor is the nature of the copyrighted work. This factor analyzes the degree to which the work that was used relates to copyright's purpose of encouraging creative expression.

[00:09:35] The third is the amount and substantiality. That's a word you don't see in a sentence every day: amount and substantiality of the portion used in relation to the copyrighted work as a whole. In other words, how much of the original work was used in the output?

[00:09:57] Under this factor, the court looks at both the [00:10:00] quantity and quality of the copyrighted material that was used. If the use includes a large portion of the copyrighted work, fair use is less likely to be found; if the use employs only a small amount of copyrighted material, fair use is more likely. So, you know, again, go back to search engines.

[00:10:16] If they're only outputting a snippet of copyrighted material, that's gonna get the fair use protection. And then the final one is the effect of the use upon the potential market for, or value of, the copyrighted work. Here, courts review whether and to what extent the unlicensed use harms the existing or future market for the copyright owner's original work.

[00:10:37] So if your book is used to train this model, what is the likelihood this thing's gonna output your entire book versus some transformative purpose? So courts evaluate fair use claims on a case-by-case basis, and the outcome of any given case depends on a fact-specific inquiry. This means there is no formula to ensure that a predetermined percentage or amount of a work, [00:11:00] or specific number of words, lines, pages, or copies, may be used without permission.

[00:11:04] So, like, when I was going through journalism school, I forget the exact number, I don't even remember if this was like an AP thing or not. But I think it was, and even in writing our books, I think it was like a hundred or 125 words that was kind of the guide. So if you were gonna cite a work, if you were copying and pasting or, like, you know, quoting more than like a hundred words from that source, then you couldn't do it.

[00:11:28] Yeah. You had to find another way to do it. So just to give people some context, when you're writing articles or publishing or even doing social media posts, you're not supposed to put like 500 words from a source into your material. That would likely be infringing on the copyright, as an example. So the second piece here is, what does transformative mean in the realm of copyright law?

[00:11:52] The term transformative is a central and often decisive concept within the doctrine of fair use. The use is considered transformative [00:12:00] if it does not merely reproduce the original copyrighted work, but instead adds a new dimension, purpose, or character, altering the original with new expression, meaning, or message.

[00:12:11] Essentially, the more a new work transforms, quote unquote, the original, the more likely it is to be considered fair use. So Mike, what did they say? Quint, what was the word the judge used?

[00:12:20] Mike Kaput: Quintessentially transformative.

[00:12:24] Paul Roetzer: Transformative, right. So the judge is saying it's dramatically different.

[00:12:28] Yes. By training on it. Yeah. Okay, so what did we learn? Like, what does this court case tell us? As Mike kind of highlighted, training AI on legally acquired works is fair use. So in Anthropic's case, they bought a bunch of books, scanned them, and then trained on them. And the judge is saying that was okay.

[00:12:47] Like, you went through a process, you acquired the books, you transformed the use, so you're good. Digitizing purchased books for training is fair use; using pirated materials is not fair use. Now, this [00:13:00] is fundamental. If you've listened to the show for a long time, we've talked about Books3.

[00:13:06] If I'm not mistaken, roughly 200,000 pirated books exist within this Books3 database. Yeah. And we know for a fact that Meta and others trained on Books3. So when you think about the impact on other court cases that are out there: at least until this is appealed and potentially overturned, this ruling will be used in those cases to say, hey, this judge already said, this court already ruled that you cannot use these pirated books.

[00:13:36] And if I'm not mistaken, Mike, the ruling that's expected here, the potential penalty, is $150,000 per incident of copyright infringement. And so if you did that 7 million times, yeah, like, if that's what's assumed in Anthropic's case, that puts you outta business. So we're not saying that's what's gonna happen, we're just saying, like, this is [00:14:00] what the court now will look at: what is the actual cost per infringement, when they know that they used pirated books to do some of this.

[00:14:08] What we did not learn, to move into that: in this case, the legality of the AI outputs. So the decision focused on the input side, the training data side. It did not address the legality of the outputs. The question of whether AI-generated content that resembles or reproduces parts of copyrighted works constitutes infringement remains open. And then, fair use of pirated works for training:

[00:14:28] While the court ruled against the use of pirated materials to build a central library, it did not definitively rule out the possibility that using pirated works solely for the purpose of training could, in some circumstances, be considered fair use. Meaning, this is just a ruling. Like, this is now kind of gonna be integrated into other cases.

[00:14:47] But this is not something definitive. It's not the Supreme Court saying, this is the case, and now everybody should change the way they do things.

[00:14:52] Mike Kaput: Right? 

[00:14:53] Paul Roetzer: So, what it means from a legal perspective: this ruling sets an important, though not nationally binding, precedent. It introduces a more [00:15:00] detailed legal framework for analyzing AI and copyright, distinguishing between the act of training and the sourcing of the data.

[00:15:07] What does it mean moving forward? This court case will now proceed to trial, focused on the damages resulting from the use of pirated books, as we talked about. And from the creator and IP rights holder perspective: for creators and intellectual property rights holders, the ruling's kind of a mixed bag.

[00:15:23] Mike, as you said, on one hand it does offer a little bit more protection, but it doesn't really stop the fact that they could just go buy your book and train on it. 

[00:15:30] Mike Kaput: Hmm. 

[00:15:31] Paul Roetzer: So the thing that came to mind for me is, I remember this Google Books project. So I have not actively used Google Books, the website, but I recalled that Google had an initiative to scan all books.

[00:15:45] Like, I think the goal was originally 130 million, yeah, and that started back in the early two thousands. And then they actually got sued by the Authors Guild and major publishers due to massive infringement on copyrights. And they eventually won that court case in [00:16:00] 2015, when the Second Circuit affirmed, in Authors Guild versus Google, that it was okay that they were scanning these as long as they were only providing snippets online.

[00:16:12] Mike Kaput: Hmm. 

[00:16:12] Paul Roetzer: And so if you go to books.google.com right now, like, I went there and looked up our book this morning, Mike, and it has 62 pages of our book available to read, right? And then you kind of hit a limit. So the reason I bring this up is because Google has a database of at least 40 million books. It's probably way more.

[00:16:32] It was 40 million back in 2019. Now they've slowed down the program, to my understanding. I don't know if this is even actively happening, but they were basically doing deals with publishers and libraries to digitize all these books. And so then the question becomes: if they are legally allowed to train on books.

[00:16:50] No one has a larger database than Google. Yeah. Of digitized books. And the value of books is, when you go to train, rather than scraping the [00:17:00] internet and all the crap that comes with it, books are high quality. They are unmatched in terms of, like, expertise in different fields, diversity of knowledge.

[00:17:11] So books will likely get heavier weighting when going into training sets because they generally are higher quality than what you're gonna find just randomly across the internet. So that then leads back to like, wow, like maybe Google has a pretty distinct advantage here, right. Because of their books project from 23 years ago.

[00:17:29] So I don't know, like I just, again, like kind of thinking out loud here of things that might come out of this finding. 

[00:17:36] Mike Kaput: Yeah, that's really interesting. I think there's also some other commentary from Ed Newton-Rex, who we've talked about quite a bit. He posted pretty extensively about this. I was kind of going through his comments and things, but pretty interesting.

[00:17:50] He did say at one point, you know, in a post, that if the Anthropic fair use verdict, one, survives appeal, and two, becomes precedent for other lawsuits (those are both big [00:18:00] ifs, obviously), and AI training is broadly deemed to be fair use as tech lobbyists hope, paywalls will go up everywhere. Which is also something I didn't consider as a possible, you know, second or third order effect of this.

[00:18:13] It's like everyone will start blocking AI from training on their material, though I don't know how sustainable a strategy that is in the age we're about to enter. 

[00:18:24] Paul Roetzer: Yeah, and then also, like, if the paywall's the only thing preventing it, I mean, as a lab, I would imagine you're probably willing to pay 300 bucks a year to get access to The Information's articles, right?

[00:18:37] Because they've got great stuff. And so you just basically curate and say, okay, here's the 300 sources we're willing to pay annual subscriptions for, and, like, somebody goes and does it. Yeah. I mean, if it's legal to train on the material, you just go pay the fees. You don't even have to do licensing deals.

[00:18:54] Again, I'm kind of thinking out loud here, but if that's the case, if they can walk out and buy any book [00:19:00] at any bookstore, or take it out of the library, digitize the thing, and then it's legal to put it into the training data, why couldn't you just do the same thing with all content on the internet?

[00:19:09] Yep. Especially with stuff behind paywalls. And you don't have to do licensing deals, 

[00:19:12] Mike Kaput: Because I believe that's exactly what Anthropic pivoted to after a while: they just started going and buying huge amounts, yeah, like, millions. Meta did, I believe, as well. So yeah, that's a really interesting point.

[00:19:24] They might just go through the paywalls. It'll be very interesting times. We'll have to follow this one. I'm sure we'll have some follow up. Yeah, I expect monthly, if not weekly, there's gonna be new stuff

[00:19:35] Paul Roetzer: popping here. 

[00:19:37] AI’s Impact on Hiring and HR

[00:19:37] Mike Kaput: All right, our second big topic this week: a new report in The New York Times highlights a growing AI-related problem.

[00:19:45] The problem is that job seekers are unleashing a wave of AI-generated resumes, and recruiters are drowning in them. So according to this report, on LinkedIn alone, job applications have jumped over 45% in a year, with [00:20:00] users submitting about 11,000 of them every minute. Tools like ChatGPT can instantly customize resumes to match any job posting.

[00:20:09] And more advanced AI agents are now automating parts of the entire process. They're scanning job boards, filling out applications, and even answering screening questions. So the result is what recruiters are calling an applicant tsunami. So many resumes end up looking nearly identical, and it's getting a lot harder to tell who's actually qualified or even real.

[00:20:32] Some candidates are faking their identities. Others are using AI to cheat in automated interviews. And to keep up with this, employers are fighting AI with AI. They're using automated interviews, game-based assessments. Chipotle has a bot that screens and schedules candidates faster. And even these responses, though some of them are sensible, raise their own risks.

[00:20:58] So AI hiring tools have [00:21:00] faced lawsuits about bias. Regulators in the EU are already labeling them as high-risk, which is going to be a no-go under the AI Act. So Paul, I think we've touched on this topic a bit here and there, but it feels like it is beginning to hit a bit of critical mass, and you do a lot of work, a lot of speaking, a lot of consultation with top executives.

[00:21:24] Thinking about this at some of the top companies in the world, do you get the sense that they're ready to deal with this problem? 

[00:21:33] Paul Roetzer: Not that I'm aware of. I mean, I have not spent a lot of time with HR leaders recently and talked about this and like heard firsthand stories, but it makes complete sense that this is a major issue.

[00:21:44] And when you dig into the article you were talking about, you know, early on it said, with a simple prompt, ChatGPT will insert every keyword from a job description into a resume. Some candidates are going a step further, paying for AI [00:22:00] agents that could autonomously find jobs and apply on their behalf.

[00:22:04] Recruiters say it's getting harder to tell who is genuinely qualified or interested, and many resumes look suspiciously similar. Then they cited Jeremy Ling, a career coach who regularly conducts tech-focused job search training at universities, and he said he could see this back and forth going on for a while as students get more desperate.

[00:22:28] The students say, I have no choice but to up the ante with these paid tools to automate everything. And I'm sure the recruiters are going to raise the bar again, doing the same. He argues the end game could be authenticity from both sides, almost like we kind of hit this pinnacle and it's like, okay, we gotta go back to the way this was before.

[00:22:44] But then I actually came across an article over the weekend that I thought was really good, and maybe, like, highlighted a little bit better even what's going on. So this is from Derek Thompson. We have talked about him before, on episode 146 [00:23:00]. I think this was in April of this year. He had written an article for The Atlantic called Something Alarming Is Happening to the Job Market.

[00:23:08] So this was about 11 episodes ago. Yep. And so he did a follow up, and it was interesting. This was on his Substack, but it was like a continuation of The Atlantic article. So he said, in the weeks after my article came out, I saw a torrent of concern about AI and entry-level work, which was the topic we talked about: the impact on entry-level work.

[00:23:27] He said the labor market for recent grads hasn't been this relatively weak in many decades. There's what he has called the new grad gap, that is, the difference in unemployment between recent grads and the overall economy. It's hard to find conclusive economic data that AI is destroying jobs, and the news cycles are moving quickly.

[00:23:46] Macroeconomics move slowly. But then he gets into kind of the bigger thing. So he said, if anybody could provide a useful forecast. So basically he's, like, continuing his research and trying to say, is something happening here? Is the impact on [00:24:00] entry-level jobs happening? But he actually found something different when he started making phone calls.

[00:24:05] So he says, if anybody could provide a useful forecast, I thought it would have to be college career offices, who have a panoramic view of the entry-level economy and their own students' anxieties. So he placed several calls to directors of career offices at different universities around the country, asking them the same question.

[00:24:22] What, if anything, feels uniquely concerning about this economic moment? And then, I love this, Mike. As you know, you and I are both kind of trained journalists. He says, sometimes in journalism you go fishing for trout and you catch a trout. Your reporting uncovers exactly what you were seeking. But sometimes, when you tug on the line, a marlin's head pops out of the water.

[00:24:42] You come into possession of information you didn't even realize you were looking for. As I let my sources keep talking, they told me about their students. This age of anxiety, the fresh hell of looking for a job these days, and the role that AI plays in the process. After hours on the phone with them, a new story [00:25:00] clicked into focus.

[00:25:01] The most dramatic takeaway from these conversations wasn't that AI clearly was destroying jobs. It was something I wasn't expecting to hear at all: AI is shattering the process of looking for jobs. And then he gave this, like, great context. So he says, 20 years ago it was rare for students to apply to more than 20 positions as seniors.

[00:25:21] But tech to customize resumes and personal statements allows people to transform one application into dozens, almost instantly. At the same time, new hiring platforms such as Handshake, which I have not tested, but then again, we haven't hired at this level before, have made it easier for young people to find hundreds of plausible jobs in the same place.

[00:25:42] This is a quote: we're now seeing students sending 300 applications a year. Sometimes it's 500 or even 1,000 applications from one student in one year. This wasn't possible before AI, and it's still accelerating. And then this is where my brain just started to hurt. Imagine 2 [00:26:00] million college graduates applying to an average of, say, 50 or 100 jobs.

[00:26:05] That's 100 to 200 million job applications for entry-level positions across the country every year. Mm. It's impossible for carbon-based human resources departments, meaning humans, to go through all of that. So then it just kind of keeps going on and on about this, and I was like, oh my gosh, I hadn't even considered all these things.
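The back-of-envelope math here can be written out explicitly. Note that the 2 million graduates and the 50 to 100 applications per student are the episode's illustrative figures, not verified statistics:

```python
# Rough sketch of the application volume described in the episode:
# ~2 million college graduates, each sending 50-100 applications a year.
graduates = 2_000_000
apps_low, apps_high = 50, 100

total_low = graduates * apps_low    # low-end estimate
total_high = graduates * apps_high  # high-end estimate

print(f"{total_low:,} to {total_high:,} entry-level applications per year")
```

That yields the 100 to 200 million figure cited above, which is the scale no human recruiting team could manually review.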

[00:26:27] And so then he concludes with: I went into my conversations with college career executives expecting to hear about AI replacing work. What I heard instead is that AI is transforming everything around work. The transition from college to the workforce is fully drenched in AI. AI is automating homework, obliterating the meaning of much testing, disrupting the labor market signal of college achievement and grades, distorting the job hunt by normalizing 500-plus annual applications per person, turning first-round interviews into creepy surveillance experiences or straight up [00:27:00] conversations with robots.

[00:27:01] And, after all that, maybe beginning to saw off the bottom of the corporate ladder by automating entry-level jobs during a period of economic uncertainty. It ends with: this really is a hard time to be a young person. Hmm. So, yeah, Mike, I think, like, to your point, we've touched on these things, but I don't know that

[00:27:19] I had really stopped and considered how massive this is becoming. Like, I knew people were automating interview processes, and you were interviewing with, you know, AIs before you'd ever talk to a human, and AIs were reviewing resumes. But the idea that an individual graduate from college may send out, like, 500 applications.

[00:27:39] Like, how do they even filter through all the responses? Right, the whole thing. It's like AI is needed to deal with all the AI output from all of this.

[00:27:47] Mike Kaput: What do you think happens next here? This seems like an escalating arms race between applicants using more AI and recruiters or brands using more AI.

[00:27:59] Do we [00:28:00] just throw out the online application entirely? What do you think ends up happening here? 

[00:28:06] Paul Roetzer: I mean, in part this almost falls into those AI gaps we were talking about last week. Of, like, as an HR professional, how do you verify the accuracy of all this? How do you think critically about

[00:28:19] these candidates? How do you have the confidence to say, these are the five people I think we should move through the process? Like, it's just creating more than a human could possibly go through. Yeah. And so, yeah, AI becomes the solution, or, you know, venture capitalists invest in HR technology that they claim is gonna be the solution.

[00:28:38] And it's actually just accelerating the problem. I don't know, I'm kind of with, you know, the author here, of like, well, maybe at some point we just kind of come back to what it was before, because this is unmanageable. And then they even got into it a little bit about, like, LinkedIn's role in all this. And, like, that's a whole nother ball of wax, right?

[00:28:57] I don't know. I mean, it, yeah. Yeah. Again, like I [00:29:00] wasn't even really aware it was as big of a problem as it had become. 

[00:29:05] Mike Kaput: Yeah, no kidding. And there's multiple facets here, right? It's like people wanna, I think, latch onto, like, oh, okay, people are cheating on job applications with AI. Like, that's a huge problem, but.

[00:29:15] Just the vast scale of these applications is the issue at first, because even if you eventually get the great resumes or applications sorted out, which is a big if, they still might have actually just made everything up and made it look great using AI. So it's like, until you get into that interview process, my gosh, I don't envy the job of HR professionals these days.

[00:29:37] Paul Roetzer: Yeah. And Derek Thompson referenced another article that you and I talked about extensively, the "Everyone Is Cheating Their Way Through College" article. Right? Right. And he said, you know, he made a couple good points here. You know, that New York magazine article, and we'll drop the link in again in case you missed that episode,

[00:29:54] but he said the cheating epidemic in college raises a big question for job recruitment. Why should [00:30:00] employers trust GPA in an age of rampant AI cheating? How can employers and students trust each other during the application process? 

[00:30:07] Mike Kaput: Hmm. 

[00:30:07] Paul Roetzer: The answer in many cases seems to be they can't, and they don't.

[00:30:11] And then it quoted: I've had students accused of using AI in the interview process, one college career executive told me. The student swears to me that they weren't cheating. But in a virtual interview, when they have access to a computer, it's hard for the recruiter to know. So, yeah, it's just like, does this person actually know what they're saying?

[00:30:28] Are these answers just being, like, fed to them in real time? And then there's that, oh, what's the one we didn't talk about? Cluely, I think, was the one. Yeah. Yeah. Maybe we'll talk about this on a future episode. They got a lot of buzz in the last, like, 10 days, honestly. Like, maybe my stomach turns, I didn't even bother talking about it, but they got funding.

[00:30:47] I think it's from Andreessen Horowitz. Yeah. And literally their tagline is "cheat on everything." And I know it's a big marketing ploy and there's, like, a PR stunt behind the whole thing. But that's what they're leaning [00:31:00] into, this idea that you're just gonna be able to cheat on everything, and why not do it, and we'll help you do it.

[00:31:04] And it's like, oh my God. This is the antithesis of what we should be striving for with AI. It's like, let's save the world and cure diseases. Oh no, let's just teach people to cheat on everything and give 'em $16 million in funding. Yeah. And maybe there's more to it, and I don't wanna be too judgmental here, but, like, yeah.

[00:31:23] Mike Kaput: Well, based on their marketing, I don't think you're being too judgmental. 

[00:31:26] Paul Roetzer: They want people like me to say, what I just said basically is like the whole goal. So there you go. I got baited into, like, mentioning them. 

[00:31:34] OpenAI is Now Doing Consulting

[00:31:34] Mike Kaput: Alright, our third big topic this week, OpenAI is getting into the consulting game. They are getting into high touch consulting, mimicking the model that's been popularized by defense tech companies like Palantir.

[00:31:48] OpenAI is now offering fine-tuned, enterprise-grade AI solutions built by its own engineers, only to clients willing to spend at least $10 million. So these custom services [00:32:00] involve tweaking models like GPT-4o using a company's proprietary data, then building apps, often chatbots, tailored to specific business needs.

[00:32:10] So this puts OpenAI in direct competition with consulting giants like Accenture and software firms like Palantir. Palantir has gotten very good at doing this thing where they have these, quote, forward deployed engineers that go into organizations and build out services and implement software.

[00:32:30] And so OpenAI has actually been hiring from some of those people to build out its own consulting team. The clients for OpenAI already include the Pentagon, which signed a $200 million deal, and Southeast Asia's Grab, which used OpenAI to map roadways using street-level imagery. Now, OpenAI says these partnerships are about solving harder, billion-dollar problems.

[00:32:52] Giving customers insight into what's next, including future enterprise uses for, say, the AI-powered device it's [00:33:00] co-developing with former Apple designer Jony Ive, which we will talk about again in a second here. But first, Paul, this seems like a pretty big move for OpenAI. Like, are they seriously now competing with companies like Accenture, for instance?

[00:33:14] Paul Roetzer: Yeah, I mean, definitely. It's tough. So, you know, I experienced this firsthand, Mike, you were there as well. So I've mentioned this before, but my former marketing agency was HubSpot's first partner back in 2007. So we were the origin of their partner ecosystem, today their solutions partner ecosystem.

[00:33:34] And so we became a reseller of HubSpot software, but more a value-added partner, where HubSpot would sell software and then we would provide the services to create value for that software. So if an organization were to buy HubSpot, we'd integrate the CRM, build their website, build a social strategy, build an inbound content strategy, whatever.

[00:33:55] They built the software, sold the software, we wrapped services on that software, and it [00:34:00] was great. It was a very profitable business. It's kind of a proven model to have these outside partners that helped do the work and bring the value to the hardware and the software. And so in the early days of HubSpot, they didn't want to have services inside because they had yet to IPO. I mean, when I started with them in 2007, this was seven years prior to their IPO.

[00:34:23] And so even back then, they had a vision of becoming a publicly traded company, building, you know, a massive multi-billion dollar company, which they obviously succeeded at. And they didn't want to have more than a certain percentage of their revenue coming from services, because it would actually reduce their overall valuation.

[00:34:41] And so, you know, things have evolved obviously since 2007, but generally the playbook is very similar: these companies that provide the software, or in this case the AI models, don't want to have 50% of their revenue coming in from services. It's nowhere near the margins of a software [00:35:00] business.

[00:35:00] Services are hard. It requires humans to deliver work, at least until OpenAI maybe replaces the need for the humans. But, like, in theory, you gotta go hire people, you have to build this entire forward-deployed engineering department or whatever they're building. And so the temptation to offer services, for people like OpenAI and, in my day, HubSpot, one is there's revenue growth.

[00:35:23] And obviously here there's tremendous revenue growth. I mean, we'll talk in one of the rapid-fire items later today about Accenture and what they're generating. But I mean, I would imagine that OpenAI probably looks at this as a five to $10 billion a year services business out of the gate.

[00:35:39] Like, there's no reason it couldn't be. And over time, it may be a $50 to $100 billion annual business if they wanted to build services as a major revenue component. So that's the first thing. The second is quality control. So if you're relying on other people to do the work, [00:36:00] you lose the ability to control how the models are being fine-tuned and how they're being integrated and things like that.

[00:36:06] And that becomes a real challenge as you're trying to scale. And that leads to the third real pressure, which is performance. So in HubSpot's case, in the early days when you relied on outside partners to do the onboarding, to do the customization of the different hubs, you really needed those people to not only provide quality services, you needed it to lead to higher adoption rates, higher utilization rates, higher customer happiness,

[00:36:35] value creation, and it had to prove out that you actually retained more of your clients, your customers, if an agency was involved, if an outside partner was involved. And so if you're OpenAI and you're in this moment where you're creating these incredible models, and you're kind of relying on outside parties like an Accenture to do the work, to do the onboarding, the fine-tuning, and maybe you're seeing it's not going the way you would want it to [00:37:00] go, then there becomes this, like, okay, we have to get into this game, because the people aren't getting the value

[00:37:08] they should out of our models. We have to do the fine-tuning ourselves, we have to provide more services. So I don't know what their roadmap is here, but this is an age-old issue where the creator of the product wants more control and, you know, believes that they can drive greater performance, adoption, utilization, retention, value creation if they're more involved, versus relying on an outside partner ecosystem.

[00:37:36] And so I think that's what's happening here. Now, the interesting part, and you kind of alluded to this, is, like, my first thought was Thinking Machines Lab, Mira Murati's new startup. This is what we learned last week that they're doing: they're basically providing fine-tuning on models. Now, we don't know if Thinking Machines is gonna build its own models or not, but the idea is they're gonna apply [00:38:00] kind of this reinforcement learning and fine-tuning on top of it.

[00:38:02] I would think this is creeping into Microsoft territory. Hmm. You know, you're starting to kind of come up there. Cohere is another company that we've talked about many times, a Canadian AI lab that is doing something similar. It's all about fine-tuning these smaller models and adapting them for enterprises.

[00:38:20] So, I mean, I think we're just gonna see a massive rush for this kind of stuff. And it'll be interesting to see what OpenAI's formulas are, because, again, maybe they're not thinking about it that far ahead, but back in the day there were formulas that said, if you wanna eventually IPO, you cannot have services in excess of X% of your revenue.

And I know for HubSpot, over time, they have generated more and more revenue from their services. At some point they found that they had to get more involved in the onboarding process. They had to be more involved to drive more retention. And so, like, yeah, I don't know. There's always that allure to [00:39:00] just start bringing this stuff in house.

[00:39:01] And that's honestly, like, back in 2008, when I started building my agency to be HubSpot's first partner, I asked them point blank, I was like, are you guys gonna build an agency? Like, why wouldn't you just do this in house? And the answer I got was: we can't, like, we can't have that much revenue coming from services.

[00:39:16] I was like, all right, cool, then I'll do it. And that led to me writing The Marketing Agency Blueprint and, you know, kind of being as high profile as I was about what we were doing with HubSpot in those early days.

[00:39:28] OpenAI - Jony Ive Drama

[00:39:28] Mike Kaput: All right, let's dive into some rapid fire for this week. So first up, OpenAI's new hardware partnership with ex-Apple designer Jony Ive has hit a legal snag over its name.

[00:39:42] So the company has had to pull the promotional material for this upcoming AI device, which is called io, the letters I-O, after being hit with a trademark complaint from a startup called iyO, spelled I-Y-O, which makes AI-powered [00:40:00] earbuds. Now, this doesn't kill the $6.4 billion deal between OpenAI and Jony Ive, but it does mean that the io branding is temporarily off the table.

[00:40:13] And what's interesting here is Sam Altman took this fight pretty public by posting private emails with iyO's founder, Jason Rugolo, the one suing them. Rugolo had previously pitched Altman on investing in his company. According to those emails, Altman declined, citing a competing device that was in the works, and Rugolo's complaint says OpenAI used those interactions to inform its own product.

[00:40:42] Then swooped in with a confusingly similar name. Altman, in a post about this, called the lawsuit silly, disappointing and wrong, but a court granted iyO a temporary restraining order on OpenAI's use of the io brand. So the actual device [00:41:00] that OpenAI is building is still moving ahead, though we don't really have any details on it. It is reportedly perhaps an AI assistant designed by Ive to sit on your desk and sense your environment, but we still have no real insight here into what it is, so.

[00:41:16] Paul, there's a lot of drama here, especially with Altman posting these emails. What is the likely outcome here? 

[00:41:25] Paul Roetzer: So, tech companies, as we kind of started off talking about, tend to be pretty cavalier with their use of other people's IP and brand names. Like, I was always shocked at how blatantly these companies would just take someone else's brand name and repurpose it.

[00:41:44] It's almost like either they didn't even bother conducting a trademark search to see if someone already had the name, or they just don't care and figure they'll spend more money on legal fees and solve it. This one's a weird one, because, again, I'm not an expert on this stuff. [00:42:00] I have dealt with plenty of brand names and IP-related things through the years.

[00:42:04] The fact that they're not even spelled the same is weird, right? The biggest issue here seems to be that the leaders of the companies were in communications, and that there may be very similar products being built to the one that Altman was obviously aware existed.

[00:42:24] Now, the one thing we did learn is, in a filing related to this, they had to disclose that the device, quote, is not an in-ear device nor a wearable device. Mm. So while we don't know what they're gonna build, in a court filing they said that the product is at least a year away from being offered for sale.

[00:42:48] And it is not a wearable, which is kind of fascinating. So that's the first I'm aware of that being disclosed, and it came in, like, a briefing last week. So I don't know. Other [00:43:00] than that, like, who knows? It wouldn't be shocking if they ended up having to just change the name, but we'll see.

[00:43:08] OpenAI’s Microsoft Office Rival

[00:43:08] Mike Kaput: Next up, OpenAI is quietly preparing to take a direct shot at Microsoft and Google by turning ChatGPT into a full-blown productivity suite, thanks to a range of possible features. According to some new reporting from The Information and Bloomberg, OpenAI has developed features for collaborative document editing, multi-user chat, and possibly even file storage, which essentially reimagines ChatGPT as an all-in-one workspace for teams.

[00:43:40] Now, this move would escalate OpenAI's competition with Microsoft, its largest investor and closest partner. It would also threaten Google's dominance in cloud productivity. Internally, this project has been in the works for over a year, led by product chief Kevin Weil. The rollout has been slow due to some staffing and other [00:44:00] priorities, but features like Canvas, an AI-driven doc and code editor that's already part of ChatGPT, have kind of laid the groundwork here.

[00:44:10] Interesting. We also saw another report that in the workplace, ChatGPT is quietly eating Microsoft's lunch. Companies like Amgen and Bain, which were once Copilot customers, have shifted large teams to ChatGPT, citing much better usability and faster improvements. Now, Microsoft still has plenty of scale here; it claims Copilot is used by 70% of the Fortune 500.

[00:44:37] Now, Paul, I found this a particularly interesting one, given OpenAI's increasingly strained relationship with Microsoft, Sam Altman's comments in the past about ChatGPT basically becoming an operating system for your work, for your life, not to mention the Copilot versus ChatGPT debate here, and the sheer number of people I've talked to [00:45:00] who unfortunately have access to Copilot but don't appear to have a lot of positive things to say about it.

[00:45:05] This story, I have to say, kind of rang true to me. 

[00:45:10] Paul Roetzer: I think it's a big opportunity for OpenAI, and I think Google better get their act together, like, really fast. So, I've stated this on the show many times: the number one frustration for me with these chatbots is that they are not integrated directly into the productivity apps that we use all the time.

[00:45:26] Yeah. So if I'm in Google Gemini and I'm having a conversation, I have to export it to Google Docs, and then it's static. Like, now I'm no longer in that thread, and now I have an export into Google Docs. What I've said all along I want, and I have no idea if Google's working on this or if Microsoft's working on this, is,

[00:45:47] instead of going into Gemini and having a chat, I just wanna open a Google Doc and have the chat and have everything live right there, because it's so hard to keep track of all the different threads and chats that are going on, all the [00:46:00] documents you've created. And so there just needs to be a much deeper integration between the chat experience and the actual productivity apps,

one that functions in a way that's familiar, where it's automatically added to the file folders and the permissions carry through, and all of that. Right now, there's very distinctly a Gemini experience and a Google Workspace experience, and the fact that those two aren't more tightly integrated is kind of really confusing and frustrating to me.

[00:46:31] If Google solved that and made that experience, I would use Gemini dramatically more than I do now. There's always this balance between ChatGPT and Gemini. Like, Gemini's a really good model, 2.5 Pro is a really good model, and I like it. I don't like having to export everything every time I want to do something.

[00:46:50] And it's like it creates this unnecessary step and friction. ChatGPT is even worse because it has no productivity app it's tied to. Yeah, so then I'm [00:47:00] in ChatGPT, then I open a Google Doc, and then I have to copy and paste individual parts of a thread into a Google Doc to make it work, in a place where I actually can now do something with it.

[00:47:12] And so, like, that friction, someone has to solve that. If OpenAI ends up solving that before Google, shame on Google. Like, you have the infrastructure, you already have all the productivity tools. If OpenAI somehow shows up and replicates Google Sheets, or, for Microsoft's sake, Excel, and Google Docs and Word, like, shame on both Microsoft and Google.

[00:47:36] Like, you cannot get beat at your own game here. I've been watching it coming like a slow-moving train for the last year and a half. If they don't see that coming and solve it, then they deserve to lose that market share to OpenAI, because it's absurd that they haven't figured that out yet.

[00:47:53] Intel Outsources Marketing to Accenture and AI

[00:47:53] Mike Kaput: Next up, Intel is slashing its marketing workforce and handing the reins to [00:48:00] Accenture and AI. So under a new CEO, the chipmaker is outsourcing much of its marketing to the consulting giant, which will rely heavily on AI to handle campaigns and customer outreach. As a result, layoffs are expected, with most employees to be informed by mid-July or so of whether or not they're affected.

[00:48:20] In an internal memo, Intel said the change was part of a broader effort to become a leaner, faster, and more efficient company. The company cited slow decision making and bloated programs as reasons it's falling behind competitors, especially in fast-moving areas like AI. And this outsourcing marks a bet that AI, when paired with a partner like Accenture,

[00:48:44] can outperform traditional teams in branding, customer insights, campaign execution, and the like. Intel even hinted that some employees may train their replacements. All right, Paul, so I totally understand the need to make some [00:49:00] painful decisions if a company is not doing well, and Intel is not doing well. According to the report we saw, their sales have fallen by a third in recent years.

[00:49:08] They're not profitable. I don't know, though, this just seems like possibly a terrible idea. Like, not only are you just outsourcing all your marketing, but, I think more to the point, you're outsourcing your AI usage and literacy. Like, am I wrong for being deeply skeptical that you should be wholesale trusting Accenture with this level of responsibility and involvement in your company's AI future?

[00:49:34] I mean, I wouldn't do it, but that doesn't mean I'm right. Like,

[00:49:38] Paul Roetzer: It's certainly not a human-centered approach. Like, right, what we preach is a responsible, human-centered approach. This is not that. It is a heavy reliance on Accenture, and trusting Accenture that they're gonna do this the right way,

[00:49:53] that you're not gonna sacrifice the customer trust and relationships, and that you're ever gonna be able to recruit humans again who want to [00:50:00] come work in the marketing department. Like, why would I come work there if you're already telling me you don't think I'm necessary to do this function?

[00:50:08] Right. At a high level: we talked in April of 2024, episode 91, about the fact that Accenture was seeing massive growth in their generative AI bookings. At that time, it was $600 million in the previous quarter. Generative AI bookings in Q1 2025 for Accenture were $1.2 billion. So they basically doubled it in a year.

[00:50:30] You called out a number of things in this article. I'll just hit a couple of quotes here. This is from Intel, what they told employees, quote: the transition of our marketing and operations functions will result in significant changes to team structures, including potential headcount reductions, with only lean teams remaining.

[00:50:47] As part of this, we are focused on modernizing our digital capabilities to serve our customers better and strengthen our brand. That seems like the opposite, but okay. Accenture is a longtime partner and trusted leader in these areas, and we look forward to expanding our work together. [00:51:00] While we expect that lower costs will be a natural end result of this decision, the reality is that we need to change our go-to-market model to be more responsive to what customers want.

[00:51:09] We have received feedback that our decision making is too slow, our programs are too complex, and our competitors are moving faster. Well, sure, that's probably true. We are partnering with Accenture to leverage AI-driven technology with the goals of moving faster, simplifying processes, and reflecting best practices, while also managing our spending.

[00:51:27] The company seems to raise the possibility they'll ask workers to train their replacements, as you alluded to, and said AI can help us analyze large amounts of information faster, automate routine tasks, personalize customer experiences, and make smarter business decisions. Again, this is all Intel, their message for marketing.

[00:51:42] Our goal is to empower teams with more time to focus on strategic, creative, and high-impact work by automating repetitive and time-consuming tasks. I don't know about you, Mike, but all I could see in my head is Office Space, the Bobs, when it's like, what exactly do you do here? Like, I just kept envisioning that entire thing.

[00:52:00] So, I will say anecdotally, I have had conversations with multiple executives at other large companies that aren't Intel, and they have confirmed for me this is exactly what has happened. Like, this is not isolated to Intel.

[00:52:16] Mike Kaput: Yeah. 

[00:52:17] Paul Roetzer: If you work at a major company, there is a very good chance that Accenture or some other consulting firm is pitching people at that company

[00:52:25] about replacing workers. Like, Mike, I've been warning this was coming for the last year and a half. It's happening right now. And so the thought here is, I don't even know my final thoughts, honestly. It is going to continue to happen. Like, people like Accenture are gonna generate a bunch of money replacing humans and outsourcing the work to them, which they will then use AI agents to do, and a bunch of CEOs are gonna buy into this.

[00:52:54] Does this end up blowing up and being the totally wrong move three years from now? [00:53:00] Maybe. It's gonna happen, though. And these are kind of the early people willing to go out and do it publicly, although I guess this was an internal memo; they didn't willfully publicize it. But this is the Andy Jassy memo from a few weeks ago

[00:53:14] Yeah. Brought to life. This is the next thing that happens at Amazon, the next thing that happens at all of these companies. So, I don't know, to be continued. But this is happening now and it will continue to happen, I guess, is my final thought here.

[00:53:31] Salesforce CEO: 30% of Internal Work Done by AI

[00:53:31] Mike Kaput: In our next item, somewhat related: Salesforce CEO

[00:53:34] Marc Benioff says that AI is now handling up to half of the company's internal work. In an interview with Bloomberg, Benioff revealed that AI, he said, is doing 30 to 50% of tasks at Salesforce, including software engineering and customer service. That shift has allowed the company to scale while hiring fewer employees.

[00:53:55] His exact quote was, quote, AI is doing 30 to 50% of the work at [00:54:00] Salesforce now. One standout tool is a customer service AI they're using that's hitting 93% accuracy, which they say is good enough for high-profile customers like Disney. Benioff framed the shift not as job elimination, but almost as a kind of liberation.

[00:54:16] He said, quote, all of us have to get our head around this idea that AI can do things that before we were doing, and we can move on to do higher value work. Now, Paul, my first thought when I read this was a bit conflicted. On one hand, I'm not at all surprised if 30 to 50% of work eventually can be a reasonable goal for AI to hit over time.

[00:54:40] On the other hand, for whatever reason, maybe from all the conversations I've had and how I've seen other companies work, I'm deeply skeptical that at Salesforce 30 to 50% of tasks are actually automated or being done by AI today.

[00:54:56] Paul Roetzer: Yeah, I mean, Benioff's a hype man. No debate, he is, I mean, [00:55:00] obviously an incredible, legendary CEO.

[00:55:02] He also tends to hype things. Yeah. Um, there's no way these numbers are right. I'm not crazy over here saying,

[00:55:10] Mike Kaput: that seems like a crazy number to me for a big organization out of the gate.

[00:55:15] Paul Roetzer: My guess is it's like anything else: you pick data points within some context, and there's some element of truth to them in some things.

[00:55:23] So because of my skepticism of this, I actually went and pulled the full transcript of the interview. It's live on a podcast now; we'll drop the link in. So the Bloomberg article we quoted was like a preview of what was coming. Emily Chang at Bloomberg, she has a podcast, The Circuit, and we've talked about her interviewing Sundar Pichai, I think Sam Altman. Like, she lands great interviews with a lot of these tech leaders.

[00:55:49] So here's the actual excerpt, just to put this in context for everyone. Emily says to Benioff: So you said you won't hire any more coders at Salesforce, and you've said [00:56:00] today's CEOs will be the last to manage all-human workforces. What does this mean for businesses? To which Benioff said: Well, I just had a meeting with my head of engineering, and we're looking at productivity levels of 30 to 50% this year in key functions like engineering, coding, support, and service.

[00:56:19] So Emily says: You're saying AI is doing 30 to 50% of the work. Benioff: AI is doing 30 to 50% of the work at Salesforce now, and I think that will continue. I think that all of us have to get our head around this idea, and this is what you said, that AI can do things that before we were doing, and we can move on to do higher value work.

[00:56:41] To which Emily replies: So Salesforce is 75,000 employees now. Is it half that in the future? Benioff: I'm not willing to make a projection exactly like that. I do think probably we'll rebalance. There's no question that we [00:57:00] have this opportunity to take advantage of the technology to get to a new place, and I think every company is going to be able to do that.

[00:57:07] So the answer to that is yes, we are going to have fewer people, in case you're not reading between the lines here. So then Emily says: So Salesforce is marketing its AI tools on their ability to replace human labor. Do you have any ethical qualms about that? Benioff: Well, it's a digital labor revolution. We're probably looking at three to $12 trillion of digital labor getting deployed.

[00:57:36] And that digital labor is going to be everything from AI agents to robots. So he is basically saying we're gonna replace three to $12 trillion in human labor costs with agents and robots. We've seen all the kinds of robots that are coming. We've seen the movies for a long time. Right now we're seeing it deployed, and I think it's really just technology marching forward.

[00:57:55] It's getting lower cost, it's getting easier to use. And I do think, to your point, [00:58:00] CEOs have to make sure their values are in the right place and that values bring value. But we're becoming more automated. Anyway, we'll add this link to the show notes. I happened to then see this this morning as I was kind of prepping for today's podcast.

[00:58:14] Digital workers have arrived in banking. This is the Wall Street Journal. Bank of New York Mellon said it now employs dozens of AI-powered digital employees that have company logins and work alongside its human staff. So I get asked all the time, Mike, and I'm sure you do too, who's actually doing this, right?

[00:58:31] Like, is this real? Are there actually AI agents? So here you go. People are always asking for these examples. Here's an example of people claiming this is actually happening. This is, again, continuing the Wall Street Journal: Similar to human employees, these digital workers have direct managers that they report to and work autonomously in areas like coding and payment instruction validation, says the CIO, Leigh-Ann Russell.

[00:58:51] Soon they'll have access to their own email accounts and may even be able to communicate with [00:59:00] colleagues in other ways, like through Microsoft Teams. So you may ask yourself, is soon like two years? Like, what does soon mean? Well, here we go. Russell said this is the next level.

[00:59:12] While it's still early for the technology, I'm sure in six months' time it will become very, very prevalent. So there you go. By the end of 2025, Bank of New York Mellon is going to have AIs logging in and communicating with their people. What the bank, also known as BNY, calls digital workers,

[00:59:32] other banks may refer to as AI agents. The industry lacks a clear consensus on exact terminology, but it's clear the technology has a growing presence in financial services. So then they actually gave another example. Many say that they're shaping AI into applications that increasingly replicate the capabilities and workflows of human employees, taking on more and more tasks in areas like software development and research.

[00:59:52] Several, like JPMorgan Chase, say they're still figuring out the exact right access and management controls and system integrations, [01:00:00] and how human-like these tech systems will become. The article talked a little bit about BNY and how it took 'em a few months to kind of spin this stuff up. And then at JPMorgan Chase, Chief Analytics Officer Derek Waldron thinks of, quote, digital employees as more of a helpful model for businesspeople to conceptualize AI.

[01:00:17] He does envision a future where every employee will have an AI assistant and every client experience will have an AI concierge. Hmm. 230,000 employees already have access to a general AI chatbot through the company's proprietary platform. The goal is to build out more autonomous and more agentic versions of it that are further and further tailored to individual job groups.

[01:00:39] So zoom out. What we're hearing in this episode is that this is all stuff that's happening now. Intel's replacing workers with Accenture and AI. OpenAI is gonna start playing in this space, and they're probably gonna start doing the same kind of work. They're gonna fine-tune these models so you just don't need as many people.

[01:00:59] [01:01:00] Salesforce is doing 30 to 50% of their work with it. So the people who don't think this is all happening, who don't think corporate America is already changing, and not just America, globally, that corporations aren't changing, HR processes aren't changing, people aren't using AI: it is all reality right now. You're just maybe not living in that bubble yet, but it's coming to your world.

[01:01:23] And if you're not at the C-suite level, you may not be hearing these conversations yet, 'cause they don't know how to tell you yet. If you are at the C-suite level and you're not having these conversations, you may be falling behind your competitors. I don't know, just kind of high level here, Mike.

[01:01:37] Like this is kind of what is coming through to me from this episode. 

[01:01:40] Mike Kaput: Yeah, no, I love it. And it ties together several of the different threads we've been kind of pulling on over these past episodes, with AI's impact on employment and jobs and the incentives around it all. All right.

[01:01:53] More Meta AI Recruitment Efforts

[01:01:53] Mike Kaput: Next up, Meta just poached four more AI researchers from OpenAI. That brings its total to eight in the past two weeks.

[01:02:00] Mark Zuckerberg is doubling down on his bid to catch up in the AI arms race, like we've talked about in the past couple episodes. These latest hires include key contributors to OpenAI's fast reasoning models, o1-mini and o3-mini, as well as leaders in multimodal AI and perception.

[01:02:19] All four are joining Meta's superintelligence lab under Alexandr Wang, the former Scale AI CEO, who was brought on this past month after Meta paid $14.3 billion for a 49% stake in Scale. It also came out this past week that Meta recently held acquisition talks with Runway, the video AI startup.

[01:02:42] The discussions never reached a formal offer. They're no longer ongoing, but they are part of Zuckerberg's increasingly aggressive push into AI acquisitions and recruiting to build super intelligence. In some cases, as we've talked about, he's reportedly offered a hundred million [01:03:00] dollars to poach talent.

[01:03:02] So Paul, this is a topic we've been following for a couple weeks now. I think we started out maybe reporting on it as a somewhat desperate attempt by Zuckerberg to catch up here and fix that Meta AI situation. But boy, does it seem like he's made some progress. I mean, poaching this many OpenAI researchers is no small feat, I don't think.

[01:03:23] Why do you think they're going from OpenAI to Meta? Why are they jumping ship now?

[01:03:29] Paul Roetzer: Yeah. So Meta historically is more open source, so there's a possibility some of these people want to go work on more open source stuff. There's a chance that Zuckerberg's just willing to do things OpenAI isn't gonna be willing or able to do, either because of their Microsoft relationship or their governance or whatever it may be.

[01:03:48] So some of this is just gonna be people's personal preference to maybe be in a more [01:04:00] forward-thinking lab. I don't know. Some of it is just probably the money. But my thing was like, what does this mean? Like, is four researchers actually meaningful? Does this change anything at OpenAI?

[01:04:09] Do they really even care? Do researchers move all the time? And so I came across a Wired magazine article that sure makes it sound like this has become a pretty significant problem at OpenAI. So again, we'll put this in the show notes. Here's straight from the article: Mark Chen, the Chief Research Officer at OpenAI, sent a forceful memo to staff on Saturday promising to go head-to-head with the social giant Meta in the war for top research talent.

[01:04:37] The memo, sent to OpenAI employees in Slack and obtained by Wired, came days after Meta CEO Mark Zuckerberg successfully recruited four senior researchers from the company. This is a quote from Chen: I feel a visceral feeling right now, as if someone has broken into our home and stolen something.

[01:04:56] Please trust that we haven't been sitting idly by. [01:05:00] Chen promised that he was working with Sam Altman, the CEO, and other leaders at the company, quote, around the clock to talk to those with offers, adding, quote, we've been more proactive than ever before. We're recalibrating comp, and we're scoping out creative ways to recognize and reward top talent.

[01:05:19] This creates, and I'm thinking this actually in this moment, an even greater sense of urgency to solve this organizational structure issue so that OpenAI can IPO. Yeah. To get the kind of money that they're gonna need, they have to IPO at some point here. And there's no way that these levels of comp were built into their projections.

[01:05:39] And so now you're gonna have to go raise more money. Anyway, the remarks come as OpenAI staff grapple with an intense workload that has many staffers grinding 80 hours a week. As a result, OpenAI is largely shutting down next week as the company tries to give employees time to recharge, according to multiple sources.

[01:05:59] I actually [01:06:00] saw this on Twitter last night. So they're supposed to be largely shutting OpenAI offices next week. Executives are still planning to work, though, said the sources. Now here's an interesting one from Chen's memo: Meta knows we're taking this week to recharge and will take advantage of it to try and pressure you to make decisions fast and in isolation.

[01:06:20] Another leader at the company wrote, related to Chen's memo: If you're feeling that pressure, don't be afraid to reach out. I and Mark Chen are around and want to support you. So it's code red at OpenAI, is what it's sounding like. Yeah. And the one thought I had was, I bet Elon Musk is so pissed that he isn't the one causing all this pain and frustration.

[01:06:45] So do not be surprised if, when we come back after our week off, not 'cause we're working 80-hour weeks, but because we have courses to build, Elon Musk is in the game too, [01:07:00] also offering massive numbers to people, because he is not gonna wanna be left out of a party to stick it to Sam Altman.

[01:07:05] Mike Kaput: Yeah.

This doesn't seem like it's going to be a week of relaxation and recharging for Sam Altman. No,

[01:07:11] Paul Roetzer: but there's gonna be a lot of OpenAI people making some bank. So, no kidding.

[01:07:15] AI First Book Release

[01:07:15] Mike Kaput: Yeah. All right. Our next topic: a book called AI First: The Playbook for Future-Proof Business and Brand is now available. You will perhaps recognize this book.

[01:07:24] It is something we've talked about for a while, because it's been released chapter by chapter over the last year or so by the authors, Adam Brotman and Andy Sack. In it, Brotman and Sack secured interviews with some of the top people in AI and tech, including Sam Altman, Bill Gates, and Reid Hoffman. Now, Brotman and Sack have awesome backgrounds for talking about this topic.

[01:07:48] Brotman is the former Chief Digital Officer at Starbucks; he played a pivotal role in the development of the coffee giant's mobile payment and loyalty programs. Sack is a legendary tech investor and former [01:08:00] advisor to Microsoft CEO Satya Nadella. And the book already made waves. The first time we really talked about it was way back on episode 86, when we reported on an explosive quote from Sam Altman in the book's early chapters that had been released at that time.

[01:08:16] So I'm just gonna quote this again very quickly from our discussion then. When the authors asked Altman, quote, what do you think AGI will mean for us and for consumer brand marketers trying to create ad campaigns and the like to build their companies? Altman replied, quote, oh, for that, it will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly, and at almost no cost be handled by the AI.

[01:08:45] And the AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing. Again, all of this, instant and nearly perfect. Images, videos, campaign ideas? No problem. [01:09:00] So Paul, that quote made quite a stir among our audience. We got a crazy amount of discussion and traffic from posts about it, since it was kind of the first time people had heard him say that out loud.

[01:09:15] It's great to see the full book get released. Excited about that. You know, in reference to the quote, I kind of went back and looked. We first reported on that quote in early March 2024. And just a few episodes ago, literally almost a year later to the day, we covered Kalshi's AI-generated NBA Finals ad, which was made in three days for 400 bucks in credits using Google's new Veo 3 video model.

And it aired right next to $400,000 ads. Stuff like that made me start thinking. I realize Altman is not necessarily correct here; there's a lot of nuance and context we unpacked in what he said. But my gosh, if you look at where video was a year ago and what happened just [01:10:00] recently with the NBA Finals ad, it feels like some of this is coming true.

[01:10:04] Paul Roetzer: Yeah. And that's why we wanted to, you know, give a good mention for this book. Yeah. Because Adam and Andy were on the stage at MAICON last year; I actually interviewed them about the book. So the book was originally AI Journey, and then it was rebranded as AI First. And yeah, I mean, the stories they told at MAICON were incredible.

[01:10:22] Like their experiences with Reid Hoffman and Bill Gates and Altman, and Mustafa Suleyman and people like that, Sal Khan. But yeah, it was so fundamental. Like, when we created the AGI timeline, this was the quote that sort of triggered, okay, we have to start doing more to prepare people.

[01:10:39] And to your point, I think so much of what Sam has said, like, while we're not at AGI, we're not at this 95%-is-gonna-be-done point, there's so much of what he said then that we're definitely starting to see the front edge of. Like, in the last month, I've had at least four different major companies

[01:10:58] where I have had the [01:11:00] conversation about synthetic data and modeling of campaigns through simulations. Like this idea that we can create millions of customers in a simulated environment and run campaigns against them, to where you have this predictive model of everything and how it's gonna work, because we're testing it against simulated people.

Like, in a digitized world. This is like sci-fi stuff. And I, you know, I often get pushback when we talk about Altman, and people are like, oh, he's just a hype man, he's just trying to raise money. And it's like, no. He just knows stuff you don't know, generally. And sometimes he says it out loud.

[01:11:39] And so with things like this, we always look and say, you have to take this seriously, that there is some element of what he's saying that is probable. Right? And so, yeah, I mean, I think it'll be a great experience for people to be able to read the book now. Like, when we talked about it on stage last September, I was anxious for the book to come out so everybody could actually experience this.

[01:11:59] [01:12:00] and so yeah, I think it's worth like, you know, taking a look at it 'cause they kind of set the stage with these interviews and then it's like, okay, what do you do now though as a marketer, right? What do you, what do you do with this information? So, yeah. You know, congrats to them, you know, friends of ours and big supporters of what we're doing.

[01:12:16] So we appreciate that and, wanted to make sure we, you know, mention the book today. 

[01:12:20] AI Product and Funding Updates

[01:12:20] Mike Kaput: Yeah, for sure. Any one of the interviews in that book is well worth the cost of the book. Yeah. So go pick it up. Cool. For sure. Alright, Paul, we're gonna wrap up this week with some AI product and funding updates. So I'm gonna go through these really quickly and if you have anything you wanna stop and comment on, go for it.

[01:12:37] Otherwise we'll just keep on trucking. So first up, Replit, the AI coding platform, apparently went from 10 million to a hundred million in annual recurring revenue in five and a half months, which is an insane growth rate. And Replit spent literally over a decade in the wilderness before AI kind of caught up to their vision.

Their AI agent, which launched in late [01:13:00] 2024, turned them from a kind of freemium coding sandbox into a full-stack AI app generator, and their numbers exploded as a result. Next up, we talked last week about how ex-OpenAI CTO Mira Murati raised a whopping $2 billion for her AI startup, Thinking Machines Lab, valuing it at $10 billion with no product and no details on the business model at all.

[01:13:25] We're finally learning what she's building. Well, kind of, according to some reports in The Information. The core idea behind the company is, quote, reinforcement learning for business: custom AI models trained to optimize a company's KPIs, like revenue or profit. So instead of one-size-fits-all AI, she wants to deliver purpose-built models that directly impact the bottom line.

[01:13:48] So I don't know how much more clarity that gives us, but sentence by sentence, we're learning something about this company. Next up, a trio of former OpenAI engineers have quietly [01:14:00] raised $20 million for a new AI startup. It's called Applied Compute. It's from all former technical staffers at OpenAI, and at least one of them helped launch OpenAI's o1 reasoning model.

[01:14:12] The venture is still in stealth mode, but sources say it's also focused on reinforcement learning. Benchmark led the round, with Sequoia and top-tier VCs following on, and it values Applied Compute at a hundred million dollars. Next up, Google just dropped a new AI fashion app called Doppl, and it's all about trying on clothes without ever getting dressed.

[01:14:37] So this is built by Google Labs. Doppl lets you upload a photo or screenshot of any outfit, like something you see online, and then visualizes how it would look on an animated version of you. This is not just a static image, but a full-on AI-generated video that shows the outfit in motion. The app is available now in the US on iOS and Android, but Google does admit it [01:15:00] is still experimental.

[01:15:02] And last but not least here, Google Sheets just got a serious upgrade powered by Gemini. Starting June 25th, users can now type prompts directly into cells using a new AI function. So you type =AI, and then you can give Gemini a prompt to generate content, summarize data, analyze sentiment, or categorize inputs instantly.
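[Editor's note: For anyone who wants to try the feature described above, the formula pattern looks something like the examples below. These prompts and cell references are illustrative only; check Google's documentation for the exact syntax, since the feature is still rolling out.]

```
=AI("Summarize the customer feedback in this cell", A2)
=AI("Categorize this expense as Travel, Food, or Office", B2)
```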

[01:15:24] So it's like having an AI assistant in every cell of your spreadsheet. Alright, Paul, that's a wrap on a busy week in ai. Really appreciate you unpacking everything for us as always. 

[01:15:35] Paul Roetzer: Yeah, good stuff. Again, a reminder: no weekly episodes July 8th or 15th. It looks like July 22nd we will be back. We will probably do a mega episode.

[01:15:47] Probably go all rapid fire. We've done that before, where it's like, okay, let's hit as many as we can in, like, 90 minutes or less. So we'll do our best to keep you updated. Follow me on LinkedIn, I'll keep posting, and you can follow [01:16:00] Mike as well. We'll put our, you know, show note links up. And Twitter, or generally X, I still say Twitter.

[01:16:07] I share a lot of the stuff we're gonna talk about throughout the week on X. So, you know, if you wanna follow me on X or on LinkedIn, I'll try and keep you updated. I'll still be posting there while we're kind of in the lab building all these courses. And then we'll talk to you again on July 22nd. Oh, and then, what was it?

[01:16:22] July 9th, we have the Intro to AI. Yes. Yeah, Intro to AI on July 9th. So you can, you know, join us for that live class as well. All right. Well, have a great couple weeks, enjoy your summer while we're away, and we will be back with you on July 22nd. Thanks for listening to the Artificial Intelligence Show.

[01:16:40] Visit SmarterX.ai to continue your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, [01:17:00] and engaged in the Marketing AI Institute Slack community.

[01:17:03] Until next time, stay curious and explore ai.