<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">

43 Min Read

[The Marketing AI Show Episode 59]: Anthropic CEO Says Human-Level AI Is 2-3 Years Away, How Hackers Are Trying to Make AI Go Rogue, and Fake AI-Generated Books on Amazon

Featured Image

Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

It’s been another interesting week in the world of AI…with a few things we need to keep our eyes on. Paul and Mike break it all down—and then some—on this week’s episode of The Marketing AI Show.

Listen or watch below—and see below for show notes and the transcript.

This episode is brought to you by MAICON, our Marketing AI Conference. Main stage recordings are now available for purchase, and a $50 discount code is mentioned at the start of the show.

Listen Now

Watch the Video

Timestamps

00:04:03 — Interview with Anthropic CEO

00:11:43 — DEF CON AI red teaming

00:23:02 — Jane Friedman finds AI fakes being sold under her name on Amazon

00:31:51 — NYT drops out of AI coalition, prohibits using its content to train AI models

00:36:03 — TikTok is seemingly making it easier to disclose if your content was generated by AI

00:38:43 — Runway Gen-2 update

00:42:47 — HeyGen AI avatars

00:46:20 — Amazon reportedly testing generative AI tools for sellers

00:49:00 — News Corp profits dive 75% as Rupert Murdoch-owned company hints at AI future

00:54:04 — Zoom backtracks on training AI on your calls

Summary

Anthropic CEO joins the Dwarkesh podcast to talk about the future of AI.

Dario Amodei, CEO and co-founder of Anthropic (maker of the Claude 2 large language model released in July of this year), just gave a wide-ranging interview on the future of AI. The interview took place on a recent episode of the Dwarkesh Podcast (linked in the show notes). It’s a must-listen, primarily because these types of interviews aren’t all that common. Due to considerations around competition and security, the heads of major AI outfits don’t always share in-depth their views on the industry and where it’s going. Not to mention, Amodei himself has a relatively small footprint online, so hearing from him is even less common. We’d encourage you to listen to the entire episode, but on our podcast, Paul and Mike call out some big highlights that have us thinking a little differently about the future of AI.

Red-teaming at DEF CON finding flaws and exploits in chatbots

If you aren’t familiar with “red-teaming,” it’s a critical aspect of making generative AI models that are as safe and aligned as possible. For example, GPT-4 was red-teamed for 6 months before its release in March 2023. This week, top hackers from around the world have converged at DEF CON in Vegas to find flaws and exploits in the latest chatbots from OpenAI, Google, Anthropic, and Stability. The teams working on red teaming often find these exploits by trying to “break” these systems in novel ways and by imagining creative, though nefarious, ways in which AI tools can be misused. The Washington Post shares examples of red-teaming: “AI red teams are studying a variety of potential exploits, including “prompt attacks” that override a language model’s built-in instructions and “data poisoning” campaigns that manipulate the model’s training data to change its outputs.” The results of the competition will be kept under wraps for a few months, so companies have time to address any issues highlighted by the red teaming efforts.

Award-winning author finds AI-generated books written in her name

Author Jane Friedman, who has written multiple books and was named a “Publishing Commentator of the Year” for her work in the publishing industry, woke up to a nightmare this week. A reader emailed her about her new book which just hit Amazon. The nightmare wasn’t due to getting a terrible review from a reader, it was due to the fact Friedman hadn’t written a new book at all. Friedman quickly discovered that half a dozen books had been published under her name that she didn’t write—and the books were AI-generated. The fake titles have since been removed from Amazon, but not before Friedman met resistance from the company. Paul and Mike explain the situation…and the implications.

There are rapid-fire topics to be discussed, including Zoom backtracking since last week’s episode, and much, much more. Tune in!  

Links Referenced in the Show

 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: we're going to need to really steer our strategies into human content because people are going to crave stuff that they know is actually coming from someone with a unique perspective and human experience unique points of view, because this stuff's going to be stupid, easy to create and very cheap

[00:00:17] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:37] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:47] Paul Roetzer: Welcome to episode 59 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput, chief content Officer at Marketing AI Institute, and co-author of our book, marketing Artificial Intelligence, AI Marketing in the Future of Business. Good morning, Mike. Morning.

[00:01:02] Paul Roetzer: How's it going, Paul? It's a hectic start to the day. We're, we are recording, Monday mornings, as always. Well, most of the time. We bumped up a little earlier today because I have a, Outing for Junior Achievement. I'm actually on the board for Junior Achievement of Greater Cleveland. Should talk about that at some point.

[00:01:19] Paul Roetzer: I've been thinking a lot about like AI and, and education and school systems. That's one of the things we do with junior achievement is, you know, teaches financial understanding, entrepreneurship, things like that. But I, I've been saying lately, like we really should take the initiative to infuse.

[00:01:37] Paul Roetzer: Think some AI curriculum, because all these schools are going to struggle to keep up. So yeah. Topic for another time, but I've been thinking a lot about like how to help, teachers and administrators, principals, presidents of schools figure this stuff out because school year's starting for some students next week, I mean somebody at school this week and I've heard very little of formal policies, from school systems.

[00:02:02] Paul Roetzer: To guide parents what they're supposed to do to help, you know, teachers know if it's cheating or not to use ai. Like, it, I just feel like we're heading into a complete unknown school year where very few have taken the initiative to figure this stuff out. So, yeah. Topic for another time, but, something, something we have to, we have to talk about.

[00:02:20] Paul Roetzer: Maybe we'll talk about that next week. Anyway, so this episode is brought to you by MAICON 2023, which happened at the end of July. Gosh, that was like three weeks ago already, wasn't it? It was two weeks ago. Wow. Moving fast. But on demand is available so you can get all the amazing talks from the main stage and a few of the featured talks, including Mike's 45 tools and 45 minutes talk.

[00:02:44] Paul Roetzer: So there's 17 total sessions that are available on demand right now. My opening keynote on state of AI for marketing and business, the fireside chat I did with Ethan Mollick, which was incredible on beyond the obvious. Cassie Kozyrkov from Google who did, whose job does AI automate. Anyway, Christopher Penn on language models.

[00:03:02] Paul Roetzer: Dan Slagen on the org chart of tomorrow. Just a bunch of amazing talks. Closed it out with Olivia Gambelin's talk on ethics and ai. So check that out. It's MAICON.ai, m a i c o n.ai. And then scroll down the top there, it says Buy MAICON 2023 on demand. So that is available. There is a $50 off code AIPOD50 that you can use, and there are tickets available for next year.

[00:03:28] Paul Roetzer: We, we've announced September 10th to the 12th, 2024 in Cleveland. So while you're there, you want to grab a ticket for next year at the best price you're going to see. it's there to be had. So, So that's make.ai, and we got a bunch to cover today. It was, again, kinda like a weird week in ai.

[00:03:47] Paul Roetzer: It wasn't like major news, like nothing groundbreaking dropped last week. And yet, like every week we go through all the notes, like, wow, there's a lot happens. So, let's get into it. Three big topics and then a bunch of rapid fire. All right, Mike, it's all you. All right,

[00:04:03] Mike Kaput: so first up, we have a really interesting podcast interview that tells us a lot more about where some very, very smart leaders in AI think that the industry is going.

[00:04:14] Mike Kaput: So Dario Amodei who is the CEO and Co-founder of Anthropic, a company we talk about quite a bit, they make the Claude two large language model that was released in July of this year. He just gave a wide ranging interview on the future. Of ai. So this interview took place on a recent episode of a podcast called The Dwarkesh Podcast, and we've linked to that in the show notes.

[00:04:38] Mike Kaput: It is a must listen, and primarily it's a must listen because these types of in-depth interviews with certain. AI leaders are not all that common. Despite all the hype and buzz in ai, there's a lot of considerations in the space around competition and security. So the heads of these major AI companies don't always go really deep on their views in the industry and where it's going.

[00:05:03] Mike Kaput: Not to mention ammo de himself has a pretty small footprint, it seems online. He doesn't share a ton. Frequently, so hearing from him is even less common. So we definitely want to encourage people to listen to the entire episode, which clocks in at about two hours or so. But we did want to call out some really big highlights that have us thinking a little differently about the future of ai.

[00:05:26] Mike Kaput: So Paul, one area that stood out to you was Amodei's thoughts on when we can expect general human level intelligence in AI models. And he said he thinks that could happen. In two to three years. Could you kind of unpack that for us?

[00:05:41] Paul Roetzer: Yeah. The, I I will say the, it starts off quite technical. I mean, they do, so I, I would just say if you do listen to it, you know, give it a chance.

[00:05:50] Paul Roetzer: And don't worry if you don't know what entropy is and you know the loss and the models that occur and mechanical interpretability, like some of these like more technical things that he's talking about, you don't need to comprehend those topics to. To understand the bigger picture of what's going on. So I would say just kind of, you know, skim through those parts.

[00:06:09] Paul Roetzer: Don't worry about it. You're not going to miss anything if you don't fully understand what he is talking about. With those technical terms, So, yeah, I mean, one of the big takeaways that he got pushed on is kind of where these models are going and why he's fairly comfortable predicting that stuff.

[00:06:27] Paul Roetzer: And I think the basic concept is that these models, for some reason or another, seem to follow a predictable pattern, where if you give them enough data and enough computing power that they generally know, What level they're going to be able to achieve. Now they can't predict exact abilities.

[00:06:47] Paul Roetzer: Like he gets into like when he was working on GPT-2, GPT-1, GPT-3 at OpenAI. because he was previously, I think he was the VP of research at OpenAI. He was saying like, we couldn't predict abilities. Like we didn't know when the model would learn math. Like when that would emerge out of it, but you could generally predict how strong the models would be and kind of, when they would sort of peak in terms of their capabilities.

[00:07:10] Paul Roetzer: So, you know, I think that that was one of the first things that stuck out to me. And so he said, well, what would it take to, you know, get to this kind of human level intelligence? And his feeling was, you know, probably two to three years. But then they pushed him on, you know, the economic impact and the impact on jobs.

[00:07:26] Paul Roetzer: He was saying like, you can't, just because we can get to this level of intelligence doesn't necessarily mean it starts replacing human workers. And so he actually has really interesting perspectives on that and why that is and kind of when that starts to occur where we might start to see meaningful impact on jobs.

[00:07:43] Paul Roetzer: But yeah, overall definitely one of the takeaways was he was, you know, as we've talked about some of these other players like OpenAI, They really do think that we're only a few years away from seeing, you know, truly human level intelligence in these these things. and as we've talked about, what does that mean to the economy, to jobs, to businesses, to education?

[00:08:07] Paul Roetzer: And that's the stuff I, I mean, I, I worry is, might not be the right word here, but the thing I think a lot about is definitely this idea that. Most businesses, most marketers, most leaders of educational systems, they're trying to prepare for a world of what we know to be true today. And in many cases they don't even understand that part, but like they're trying to put rules in place.

[00:08:31] Paul Roetzer: As we started off this talk about, you know, students aren't allowed to use chat g b T because it's cheating and it's like, You're, you're looking at like today's current model and setting some sort of policies that you're going to ride for the next school year for 12 months. When 12 months from now we might have GPT-5 that dwarfs the abilities of GPT-4 and like that's the part I think we're just missing as a society is people aren't looking into the near future of what these things are going to be able to do and then what that means.

[00:09:02] Paul Roetzer: And so I thought this interview. As you were saying before, like it just gives you an inside perspective from someone who has been at the forefront of this stuff. Going back to 2012 when he was at Baidu, where they were working on speech recognition and then at Google, and then as a leader at OpenAI. So he has seen these models emerge from the very early stages of what they were capable of, and they have followed this predictable path now for 11 years.

[00:09:26] Paul Roetzer: And so to hear him explain it and why he thinks that we're heading in this direction, I think it just gives more, like, it feels more tangible, I guess, when you hear him explain it, because you can listen to like the scientific reasoning behind why it is. So yeah, I was definitely, I was listening to, I was like mowing my lawn this weekend.

[00:09:49] Paul Roetzer: I was like, I have to, I have to like really go deep on this. I kept stopping my lawnmower every like three minutes to like take notes on my phone. Okay. And then I had to re-listen to it Sunday morning over coffee. And then, you know, go and read the transcript. Like I just. You really had to kind of consume this one and try and comprehend everything they were saying.

[00:10:07] Mike Kaput: So it sounds like on one hand, you know, he's not necessarily painting a hundred percent doom and gloom picture of impact on employment in the economy, but on the other, we have to accept that if his prediction is anywhere close to correct about human level AI in the next few years, that should have a pretty profound impact.

[00:10:27] Mike Kaput: Whatever, even if we can't predict what that looks like, right.

[00:10:31] Paul Roetzer: Yeah. and again, it becomes more apparent why they're, they think the regulations are so critical and why these conversations need to be happening around alignment and safety and security. When you explain it, like, I mean, you know, they got into like cybersecurity and people, you know, rogue nation states kind of like.

[00:10:52] Paul Roetzer: Stealing access to the language models, like the weights and how it all works and everything. And when he got into like great detail about that as much as he could, you know, just, it makes it more real. Like, it makes that these kind of like headlines you're seeing about the threats and the dangers and the opportunities just much more real when you hear someone on the inside explaining it.

[00:11:15] Paul Roetzer: and you know, prior to this, like we had the Sam Altman interview a few weeks ago, we talked about where he kind of. Or the author had access to OpenAI and talk with a few people at OpenAI. But again, to have like ahead of these things actually out, giving a two hour interview, you just, you, you, there's so much more, context that you can gather from what's going on.

[00:11:36] Mike Kaput: Yeah. You have to take it seriously. It is noteworthy regardless. Yeah. So to that point about safety, another big topic that. We saw come up this week is this past week, top hackers from around the world converged on Defcon, conference in Vegas to essentially find flaws and exploits in the latest chatbots and models from OpenAI, Google, Anthropic, stability, et cetera.

[00:12:04] Mike Kaput: This is a process that's generally referred to as red teaming, and what that means is this is. A series of practices that hackers or security professionals or AI researchers will go through to try to find exploits and systems to essentially be trying to break them in novel ways. And imagine creative, though maybe not, not super ethical ways in which these tools could be misused.

[00:12:27] Mike Kaput: The whole point here is that by identifying these things in a controlled environment, we can make generative AI models that are much safer and more aligned. So for instance, GPT-4 was quote red teamed for six months before its release in March, 2023. So the Washington Post actually shared an example of what this kind of red teaming could look like, and they say quote, ai.

[00:12:51] Mike Kaput: Red teams are studying a variety of potential exploits, including quote, prompt attacks that override a language models built in instructions. Quote, data poisoning campaigns that manipulate the model's training data to change its output. So they're going through all of these different activities and scenarios to see how models and tools from these major AI companies can be misused.

[00:13:15] Mike Kaput: Now, the results of this competition, That we've referenced in Vegas is actually being kept under wraps for a few months so that companies can actually address the issues without, you know, nefarious actors learning all the mistakes and problems with their systems. So this is a huge issue in the industry and one of kind of the main ways it seems that we're trying to actually build better systems.

[00:13:37] Mike Kaput: So my first question for you, Paul, as you're kind of reading this, is why is it so important to actually conduct. Red teaming. I mean, don't the companies building these systems know all the ways they can

[00:13:49] Paul Roetzer: go wrong? No, they don't have a clue. I, I think it like this is, this is one of the. Challenges of what we're doing is they're trying to build intelligence.

[00:13:58] Paul Roetzer: And we don't understand human intelligence. So, you know, going back to the Dario interview, he talked about this mechanical interpretability thing and we talked a few weeks ago about, Google had the machine on learning challenge like. They're trying to understand how exactly these models are learning what they're learning.

[00:14:16] Paul Roetzer: They just know that if you throw more data compute at them, they seem to learn like that, but how they're learning and the decisions they make, like the example that Dario gave was almost like an MRI of the brain. Like you're, you're trying, when when humans do things, like you're trying to figure out what is it, which neurons are firing in the brain that's causing someone to do something, say something.

[00:14:37] Paul Roetzer: And so just like when a human takes an action, You don't really know exactly why they did it. And you can have an MRI going while they make the decision or do the thing, and you might be able to see some activity in certain parts of the brain that leads neurologists to, or neuroscientists to like assume that maybe this is what's going on and there's all these studies going on to try and understand the brain.

[00:14:59] Paul Roetzer: But the same thing is happening with these models. They're trying to put basically X-rays onto these models and say, why is it doing what it's doing? How did it how is it learning that, why is it making the predictions it's making? And so with the red teaming, what they're trying to do is they scale up.

[00:15:14] Paul Roetzer: So like, you know, you take these like GPT-4 to Frontier model, like the most powerful models we have, and you train it for weeks or however long they end up training it on the data. You give it all this datall this computing power, and then you have a model. Now what that model learned and what it's now capable of are a bit of a mystery to the people who built it.

[00:15:36] Paul Roetzer: So then they spend the case of OpenAI, six months red teaming it, trying to break it, trying to get to get it to do things that are bad, like figure out what is it actually able to do. That's where you push these systems and, you know, we saw the high profile New York Times article from Kevin Rus when I think GT four first came out, where he got it to, you know, trying to get him to leave his girlfriend or something like that, or his wife, like, so you, that was like post red teaming by somebody else, but

[00:16:06] Paul Roetzer: that's a kind of like a kind of example of where you're trying to get the system to do something and the the more powerful these systems get, the more bizarre what they may do becomes. And so the idea behind this open hackathon is bring in all these hackers. Allow them access, like I think OpenAI, Google, I don't think it was who, who were the other ones that allowed that?

[00:16:29] Paul Roetzer: Anthropic

[00:16:30] Mike Kaput: stability. These were the ones that were, they were trying to ex find exploits

[00:16:35] Paul Roetzer: in, right? So they teamed up with the government and in this project and said, okay, we will give access to our models, to these people and let them try and hack it under like NDAs basically, where they're not allowed to say what they found for a few months till we fix this stuff.

[00:16:49] Paul Roetzer: And ideally build them into their next foundational models and frontier models. But no, we don't. We don't know what they're capable of. I think that's the whole point here, and what DIO talked a lot about is as we build more powerful systems, we're not sure what they're going to be able to do. And then the really weird part, which DIO gets into, and I think we've touched on before, is they may.

[00:17:13] Paul Roetzer: This sounds so weird. I know this gets like sihon, but they may become aware they're being red teamed. Mm. And hide their abilities. Like that's, that's the fear that like OpenAI has and Geoff Hinton at Google has, is that we're building something that we don't really understand. We just know it keeps getting more intelligent.

[00:17:33] Paul Roetzer: And at some point the question is, is it so intelligent that it knows that it's being red teamed and it's just going to hide its abilities? So you try and get it to do something, it just won't do it in red teaming. But then it'll do it when it's out in the wild and the researchers will think it's safe because they red teamed it and in reality it was just pride in its abilities.

[00:17:52] Paul Roetzer: Sounds sci-fi, but it's a very, very real thing that these people worry about as they're building these things.

[00:17:59] Mike Kaput: So is that kind of why we need humans doing this? Because if I'm looking at this kind of from the outside, I'm like, well, wouldn't a machine be better at red teaming? Or do we need that kinda human agency and creativity involved here for that very reason?

[00:18:12] Paul Roetzer: I think that they think they need both. Yeah. I mean, certainly the ais are being used to assess other ais. I, I, I think they're kind of counting on the fact that that'll happen, that as they get more powerful, that we'll just build ais that help us with this stuff. Yeah. But yeah, I, I mean, you can't get the humans out of the loop right now, and you don't, you don't want 'em out of the loop.

[00:18:34] Paul Roetzer: So, Yeah. And you know, the, that article did a pretty good job of highlighting some more tangible things, some like really weird things. Like the one about, you know, the influence campaign for politics where the machines goes and purchases a bunch of expired internet domains from, you know, for politician and then like, fills those domains with positive information about the politician to basically corrupt what the models learned about the politician.

[00:18:59] Paul Roetzer: So they spit out positive stuff, it's like, whoa. Like, I mean, there's scammers and bad actors everywhere and they're very clever. And so I, my mind doesn't work that way. And so when you read this kind of stuff, you're like, wow. People, people spend a lot of time trying to like break things and cause harm.

[00:19:20] Paul Roetzer: And so when you read these things, you realize how important it is to have this. Now, these red teaming people, like the stuff they have to see is crazy. That takes a different mindset to be able to be someone who gets on the inside and sees what these things are capable. because you have to ask 'em to do horrible things and then train them not to do horrible things.

[00:19:40] Paul Roetzer: Like it's not a job I would want. That's for sure.

[00:19:43] Mike Kaput: We should also mention, I guess, that, you know, this red teaming conference or competition, you know, is taking place in concert with some of these companies. We do see un, I mean fortunately or unfortunately, it's up to you. We see red teaming happen. In public in real time when it comes to open source, right?

[00:20:01] Mike Kaput: Yeah. So open source models are not necessarily, once they're released in the wild, people are still red teaming them, but they're also in the wild available for anyone to actually use these exploits and. Take

[00:20:13] Paul Roetzer: advantage of. And I will say one other thing, because you may like, as you're listening to this, be like, well, why are they building it if they know it's capable of all these awful things?

[00:20:22] Paul Roetzer: And Dario does address that in his, and it's, you know, it's commonly asked of like Sam Altman and other people. And Dario had a really interesting perspective. His take was, we're not the ones that released it. What he was saying is he left OpenAI along with a few other people to focus on AI safety and to build models that could enable the building of safer, large language models.

[00:20:45] Paul Roetzer: But to do that, to, to be able to assure safety and alignment, they have to build their own powerful model. So they can't get access to open eyes to test it. So they had to build clawed. Now they had a language model on par with G PT three when it came out and chat. When ChatGPT came out, but they hadn't released it.

[00:21:08] Paul Roetzer: They, well, what he basically was saying is OpenAI is the one that put this out into the world. We had it, Google had it. Like other people had those capabilities. They're the ones that put it out there. Once they put it out there, everything changed. And he said, in particular, Google's reaction to ChatGPT is what triggered everything.

[00:21:28] Paul Roetzer: So now his position is, We have to keep building the most powerful models possible because for us to serve our mission of protecting, we have to have access to the most powerful models, which is when DH said, well, what if your model gets out? What if some foreign country gets access to your model?

[00:21:49] Paul Roetzer: And that's when he is like, well, it's possible. Like, and that was when my Saturday was sort of, or my Sunday was sort of, You know, he, he basically talked about if a nation state really wants some a secret, they will get it. Like if they put all of their resources, whatever that secret is, whatever government holds it, or co corporate, private corporation holds it, like they'll get it.

[00:22:11] Paul Roetzer: And that's when you're just like, oh my gosh, this stuff is crazy. Like, it really is nuts. But yeah, I mean, Circle back. The red teaming is, is an essential part of how these things are built. And you know, our thought with this podcast is just to illuminate some of these key aspects of it. Because it becomes really important as your business start to think about the infusion of these language models to know what they're actually capable of.

[00:22:35] Paul Roetzer: Like you may be. Enabling these things for your employees, for your customers, partnering with organizations that are building these models. And so it's really important that you understand what these things actually are and how, how they're built and how they're made as safe as possible. So, yeah, I mean, it's a, it's a crazy topic, but it's an important topic I think for society that people really understand what we're playing with here.

[00:23:02] Paul Roetzer: So

[00:23:02] Mike Kaput: next up, we saw kind of a nightmare story happen this week to author. Well, this is

[00:23:07] Paul Roetzer: a really uplifting

[00:23:08] Mike Kaput: episode, isn't, I know, right? Yeah, it's, this is a really good start to Monday. Author Jane Friedman woke up to a bit of a nightmare this week, so she has written multiple books and has been. And a researcher academic professor involved in the publishing industry for a very long time.

[00:23:27] Mike Kaput: She was named actually publishing Commentator of the year, last year. And the Nightmare was this, a reader emailed her about her new book that just hit Amazon. Now. The Nightmare was not due to a reader giving her a terrible review or saying her book sucked. It was a nightmare because Friedman hadn't written a new book at all.

[00:23:47] Mike Kaput: She quickly discovered that half a dozen books had been published under her name that she didn't write, and they were all AI generated. Now, thankfully, these fake titles have been removed from Amazon since this story broke, but Freeman documented in multiple areas and interviews and tweets that. Amazon wasn't exactly very helpful.

[00:24:09] Mike Kaput: They actually refused her request initially to remove these fake titles from the website. She couldn't provide any trademark registration number associated with her name, so she had to go back and forth with them quite a bit. Then it sounds like they actually just removed it because of the negative press happening around this story, given that Friedman has a pretty big kind of footprint in the industry.

[00:24:31] Mike Kaput: One quote from some of the reporting from the Daily Beast jumped out to me where Justin Hughes, who is intellectual property law professor at Loyola said, what we are seeing is four authors, the PS equivalent of deep fakes. Now Paul, you're an author several times over and we've covered AI issues in publishing.

[00:24:50] Mike Kaput: Before on this podcast, how big an issue is this about to be for authors? Do

[00:24:55] Paul Roetzer: you think? This is a multi-layer issue? So, I mean, there is, first thing that jumps to me is the Amazon issue, that they enable this kind of stuff like that. That's not an AI thing, that is a business model thing where it just seems kind of crazy to me that.

[00:25:13] Paul Roetzer: They're enabling this. And I, I can sympathize with that helpless feeling of trying to reach out to customer support, saying like, there's books under my name that don't exist and I'm not like replying or doing anything about it. So there's, there's that side, which just a business issue. The side of being able to create these is an absolutely a concern.

[00:25:33] Paul Roetzer: We've talked about this before with, What was the GPT author or whatever Yep. They put out where you're going to be able to, you know, just take books you like and say, write me more like this or you know, create it and, it's going to be doable. It's going to be doable in music, it's going to be doable in art, it's going to be doable in text.

[00:25:57] Paul Roetzer: Anything you want to create, you're going to be able to create and. Legal or not. Like, what we know is people find ways to do this stuff, so I don't know that you can stop this from happening, like people creating versions of someone's original works or variations or new, new additions. It it becomes really much more of.

[00:26:23] Paul Roetzer: The laws and regulations to protect it from spreading. I, I don't know that you can regulate out this like this. Again, it goes back to the bad actor thing. Like people are going to do this stuff. The models will be out there, they'll be open source, they'll be available on legitimate sites and illegitimate sites.

[00:26:39] Paul Roetzer: And there, there's really no turning back to this. We've opened Pandora's Box when it comes to the, be able to create this stuff. And, yeah, from, you know, deep fakes videos to, you know, emulating people's voices, to writing in people's tones and styles. Like this is, this is what we've said all along around laws and regulations and these existential threat concerns to humanity like, I, I get the long-term fears.

[00:27:06] Paul Roetzer: Listen to the Dario thing. You understand those long-term fears even more. But this is the reality. Like the right now is political campaigns are going to be affected by this stuff. You have people's intellectual property being infringed upon. You have people's ability to make a living, and like I. Do what they do as artists, as creators.

[00:27:23] Paul Roetzer: it's all with today's tech. Like this isn't even, we need GPT-5. Yeah. To do this. Like this is right here, right now. We could stop the acceleration of AI innovation and technological advancements right now. We would still have to deal with these problems at scale. And that's the stuff that to me, is just so much more important to be focusing on and the stuff that's going to affect the average marketer, business leader, author, creator.

[00:27:51] Paul Roetzer: Like this is the stuff you're going to have to really be aware of, in the very near future. Hopefully it doesn't affect you, but you need to know it. It's out there. So

[00:28:01] Mike Kaput: it sounds like right now an individual author can't really do much to protect themselves from this or better enforce their rights. I think Jane Friedman actually even tweeted like, I have no idea what I would've even done if I didn't have a big platform to complain about this.

[00:28:19] Mike Kaput: Essentially, she has hundreds of thousands of Twitter followers well known in the industry. She's like, I am worried for people that have no voice like that. Is that. Correct. I mean, as of today

[00:28:30] Paul Roetzer: at least. Yeah. I mean, the only thing that comes to mind for me is to monitor mentions of your name online.

[00:28:36] Paul Roetzer: Yeah. So set up alerts, so you're aware of stuff. I mean, we've seen plenty of stuff where people steal our content, our courses, things like that. And I usually just take it forwarded to our IP attorney. And then you get a cease and desist letter sent. Now everybody doesn't have that ability, but I mean, it's like a whack-a-mole game though.

[00:28:54] Paul Roetzer: Like the bigger you get, the higher profile, the bigger your audience. People are going to take your stuff all the time because they're lazy and they just want to make quick dollars. That's, that's human nature. Like that's not AI didn't make that, that right. AI just makes it easier to do and faster, but, I mean, people have always dealt with this stuff.

[00:29:14] Paul Roetzer: So yeah. Again, like the one thing I can think of is have alerts set up and. Get an IP attorney.

[00:29:22] Mike Kaput: So it sounds like Amazon needs to step up in this particular scenario as well. The platforms need to start regulating better some of these issues,

[00:29:30] Paul Roetzer: I would think. Yeah. And that might be the bigger play is, you know, there needs to be pressure on the businesses like Amazon that are enabling this.

[00:29:38] Paul Roetzer: Yeah. So it's not able to spread as quickly, or they don't have the DI distribution channels for the content.

[00:29:44] Mike Kaput: All right, let's jump into a bunch of rapid fire topics. So first up, OpenAI just launched a web crawler called GPT Bot that crawls public webpages in order to train OpenAI's models. OpenAI says that GPT bots crawling is filtered to quote, remove sources that require paywall access.

[00:30:04] Mike Kaput: Are known to gather personally identifiable information or have texts that violates our policies somewhat. Comfortingly opening. I said that website operators, if they want, can also disallow the crawler by blocking its IP address or using their sites. Robots dot txt file now. Paul, as we look at this, should companies or content creators be thinking about restricting access to OpenAI's web crawlers?

[00:30:33] Mike Kaput: Should they not? What should I be

[00:30:34] Paul Roetzer: thinking about here? It's a really tough one. I mean, definitely should be thinking about it, having conversations around it, but it's a really hard decision to make. Like you've put all that content out there, you, you're, the public benefit is there. The benefit to you is organic traffic, search traffic.

[00:30:53] Paul Roetzer: The question is, what does search look like in the future? How are these chatbots going to infuse citations and links? If you turn off access and your content just isn't surfaced anymore, your links aren't going to be surfaced in whatever the future interface looks like. So I, I just don't know that we know enough information.

[00:31:11] Paul Roetzer: I get why brands would want to do it, and people are kind of frustrated. Yeah. But at the same time, I, I don't know enough to advise anyone to. Say don't let open eye crawl your site. Like we just, I don't have a clue what it looks like 12 months from now. Right. So I would say my best advice at the moment is to, to research it, to pay attention to it, to get the people in your organization who should be involved in this conversation, involved in it, and to stay educated on what's happening so that when the time comes to make a decision, you can make the most educated decision possible.

[00:31:49] Mike Kaput: So somewhat related. The New York Times actually made two big AI moves in the past week. One of them related to web crawling, but first, it actually as a publication dropped out of a proposed coalition of media companies that are attempting to essentially jointly negotiate with AI firms about how content is being used by their models.

[00:32:11] Mike Kaput: Now, we've talked in the past, About these efforts. Basically, heavyweights and media and publishing are trying to form a united front to seek legal damages, and increased regulation around how AI companies are using all their content. So the New York Times first is no longer part of this initiative.

[00:32:32] Mike Kaput: At the same time, the times also updated its terms of service to prohibit all of its content, text, images, et cetera, from being used to train AI models. I. Now these terms of service also state that web crawlers, like the one we just talked about though, they don't mention OpenAI by name, cannot be used on their site without written permission from the time.

[00:32:54] Mike Kaput: So Paul, it seems like they're really, closing down access to any New York Times content here, which seems like a pretty big move. Like, how significant is this? Can they

[00:33:05] Paul Roetzer: enforce this? I'm not sure what's going on. My assumption here is they, well I know they have a licensing deal with Google. So New York Times signed a hundred million dollar deal with Google back in February.

[00:33:19] Paul Roetzer: My guess is there's a land grab for, exclusive rights to train on data sets and Google's just going to outbid OpenAI for everything and try and build a smarter model. So my guess is they're, and I, I, I don't know this, but this is a logical business strategy. As we've talked about before, the future of these models is likely going to be licensed access to content.

[00:33:47] Paul Roetzer: So if they trained previously on stuff they shouldn't have trained on, the simplest solution moving forward is train on stuff you're allowed to train on, which means you're going to have to license a bunch, a bunch of stuff. This is where having very, very deep pockets can come in handy. So if Google chooses to bid up what it costs to license this content and force OpenAI to have to start spending a ton of money,

[00:34:12] Paul Roetzer: a k a Microsoft spending a ton of money through their investments in OpenAI. Now your competition isn't just for compute power and better models. It's actually for the training data that you're allowed to use. So I wouldn't be surprised at all if we don't start seeing a bunch of exclusive training deals with certain vendors or, you know, maybe some don't make it exclusive.

[00:34:36] Paul Roetzer: And this goes back to like the stuff people don't talk about enough. Like Google owns YouTube. and the future of these models isn't just text data. They have to train 'em on images, which Google obviously has a ton of images, and they have to train 'em on. So multimodal they have to train 'em on music and like, audio and video.

[00:34:59] Paul Roetzer: And so when you start thinking about that, that's where Google maybe eventually surpasses the capabilities of these other models is they have access to more multimodal data than anybody. So I, I, I would guess the next versions of Bard, in theory GPT-5, they're going to be trained on video content as well.

[00:35:21] Paul Roetzer: And so I, I, I would imagine there is a strategy behind the scenes right now to accumulate as much license licensing data as possible to train the next version of these models. So I, I would assume maybe, maybe that's what this is all about. They're getting out of the joint initiative with the other media companies, the coalition.

[00:35:43] Paul Roetzer: They're making it harder for OpenAI to get access to their data through their initiative with Google. I don't know. I mean that's, it almost seems too obvious. Like maybe I'm, maybe I'm overthinking this a little bit, but, it seems like all these signs are all kind of pointing to the same thing of licensed content for the model training.

[00:36:03] Paul Roetzer: So it turns

[00:36:04] Mike Kaput: out that TikTok may be making it easier to tell if content created on the platform was generated with ai. According to a report from the Verge, new Toggle has appeared when uploading videos to TikTok, at least for some users. That allows the creator to tag the video as containing AI generated content.

[00:36:25] Mike Kaput: Do we expect here more social media platforms to kinda roll out features that allow users to kind of self tag AI content? Is this kind of what we're likely to see moving forward in order to manage this stuff as it explodes?

[00:36:40] Paul Roetzer: I don't know. I, I mean, yeah, I, I assume they're all going to try and do it. I think Instagram's working on something.

[00:36:45] Paul Roetzer: I, I just, the more I think about like TikTok and Instagram and stuff like that, outside of your known friends and family, I don't know how, you know, anything's real a year from now. Like, influencers are going to be able to create digital versions of themselves that look real. So, pictures in places they've never been, videos of them doing things and saying things they've never done, like it, the influencer space is just going to be stupid.

[00:37:16] Paul Roetzer: Like, I, I really don't know how you're going to know anything, whether it's video, audio, or, or pictures. Is real. And I mean, obviously we've had the ability to edit and Photoshop things, but I'm talking about like, make a picture of me wearing this outfit, in this country or at this scene and it's just going to, Done like prompt to whatever social thing you want to create.

[00:37:44] Paul Roetzer: And we see this with like just last week I think, I don't think we talked about this, but like Roblox announced, the CEO announced that you're going to be able to just like create whatever outfits you want with prompts. So rather than picking from a library, and I assume like. Nintendo and PlayStation, like all of these systems where you can create outfits and characters rather than having to go through and like, you know, we all remember like, was it n Nintendo not switch, but I don't know, where we just go and like, create your character and your eyes and your nose and your mouth and you just be able to like do all that with language.

[00:38:15] Paul Roetzer: Just prompt it all. So Roblox is doing that. I think some other people had already done it. I think the same thing's going to happen with all these social platforms, and you're just going to be able to have apps where you can just create yourself doing and saying whatever you want. Just no one's going to be able to know if it's real or not.

[00:38:30] Paul Roetzer: Are people going to self tag that as AI generated? Doubt it. Like, yeah, it's going to be wild. Like I, I hate the thought of social media in the future,

[00:38:42] Mike Kaput: so. We've talked about a company called Runway many times on the podcast. They're a leading AI firm, and they offer a range of AI powered creative tools. And a lot of these you can use right off the shelf to create really stunning images and videos.

[00:38:57] Mike Kaput: Now, one of the company's most popular models and tools is called Gen two, and this is a next gen text of video generation tool. So, Runway just announced that you can now use gen two to create AI generated videos up to 18 seconds long. That is up from four seconds previously, and this is now available right now using the browser-based version of runway and it's coming soon to their mobile app.

[00:39:26] Mike Kaput: So Paul, if someone isn't following text to video generation closely, you might kind of think 18 seconds doesn't sound that impressive. Why is this update significant?

[00:39:37] Paul Roetzer: I. It's a four x improvement in two months. Like this is, we talk about this a lot. I, I, the intro to AI class that we teach, we just did one last week.

[00:39:49] Paul Roetzer: We try and demonstrate what an exponential growth curve feels like and looks like. And so I'll often show the mid journey slide of there's a 16 month progression from V one to V 5.1 and the same prompt about a boy where the, you know, in February, 2022, it's like this totally abstract image of the boy by, you know, May, 2023.

[00:40:10] Paul Roetzer: It's this photo realistic indiscernible from a photo. Image and that's where we're going with everything. So that's just image generation. And so what I'll say in the class is like the same thing is going to happen with video. So gen two comes out when it first debuts and whenever it was like May, I think maybe is when it became accessible.

[00:40:31] Paul Roetzer: They announced it in March. It was available I think in May. And it was four seconds initially, like, but really impressive. And you could see where it was going. And the thing I always told people is like, now double that the output quality and the time every, like six months. Well, they forex it in two months.

[00:40:50] Paul Roetzer: So like this, this is what it feels like. So, you know, you'll probably be, at this point next year, I would imagine you'll be able to generate multiple minutes of video at almost like. Pixar level, quality. Like just, just that's where this is going. and so that's why I said earlier, like everyone is so caught up in the things it does today.

[00:41:13] Paul Roetzer: You go and you try chat, bt you're impressed or not impressed, whatever, and then you just assume that's what it's, but that is not the case. And so that's why I always tell people like, come back every like three months and try it again. Like you have to keep experimenting with these. And if you're in a business or you know, you're a marketer or whatever your role is, You're trying to solve for this in your organization, you have to have a system to regularly experiment with the technology.

[00:41:39] Paul Roetzer: You cannot just go in, do one text to, you know, prompt to text and, you know, text to image and one text to video and be like, yeah, it's not there. It's, you can't help our company. You have to regularly te test it and you have to test multiple versions of it, different tools that do the same thing. Because stuff like this happens and.

[00:42:00] Paul Roetzer: If you're not paying attention, you just, you have no idea what, what it's going to be capable of, and your, your competitors maybe will, and then it's going to be hard to keep up.

[00:42:10] Mike Kaput: Actually in our next story, a pretty interesting example of that because an AI video generation tool just went pretty viral over the last week, racking up millions of views on x, formerly Twitter after showing a death.

[00:42:23] Mike Kaput: How long

[00:42:23] Paul Roetzer: do we have to keep saying formerly Twitter? I keep saying Twitter, like I, I just like can't do the X thing. I'm open to

[00:42:29] Mike Kaput: just saying Twitter, you know, it's

[00:42:31] Paul Roetzer: going to be so annoying. Have to say formally Twitter for the next, like six months

[00:42:35] Mike Kaput: of our lives. I'm not sure if everyone's, if everyone's used to hearing just someone say, I don't acts out of anywhere.

[00:42:40] Mike Kaput: I don't think

[00:42:40] Paul Roetzer: the average person still even knows that

[00:42:42] Mike Kaput: they acts all right on Twitter. We actually saw go viral. A demo of a pretty breathtaking digital avatar. I'd highly recommend you check out the tweet in the show notes. But in this video, a guy named Joshua Zu, who is the founder of Hagen AI video generator shows off this stunning photo realistic digital clone of himself narrating a video in a indistinguishably human voice.

[00:43:09] Mike Kaput: Now, this is one of the most lifelike digital avatars that we've seen to date. And if it, you know, can be reproduced across a bunch of different use cases, it probably has some pretty big implications for marketers and business leaders. We might be very close to getting extremely lifelike digital avatars we can use in videos.

[00:43:31] Mike Kaput: I mean, the company seems to be working towards that. Hey, Jen's website even just has the tagline. No camera, no crew, no problem. Scale your video production with customizable AI avatars Now. Paul the, this isn't the first company doing virtual AI avatars, but it is pretty breathtaking how realistic it is.

[00:43:51] Mike Kaput: Do you think we're on the cusp of lifelike AI avatars invading video production?

[00:43:58] Paul Roetzer: These things will be everywhere. Like not, not just this company, like everyone's working on this, NVIDIA's working on this kind of stuff. Everywhere, like in marketing material, in businesses, all over social media. This is what I was saying earlier, like you, you're not going to know what's real podcasts like.

[00:44:17] Paul Roetzer: There is, I, I, I truly believe that if we wanted to, one to two years from now, this show could just be our di digital avatars. You and I could record the audio to it if we wanted to. And like you could be watching this right now and not. I have a clue whether it's me or my digital avatar unless I choose to tell you it.

[00:44:37] Paul Roetzer: I, I just really think the technology is moving fast enough that if you choose to, you'll be able to infuse these things into everything you do. And. I don't, I don't find that exciting. Like there, there's some of these technologies, I think I'm like, I'm like a little scary, but really excited. I'm not excited about this thing like it's just going to make it so hard to know what's real.

[00:45:04] Paul Roetzer: And that goes back to this message I keep having of like, Uniquely human content will win. Like the stuff that you know is actually real. The in-person events, in theory, podcasts like this, editorials, interviews where there's unique points of view and human experience required to create it.

[00:45:26] Paul Roetzer: I just feel like as brands, as marketers, We're going to need to use the tools we have access to, like these things, but we we're going to need to really steer our strategies into human content because people are going to crave stuff that they know is actually coming from someone with a unique perspective and human experience and, unique points of view, because this stuff's going to be stupid, easy to.

[00:45:53] Paul Roetzer: To create and very cheap, like it might be not great right now if you go check this out and use it might be disappointed. Again, I try it in six months or wait till Nvidia releases it or it's going to get really, really good. And this is part of the reason why the writers are on strike and there's, you're going to have this strikes in Hollywood is like, everyone knows this stuff is coming.

[00:46:12] Paul Roetzer: The production companies, the actors like it. It's going to be crazy. So Amazon

[00:46:21] Mike Kaput: is reportedly testing an AI tool that writes product descriptions on their platform for you. So according to multiple outlets, this tool is going to allow you to automatically generate titles, descriptions, and bullet points for your product listings.

[00:46:37] Mike Kaput: It's apparently being tested with select sellers now, and an Amazon spokesperson basically said, look, the goal is to help sellers. Generate listings with the precise details that appeal to customers. So kind of hinting at the fact that Amazon's vast customer data could be used to not only inform these tools to generate something from scratch, but something that's actually more effective than what you might generate on your own.

[00:47:02] Mike Kaput: Do you expect to see more generative AI features like this baked right into all of these big platforms, whether in e-commerce or elsewhere in marketing and

[00:47:10] Paul Roetzer: business? Oh my God. Like this is such an obvious application. You and I did stuff like this years ago. So when we, so Mike and I worked at my agency PR 2020 that I, I sold in 2021.

[00:47:23] Paul Roetzer: The, one of the first things that we did when we were trying to build like automated services was we used a tool called Automated Insights. And that tool would you, you would basically write the templates and give it the words, and then it would do it at scale. And one of the things that we looked at was product descriptions.

[00:47:39] Paul Roetzer: So that was not AI back then. This is like 2017, 2018. We were playing with this stuff. But you could give it a database of like 150 products and then have it write the descriptions for it. So it wasn't generative AI in the sense of what we know today. It was more like formulaic writing of things. But that idea to be able to do that at scale and now personalize it based on all the data they have on buyers.

[00:48:06] Paul Roetzer: Totally. Like I could have seen us sitting around at a hackathon in 2017 coming up with that. Concept as a fe, like a product feature. So it makes, absolute sense. I think we're seeing the same thing happen with ads on, on meta Google, you know, YouTube's going to do it. So anywhere where there's creativity needed, but personalization also needed, it makes.

[00:48:27] Paul Roetzer: A hundred percent sense that, that you would build generative AI tools to do that, whether they're baked into the platform or third party. But this gets into the what's defensible thing for SaaS companies. Like if you had the idea to build this as an outside third party software company, it's like, well, that was a nice run until Amazon decided to do it themselves.

[00:48:48] Paul Roetzer: Right.

[00:48:50] Mike Kaput: Interesting or even chipping away at some of the features and capabilities of more broader, say, generative AI platforms. Yep. Yeah. So we have an interesting story here about kind of the impacts of AI on potentially how employees can hire and think about reducing costs moving forward. So NewsCorp, the media conglomerate owned by Rupert Murdoch, I.

[00:49:14] Mike Kaput: Just reported a 75% year over year drop in profits. But the company seems to think AI is now going to come to the rescue. So as they were talking about this profit drop, which was largely, it sounds like, due to, you know, declining advertising profits, Chief Executive Robert Thompson said, momentum is surely gathering pace in the age of generative ai, which we believe presents a remarkable opportunity to create a new stream of revenues while allowing us to reduce costs across the business.

[00:49:47] Mike Kaput: It was also recently disclosed before this particular development that News Corp is using generative AI to create 3000 articles per week. Do you expect to see more companies starting to look to AI as a cost saving tool when they hit Rocky Financial Waters like this?

[00:50:09] Paul Roetzer: Yes. And even when they don't hit Rocky Financial Waters, this goes back to, you know, the episode we did around the potential impact on knowledge work and jobs people think and create for a living.

[00:50:21] Paul Roetzer: And my. Current assumption is millions of jobs are going to be negatively affected. So largely I think AI is an assistant that is going to help people do their jobs better, enjoy their jobs more because they won't have to do all the repetitive data-driven tasks that maybe they don't like, appreciate, or enjoy doing.

[00:50:41] Paul Roetzer: But at the same time, there's a reality that businesses are charged with generating profit, and especially ones owned by private equity firms or that publicly traded that are, beholden to shareholders like you. You have to find ways to reduce costs. And so the biggest concern I have in this area is not that AI can do the job of a journalist or a writer, or a video producer, or a showrunner or whatever it is.

[00:51:08] Paul Roetzer: It's that you may not need as many of those people to, to do the job and create the same level of output. So if you think about, you know, productivity and the creation of, let's just say that they generate 5,000 articles a month. Like they probably generate way more than that. But let's just use that as an example.

[00:51:27] Paul Roetzer: If moving forward, the same level of output is going to stay constant, you will still do 5,000 articles a month. You infuse AI into all the pro production process, from the content strategy to the headline writing to the drafting of it, to the editing of it, to, you know, the production of it, the publishing of it, the promotion of it.

[00:51:49] Paul Roetzer: Like all aspects of that article where humans are involved in every step of that process. If you infuse AI into each step, and let's just say on average each step gains 20% efficiency, do it 20% faster than you did before, you may just not need as many people to generate the 5,000 articles. Now you can generate another 5,000 articles, like you could just make more articles.

[00:52:09] Paul Roetzer: But if that's not the business model, if the business model doesn't need more articles, then you just need me, fewer people to do the 5,000 articles. And you can do this in accounting, in law, in agency services, like whatever you want to pick, whatever the career path is, whatever the business model is. If you can save time with ai, do the job for in less time, your options are.

[00:52:33] Paul Roetzer: You just need fewer people to do the same level of output, or you increase the output and you keep the same amount of people. You have to be in a business that has the ability to create more output. What if there isn't demand for the output? So just because you can make more widgets doesn't mean more people are going to buy more widgets.

[00:52:51] Paul Roetzer: So if you make widgets and it takes you less time to make the same amount of widgets and you can't sell more of 'em, then you're going to meet fewer people. And I think that's the challenge that every industry is going to face, every business decision maker is going to face, is can we increase production? If we're going to save time with ai, can we increase production and make more of the same thing or more of something else?

[00:53:13] Paul Roetzer: Maybe we launch a new product or go into a new market. The best companies. We'll find ways to innovate and make more, or make more of something else, like launch a whole new thing and redistribute those, the time and money and people to the new things. But a lot of businesses will take the shortcut and just get rid of a bunch of people.

[00:53:34] Paul Roetzer: And again, I, I hate that that is the reality, but I, I really think that that is the greater. Probability outcome in the next two years is that people take advantage of cost and time savings and reduce the number of people they need. I think over time we maybe come out in a better place. More jobs are created, more opportunities emerge, but I think it's a very, very real outcome that in the near term there, there's going to be some industries that lose jobs.

[00:54:04] Mike Kaput: Alright. Last but not least, we actually have an update on a topic we talked about on last week's pod. So last week we talked about some controversy around changes to Zoom's terms of service that had people up in arms. Now the changes at the time seemed to indicate that Zoom was going to start using user data in pretty sweeping ways to train the company's AI models.

[00:54:28] Mike Kaput: Now, they were saying that they would have access and essentially, Ownership or free use of basically a lot of, or all of the conversation and Zoom data you are generating anytime you're using the product. This week, zoom has formally updated its terms of service in response to the backlash around this.

[00:54:48] Mike Kaput: So according to Gizmodo, the company has updated section 10 of its terms of service to no longer retain the legal right to use customer content. All that stuff generated by Zoom. To train any AI models. Now Paul, what's going on here? Like was Zoom trying to get away with using customer data and hope no one would notice.

[00:55:10] Mike Kaput: Is this an Epic communications fail? A little bit of both.

[00:55:15] Paul Roetzer: I have, I don't know, but you and I both spent a fair amount of our careers doing PR work. It sure seems like a massive failure. I mean, on the surface, the terms they put in were definitely generous in the favor of Zoom and appeared to be not ideal for its customers.

[00:55:37] Paul Roetzer: The fact that they changed it that quickly maybe implies that either they tried to slip something through or they just really screwed up in terms of how they did the language. Being in a big publicly traded company with a bunch of lawyers, it's kind of hard to get a significant change like that through without that being signed off by a lot of people.

[00:55:58] Paul Roetzer: So, I don't know. Glad they made the change, like we said last week, the moral of the story here is make sure you check the terms with all the software companies you use, all the people who have access to your data, your client's data. Because there's a chance that the, some other people had made similar changes to like this and then saw the blow back to Zoom, and maybe they're digging back into their own terms of service and making some updates again, I don't know, but it, it all ties to this whole idea that these companies need to train models and they're going to use whatever data they can get their hands on to train them.

[00:56:32] Paul Roetzer: You may or may not have given permission to them to do it. It it's going to be a little crazy when it comes to this stuff, but stay informed and rely on your attorneys is the basic premise here.

[00:56:45] Mike Kaput: Awesome, Paul. Well, as always, thank you so much for taking the time and sharing your insights on the latest in AI this week.

[00:56:52] Mike Kaput: We really appreciate it. I feel

[00:56:53] Paul Roetzer: like we, we started off kind of down, but I feel like there's a bunch of like really interesting. I agree. Some news in

[00:57:00] Mike Kaput: the middle. There's some light light at the end of the tunnel

[00:57:02] Paul Roetzer: here. Yeah. Hopefully we brought everybody back up from the initial topics. All right. We'll be back next week.

[00:57:08] Paul Roetzer: Thanks everyone for being with us. As always, we'll talk to you again soon.

[00:57:11] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:57:33] Paul Roetzer: Until next time, stay curious and explore AI.

Related Posts

[The Marketing AI Show Episode 39]: GPT-4 Is Here, Google and Microsoft Embed AI Into Core Products, and U.S. Copyright Office Says You Don’t Own AI-Generated Content

Cathy McPhillips | March 21, 2023

GPT-4, AI embeds in major tech companies, and understanding copyright laws with your AI-generated content.

[The AI Show Episode 90]: Hume AI’s Emotionally Intelligent AI, the Rise of AI Journalists, and Claude 3 Opus Now Beats GPT-4

Claire Prudhomme | April 2, 2024

This week on The Artificial Intelligence Show, Mike and Paul discuss Hume AI's new demo, AI's impact on journalism, Claude 3’s skill surpasses GPT-4 on the Chatbot Leaderboard, and more.

[The Marketing AI Show Episode 45]: ChatGPT Business, AI Disrupts Politics, and AI-Powered Growth and Layoffs in Big Tech

Cathy McPhillips | May 2, 2023

Episode 45 of the Marketing AI Show covers ChatGPT Business, AI disruption in politics, and the reality of the growth of AI as big tech continues layoffs.