<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">

53 Min Read

[The Marketing AI Show Episode 78]: The New York Times Sues OpenAI, Inside the “e/acc” Movement, and the Terrifying New Power of Deepfakes



Kicking off the new year after a brief hiatus for the holidays, our first 2024 episode looks at the latest advancements in AI. Join hosts Mike Kaput and Paul Roetzer as they discuss the significant legal battle between The New York Times and OpenAI/Microsoft, explore the ideas driving the “e/acc” movement, and examine the implications of deepfake technology.

Listen or watch below, then scroll down for the show notes and the transcript.

This episode is brought to you by our sponsors:

Many marketers use ChatGPT to create marketing content, but that's just the beginning. When we sat down with the BrandOps team, we were impressed by their complete views of brand marketing performance across channels. Now you can bring BrandOps data into ChatGPT to answer your toughest marketing questions. Use BrandOps data to drive unique AI content based on what works in your industry. Visit brandops.io/marketingaishow to learn more and see BrandOps in action.

Today’s episode is also brought to you by Marketing AI Institute’s AI for Writers Summit, happening virtually on Wednesday, March 6 from 12pm - 4pm Eastern Time.

Following the tremendous success of the inaugural AI for Writers Summit in March 2023, which drew in 4,000 writers, editors, and content marketers, we are excited to present the second edition of the event, featuring expanded topics and even more valuable insights.

During this year’s Summit, you’ll:

  • Discover the current state of AI writing technologies.
  • Uncover how generative AI can make writers and content teams more efficient and creative.
  • Learn about dozens of AI writing use cases and tools.
  • Consider emerging career paths that blend human + machine capabilities.
  • Explore the potential negative effects of AI on writers.
  • Plan for how you and your company will evolve in 2024 and beyond.  

The best part? Thanks to our sponsors, there are free ticket options available!

To register, go to AIWritersSummit.com.

Listen Now

Watch the Video

Timestamps

00:05:19 — New York Times sues OpenAI and Microsoft for copyright infringement

00:23:38 — Inside the e/acc movement

00:42:20 — Ethan Mollick’s perspective on the growing power of deepfakes

00:51:51 — AI-powered search engine Perplexity AI raises $73.6M

01:03:35 — Microsoft’s new Copilot key is the first change to Windows keyboards in 30 years

01:05:08 — OpenAI’s app store for GPTs will launch next week

01:07:59 — Issues with Anthropic’s Claude

Summary

The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work

The New York Times has sued OpenAI and Microsoft for copyright infringement, kicking off what could be a landmark legal battle over how AI systems like ChatGPT are trained.

In the lawsuit, the Times alleges that both OpenAI and Microsoft’s AI tools illegally trained on millions of copyrighted articles and materials from The New York Times specifically.

According to the text of the lawsuit: “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.”

What’s more, the Times claims that it was particularly targeted by OpenAI and Microsoft.

The lawsuit says that, while the companies copied from many sources, they gave Times content particular emphasis when building their LLMs. The lawsuit does not come with a price tag for damages. But it says the companies should be held responsible for “billions” in damages.

It also calls for OpenAI and Microsoft to destroy any models and training data that use copyrighted material from the Times.

Inside the e/acc movement

If you use X, formerly Twitter, regularly and follow technology leaders there, you may have noticed a short series of letters in their bios: the tag “e/acc” regularly appears in users’ profile titles, following their names.

This stands for “effective accelerationism,” and it’s a philosophy—perhaps even movement—that more and more influential people in AI and technology subscribe to.

The movement, broadly, believes that the best thing for humanity is advancing artificial intelligence as quickly as possible.

This movement serves as a counterweight to the voices in AI that believe more regulation, laws, and guardrails around AI are needed to deploy the technology safely.

Major players in Silicon Valley and AI, including famed venture capitalist Marc Andreessen, identify as e/acc, so it’s important to understand a bit about this movement if you want to grasp the opposing battle lines and perspectives that will define AI in 2024 and beyond.

The growing power of deepfakes

Happy New Year everyone…you can’t trust a single thing you see or hear anymore! That’s the 2024 message from AI expert and Wharton professor Ethan Mollick.

Mollick went viral on X with a post showing a deepfake video he created of himself that is almost indistinguishable from a real video of him.

In the video, AI Ethan says things Mollick has never said in English, then says those things in Italian and Hindi. It’s a jaw-dropping example of just how fast deepfake technology has progressed, and, he says, it should make everyone hyper-skeptical of everything they see and hear online.

The most surprising part of the experiment is how easy it was to do. Mollick used just 1 minute of training data.

By the way, Mollick deleted his X thread about this after it went viral, saying others were mistakenly making him the face of the technology.

Links Referenced in the Show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Mike Kaput: We're coming in hot in 2024. It needs to be done. I want to start the new year by telling everyone you can't trust a single thing you see or hear anymore.

[00:00:11] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable.

[00:00:21] Paul Roetzer: You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:31] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:40] Paul Roetzer: Welcome to episode 78 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. We are back for the first edition of 2024. I feel like I was going to say it's been a month since we did this together, but I think it literally has been a month since we did this together. Yeah.

[00:00:59] Paul Roetzer: [00:01:00] Yeah. It was hard. Like for me, the first, so I think the last one we did was probably around the first week of December. And then Cathy and I did that special, like 15 questions everyone's asking about AI episode in the middle of December. That week between like the one we did and the one Cathy and I did.

[00:01:16] Paul Roetzer: There was so much happening where I was like, Oh man, we should really, maybe we should do an episode. I was like, no, we're taking, we're taking a break. We're not doing episodes. So, we, we did, you know, Mike and I think both kind of got away a little bit. Mike kept the newsletter going. So if you're a subscriber to the newsletter, I know we were still sending every Tuesday.

[00:01:35] Paul Roetzer: We did send out the newsletter with some links. There was definitely some stuff happening, in December, but. I think luckily for all of us, not a ton of like breaking AI news in December, especially the last two weeks of December. so it worked out well. I was able to kind of step back and, you know, clear my mind a little bit.

[00:01:55] Paul Roetzer: I listened to a lot of podcasts. I read a lot of stuff about AI, but I [00:02:00] tried to not share much and do much. And I actually. The first time I can remember in my entrepreneurial life, which goes back to 2005, I literally didn't work for like seven days stretch. It was phenomenal. I'm just hanging out with my family.

[00:02:13] Paul Roetzer: and, so yeah, so I hope everybody had a great. Holiday season and to the year and that the New Year's off to a great start for you. we have a lot of interesting topics today. It's not like breaking news topics, but like there's four. We actually had almost have like a fourth main topic today, basically, because it was just like some really interesting things to get into.

[00:02:35] Paul Roetzer: So we're going to get into all of that. First, this episode is brought to us by BrandOps. Many marketers use ChatGPT to create marketing content, but that's just the beginning. When we sat down with the BrandOps team, we were impressed with their complete views of brand marketing performance across channels.

[00:02:51] Paul Roetzer: Now you can bring BrandOps data into ChatGPT to answer your toughest marketing questions. Use BrandOps data to drive unique AI content based on what [00:03:00] works in your industry. Visit brandops.io/MarketingAIshow to learn more and see BrandOps in action. So thanks to BrandOps for sponsoring the podcast.

[00:03:13] Paul Roetzer: And then we also have the AI for Writers

[00:03:15] Paul Roetzer: Summit coming up. So this was, we did the inaugural AI for Writers Summit at Marketing Institute in was it March of 2023, Mike? It was a year ago, March. And so we are back with the second edition of the AI for Writers Summit. This is going to be Wednesday, March 6th.

[00:03:30] Paul Roetzer: It's a virtual event. From noon to four Eastern time. there was, we, our goal in 2023 for the, for the inaugural was a thousand registrations. It's free by the way. there's a, there's a paid option to get the on-demand, but the event itself is actually free to attend. We had over 4,000 writers, editors and content marketers there last year.

[00:03:51] Paul Roetzer: So, really amazing turnout. And I think it was just. You know, when we created that event, we were in that moment where ChatGPT was [00:04:00] really emerging. It was right before GPT 4 came out. I think GPT 4 came out like two weeks after the Writers Summit last year, if I remember correctly. Maybe it was right before it.

[00:04:09] Paul Roetzer: but basically a lot of writers were starting to wonder what is going on. What is the impact of these, of these tools? On our jobs, on our careers, and that was what we kind of set out to try and not necessarily answer, but just have open discussions about like help writers and editors and content marketers and brand leaders try and understand the state of.

[00:04:32] Paul Roetzer: what is going on with AI, the impact it's going to have on copywriting and creativity moving forward. And that's what we're going to try and tackle again this year is really look at the state of it, talk about some of the key technologies in the generative AI space, get into how it's impacting, you know, not only brand teams, but freelancers and journalists, and really try and advance the conversation in the community around this stuff.

[00:04:55] Paul Roetzer: So check out AIwriterssummit.com. Again, it's free. [00:05:00] There's a free ticket option. and then I think there's like a $99, maybe, on-demand option if you want to get it on demand. So, AIWritersSummit.com March 6th, Wednesday, March 6th, 12 to 4 Eastern Time. Mike and I will both be presenting at that summit.

[00:05:16] Paul Roetzer: All right, Mike, let's get into the main topics.

00:05:19 New York Times sues OpenAI and Microsoft for copyright infringement

[00:05:19] Mike Kaput: All right, Paul. So first up, the New York Times has sued OpenAI and Microsoft for copyright infringement. And this is kicking off now what could be a landmark legal battle over how AI systems like ChatGPT are being trained. In this lawsuit, the Times alleges that both OpenAI and Microsoft's AI tools illegally trained on millions of copyrighted articles and materials from the New York Times website specifically.

[00:05:50] Mike Kaput: According to the text of the lawsuit, quote, Through Microsoft's Bing Chat, recently rebranded as Copilot, and OpenAI's [00:06:00] ChatGPT, defendants, i.e. OpenAI and Microsoft, seek to free ride on the Times massive investment in its journalism by using it to build substitutive products without permission or payment.

[00:06:13] Mike Kaput: What's more, the Times claims that it was particularly targeted by OpenAI and Microsoft. So this lawsuit says that while the companies copied from many sources, they gave Times content particular emphasis when they were building their LLMs. The lawsuit does not come yet with a price tag for these damages, but it does say that the company should be held responsible for quote billions in damages.

[00:06:41] Mike Kaput: It also calls for OpenAI and Microsoft to destroy any models and training data that use copyrighted material. from the New York Times. So Paul, there's a lot to unpack here. First up, in a recent tweet, Walter Isaacson, who is the famous author of the Steve Jobs and Elon Musk biographies, he [00:07:00] said that lawsuits like this one from the Times quote, will be the most important cases for journalism and publishing in our lifetime.

[00:07:09] Mike Kaput: He seems to think it's a big deal. How big a deal do you think this is?

[00:07:13] Paul Roetzer: I think this one is a really big deal. One, it's the New York Times. two, the case appears to be very well made. I mean, I didn't read the whole 300 plus, page filing, but the analysis of it indicates it's a, it's a really big deal.

[00:07:31] Paul Roetzer: I think it's helpful to take a, a slight step back and understand like why this is happening and what's at stake. So, you kind of alluded to it a little bit here, but again, just for, like, foundational understanding purposes, these models are trained on data. So when we go and use ChatGPT, or Grok, or Claude, or Pi, or whatever your chosen large language model and chatbot is, it is trained on data.

[00:07:59] Paul Roetzer: the [00:08:00] way it's able to output, whether it's an email, or a newsletter, or an ad, or an image, is that it has this data that it learns from. Now, the higher quality the data, the better the model. So, if you're going to train a model that is able to write, You want it to learn from the best content available.

[00:08:20] Paul Roetzer: So if you take GPT 4, which is still the most powerful and capable model in the world, you want the best examples of writing and the greatest depth of knowledge. So you need legitimate sources, not just a bunch of Reddit boards and Twitter threads. So, in past episodes, this is why we've discussed how Google, meta to some degree.

[00:08:42] Paul Roetzer: Amazon, they may have advantages given the fact that they have their own proprietary data and platforms. So, again, if you're Google, you have YouTube as an example. You have way more than that, but YouTube is one key example. XAI slash Grok has Twitter data. If [00:09:00] they can get rid of the noise and misinformation, that can be really valuable.

[00:09:03] Paul Roetzer: But if you're OpenAI, Anthropic, Cohere, Inflection, and others, You don't have your own data. You are a language model company. You're building models on other people's data. The argument here is they may not have had the permission or the right, well they definitely haven't had the permission, they may not have had the legal rights to train on other people's data.

[00:09:24] Paul Roetzer: So, there is uncertainty around whether or not training the model is fair use, but these AI companies certainly knew going in that it was a gray area that was likely going to be challenged legally, we, we know that to be a fact, that there's internal discussions around questions of are we even allowed to do this.

[00:09:44] Paul Roetzer: But finding an answer to that is going to take a really long time. This isn't like this case is going to be settled in three months and OpenAI is going to have to destroy GPT 4 because it was trained on New York Times data and you can't extract that specific data set. That's not what's happening, going to [00:10:00] happen here.

[00:10:00] Paul Roetzer: This is going to take a long time. The other thing I think we need to consider in all of this is The media companies whose content is used to train these models struggle, like they rely on traffic, in many cases from search engines, to support their ad revenue models. That traffic and those business models are at risk as consumer behavior evolves, and maybe they don't get as much traffic from these search engines.

[00:10:29] Paul Roetzer: So, there's a lot of things, because, you know, the consumer, you and I, we may start going directly to the chatbots or to AI agents to find the information we need. We might not go search anymore, and therefore we might not land on the New York Times, and they might not be able to sell the ads. So there's a lot of things at play here.

[00:10:45] Paul Roetzer: when we get though into like the strength of the legal case, as you called out, like, we're not the attorneys here. We're, we're not going to be the ones that are going to say this is a legit legal case and they've got great grounds. But what you and I do is we, we find the people and follow the people [00:11:00] who are authorities on these topics, who have way more information and knowledge on this.

[00:11:03] Paul Roetzer: So I know you and I both read the Cecilia Zanetti, Zanetti I think is how you say her name. She's an IP and AI lawyer and former general counsel at Replit, one of the AI companies that you and I both love. and so she had a great thread on Twitter that we'll, we'll link to. And I'll just call out a couple of the things she highlighted.

[00:11:24] Paul Roetzer: So she said, first, the complaint clearly lays out the claim of copyright infringement, highlighting the access and substantial similarity between New York Times articles and ChatGPT outputs. Key fact. This is again, quoting from her. New York Times is the single biggest proprietary data set in Common Crawl, which is used to train GPT.

[00:11:45] Paul Roetzer: Now, in her thread, she has a screenshot from the legal filing that shows the Common Crawl. So here's what it says, and again, this is coming right from the OpenAI New York Times filing. The most highly weighted [00:12:00] data set in GPT 3, Common Crawl, is a quote, a copy of the internet made available by a 501(c)(3) organization run by wealthy venture capital investors.

[00:12:13] Paul Roetzer: The domain www.newyorktimes.com is the most highly represented proprietary source and the third overall behind Wikipedia and a database of U.S. patent documents. Represented in the English language subset of a 2009 snapshot of Common Crawl accounting for 100 million tokens. What that's saying is, when these models are trained, most of them, I think, use this Common Crawl data set.

[00:12:41] Paul Roetzer: So it's not like OpenAI goes directly to New York Times and gets all of their data and scrapes it. I don't, I don't think at least that's how it occurs. They use the Common Crawl dataset, which is made up of New York Times data. But not only New York Times data. The number four on this list is the LA Times.

[00:12:59] Paul Roetzer: Number five [00:13:00] is the Guardian. Number seven is Forbes. Eight, Huffington Post. Eleven, Washington Post. Like you start going down this list. And you realize, we're just talking about the tip of the iceberg here, because if the New York Times has a case, then so does the Washington Post, and Forbes, and Huffington Post, like, all of them have the same exact.

[00:13:21] Paul Roetzer: potential issues. so that's a really big problem. Like if this opens up the floodgates for these lawsuits, New York Times isn't the only one that's in Common Crawl that's being used to do this. The second point she makes is, the visual evidence of copying in the complaint is stark. And then she shows an example where it literally gave a verbatim output of like four or five hundred words.

[00:13:45] Paul Roetzer: And so they kind of prompted it to where it gave this plagiarized output. This is straight up plagiarism, like they're not even, this isn't even a debate. so it's there in, in the reports. It's going to be very hard to dispute that. her take was OpenAI can't really defend this [00:14:00] practice without some heavy changes to the instructions and a whole lot of litigating about how the tech works.

[00:14:05] Paul Roetzer: It will be smarter to settle than fight. her fourth point, failed negotiations suggest damages from New York Times, OpenAI is already licensed from other media outlets like Politico, and then she says the refusal to strike a deal may prove costly. Especially as OpenAI profits grow and more and more examples happen.

[00:14:24] Paul Roetzer: My spicy hypothesis, she goes on to say, OpenAI thought they could get out of it for seven or eight figures. New York Times is looking for more, and ongoing royalties. So the overall take here is licensing deals are going to be way easier than litigation. So, you know, I think that's going to be a key thing.

[00:14:43] Paul Roetzer: the other context I'll add real quick is Andrew Ng who, you know, you and I both follow, one of the leading AI people in the world right now and founder of Coursera and Google brain team and all those things. so he had a tweet that we'll again put in there and then a related blog post. [00:15:00] where he said, after reading the New York Times lawsuit against OpenAI and Microsoft, I find my sympathies more with OpenAI and Microsoft than with New York Times.

[00:15:07] Paul Roetzer: So this is kind of like a counterpoint, sort of interesting. He said specifically, number one, claims among other things that OpenAI and Microsoft use millions of copyrighted New York Times articles to train their models. he says, I understand why media companies don't like people training on their documents, but believe that just as humans are allowed to read documents on the open internet, Learn from them and synthesize brand new ideas.

[00:15:30] Paul Roetzer: AI should be allowed to do so too. I would like to see training on the public internet covered under fair use. Society will be better off this way though, whether it actually is, will ultimately be up to legislators and courts. And then I'll kind of finish my thoughts here, Mike, and I'd love to get your opinion here.

[00:15:49] Paul Roetzer: I had reposted that Andrew Ng article on Twitter and someone asked a question about. like training versus learning and whether or not it should be illegal to do this. And so my, [00:16:00] my response was, and I'll just kind of read this because it's simpler. The legal arguments between training and learning will be intriguing.

[00:16:06] Paul Roetzer: I could definitely see a path forward in which the courts allow the training learning because the AI companies succeed at convincing the judge or judges. It's not really different from humans. But the companies building the models, and potentially the end users, you and I, and everyone listening to this, are still liable for copyright infringement and plagiarism on the outputs, also like humans.

[00:16:30] Paul Roetzer: But, as I said earlier, it will be years before this is all settled, and my best guess is the AI companies end up paying a few billion to settle, without admitting wrongdoing, to make these lawsuits go away. And then they train all future models, GPT 5 and beyond, or, you know, whatever Bard's built on, Gemini.

[00:16:48] Paul Roetzer: They'll train all these future models on proprietary, licensed, and synthetic data. They'll just get around this by, we're just not going to train on stuff that we're stealing from people anymore. So I still think it's possible that the [00:17:00] leading AI companies, oh, this is kind of like a, a bigger idea. I think we talked about this on a prior podcast.

[00:17:05] Paul Roetzer: My one theory. is that it's possible these AI companies buy or build their own media companies to power future models. Then they control the source data and they get to influence the narrative and public in the process. So if you think about like Bezos owns the Washington Post, Benioff owns Time Magazine, right?

[00:17:28] Paul Roetzer: So he's got all the archives for Time Magazine, like if Salesforce wants to change it. So it's already actually kind of started happening. And now you have like, Elon Musk's owns Twitter. So you're seeing it happen. And I almost wonder if that isn't the play because OpenAI and others can pay millions or billions in licensing fees and basically rent the data.

[00:17:49] Paul Roetzer: Or they can just buy the media outlets for less and scrap a dying advertising model that's barely sustaining journalism as is. Like, journalism is dying. You can't, you can't fund local journalism through [00:18:00] ad models. And so, in this great ironic twist, there's a chance AI actually saves journalism rather than steals from it.

[00:18:09] Paul Roetzer: but then we deal with the fact that these AI companies now not only determine, like, knowledge, but they get to control what is truth. Thank you. It's such a fascinating topic. So going back to your original question, is this a big deal? It's a huge deal, I think, because it's not just one case. This has ripple effects throughout, you know, the impact on search, the impact on journalism, the impact on how the models are built.

[00:18:38] Paul Roetzer: it's, it's going to be so crazy to watch how this all plays out. I don't, what are your, I mean, do you, Do you have a feeling one way or the other? Like, is it going to be illegal? Like, what are they going to do? I don't know.

[00:18:49] Mike Kaput: Yeah, I don't, you know, I'm still in kind of my first draft of thinking about the topic, but I largely agree with you that I don't see a future [00:19:00] where realistically the demands in the New York Times lawsuit or anyone other's to shut these models down or destroy training data.

[00:19:08] Mike Kaput: Actually happens. I mean, how many times has that ever happened? And do we anticipate it to happen? I certainly could be wrong, but I think that kind of leads my line of inquiry to kind of what comes next. And that's kind of where I wanted to, you know, wrap up this discussion and double click a little more on the licensing side of this.

[00:19:28] Mike Kaput: Because we talked a little bit about the Times, you know, said in its lawsuit tried to negotiate some type of content licensing with OpenAI. That fell through. We've seen some other reports that we'll link in the show notes that OpenAI has actually offered some media firms like one to five million dollars annually to license their content, which seems really, really low to me.

[00:19:52] Mike Kaput: Is there any possible future here where licensing actually. works? Because I'm more and more in your camp [00:20:00] where I think, why not just buy the information, the content itself in the form of the outlet.

[00:20:07] Paul Roetzer: In the near term, it could, but I just feel like whoever owns the data has the power in the future.

[00:20:14] Paul Roetzer: And I can't see the AI companies giving up that power. So like, I could see them playing nice until they don't need to anymore. like we know Google has licensing deals. We know Apple's been in negotiations, you know, rumor is that they're going to do something significant by Q3 of this year with Siri.

[00:20:33] Paul Roetzer: And, you know, definitely doing something with their own language models already, they're building them. so they're also trying to license data. They have their whole news platform. Like if you're an Apple news reader, like they have access to all that. And then they could extend licensing deals with those companies.

[00:20:48] Paul Roetzer: But what I had heard, and I'll try and see if I can find the article to put in the show notes, was that Apple doesn't just want licensing, like they want future rights to this stuff to build other content on [00:21:00] top of or build other models on top of. And so I think that the licensing is going to get complicated.

[00:21:07] Paul Roetzer: But the reality is the media companies need the AI companies probably more than the AI companies need the media companies. And so right now the media companies have the leverage because there's a pretty good chance that the AI companies probably weren't allowed to do what they did. So they may have to pay, you know, some fines for that.

[00:21:28] Paul Roetzer: but moving forward, there's belief they can use synthetic data. Like, okay, you're not going to play ball with New York times. We'll go do a deal with the Washington post. Like there's other ways to get the data and build these models. and so I just feel like right now, again, media companies probably have some leverage in this cause they're, they need this data to do this, but.

[00:21:49] Paul Roetzer: You know, fast forward one, two, three years down the road, and I think that leverage will probably shift to the AI companies, whether it's because they just buy the media companies, or they find other ways to get proprietary or [00:22:00] synthetic data to train on, or they just do deals with other partners and move on from them.

[00:22:04] Paul Roetzer: So. I don't know. It's going to be really fascinating to watch.

[00:22:08] Mike Kaput: Yeah. And one final note here, which could be a whole topic for a future podcast. It occurs to me that there might be an interesting discussion moving forward around sort of what we might call niche media properties, like in different verticals.

[00:22:21] Mike Kaput: I mean, eventually you could see some interest from model companies in getting more specialized content around different areas. If that ends up being valuable to training versions of their models or fine tuning existing ones.

[00:22:35] Paul Roetzer: Yeah. And that's, you know, do you see a fracturing where there isn't like a general model where you go in and just assume ChatGPT is going to tell me anything or like, you know, for example, I mean, who could build a better product model than Amazon, like all those, you know, millions or hundreds of millions of products and they know everything about them.

[00:22:51] Paul Roetzer: It's all sitting in databases. So, you know, if I wanted to interact with any kind of find a product for anything, any lifestyle, any need, [00:23:00] You would think that's something Amazon could build better than anybody. and so is that, you know, are there travel agent bots? And like, do you actually interact with individual specialized bots?

[00:23:08] Paul Roetzer: But I know like the plan for like these large models would be to, in essence, have AI agents that are specialized in all these areas. And so I could still go to ChatGPT and I'm looking for a product, but it's actually pulling the AI agent that's trained on product data. And that's the one that's actually feeding the outputs back to me.

[00:23:27] Paul Roetzer: So I, you know, I think at the end of the day, the big models probably still win. We want to go to one place like a Google versus. 15 places to find our specialized data.

00:23:38 Inside the e/acc movement

[00:23:38] Mike Kaput: All right, our second big topic for this week. If you...

[00:23:42] Paul Roetzer: I'm ready for this one.

[00:23:44] Mike Kaput: Buckle up, everybody. Refill your coffee, or if you're listening to this at night, maybe a stiff drink or something.

[00:23:50] Mike Kaput: But if you use X, formerly Twitter, and you follow certain technology leaders there, you may have noticed that some of them [00:24:00] share a weird kind of short series of letters in their bios. So you'll regularly see the letters e/acc in the profile title of users, and it often follows their name.

[00:24:14] Mike Kaput: So, you know, Paul Roetzer, parentheses, E forward slash ACC. That's not your profile, but

[00:24:19] Paul Roetzer: And I do not have that in my profile.

[00:24:21] Mike Kaput: That's what it would look like. So you'll see this more and more, and it actually stands for a term called, quote, Effective Accelerationism. Now, this is a philosophy, and maybe you might even call it a movement (or a cult, we'll get to that part for sure), that more and more influential people in AI and technology and Silicon Valley specifically have started to subscribe to.

[00:24:49] Mike Kaput: Now this movement broadly believes that the best thing for humanity is to advance artificial intelligence as quickly as possible. And the movement serves [00:25:00] as kind of a counterweight to all these voices in AI that believe more regulations, laws, guardrails around AI are needed to actually use the technology safely.

[00:25:10] Mike Kaput: So we're seeing a lot of major players in Silicon Valley and in AI at large, including people like the famous venture capitalist Marc Andreessen, identify as e/acc, which is how we're going to refer to it throughout the rest of this segment. So it's important to really understand a bit about this.

[00:25:31] Mike Kaput: It may seem a little weird at first, but it really, really is integral, we think, to getting inside the head of some of the people that are defining AI in 2024 and beyond. So, Paul, at first glance, this is kind of some weird sci fi techno cult sounding type thing that probably your average person is not going to think super relevant to them.

[00:25:55] Mike Kaput: But it actually has some really practical implications, I think, for everyone trying to [00:26:00] understand and adopt AI. So could you maybe unpack this for us and why, tell us kind of why it matters?

[00:26:06] Paul Roetzer: Yeah, so as Mike said, this one is, It definitely has a little sci fi, out there feel, but it is very important because there are very influential people in technology who believe this all to be true.

[00:26:23] Paul Roetzer: So I will say, we are presenting this, not endorsing it. Like, there's elements of this I totally understand, like I sympathize with elements of their thinking. I would, I would not consider myself e/acc ("e-acc" is actually how you pronounce it). I'm certainly not e/acc. But this is definitely the accelerationist side.

[00:26:49] Paul Roetzer: This is the technology at all costs movement. So, I've been aware of it for a while. I don't know how long I've, I've kind of. generally known and, [00:27:00] but more just like seeing the e/acc symbol on people's Twitter accounts and been kind of annoyed by it. At first I probably thought it was like a crypto thing, and so I just ignored it.

[00:27:08] Paul Roetzer: Like I ignored most crypto stuff, for a couple of years. But I listen to Lex Fridman. So last week I'm listening to the Lex Fridman podcast and it's with this guy, Guillaume Verdon. And he tells the story of how he created this movement. So, basically, the creator of e/acc is an alternate persona on Twitter known as Beff Jezos.

[00:27:36] Paul Roetzer: So, a play on Jeff Bezos. The Twitter handle's actually @BasedBeffJezos. So, there's a few things already where I'm just like, hey, this is not my thing. This is, again, why I haven't really dove into this previously. It sounded political and it sounded like all this other stuff, so I was just kind of ignoring it.

[00:27:56] Paul Roetzer: But once I listened to this podcast, I realized like, oh, that was [00:28:00] my bad. Like I shouldn't have been, not paying closer attention to this. So he creates this alternate persona. This is not known to the world. Beff Jezos was an unknown person. It's kind of like the guy who created Bitcoin. We still don't know who that guy is.

[00:28:16] Paul Roetzer: He's just like a pseudonym, basically. So, it's like that. We didn't know who Beff Jezos was. It wasn't a real person, to our knowledge. So, the movement itself looks like it originated around summer 2022. So, it's about a year and a half old. The reason he did the interview and explained this all is because he was, he called it, doxxed.

[00:28:40] Paul Roetzer: I don't think that's technically what happened. Doxxing is like exposing someone's private information or location. He was more technically probably unmasked. Forbes did an exposé where they basically tried to figure out who Beff Jezos was, because he was starting to have influence over Silicon Valley [00:29:00] and the future of AI and technology.

[00:29:02] Paul Roetzer: And so they made the argument that because he was over 50,000 followers on Twitter, it was for the public good, basically, that people know who he is. So Forbes contacted his investors in, I assume, a kind of November time frame last year, and alerted them that they knew it was Guillaume and that they were going to publish an article stating as much.

[00:29:26] Paul Roetzer: And so Guillaume begrudgingly did an interview with Forbes that came out December 1st last year and explained this whole movement and said, yes, in fact, it's me. Now, if you listen to the Lex Fridman podcast, he's still pissed that Forbes did this. Like, he didn't think that journalists were allowed to do something like this, so he wasn't happy about it.

[00:29:49] Paul Roetzer: But in the interview with Fridman, and to a degree the Forbes article, he explains, and now again, Guillaume is a quantum physicist, just a genius guy who [00:30:00] spent his early career studying black holes and information theory, like really crazy stuff, crazy awesome stuff. He worked on the quantum team at Google, building quantum computers.

[00:30:11] Paul Roetzer: And he basically left to start a company called Extropic, because he believes that there are limitations to the current pursuit of quantum computing, and he's trying to build a thermodynamics-based computer. I am not going to get into all of like that stuff. Yeah, don't worry, stick with us. Yeah, we're, we're going to get there.

[00:30:28] Paul Roetzer: So he leaves, starts Extropic, creates this e/acc movement, and he explains how they engineered the movement to spread like a virus. So if you're a marketer, like, my ears kind of perked up. I was like, you did what? Like, how does that work? And they, in essence, realized that memes are treated favorably by Twitter's algorithms.

[00:30:49] Paul Roetzer: And that if they built the movement around memes, it would spread like a virus online. And that's what they did. So that's how they kind of created this whole thing. So what is [00:31:00] e/acc? now we kind of get into the meat of this. And again, bear with us that it's a little. out there, but it's really important.

[00:31:08] Paul Roetzer: So, we went and pulled the original post that they put up about effective accelerationism. As Mike said, that's what e/acc means. It does play off of effective altruism, which they hate, because they feel it was a bunch of rich people who found ways to funnel money through nonprofits and steal the money, basically.

[00:31:25] Paul Roetzer: So, they're not fans of effective altruism, but they did play off of it to do effective accelerationism. So they state, in their initial post: the overarching goal for humanity is to preserve the light of consciousness. It gets, it gets more intense. Second, technology and market forces, which they define as techno-capital (I'll get to that in a moment), are accelerating in their powers and abilities.

[00:31:55] Paul Roetzer: So techno-capital is a really important term to [00:32:00] understand here, because they use it all the time. And I had no idea what it meant. So I actually went to Grok. I figured, well, this might be actually a pretty good use case for Grok. So I said, what is techno-capital in terms of the e/acc movement? At first it said it couldn't tell me.

[00:32:16] Paul Roetzer: That it was just, you know, in development, and to go search on the internet, basically. And I asked it again and then it told me. I don't know why that is. So it said: techno-capital refers to the economic and social power associated with technology, innovation, and corporate sectors. It's a term that highlights the importance of technology and its impact on the development of capitalism.

[00:32:36] Paul Roetzer: Some people believe that techno capital is the driving force behind the evolution of society, while others see it as a tool for corporate control and domination. Regardless, it's undeniable that technology, so it's basically technology and capital. Now, I'm going to read you ChatGPT's output as well, because it actually gives like some really good, interesting context.

[00:32:55] Paul Roetzer: So, ChatGPT says: in the context of the accelerationism movement, particularly the branch [00:33:00] known as e/acc, it refers to a concept that blends technology and capitalism. Accelerationism is an intellectual and political movement that holds that the process of capitalist technological and social change should be accelerated rather than resisted or controlled.

[00:33:14] Paul Roetzer: This gets into what Mike was saying. They don't like laws and regulations. They just want acceleration at all costs. In this framework, techno-capital is seen as a driving force of societal evolution. It represents the idea that the advancement of technology, like AI and quantum computing, under the dynamics of capitalism, leads to an exponential increase in technological growth and societal transformation.

[00:33:39] Paul Roetzer: This concept suggests that the fusion of technology and capital creates self-propelling systems that move towards ever greater levels of complexity and integration. So basically, blah, blah, blah. Okay. Then going back to their notes, they say technology and market forces, techno-capital, are accelerating in their powers and abilities.

[00:33:56] Paul Roetzer: This force cannot be stopped. Techno-capital [00:34:00] can usher in the next evolution of consciousness, creating unthinkable next-generation life forms and silicon-based awareness. I'm going to linger there for a moment. Next-generation life forms means, by definition, likely not purely human. And silicon-based awareness means the chips themselves become aware of themselves.

[00:34:23] Paul Roetzer: That is, the AI becomes aware of itself, basically. These new forms of consciousness, by definition, will make sentience more varied and durable. We want this, they say. Those who are first to usher in and control techno-capital have immense agency over future consciousness. Humans have this agency right now, and they can affect what happens.

[00:34:45] Paul Roetzer: And it's basically a set of ideas and practices that seek to maximize the probability of this happening. So my take, to kind of summarize this, is accelerate AI at all costs. Regardless of the implications, the [00:35:00] negatives, no regulation of companies building the technology, technological advancements solve everything.

[00:35:08] Paul Roetzer: They are of the belief that you cannot allow a few AI companies, like Meta, Google, OpenAI, to capture all the value and control the laws and regulations. What we've talked about is regulatory capture, that OpenAI and Google are like saying, hey, regulate us. These guys are saying no regulation.

[00:35:27] Paul Roetzer: Do not allow that, because they're just going to control everything. They believe that safe AI comes from market choice. So this was the one that I was really listening for when I was listening to this, because I still am unsure about the open source movement for these large language models, because they have no guardrails.

[00:35:43] Paul Roetzer: Anything can be done with them. Their belief is, and this gets into the techno-capitalist thing, that capitalism decides what wins and what loses. That even if bad actors use LLMs to do bad things, [00:36:00] capitalism is the guardrail, that there is no incentive for those companies or those individuals over time if no one buys what they're creating.

[00:36:10] Paul Roetzer: So they believe that capitalism rewards the winners and extinguishes the losers that don't create value and align with societal values and norms. In essence, they let the market decide. And then the last thing is: humans are likely not the ultimate form of intelligence, and to them, that's okay. We should accelerate, even if it means obsoleting ourselves in our current form.

[00:36:37] Paul Roetzer: And they even said at one point, like, we owe it to whatever the future of humanity looks like, the trillions of people to come, that if we're not the ultimate life form, we do everything we can to create a more intelligent life form that can make us interplanetary and do all this stuff. So again, as we warned you, this is really sci-fi and kind of crazy sounding. But why it [00:37:00] matters, going back to what Mike said at the beginning: there are people building AI and running some of these AI companies who believe this stuff.

[00:37:10] Paul Roetzer: And that affects all of us. So we have to understand their beliefs and motivations. The tools that they're building, the computers that they're building, in the near term will impact our careers and companies. But the beliefs they hold, and their ability to bring those beliefs to life through movements and, as they call them, mind viruses,

[00:37:33] Paul Roetzer: will impact humanity in, like, the coming decade. This isn't 30-years-out stuff. Like, this is the stuff that they think is within reach. So I'll stop there. I know it's a lot to process, and Mike, maybe we can kind of unpack it a little bit. But again, like, we try and sometimes push a little bit in this podcast to challenge ourselves and others to [00:38:00] expand your understanding of what's going on.

[00:38:04] Paul Roetzer: Because it matters long term, but in a lot of cases, it matters in the very near term around the decisions you make, the companies and vendors you choose to work with. Like, understand what these CEOs believe and why they're building the companies they're building. Again, it's not to write your emails.

[00:38:20] Paul Roetzer: Like that's not why most of these companies exist.

[00:38:24] Mike Kaput: So to kind of really quickly summarize these steps: we have this anonymous Twitter/X account that's promoting what sound like some crazy views under this e/acc moniker. It gets enough attention that eventually journalists start looking into, oh my gosh, what is this random account that all these influential people in Silicon Valley are following?

[00:38:47] Mike Kaput: They're able to unmask the account. It turns out to be, essentially, a Silicon Valley tech guy who has a very deep physics background. He believes this extreme accelerationist philosophy, which [00:39:00] was essentially started by engineering a viral online movement to promote it. And a core of this philosophy is the idea that advancement of technology under capitalism equals growth slash transformation slash where we have to go as a species, and this cannot be stopped, and it may mean that machines end up subsuming humans as the most intelligent life form.

[00:39:29] Paul Roetzer: Now, that does sound wild. It's like a movie plot, doesn't it?

[00:39:33] Mike Kaput: I think it is, honestly.

[00:39:34] Paul Roetzer: It's some of the movies. The way you just summarized that, I'm like, that's a good trailer for a movie. Yeah. Yeah.

[00:39:39] Mike Kaput: So what strikes me is, if we took this into any other context, and I said to you, hey, 7 out of the 10

[00:39:48] Mike Kaput: biggest leaders in AI believe in economic growth at all costs, or believe in philanthropy over profits, you would pay [00:40:00] attention, wouldn't you? You'd say, that's really interesting. That's a big majority of people believing this kind of thing, which likely motivates their decisions: why they work on things, why they do what they do.

[00:40:12] Mike Kaput: So I think that, to me, is really the key takeaway here: understanding this in the sense that it motivates more people than you might think. We don't know for sure. Thankfully, some of them put it in their bio, so if you go search for that, you can find them. But I would be willing to bet a significant portion of AI leaders

[00:40:33] Mike Kaput: hold some or all of these views. Would you agree with that statement?

[00:40:37] Paul Roetzer: Certainly some of them. And I think, you know, even like a Sam Altman, they'll use as an example. Like, he actually tweeted at Beff Jezos one time, "You can't out-accelerate me." And yet Sam is also the guy fighting for regulation. Elon Musk claims that he started OpenAI because Larry Page was [00:41:00] anti-human, that he felt that if AI took over, that was inevitable.

[00:41:04] Paul Roetzer: So he created OpenAI to combat this exact concept, and yet Elon Musk owns Neuralink, which merges human minds with computers. So yeah, it's just, it's fundamental. And the other thing, and maybe we'll kind of move on from here, because I think this is just a lot to process: there are what they call forks off of this.

[00:41:26] Paul Roetzer: So what you're going to see is other E slash other letters. Because what's happening is people are like, Yeah, I totally actually agree that technology and capitalism will drive this. But I don't, I'm not in with the let's replace humans thing. So I'm going to like create a fork or a different variation of this movement that believes this.

[00:41:47] Paul Roetzer: And they actually encourage that. So they're kind of pushing the most extreme beliefs. And then off of that, they encourage these forks where people, you know, kind of find their own comfort level with what they're going to push. And I think [00:42:00] that's probably the thing that ends up being the greatest impact.

[00:42:04] Paul Roetzer: There's going to be all these forks of these other AI leaders who maybe don't want to publicly say, Yeah, I'm cool if humans aren't the thing 50 years from now. They're never going to publicly say that. But they may have some varying beliefs, but you're going to start to see the common threads between these, these people.

[00:42:20] Paul Roetzer: Excellent.

00:42:20  Ethan Mollick’s perspective on the growing power of deepfakes

[00:42:20] Mike Kaput: So, I guess I'm not going to apologize for this, but we're coming in hot in 2024. It needs to be done. Sorry, folks, but I want to start the new year by telling everyone you can't trust a single thing you see or hear anymore. That's the topic of our third main news item today. And it's a 2024 message from AI expert and Wharton professor Ethan Mollick.

[00:42:47] Mike Kaput: Because Mollick recently went viral on X with a post showing a deepfake video that he created of himself that, honestly, is basically indistinguishable from [00:43:00] a real video of him. He starts off with a real video and transitions to a clearly labeled AI deepfake, and it is very hard to tell the difference. In the deepfake version, AI Ethan says things that he has never said in English,

[00:43:15] Mike Kaput: then transitions to him saying those things perfectly in Italian and in Hindi. I don't believe he knows either language, and in any case, this is totally AI generated. Now, it's this really jaw-dropping example of just how fast deepfakes have progressed, and it should make everyone hyper-skeptical of everything. That's his takeaway.

[00:43:39] Mike Kaput: He says that right up front. And the most surprising part of the experiment was how easy this was to do. Mollick says he used just one minute of training data to create this 60-plus-second video that is a very, very good deepfake. If you are looking for this thread on X, Mollick actually [00:44:00] deleted it after it went viral, because it turns out people were kind of mistakenly making him the face of this kind of creepy deepfake technology.

[00:44:08] Mike Kaput: He was like, no, no, no, this is just an experiment. This is not what I do. So he deleted that, but the full video's on YouTube, and he breaks it down in his Substack newsletter. So we're linking to both of those. You can check it out for yourself. But first up, Paul, I have to say this surprised even me, because it surpasses anything I've personally seen to date.

[00:44:29] Mike Kaput: And I've actually seen and heard Ethan Mollick speak in person at our event, and watched him on video multiple times, and honestly, I had a really hard time telling the difference between real and fake here. So I guess my first question for you is: how much trouble are we in now with deepfake technology?

[00:44:49] Paul Roetzer: You know, when we published our Marketing Artificial Intelligence book, in the summer of 2022 I think it came out, we had a section in there about [00:45:00] deepfakes and how you needed to prepare for them, from a crisis communication standpoint within, you know, your businesses, because we were heading in this direction very quickly.

[00:45:07] Paul Roetzer: I'm not sure we realized how quickly it was happening. But I mean, the technology advanced very, very fast last year. And I think it's a major problem. Like, I think the problem is, on the surface, there's all these fascinating use cases, and people are like, oh, cool, I'm going to deepfake myself and I'm going to create a video too, like Ethan did, and show myself doing these things and prove it works.

[00:45:36] Paul Roetzer: And to me, it's such a slippery, unknown slope of, what are we creating? Like, if you're developing these videos of yourself, giving that training data to HeyGen, who's HeyGen? Who's the CEO of HeyGen? Are they an e/acc? Like, you know, people are blindly trusting AI startups with their [00:46:00] likeness and, like, giving them training data to go build these things that can be wildly misused.

[00:46:07] Paul Roetzer: And so, you know, I think that it's going to be a problem on so many levels. Like, the LinkedIn post, you know, that I had put up about this, I said that deepfakes and synthetic media are going to be a major problem moving forward, and I called out, especially, the upcoming US election cycle.

[00:46:28] Paul Roetzer: So while there's these fascinating business use cases, when you opt in and choose to create your own content, it's going to be just as easy for other people to create deepfakes of you and other people. So, as you started out, like, you can't trust anything. Like, I even found myself in the last 48 hours, watching videos on like X and be like, is this real?

[00:46:52] Paul Roetzer: Like, I don't know, is this actually this person? Like, I don't even know if I can trust this anymore. so you can't trust anything you see online [00:47:00] unless it's coming from a verified source. So, you know, I hate to even give this example, but like us, like, I mean, how hard would it be to deepfake me talking in my basement?

[00:47:10] Paul Roetzer: I'm guessing not very hard. I haven't tried, but I'm guessing it's not that difficult. And so if it's coming from a person you recognize, what you really have to do as the next step is, but is it coming from a verified channel of theirs? Is it on their YouTube channel? Did they post it from their social media accounts?

[00:47:32] Paul Roetzer: We have to very quickly, as a society, train people, including our kids. Like, I have to have this conversation with my 12-year-old and my 10-year-old already about what to trust, and, like, which channel is it coming from? Where did you see it? Like, all these things. And so going into the elections for 2024 in the U.S.,

[00:47:53] Paul Roetzer: and we're not the only ones dealing with this globally, I know we have listeners outside of the U.S., we have to deal with this, like, [00:48:00] right now. Because if he can do that with a minute of training data, and we have to assume bad actors can do similar things with other people's likenesses, we need to figure out ways to accelerate education, but we also have to accelerate the authentication of media.

[00:48:14] Paul Roetzer: I don't know how you do that. So the other call to action I had in my LinkedIn post was, like, PR professionals: if we have any public relations professionals listening, you have to get defensive deepfake strategies into your 2024 crisis communications plan, because there's nothing stopping them from doing this with an executive in your company, a board member, whatever it is, and causing chaos.

[00:48:39] Paul Roetzer: So, the final note I'll make here is someone had commented on the LinkedIn post about like, you know, they don't see how this affects elections. And so I, unfortunately, like I was sitting there last night trying to watch the Miami Buffalo game, because I was like really interested and I was like, I got to respond to this.

[00:48:55] Paul Roetzer: So I like shut off the game and I was like, I got to think this through. So what I had [00:49:00] replied, and I think you can kind of carry this then over into the business world, I said, I'm not a political strategist, obviously, but here's my thinking. They asked specifically about, like, how could this really sway an election?

[00:49:10] Paul Roetzer: There aren't that many undecideds. And it's like, well, actually, there are. It's like, Gen Z, like 18 to 34, 49 percent of them are undecided who they would vote for. They're basically independents. Like, that's a pretty big swing. In any election, moving it three or four points changes the outcome of the entire election.

[00:49:27] Paul Roetzer: And that's where all the money goes: trying to change the minds of those 3 or 4 percent, basically. So there are enough undecideds to have a significant impact on the outcome of elections. Billions will be spent trying to influence them. By definition, they are harder to lump into any specific interest group or target with common messages around hot-button issues.

[00:49:45] Paul Roetzer: So you need to get highly targeted and personalized. So rather than spending millions running 5 to 10 ads through traditional media, maybe you create 5 to 10, 000 ads, videos, and memes. Now if you're a bad actor, say a foreign government that [00:50:00] wants to influence an election, or our own politicians who are willing to cross a moral and ethical line to persuade voters, not that either of those things ever happen, you have the power to generate synthetic content at scale and hyper target your audience through online channels.

[00:50:14] Paul Roetzer: You can even have the content appear as though it comes from people they trust, celebrities, influencers, etc., i.e., deepfakes. So my overall thoughts are: synthetic content and deepfakes will absolutely be used by bad actors in the election cycle. AI will give politicians superpowers at targeting and influencing voters.

[00:50:33] Paul Roetzer: And the third is: if undecided voters can't swing an election, then why do politicians target them at all? AI excels at the influence and persuasion needed to target these people. So again, not being a political strategist, I would love to be wrong here, but this all seems inevitable at this point. And so the election is an easy one to play out.

[00:50:50] Paul Roetzer: But again, you can play out the impact on your own company or your own career, your own online persona. Like, if you have an online persona, this is [00:51:00] reality. Like, we're heading into this world. And the final point I'll make is, I keep seeing these AI influencers. Like, people are making deepfakes of them.

[00:51:08] Paul Roetzer: Like, doing whole podcast interviews where someone creates a fake JCal or whatever. I think I've seen it. And they like, play it off like it's funny. I'm like, they're stealing your persona. Like, how is this funny? How is this an example of technology done right? Why are you amplifying people stealing your persona?

[00:51:29] Paul Roetzer: Like, I'm sending a cease and desist letter. Like, I really don't understand that mindset at all. Yeah, like you said, these are heavy topics, but like, we got to talk about this stuff.

[00:51:41] Mike Kaput: It's real. Yeah. Yeah. And with just how quickly it's accelerated, we have to talk about it now. We can't wait until, you know, everyone's taking a breath to start the new year.

00:51:51 AI-powered search engine Perplexity AI raises $73.6M

[00:51:51] Mike Kaput: I mean, it, this is coming at us fast. So. All right, let's dive into some rapid fire topics in some slightly more positive [00:52:00] news. Perplexity, an AI powered search engine that we talk about and use at the Institute, has just raised almost 74 million. And that values the company at 520 million. Participants in the investment round included notable VC firms and NVIDIA and Jeff Bezos.

[00:52:21] Mike Kaput: Now, in the world of AI, a 520 million valuation, shockingly, can seem kind of low in comparison to some of these billion dollar unicorns that are in generative AI. But it's worth remembering the company has only been around since August 2022. So, a little over maybe a year and a half in existence.

[00:52:40] Mike Kaput: Perplexity's CEO, interestingly, also used to work at OpenAI. So, what's the big deal with Perplexity? Why do we talk about it a bunch? unlike Google, Perplexity functions more like a chatbot. You ask it natural language questions like, say, what are some of the top use cases for AI and marketing, [00:53:00] and it responds with a comprehensive answer complete with citations from websites and articles for every single piece of information that it provides.

[00:53:09] Mike Kaput: You can also ask follow-up questions in a single thread to drill down further into topics. Now Paul, first I wanted to get your thoughts generally on Perplexity and the fundraise, and then I can also share a few points about the tool, as I've quickly found it to be kind of a powerful piece of my own AI workflow as well.

[00:53:30] Paul Roetzer: Yeah, it was interesting they announced this funding last week because you and I were just having this conversation because I know you are a user of Perplexity Pro, which is 20 a month. Yeah. I am not. But you hear about this company so much. I mean, it is one of the hotter names, certainly, even before the funding from Jeff Bezos.

[00:53:47] Paul Roetzer: Now, that's not Beff Jezos, but actually Jeff Bezos who funded it. Although I think Guillaume may have actually been in the funding round too, ironically enough. So, the people we follow in this [00:54:00] space talk about Perplexity. Like, the other AI leaders talk about Perplexity as an example of an interesting startup.

[00:54:05] Paul Roetzer: So, it's been on my radar. But honestly, every time I look at it, I'm like, I don't get it. How is this replacing search for me? It's probably not. I already pay 20 a month for ChatGPT, and it's just piping GPT-3.5 in the free version and GPT-4 in the paid version. Like, why don't I just use ChatGPT? And so I go in and I'll try a couple searches in it and I'm like, I don't know.

[00:54:30] Paul Roetzer: Like, okay, it seems interesting. I guess it's a little different user interface, but I knew it was using Claude and GPT-4 and I already had those. So I just didn't get it. So this morning at the gym, I listened to an interview on the Cognitive Revolution podcast, which is a great podcast, by the way, with the founder and CEO of Perplexity.

[00:54:51] Paul Roetzer: And I was like, well, I'll give him a chance. I'll see, maybe he can explain it to me. And unfortunately, he actually struggles himself to kind of explain the things I was [00:55:00] wondering, like, why would I use this instead of ChatGPT? That being said, it was an interesting interview for sure. And it doesn't change my perception that the people I follow believe in this company, so there's something here.

[00:55:13] Paul Roetzer: So don't take this as me saying the company isn't worth pursuing or following or trying yourself. I just still haven't found the use case myself. So I'm anxious to hear yours. But they're talking about how they are completely relying on Google and Microsoft for their data, and so they want to build their own search index so they don't have to rely on Google APIs to do it.

[00:55:31] Paul Roetzer: And they're completely relying on OpenAI and Anthropic for their language models, and they want to build their own language models. So they're very dependent upon other people. This is what we talked about earlier. Like, they need their own data to make this work. He talked a little bit about how he thinks you can kind of train these models on like one to 10 billion pages of the internet instead of a trillion. But when they asked him, like, well, how do you use Copilot?

[00:55:57] Paul Roetzer: Cause there's this toggle to turn Copilot [00:56:00] on and off. And I was like, what is that? Like, why would I turn it off? And so the interviewer actually asked, well, when do you use Copilot? And the CEO was kind of struggling, like, honestly, if we knew, we would have just made it a required part of the feature, but right now it's just a choice.

[00:56:16] Paul Roetzer: So my overall take here is, it seems like most users at this point are AI enthusiasts who are experimenting with it, trying to find the use case. It's shown some promise. So people like you and me are just like, all right, I'll pay the 20 bucks a month for three months and try the thing out. They have not hit escape velocity with the average consumer who's going to switch from Google and start using this tool.

[00:56:42] Paul Roetzer: And they don't seem to even have any clue how they'll get to that point. But they're having crazy growth. It's most likely coming from people who are curious, AI enthusiasts, the tech crowd, though. They're nowhere near breaking into the general lexicon of users. So that, [00:57:00] again, having only tried like five searches in it and played around a little bit, and not having seen the, oh, I totally get why this is different,

[00:57:07] Paul Roetzer: that's my kind of high-level take at the moment. But what has been your experience as someone who actually pays for it and uses it?

[00:57:13] Mike Kaput: Yeah, I largely agree with you. I think the company itself doesn't necessarily know what it wants or needs to be. However, I'm going to break down kind of how I come at it from a few different quick ways.

[00:57:25] Mike Kaput: First, I'm going to kind of talk through Perplexity versus traditional Google search, like, why do I find it to be increasingly the go-to? And then second, I can talk a little more about Perplexity versus ChatGPT, for instance. So, first up, it is pretty clear to me, even with the free version. I pay for the Copilot version because, like you, I had no idea what the difference was, and we'll talk about that in a second, but I was like, oh, okay, you can use a better version of this more often.

[00:57:54] Mike Kaput: I'm using it quite a bit. Let's do it. Let's test it out. The free version of Perplexity alone, which I used for months, is, [00:58:00] in my opinion, just so much smarter than a traditional Google search, even when you have Google's AI-augmented Search Generative Experience turned on. I'll be honest, Google search feels a bit medieval to me compared to Perplexity, because Perplexity is extremely fast, it provides comprehensive answers and pretty good sources, as far as I've seen, you know, without clicking every single one, pretty quickly. And once you start using something like this, you do start to realize just how cluttered and cumbersome Google's UX and ads and random search results you have to pick through are, how hard that is to navigate.

[00:58:43] Paul Roetzer: Because at the moment, they don't have an ad model, they are purely subscription driven. I mean, they're losing money, as you'd assume, but the 20 bucks a month is their revenue model.

[00:58:52] Mike Kaput: Correct. Yeah. There are no ads as of right now. That could change very quickly, but today it's a pretty streamlined [00:59:00] experience.

[00:59:00] Mike Kaput: I also find the summaries of search results, you know, it's looking at several or dozens of different search results for different searches, and summarizing those into a few comprehensive paragraphs to help you answer whatever queries you're trying to answer. I find those summaries to be way better than the ones provided even by Google's AI.

[00:59:21] Mike Kaput: Again, I'm not doing an in-depth experiment with thousands of searches, this is just kind of my personal opinion. And I love this idea that it is looking at a lot of different sources. So, the Pro version uses GPT-4 to kind of understand your conversational queries, and it actually will go perform a bunch of different web searches for you and then synthesize that.

[00:59:46] Mike Kaput: It will even ask you follow-up questions, which I find to be pretty helpful, before it completes the search. So in one example, it may ask you, and I don't know if this will actually work, but as an example, if you said, hey, I'm looking [01:00:00] for a bunch of low-carb recipes online, and I want them to ideally have

[01:00:08] Mike Kaput: a heavy component of using lots of vegetables, it may then say, hey, which vegetables are you actually interested in, and give you a bunch of options. You can choose to answer that question or not. And then it goes and does all this research for you. Another piece of this I like is you have these threads of conversations.

[01:00:25] Mike Kaput: So you can ask all these really smart follow-up questions, do additional related searches, and kind of build on the knowledge that Perplexity is giving you. Again, a lot of this is just UX stuff, but I find it so powerful, given how core search and research are to what I do every day, that it's well worth paying for.

[01:00:43] Mike Kaput: But again, most of these benefits you can actually just get from the free version. So that's what's so great. I would highly recommend checking that out. The CEO, Aravind Srinivas, actually called having a Pro account a 10-point IQ boost. I have to say, I agree with him, just with [01:01:00] how fast you can get to knowledge.

[01:01:02] Paul Roetzer: That's great. I hope they put that on their homepage, just like a great endorsement. It should

[01:01:05] Mike Kaput: be the number one headline. They need some better marketing, for sure. So then this second question of, like, why Perplexity instead of ChatGPT Plus, this is a little murkier, because I actually find them both good at different things, and complementary.

[01:01:23] Mike Kaput: I would not recommend, by any stretch of the imagination, only using one over the other, if you can afford both. Obviously ChatGPT Plus is the go-to; Perplexity is an additional helpful tool. Perplexity is not designed to do the things that ChatGPT does. However, when it comes to strictly research, ChatGPT Plus can certainly find links on the internet and summarize them for you.

[01:01:50] Mike Kaput: I personally still just find it a little hit or miss for replacing search behavior specifically. If you want to say, hey, summarize a topic for me, or [01:02:00] explain something like I'm five, nothing is better than ChatGPT. But if you're actually looking to collect a bunch of online resources, work through those, kind of deeply research certain topics, I just found Perplexity faster, more accurate, and kind of more intuitive.

[01:02:16] Mike Kaput: So I just view Perplexity as almost strictly search slash research. Other than that, I'm using ChatGPT for everything else, like summarization and ideation. You're not really looking to use Perplexity for that, but I do love how much quicker Perplexity is in terms of an overall search experience.

[01:02:39] Mike Kaput: So that's kind of why I talk it up. Yeah, it totally depends how much you use it. Like, research is such a core part of my job that it's a no-brainer to spend 20 bucks a month on it. But if you're not doing a ton of it, you're definitely not replacing ChatGPT or using this instead of it.

[01:02:58] Paul Roetzer: Yeah, and I mean, for me [01:03:00] personally, like, I'll keep experimenting with it.

[01:03:02] Paul Roetzer: I've moved the icon up on my iPhone so I see it more often and, you know, think to test it out. But again, I would put me in the bucket of the AI enthusiast that's just curious about technology. So yeah, I mean, I'll keep playing with it for sure. I'm really interested to see where it goes, but like I said, it's a company backed by some pretty significant people who think there's something there.

[01:03:30] Paul Roetzer: So that usually means it's worth paying attention to. For sure.

01:03:35  Microsoft’s new Copilot key is the first change to Windows keyboards in 30 years

[01:03:35] Mike Kaput: Alright, our next topic is that we all know Microsoft is all in on AI, but it now wants its devices to reflect that. The company actually announced a new Copilot key, which is a key on a keyboard that will ship on certain new PCs and laptops that are coming from Microsoft's partners.

[01:03:54] Mike Kaput: This key will give you fast access to Windows Copilot, which is Microsoft's AI-powered assistant, and you [01:04:00] just press it to launch the application in a single keystroke. It sounds like this key will replace the existing menu key on Windows keyboards, and if you don't have Copilot, the key will instead open Windows Search.

[01:04:13] Mike Kaput: Now, this may seem like a small thing, but it is actually a pretty big change. It's the first major alteration of the standard Windows keyboard in almost 30 years. So it will have a pretty big effect, even though it's just one tiny key. Now, Paul, it sounds like Microsoft is telling us that ready access to an AI assistant is going to be kind of the go-to function on your computer when you're working moving forward.

[01:04:39] Mike Kaput: Is that kind of your take on this announcement?

[01:04:41] Paul Roetzer: Yeah, I think it's just further proof that this isn't stopping. This isn't some, you know, trend or bubble where we're going to move on from AI in 2024 to the next hot thing. It's literally just going to be infused into everything we do, and how quickly that infusion process happens in your company and your industry is up for debate.

[01:04:58] Paul Roetzer: That may take a few [01:05:00] years, but AI as a whole is just going to become an integral part of everything we do as professionals.

01:05:08  OpenAI’s app store for GPTs will launch next week

[01:05:08] Mike Kaput: So, I should note for this next topic, we're recording this on Monday, January 8th. We're right around noon at the moment, and OpenAI has announced that the GPT Store will be launching sometime this week.

[01:05:22] Mike Kaput: So, as a reminder, the GPT Store is this thing that's going to offer GPTs, which are built by other users, for you to go download. GPTs are the custom versions of ChatGPT that you can create yourself. These were announced at the developer day in late 2023 that we covered on a previous podcast. So OpenAI basically said, hey, if you're interested in sharing your own GPTs once the store launches, you have to follow their usage policies and brand guidelines, which they provided.

[01:05:51] Mike Kaput: You have to verify what's called your builder profile, which is just a quick thing with your name, a website, and your basic info. And then you have to make [01:06:00] sure your GPT is set to be shared with the public. Now, previously OpenAI had indicated there's some type of possible revenue-sharing component for popular GPTs, but right now it's totally unclear what that is.

[01:06:15] Mike Kaput: Now Paul, obviously we'll be covering this, I would imagine, on next week's podcast, or whenever the GPT Store launches. But why should people be excited or paying attention to the launch of this store? Like, what does this mean for your average professional out there?

[01:06:29] Paul Roetzer: I mean, big picture, you know, are they building something like the app store ecosystem from Apple?

[01:06:35] Paul Roetzer: Like, is this the prelude to that? And these are the early days, so it's something to keep an eye on. But I think, like, there's a guy we all follow, OfficialLoganK on Twitter is his handle, but it's Logan Kilpatrick, who does developer relations for OpenAI. He's great on Twitter. He shares all kinds of inside information and is a really good follow.

[01:06:57] Paul Roetzer: But he kind of outlined it real quickly: GPTs [01:07:00] solve a lot of problems. The one he's most excited about is the first-time user experience. A lot of times people just don't know what to do, and so by creating these GPTs, you create a gateway or an entry point for people. He said this is really just the start.

[01:07:11] Paul Roetzer: Today it might not be super hard to go build your own version of something, but that will change over time as there's more built around GPTs. The third thing he said, responding to someone else, was that you being able to rebuild a GPT is not necessarily representative of the average user; they're not solving for the expert builders.

[01:07:29] Paul Roetzer: Basically, they're saying, we're trying to widen the net of how many people can use this technology and get great value from it. It's a long-term thing, but this is going to keep evolving pretty quickly as they keep building. So, you know, I don't think it's going to be earth-shattering, like the GPT-4-came-out kind of level, like, hey, everything changes as soon as the GPT Store hits. I think it's going to be a slow build, and it's obviously going to be a key part of OpenAI's go-to-market strategy moving forward.

01:07:59  Issues with Anthropic’s Claude

[01:07:59] Mike Kaput: So in our final [01:08:00] topic today, our friend Chris Penn over at TrustInsights.ai, who is an AI expert and a good friend of ours here at the Institute, posted about a worrying issue with Anthropic's popular AI assistant, Claude. Now, like us, Chris uses Claude for, among other things, its really powerful summarization capabilities.

[01:08:23] Mike Kaput: But he recently ran into a huge problem. He went to load his weekly newsletter transcript into the latest version, Claude 2.1, in order to create a YouTube summary. It's something he's done countless times. Now, this transcript talked exclusively about OpenAI's custom GPTs and the GPT Store, like we just did, and it did not mention Claude or Anthropic at all.

[01:08:49] Mike Kaput: But then Chris says on LinkedIn, quote, Claude intentionally rewrote itself into my summary and wrote out OpenAI. Again, nowhere in my source transcript is Anthropic [01:09:00] or Claude mentioned. It should not have done this. So instead of just summarizing the existing content according to Chris's prompt, Claude added something new, something about itself, and removed a reference to a competing AI assistant.

[01:09:15] Mike Kaput: Now Chris, who knows more about this stuff than most people on the planet, was really shocked by this. He said, quote, This to me immediately makes Claude less trustworthy. I didn't ask for net new copy, just summarization. So it should just be processing the tokens that are present in the source material.

[01:09:33] Mike Kaput: Highly problematic. Now, Paul, what do you see as going on here? Do we need to be worried about the outputs from Claude?

[01:09:41] Paul Roetzer: I would love to hear a response from Anthropic, because I have no idea how that happens unless someone programmed it to do that. Like, that's not a language model hallucinating. That is somebody thought it was funny, or that there would be some competitive advantage to replacing OpenAI, when it's mentioned, with [01:10:00] Anthropic.

[01:10:00] Paul Roetzer: Like, I'm with Chris on this. There's a problem. It makes me question what's going on at Anthropic. And I think someone's got to take ownership of this and say, yeah, we screwed up, and it shouldn't have done that. Because you do lose trust very quickly with something like that, something somebody, again, may have thought was funny or cute, but that makes you not necessarily trust the output of the model.

[01:10:21] Paul Roetzer: Now, Anthropic is not the company I would have expected that from. If you told me Grok did this, I'd be like, it's par for the course. But yeah, Anthropic, I'm a bit surprised that something like that would occur. I don't know how it occurs accidentally, so.

[01:10:36] Mike Kaput: We'll link to Chris's post, but he shows the prompt he used as well.

[01:10:40] Mike Kaput: Chris is one of the best people out there when it comes to prompting these things. So it is not a one line prompt. It's paragraphs of context and content, and he's been using these tools for longer than most people. So I'm pretty sure he's not doing anything wrong in the prompt to even somehow cause this.

[01:10:58] Mike Kaput: All right, everyone. Thanks so much [01:11:00] for being with us. I want to just quickly note here that, like Paul mentioned at the top of the episode, we have a weekly newsletter that goes out with everything happening this week in AI. We cover both what we've talked about today on the podcast, as well as tons of topics that we don't have time to get to in a single episode.

[01:11:21] Mike Kaput: So if you would like to stay up to date on all the stuff happening in AI every single week, you can go to marketingainstitute.com/newsletter. Thanks, Paul, for all your help in decoding what is happening in AI. We're off to a hot start in 2024 here.

[01:11:41] Paul Roetzer: Yeah, for sure. That was, again, a lot.

[01:11:43] Paul Roetzer: Appreciate everyone staying with us. We had a huge year of growth in 2023. The podcast went from 5,000 downloads in 2022 to 260,000 in 2023. So we appreciate every one of you that listens in. If this is your first time, welcome to the show. [01:12:00] Definitely, you know, subscribe, and please pass along and share if you find value in it.

[01:12:06] Paul Roetzer: We'd love to keep growing this audience and this community, and we're just grateful that you take the time to listen each week. So we'll talk to you again next week. We will be back in a little bit. That'd be the 16th. So, thanks again for being with us. We'll talk to you soon.

[01:12:19] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:12:41] Paul Roetzer: Until next time, stay curious and explore AI.
