
39 Min Read

[The Marketing AI Show Episode 44]: Inside ChatGPT’s Revolutionary Potential, Major Google AI Announcements, and Big Problems with AI Training Are Discovered


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


Episode 44 of the Marketing AI Show with Paul Roetzer and Mike Kaput covers stunning results from ChatGPT plugins, major Google AI announcements, problems with AI training, and more.

Listen or watch below—and keep scrolling for a summary of the show.

This episode is brought to you by BrandOps, built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.

Listen Now


 

Watch the Video

 

Timestamps

00:03:58 — The inside story of ChatGPT’s potential

00:18:39 — Google AI updates

00:35:13 — How AI is trained

00:44:04 — StableLM suite of language models

00:46:44 — AI Drake

 

 

Summary

Stunning results from ChatGPT plugins

The way we all work is about to change in major ways thanks to ChatGPT—and few are ready for how fast this is about to happen. In a new TED Talk, OpenAI co-founder and president Greg Brockman shows off the power and potential of the all-new ChatGPT plugins…and the results are stunning. Thanks to ChatGPT plugins, ChatGPT can now browse the internet and interact with third-party services and applications, resulting in AI agents that can take actions in the real world to help us with our work. In the talk, Brockman shows off how knowledge workers will soon work hand-in-hand with machines—and how this is going to start changing things months (or even weeks) from now, not years. Paul and Mike talk about capabilities that caught their eye, and what this means for the future of work.

Google just announced some huge AI updates

Google announced three significant AI updates, though some within the company say it is making ethical lapses in its rush to compete with OpenAI and others. First, Google announced that its AI research team, Brain, would merge with DeepMind, creating Google DeepMind.

It was also revealed that Google is working on a project titled “Magi.” It involves Google reinventing its core search engine from the ground up to be an AI-first product, as well as adding more AI features to search in the short term. Details are light at the moment, but the New York Times has confirmed some AI features will roll out in the US this year and that ads will remain a part of AI-powered search results.

Finally, Google announced Bard had been updated with new features to help you code. Bard can now generate code and help you debug code. As these updates rolled out, reporting from Bloomberg revealed that some Google employees think the company is making ethical lapses by rushing the development of AI tools, particularly around Bard and the accuracy of its responses.

What problems arise when training AI tools?

AI companies like OpenAI are coming under fire for how AI tools are trained, and social media channels are pushing back. Reddit, which is often scraped to train language models, just announced it would charge for API access, in order to stop AI companies from training models on Reddit data without compensation. Additionally, Twitter recently made a similar move. And Elon Musk publicly threatened to sue Microsoft for, he says, “illegally using Twitter data” to train models. Other companies are sure to follow suit.

An investigative report by the Washington Post recently found that large language models from Google and Meta trained on data from major websites like Wikipedia, The New York Times, and Kickstarter. The report raises concerns that models may be using data from certain sites improperly. In one example, the Post found models had trained on an ebook piracy site and likely did not have permission to use that data. Not to mention, the copyright symbol appeared more than 200 million times in the data set the Post studied.

And if that wasn’t enough, StableLM and AI Drake were discussed!

Listen to this week’s longer-than-usual episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.


Read the Interview Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I'm very optimistic, long term, about the impact of AI. I think it's going to do amazing things for the workplace, for businesses, for society, but I think we gotta be very realistic that this is very tangible technology that is going to be infused into our daily workflows and processes, whether we want it or not.

[00:00:18] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:38] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:47] Paul Roetzer: Welcome to episode 44 of the Marketing AI Show. I am your host, Paul Roetzer, along with my co-host Mike Kaput, Chief Content Officer at Marketing AI Institute and co-author of our book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business, which, by the way, if you have read it and found it valuable, we would love if you took a few minutes to share a rating and review on Amazon.

[00:01:11] Paul Roetzer: One, we really appreciate the support, and two, it actually makes a big difference in others discovering the book and starting their AI learning journey. So again, if you've read the book, we appreciate that. And if you have a moment, leave a rating and review on Amazon. All right. This episode is brought to us by BrandOps.

[00:01:29] Paul Roetzer: BrandOps is built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks. Leaders use it to know which messages and activities will most effectively improve results. BrandOps also improves your generative marketing.

[00:01:48] Paul Roetzer: With BrandOps, your content is more original, relevant to your audience, and connected to your business. Find out more and get a special listener offer: visit brandops.io/marketingaishow. That's brandops.io/marketingaishow. And this episode is also brought to us by the fourth annual Marketing AI Conference, or MAICON, which returns to Cleveland, Ohio, this summer. Join us July 26th to the 28th for the largest and most exciting event yet.

[00:02:19] Paul Roetzer: The conference brings together hundreds of professionals. We're actually going to be announcing the agenda. I guess this is, like, the first time I'm saying this: the agenda should go live, my event team's going to kill me if I'm wrong on this, but in the next, like, 10 days. So, it's about 80%. We're going to announce all the breakout sessions and a few of the main stage sessions.

[00:02:40] Paul Roetzer: There's going to be probably another six to eight announcements over May and June on some of the other main stage items. But, yeah, just a heads-up, check out MAICON.ai in the coming weeks here, and we're going to have the agenda go live. So this year's event, we're trending towards 500.

[00:02:58] Paul Roetzer: I don't know what the number is we're going to land on. Honestly, I don't think anybody in the event world right now knows how to project how many people are actually going to show up at these events. We are trending way above what we thought we were going to attract. So I would say MAICON is getting lots of interest this year.

[00:03:13] Paul Roetzer: It's going really well in terms of attendance numbers. So we'll be at the Cleveland Convention Center right across from the Rock and Roll Hall of Fame. We would love to see you in person. Mike will be presenting, I'll be there, you know, obviously doing some talks. And, we're going to have an amazing collection of speakers and, great community of attendees for you to network with.

[00:03:32] Paul Roetzer: So, again, you can check that out. Early bird pricing is going on right now at MAICON.ai. Hope to see you there. All right. Onto the show. Again, if you're new, Mike and I pick three main topics to go through each week, kind of look at what's going on in the world of AI. We pick three things that we think are going to be most relevant and interesting to you, and then we throw a few things in rapid fire at the end, if we can't fit everything into the main topics.

[00:03:55] Paul Roetzer: All right, Mike, it's all you.

[00:03:57] Mike Kaput: All right. First up this week, Paul, we got an in-depth inside look into the story behind ChatGPT's potential impact on the way we work. And the takeaway is the way we all work is about to change in major ways, thanks to ChatGPT, and few are actually ready for how fast this change is going to happen.

[00:04:19] Mike Kaput: And we say this because, in a new TED Talk, OpenAI co-founder and president Greg Brockman showed off the power and potential of the new ChatGPT plugins. So these are the plugins that can help ChatGPT browse the internet and interact with third-party services and applications. And the results are pretty stunning, because we basically got a preview of AI agents that can take actions in the real world to help us with our work.

[00:04:45] Mike Kaput: In the talk, Brockman shows off some ways that knowledge workers will soon work hand in hand with machines, and how this is going to start changing things months or even weeks from now, not years. This is a change that appears to be here.

[00:05:07] Now, Paul, when you watched this, what were some of the capabilities from this talk that he showed off that kind of jumped out at you as notable?

[00:05:14] Paul Roetzer: Well, first, I was watching this on Saturday morning, and I just thought it was a stunning contrast to what we had seen earlier in the week from Google, when they did their 60 Minutes segment with Bard, and the Fox thing with Elon, whatever that was.

[00:05:37] Paul Roetzer: So, again, like, I don't know if people know my background, but I was a PR major. I came out of college, out of the journalism school, with a public relations degree and spent, you know, the first five years of my career doing PR, crisis communications, media relations work, and then started my own agency and we did some PR work.

[00:05:56] Paul Roetzer: We didn't do a ton, but early in my career that was a lot of what I did. And that 60 Minutes thing with Google, I mean, I love Google, Bard's going to probably be amazing, but that was really painful to watch. Like, it was just a pure PR play. Sundar doesn't do interviews, and so just the fact that he was even there doing that interview was interesting.

[00:06:18] Paul Roetzer: But then it was just like, I forget the guy's name that was doing the interview, Scott something. I don't watch 60 Minutes very often, so I don't know all the details, but it was as though he had never seen GPT-3 or 4. Like, it was this stunning, like, oh my gosh, I can't believe it's doing this thing.

[00:06:37] Paul Roetzer: And I get that he's probably trying to simulate how viewers would react to the technology, but it came across to me as like, did you not do research before you did this interview? Like, had you never seen generative AI technology? Because there was nothing that Google showed in that demo on 60 Minutes that was anything new.

[00:06:54] Paul Roetzer: It was just that Google was doing it with Bard, but, like, the tech was six months old to the average person who knew what they were looking at. So anyway, the 60 Minutes thing was just painful, and I don't even want to get into the Fox thing with Elon. It was just this dystopian joke, and it was to distract from other stuff that was happening in the world and in Elon's week, and just really bad.

[00:07:16] Paul Roetzer: So to watch this just pure demonstration of real-world technology with real-world use cases was kind of a breath of fresh air in a week of PR bluster in the industry. And, like, that was the first thing that jumped out at me. The second is, I think Greg should be in public more.

[00:07:42] Paul Roetzer: You know, I think we've talked previously about Sam Altman and, you know, some of his own personal statements about his weaknesses as, like, the front man for this whole thing and as a CEO, and, you know, maybe he lacks empathy toward the average person, in his own kind of words from the Lex Fridman interview. Whereas Greg comes across as a very intelligent, technically minded person who has the ability to kind of explain in a very simple way how things are working.

[00:08:15] Paul Roetzer: And it just felt very authentic to me. And so, if you haven't seen it yet, it's about a 15-minute TED Talk presentation followed by a 15-minute interview with Chris Anderson. And the whole thing just felt very real, and I just appreciated that after the week we had gone through.

[00:08:32] Paul Roetzer: So, with all that being said, I think that when he started getting into actually showing ChatGPT live, connected to the browser and connected to the plugins. And again, if you haven't heard our past episode or haven't, like, followed along with plugins, basically what's going to happen is, in ChatGPT, you will have the ability to have these plugins that go out to different sites and enable you to get kind of real-time data out of those sites, and then be able to take actions on them.

[00:09:01] Paul Roetzer: So he showed the example of making a menu based on an image. He had the generative AI create an image of a meal, and then used the AI to assess what the food items within that image were, and then used that to build a shopping list, and then, in theory, you could just check out and have those products available to you.

[00:09:24] Paul Roetzer: So it was this whole incredible demonstration going from just a single prompt of, you know, create a meal for me, to, I'm going to order the items that are in this image I created so I can make this meal. And that was fascinating enough. But to me, the real powerful one is probably the most simple one, which was asking Excel to analyze data. And this is the one I've been hot on for years.

[00:09:47] Paul Roetzer: It's like, whether you use a business intelligence tool to create your charts and analyze your information, or you just have it built within your marketing platforms, like HubSpot, for example, think about what it takes to get insights out of that data. In Excel, you have to learn how to run pivot tables.

[00:10:06] Paul Roetzer: Like, the average marketer has no idea how to build a pivot table. And then, even when you do, like, I used to do this all the time, we would analyze research data, and I would spend five hours relearning how to use Excel and, like, maximize its value and the different, you know, shortcuts and stuff.

[00:10:24] Paul Roetzer: And then I wouldn't have to do it again for six months. And then I'd go back and be like, man, how do you build a pivot table again? And I would spend an hour relearning how to do this stuff just to get insights out of the data. And the example he showed was you just go in and say, find, find me this anomaly, or find me this, you know, forecast this or find that.

[00:10:44] Paul Roetzer: And it just, it does it and then it builds charts on it. And then I thought the thing that was most interesting is he was looking at year over year data. And he was comparing 2023 to 2022. Well, obviously we only have four months of data in 2023. So the chart that it built showed a drop off. So he said, okay, project out the rest of 2023 based on that data.

[00:11:07] Paul Roetzer: And it actually was then able to just go through and build an updated projection, so it understood exactly what he was asking. It delivered exactly what he wanted. And that to me was the one where you start to really see how this technology is going to be infused into everything we've talked about: Microsoft 365 Copilot and Google Workspace.

[00:11:23] Paul Roetzer: And you start to see how all of the things that the average knowledge worker does are going to be assisted in a really efficient way. So again, if it's just me going in and having to relearn how to run pivot tables, and then how to run the analysis off of those pivot tables, and then building the charts, like, what he showed in a minute and a half.

[00:11:47] Paul Roetzer: If I had wanted to do that myself, as someone who has worked in Excel for 20-some years, it probably would've taken me an hour or two, because I would've had to have gone back and re-figured out how to do it all, you know, made all these decisions, figured out what I want to ask of it. And so just that is such a tangible demonstration of the kind of efficiency we're going to see gained by knowledge workers that I thought it was really, really well done and really simple.
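The run-rate projection Paul describes can be approximated in a few lines of code. This is a hypothetical sketch with invented monthly figures, not the actual method the ChatGPT plugin used:

```python
# Naive run-rate projection, similar in spirit to the year-over-year chart
# fix described above. All monthly figures here are made up for illustration.
monthly_2022 = [110, 105, 120, 115, 130, 125, 140, 135, 150, 145, 160, 155]
monthly_2023 = [120, 118, 135, 131]  # only four months of data so far

total_2022 = sum(monthly_2022)
# Assume the average of the observed months holds for the rest of the year.
projected_2023 = sum(monthly_2023) / len(monthly_2023) * 12

print(total_2022)      # 1590
print(projected_2023)  # 1512.0
```

A real BI tool would fit seasonality rather than a flat average, but even this naive extrapolation removes the artificial drop-off that partial-year data creates in a year-over-year chart.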

[00:12:15] Mike Kaput: So talk to me a little bit more about the impact we might see here, because you posted about this on LinkedIn, and you mentioned that we're going to start seeing humans and machines working together, quote, not years from now, but months, maybe weeks from now. What led you to say that?

[00:12:32] Paul Roetzer: The plugins are real.

[00:12:35] Paul Roetzer: That was what led me to say it. So I have the browsing plugin in ChatGPT right now, and I've had it for weeks, and it changes things. Like, once you can, you know, again, it's like, I always kind of backtrack, I don't know how familiar people are with ChatGPT or how much they've experimented. But the problem with ChatGPT and other AI writing tools and language models right now is they make stuff up. Hallucination is the technical term for it.

[00:12:59] Paul Roetzer: So to be able to connect to a browser where it can verify facts and cite the sources, or at least cite information that supports what was created by the language model, that's really interesting. But when you start adding these other plugins to connect to your, you know, as a marketer, you can start to imagine being connected to a social media platform or a CRM platform or your email platform, where it can go and now not only extract information in real time based on a prompt or question, but take action on your behalf based on this stuff.

[00:13:22] Paul Roetzer: So once you see this, you realize, okay, well, as soon as they start turning these plugins on, we're all going to have access to this. Or as soon as Microsoft turns 365 Copilot on for everyone who has Microsoft, or as soon as Google turns on Google Workspace AI. It's not like there's some technological breakthrough that has to occur for the average knowledge worker to have access to this technology.
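Conceptually, the plugin behavior Paul describes is a tool loop: the model emits a structured request, the host routes it to a third-party service, and the result comes back for the model to use or act on. The sketch below is purely illustrative; the registry, the `crm.lookup` name, and the stub data are invented for this example, not OpenAI's actual plugin API:

```python
# Hypothetical sketch of the extract-then-act plugin loop described above.
# All names and data here are invented for illustration.

def crm_lookup(query: str) -> dict:
    # Stand-in for a real CRM plugin endpoint.
    contacts = {"Acme Corp": {"owner": "Jane", "stage": "Negotiation"}}
    return contacts.get(query, {})

# The host keeps a registry mapping tool names to handlers.
PLUGINS = {"crm.lookup": crm_lookup}

def run_plugin_call(tool_name: str, argument: str) -> dict:
    """Route a model-issued tool call to the registered plugin."""
    return PLUGINS[tool_name](argument)

result = run_plugin_call("crm.lookup", "Acme Corp")
print(result)  # {'owner': 'Jane', 'stage': 'Negotiation'}
```

In the real plugin system, the model reads a service's API description and decides when to call it; the point here is only that no new model breakthrough is required, since the host just wires existing APIs to the model.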

[00:13:55] Paul Roetzer: The only thing that has to occur is these tech companies have to turn the features on that already exist. And if they decide to do that tomorrow, then you got access tomorrow. If it's a week from now, then you got access then. So that's why I'm saying like there's a chance it could be a few months, like maybe they're going to run into some issues with the testing and realize, okay, we gotta do some more work.

[00:14:14] Paul Roetzer: But the tech exists already. It literally is just them turning it on and saying, okay, here, the first hundred thousand users, you have access to this. And based on OpenAI's release schedule, I cannot imagine, if they're showing this and they've already talked about it a month ago, that they're going to wait six months to release this.

[00:14:31] Paul Roetzer: So I'm under the assumption, basically, any day now, like, you're going to go into ChatGPT, if you're paying your 20 bucks a month, and you're going to have plugins. And then that plugin ecosystem is going to explode. I mean, you're going to have hundreds of 'em or thousands of 'em by the end of the year, potentially, depending on how much OpenAI pushes the release of these things.

[00:14:52] Mike Kaput: So you gave a really good example of, in your own work, this, you know, thing that might take you an hour to do typically, you're able to do in a couple of minutes. Now, there's obvious productivity gains immediately from using this technology, but do those productivity gains over time impact jobs too?

[00:15:09] Paul Roetzer: I don't know how they don't. Like, I really don't.

[00:15:12] Paul Roetzer: And, you know, we talked about this at length last week, in episode 43, about knowledge work. But I really just don't understand how you could watch something like this, and then go watch, you know, the Microsoft 365 Copilot minute-and-a-half demo or the Workspace demo, and not arrive at the conclusion that knowledge work jobs are in trouble.

[00:15:36] Paul Roetzer: Like, we're just not ready for this. And we talked about this in the last episode, if you didn't listen to it, we gave kind of some ways to start moving in a positive direction, like actions you can take to try and avoid this outcome where we lose a bunch of knowledge work jobs in the near term.

[00:15:53] Paul Roetzer: But again, having talked to a lot of these CEOs, having talked to a lot of, you know, the investors who, you know, are pushing for efficiency within organizations, and when you look at the real world applications, once you can do this stuff, I really just don't see a scenario where, where it doesn't have an impact.

[00:16:15] Paul Roetzer: So again, I'm very optimistic, long term, about the impact of AI. I think it's going to do amazing things for the workplace, for businesses, for society, but I think we gotta be very realistic that this is very tangible technology that is going to be infused into our daily workflows and processes, whether we want it or not.

[00:16:36] Paul Roetzer: By the end of the year, in most industries and in most companies. And let's be real, like, they're not ready. Like, most enterprises are just not even close to being ready for what this stuff's going to enable. So again, yeah, our whole call to action last week was, you gotta accept this and take action.

[00:16:54] Paul Roetzer: Like, you cannot just pretend like it's not going to transform knowledge work. It is. And I don't even know how you debate that. Like, I dunno if I said this last time, but somebody actually called it a clown shoes opinion, like, replied to my LinkedIn thing with that. And I was like, that's great. Like, that's a very productive way to think about this. Like, good luck, if that's what you actually think.

[00:17:13] Paul Roetzer: And that was like, you know, I could sit here and listen to a very real argument of, like, a 5 to 10x productivity gain in some roles and in some industries, where you could see a massive, massive transformation. I was just making the argument, like, maybe it's like 20 to 30%. But even that is transformational.

[00:17:34] Paul Roetzer: In most organizations, 5 to 10x is really, really hard to comprehend. And maybe that isn't what you get to, like, you don't get 5 to 10x in every profession, in every industry. But it's hard to argue you won't in some. Like coding: it seems absolutely doable within coding. And I think writing is another one, where, you know, 20 to 30%,

[00:17:57] Paul Roetzer: I feel, is an insanely conservative estimate for the efficiency that can be gained in writing, for internal and external communications and things like that. But we'll see. I mean, I just still encourage people, I think it's way safer right now to move forward under the assumption that knowledge work is going to be transformed in the very near future than to pretend like it's not and be wrong. Six months from now, you're going to be in trouble.

[00:18:22] Paul Roetzer: Like I think it'd just be way better to watch this demo, go look at other real world demos, figure it out for yourself, come to that conclusion on your own. But you know, I think it's really important that people accept this tech is going to be with us very soon.

[00:18:39] Mike Kaput: So next up we have another story about how fast everything is moving.

[00:18:44] Mike Kaput: So Google just announced some huge AI updates, but some within Google are saying the company is actually making ethical lapses because they're rushing to release features and new products in competition with OpenAI and others. So first, let's run through these updates really quick, and then we'll talk about that ethical piece.

[00:19:05] Mike Kaput: The first update is that Google announced its AI research team, internally called Brain, would merge with DeepMind, which is the company that Google acquired in 2014, headed by AI leader Demis Hassabis. So this merged entity will be called Google DeepMind and will essentially unify the company's AI research and development efforts.

[00:19:28] Mike Kaput: Now, at the same time, Google also revealed, or it was revealed, rather, that Google is working on a project titled Magi, and it involves Google reinventing its core search engine from the ground up to be an AI first product. And it also includes adding more AI features in the short term to the search engine that we all use every day.

[00:19:50] Mike Kaput: Now, details are really light at the moment, but the New York Times confirmed that some of these AI-powered features will roll out in the US by the end of this year, and that ads will remain part of AI-powered search results, at least for the immediate future. Now, last but not least, Google also announced that Bard has been updated with new features to actually help you code.

[00:20:14] Mike Kaput: So, like some of the other generative AI coding tools out there, Bard can now generate code and help you debug code. So as these updates are rolling out, we got some reporting from Bloomberg that revealed that some Google employees actually think the company is making ethical lapses because they're moving too fast.

[00:20:33] Mike Kaput: And the criticism appeared to center around Bard specifically. Some employees expressed concerns that Bard's responses were just not accurate or helpful, and others actually said some of the responses were downright dangerous advice. So in one high-profile example, Bard kept providing responses on how to land a plane when prompted.

[00:20:54] Mike Kaput: And every one of those responses, if you followed it, would have crashed the plane. So it sounds like Google's internal staff are actually starting to push back a bit on the pace of change and innovation happening in the company, and seem to have some legitimate reasons for doing so. So I want to unpack these one at a time.

[00:21:14] Mike Kaput: First off, what did you think of the merger between Google Brain and DeepMind? So you've followed DeepMind for a long time, since its beginning. Why is this such a big deal?

[00:21:25] Paul Roetzer: A little history lesson for people who aren't familiar. So, Google Brain was started in 2011. It came out of the X labs at Google, and it was founded by Jeff Dean, Greg Corrado, and Andrew Ng.

[00:21:40] Paul Roetzer: So Andrew Ng may sound familiar to some people. He went on to be the chief scientist at Baidu, he founded DeepLearning.AI and Landing AI, and he's the chairman and co-founder of Coursera. So Andrew is, you know, a major player in the modern age of AI. So Google Brain is a massively influential research lab.

[00:22:06] Paul Roetzer: They are also the lab that "Attention Is All You Need" came out of, which we have talked about on this show before. "Attention Is All You Need" is the research paper from 2017 that created the Transformer architecture, which is the basis for generative AI. It's the basis for GPT, GPT-3, 4, whatever.

[00:22:24] Paul Roetzer: Two of the eight authors of that paper went on to found Character.ai, which is, you know, a language company, and Cohere, which we've talked about a number of times on the show, which is also a language model company. So Google Brain is a massive enterprise. It was there to innovate, but it was also there to, in theory, commercialize that innovation.

[00:22:43] Paul Roetzer: DeepMind was founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman. Demis and Shane are still with DeepMind. Obviously, Mustafa went on to co-found Inflection AI, which is backed by Reid Hoffman of LinkedIn and formerly PayPal. Inflection is one of the companies we've talked about that's working on AI-human interaction, the ability to give machines action.

[00:23:13] Paul Roetzer: So there's some major players. Read Genius Makers by Cade Metz if you want to really dig into the story of these different research labs; it's fascinating. Okay, so now I'll say, as someone who has watched this space closely for the last decade, I don't see how this works. Like, I could be completely wrong here, but every interview I've ever listened to with Demis, and I've probably listened to most interviews he's ever done, the reason he sold to Google was because he was a researcher, an academic researcher, and he believes in the future of artificial general intelligence, basically solving all intelligence and saving humanity.

[00:23:56] Paul Roetzer: Like, he is very clear in his mission of why DeepMind was created. And the reason he sold it to Google, which was talked a little bit about in that 60 Minutes PR stunt, which was actually probably the best part of it, the interview with Demis. He sold it to have access to their compute power, to have access to the ability to advance his mission, to solve intelligence.

[00:24:18] Paul Roetzer: Demis didn't ever do anything to be able to, like, save Google's ad business and, like, figure out how to build a better search engine. Like, I've never once heard him talk about any motivation to do any of the things that right now are critical to Google's near-term future. So it just seems like a forced marriage of two research labs that, from everything I've ever heard, don't even really work together.

[00:24:50] Paul Roetzer: Like, they don't. They're not complementary, necessarily. It just seems like they're being forced into this arrangement because Google's in this really crazy spot where, all of a sudden, they have to solve for some really challenging things on the commercial side of the business. And so I have no idea what the agreements are, how this stuff's going to be structured, but just from, like, a 30,000-foot view, it just seems to me like six to 12 months from now, we're going to read some stories about how this is not working as they had hoped.

[00:25:21] Paul Roetzer: So yeah, that's my first thought. My second is, I mean, to me, DeepMind and OpenAI are obviously two of the most important research labs ever. You could maybe throw Meta in there, and Google Brain, certainly to a degree. So I think you are consolidating some of the brightest minds in human history, and you're putting them together, and maybe something magic comes out of this, I hope.

[00:25:50] Paul Roetzer: But I also really hope that Demis's vision and mission for DeepMind doesn't get lost in this crazy competition that has been created all of a sudden, because I've said before, I think Demis is going to end up being one of the most important people in human history, like, what he's working on solving there and what they've already done with AlphaFold and the predicting of proteins.

[00:26:14] Paul Roetzer: They're working on solving human biology. They want to get into climate change, nuclear fusion, and clean energy. Like, they're working on some amazing stuff at DeepMind, and I just hope it doesn't get lost.

[00:26:34] Paul Roetzer: And I gotta think, if it does, then this falls apart really fast, because again, he's very, very clear that that is what he's working on, and it's not this commercialized stuff. So we'll see. Should be interesting.

[00:26:48] Mike Kaput: It will definitely be interesting. And so, you know, on the second part of this, there's obviously not much information yet on the Magi updates, but it sounds like something is at least coming.

[00:27:00] Mike Kaput: Do you see or predict a major change coming in Google search, and ripple effects on marketers and business people that rely on search?

[00:27:09] Paul Roetzer: Again, I have no inside information on this, but the more I think about this and consider what's happening, I don't know that there doesn't come a day where Microsoft regrets what they've done.

[00:27:26] Paul Roetzer: If there was one company I would never bet against when it comes to AI, it's Google. The researchers, the history they have with AI, the data they have: you know, if they choose to build a multimodal engine, they could train it on YouTube videos and all the other proprietary data they have access to.

[00:27:50] Paul Roetzer: I just feel like Google got stunned; they got hit first and they weren't ready for it. You hear this analogy of a wartime kind of company, this idea that you're really fine-tuned for a highly competitive environment where it's a winner-take-all kind of feel.

[00:28:15] Paul Roetzer: And everybody's putting their best stuff forward. That's not Google. They have been the dominant player with no real challenge: insane innovation, but they were allowing DeepMind to lose a billion dollars a year because they were working on this amazing future stuff.

[00:28:33] Paul Roetzer: And it was just going good. And then somebody shows up and takes a shot at 'em, and it sort of stuns 'em, and they're not designed to react quickly. There's probably too much middle management. The ethics stuff we'll get into: there are these layers of ethics, where basically the ethics teams are pretty much there to say, nope, don't release it yet.

[00:28:55] Paul Roetzer: Don't release it yet, don't release it yet. And then OpenAI's like, screw it, we're releasing it. And then it's like, well, but our ethics team says we can't release it yet, and OpenAI did anyway. Now what do we do? And so I feel like a lot of what Google's been doing right now is just getting out in the market saying, hey, we're working on stuff. That's what the 60 Minutes thing was.

[00:29:15] Paul Roetzer: Should they have done it? No, it was terrible. But they didn't want to wait till May 5th, or whenever their developer conference is, to say, here's what we're doing, because shareholders are saying, what are you doing? You can't wait months before you start talking about this. Somehow Apple gets away with it; nobody's grilling Apple about their AI, and Apple will just show up, do some amazing stuff, and go back into a shell for six months.

[00:29:38] Paul Roetzer: But it doesn't work that way for Google, because it's direct competition from Microsoft and OpenAI. So I feel like they may have woken a sleeping giant, and I think before 2023 is up, whether it's Magi or whatever they're going to call it, once Google gets their stuff together, just look out, from a competitive perspective, but also in terms of what we're going to have access to.

[00:30:05] Paul Roetzer: And again, that's why I keep telling people: if you think knowledge work is safe just because it feels better to you to believe that, fine. But I'm talking about tech that exists right now, where we're already starting to see disruption. People aren't even considering: oh my gosh, what if Google figures out a safe way to come out with something even more powerful?

[00:30:29] Paul Roetzer: Or they start doing what they're doing with Bard, where it's like, okay, now it can code. They have a lot of data about how to do this stuff that other people don't have, and if they can find ways to securely release it, I just think it's going to be a really fascinating 2023 and beyond. They're not going to go down quietly.

[00:30:50] Paul Roetzer: I just would not bet against Google, at some point here, figuring this stuff out.

[00:30:55] Mike Kaput: Yeah. And it sounds like, as they make these moves, based on some of the comments around ethics that their team has had, there's definitely some tension with moving into that wartime footing. Do you agree with those critiques that there have been ethical lapses, and is this just a Google problem, or is this something every AI company encounters?

[00:31:20] Paul Roetzer: I don't even think it's debatable. I don't think Google would debate it. I was listening to an interview, I forget whose big tech podcast it was, but they were talking about an internal document from December that got leaked, a Google doc that said they knew there were issues with copyright in the training of these language models.

[00:31:42] Paul Roetzer: They haven't publicly acknowledged it, but they knew they were probably going to get sued based on this. It was one of the reasons they weren't moving forward, not the only reason, but a hundred percent. They know these things are dangerous, but OpenAI released it anyway.

[00:32:00] Paul Roetzer: And OpenAI's belief, if you listen to the interview with Greg Brockman, is: we know they're dangerous, but we feel like it's way better now, in these early phases, to put them out there, find the dangers, and fix the dangers, versus waiting till the tech is three years more advanced and then throwing it out into the world.

[00:32:18] Paul Roetzer: And saying, here you go, and all of a sudden everybody's like, what in the heck? We don't even need knowledge work jobs anymore. We don't need writers. We don't need, whatever that future state is. And I'm not saying that's what's going to happen, but that's OpenAI's feeling: as this tech gets more and more advanced, the impact is going to be even greater.

[00:32:36] Paul Roetzer: So we would way prefer to put it out into the world now. Yes, it's not going to be perfect. Yes, it's going to make mistakes. It may have some ethical lapses. But it's better that we learn and focus on how to improve it than just release the end product three years, five years from now, whatever it is.

[00:32:52] Paul Roetzer: Whereas Google couldn't take that approach. If they made a mistake, you know, it cost them share. You see the same thing with Meta: they've released a couple of things they had to pull back. Microsoft released that Tay bot years ago; they had to pull it back. So for whatever reason, it took OpenAI, who, I don't know, just didn't care, or just didn't have as much to lose, to put this product out into the world.

[00:33:14] Paul Roetzer: And all the other labs had these ethics teams in place to prevent harm from being done. It was just a barrier, and it was probably the right barrier. But now the question is, and I don't want this to come across as uncaring, but do they have the luxury of adhering to all those same ethical guidelines they used to?

[00:33:38] Paul Roetzer: And the answer, in this environment, is that it doesn't appear that way. I wouldn't want to be the CEO making those decisions. But the reality is, the CEOs that have been running these major tech companies did so when things were good, when everything seemed to keep going up and to the right, growth was good, competition wasn't that stiff.

[00:34:00] Paul Roetzer: You weren't going to have competitors coming out of nowhere. And those companies, you know, were built on culture, built on doing everything the right way, on having this amazing brand. And sometimes that's just not the kind of company you need in an environment like this.

[00:34:18] Paul Roetzer: Like, you know, I think some of these tech companies that never had layoffs before, and now they lay off 10%, 15% of the workforce, they may have to lay off another 25% of the workforce. And I'm not saying go to the Elon extreme at Twitter, but somewhere between what we historically had with these tech companies and what Elon did is probably the sweet spot.

[00:34:41] Paul Roetzer: And in that environment, a lot of these things that you did previously just aren't going to hold up, whether they should or not. But yeah, there's no doubt that Google's now going to have to do things that six, 12 months ago would've gotten blocked from happening because of, you know, the ethical policies and responsible AI policies they had in place. I think they're just shifting what they're willing to do right now due to the competitive environment, for better or for worse.

[00:35:13] Mike Kaput: So our third main topic today is also related to some of the moves that have been made in the interest of competition. AI companies like OpenAI, Microsoft, Google, Meta, et cetera are starting to come under fire for how their AI tools have been trained. One high-profile example of this that happened recently is that Reddit just announced it would charge for API access.

[00:35:39] Mike Kaput: So the ability to connect to their services from third-party apps, in order to use their data in various ways, will now be charged for. And one of the main motivations here is that Reddit is one of the big sites often scraped to get data to train language models. So they're trying to stop AI companies from training models on Reddit data without compensating Reddit.

[00:36:04] Mike Kaput: And they've been very clear and vocal about that. Twitter recently made a very similar move: they started charging for API access, and Elon Musk has publicly threatened to sue Microsoft for, he says, quote, illegally using Twitter data to train models. On top of all this, an investigative report by the Washington Post just came out that found that large language models from Google and Meta trained on data from many major websites.

[00:36:32] Mike Kaput: Now, historically, we just weren't sure exactly what sites were being trained on. They found that websites like Wikipedia, the New York Times, Kickstarter, and many, many others were used to train these models. Now, here's the issue: it's not necessarily always a problem to be using a website to train a model, but the report found that there was data being used from certain sites that could create some serious issues.

[00:36:55] Mike Kaput: In one example, the Post found that these models had trained on data from an ebook piracy site. So they're training on books that they have access to, but probably don't have the permission to use. Additionally, the copyright symbol, the C within the circle, appeared more than 200 million times in the data that the Post studied, basically pointing the finger at and confirming the fact that these models have, in some cases, used copyrighted data in order to train their outputs.

[00:37:33] Mike Kaput: So let's first talk about these companies restricting API access. What kind of impact do you see this having on companies that develop these models?

[00:37:43] Paul Roetzer: They're going to pay more for the data. I mean, the companies that are the source of the training data want to get paid, and it's a very logical play.

[00:37:53] Paul Roetzer: So if you have data that is that valuable to them, then it makes sense that you want to get paid for it. For the average corporate brand or blog, this probably isn't going to change how you do what you do. But you know, in theory, as a marketer, as a company, you've created a bunch of content, you put it out there for free, and you allow Google to index it.

[00:38:19] Paul Roetzer: So it shows up in the search results, and the exchange, the value exchange, the consideration from a legal perspective, is: we're going to send you traffic for your data. Now the question becomes, and this is the bigger thing all of us are trying to figure out: well, if the language model is used to build a chat interface that just answers the question, and they're using our data to answer that question, but no one ever comes to our site anymore, where's the value exchange?

[00:38:47] Paul Roetzer: That's the great debate about the future of search and SEO. We're still creating all this content, and nobody's finding it from organic search. But we're not going to solve that on this podcast episode. So yeah, I think the basic takeaway here is: if you've got proprietary data, you're either going to train your own model, like Quora did.

[00:39:05] Paul Roetzer: You're going to train a language model of your own, or you're going to charge for the data, or maybe it's both. But yeah, I think it's a natural outcome that these companies with the data want to get paid for the data that's used to train the models.

[00:39:20] Mike Kaput: So it's not the first time, as they train models on this data, that we've heard concerns around copyright. But it does seem like we are confirming, or proving, that at least some of these models are being trained on copyrighted material.

[00:39:34] Mike Kaput: Now, I mean, realistically, what could happen here? We've seen lawsuits, and we'll probably see more, but are companies going to be able to stop these models from training on this data?

[00:39:46] Paul Roetzer: I don't think so. I'm not a lawyer. I took business law in college; that's about the extent of my lawyering.

[00:39:53] Paul Roetzer: I've paid a lot of legal bills through the years for IP-related stuff, so I've spent enough time in my career working on intellectual property to be, you know, educated in the space, but certainly not a lawyer. My guess has always been they're going to pay massive penalties at some point.

[00:40:13] Paul Roetzer: Like, at some point it's going to kind of come down to this. But I think the key for you as a marketer or business leader, and I've actually heard this come up in some recent conversations with organizations, is your generative AI policies and your responsible AI principles in your company.

[00:40:34] Paul Roetzer: You have to address the fact that you may be using technology that was built illegally, and you have to make sure you're okay with that. You're not going to get in trouble. So if you go use OpenAI's GPT-4 in your company, and you're using it to generate ideas and outlines and some drafts for things, or whatever it is, are you going to get sued because you used GPT-4?

[00:40:58] Paul Roetzer: Again, not legal advice, but I can't fathom a scenario where that occurs. However, they may get sued, and it will likely be found that they did the thing I just referenced: that, like Google, they knew copyright was going to be an issue because their models were probably trained on some stuff that shouldn't have been trained on.

[00:41:18] Paul Roetzer: So are you going to choose not to use large language models if it becomes obvious that they were in fact built on some illegal data? Again, my guess is no; I don't see this changing the trajectory. I do think it's going to have an impact in Europe. We talked already about Italy, and yeah, I think others are going to follow on GDPR. One of the issues I saw brought up last week (I don't know if it was a tweet, so unfortunately I'm sorry I can't cite it) is that in Europe, I think even related to GDPR, you have to be able to request your data back.

[00:41:59] Paul Roetzer: Or request that it not be used. Well, if it's trained on something I put out into the world, they can't go into that language model and get Paul's training data out of it. So the fact that they can't adhere to the law might be a problem. So I do think that the way they build these models is going to have to evolve.

[00:42:17] Paul Roetzer: I could see a scenario where new regulations make it effectively illegal to train them the way they have. Maybe they pay a fine for past issues, maybe they don't. But I do think there's going to be a scenario where they have to reimagine how these language models are trained. And I think the answer in the near term, for most corporations, is going to be that you train custom versions of these models, trained largely on your own data, moving forward.

[00:42:47] Paul Roetzer: Now, the foundational model might be an issue. So, for example, one of the ways you could see this being solved is through one of the things we're going to talk about: StableLM. These open source foundational language models may have, and likely do have, the exact same issue as these other ones.

[00:43:09] Paul Roetzer: So I would say, as a marketer or business leader who's listening to this podcast, there's no real action to take here other than an awareness that the models you're going to be using likely have some legal cases, which we're going to hear about over the next 10 years, around how they're trained.

[00:43:30] Paul Roetzer: I don't think it's going to affect what you're going to do day to day with them, other than the fact that, either way, you are likely going to be building custom versions of these models for your company, where you're really confident that all of your proprietary data is going to remain yours and not go into some future version of the language models.

[00:43:51] Paul Roetzer: So your data's not training their models. I think that's going to happen with or without this other stuff playing out in the courts. But lots of lawyering ahead is what I would say.

[00:44:03] Mike Kaput: All right. We've got a couple quick rapid-fire topics, and you alluded to the first one: Stability AI, the company behind the Stable Diffusion image generation model.

[00:44:15] Mike Kaput: They just released an open source language model called StableLM. So here's how they put it. They say, quote: "With the launch of the StableLM suite of models, Stability AI is continuing to make foundational AI technology accessible to all. Our StableLM models can generate text and code and will power a range of downstream applications."

[00:44:36] Mike Kaput: "They demonstrate how small and efficient models can deliver high performance with appropriate training." So they're basically releasing a powerful version of a language model. But for anyone unfamiliar with this space, the open source nature of this is a big deal. It means anyone can access and use the models for their own purposes, versus a company like OpenAI completely owning the access to and the development of the model.

[00:45:05] Mike Kaput: Paul, how important was this announcement to you?

[00:45:07] Paul Roetzer: Yeah, I think it's a big deal, because Stability AI is a major player to keep an eye on moving forward. I mean, they've been a major player on the image generation side with Stable Diffusion, but it's been obvious they were going to be a player in the language model space as well.

[00:45:22] Paul Roetzer: We did talk about Amazon Bedrock on the last episode, and this is one of the models that'll be available through Amazon; you'll be able to go in and get their model. So yeah, Stability AI is a company to keep an eye on. And going back to the copyright issue, they're getting sued right now over their image generation technology.

[00:45:39] Paul Roetzer: They're the one that got caught reproducing Getty Images watermarks in their image generation outputs. So I would say they're interesting from an innovation perspective, and they're also interesting from a legal perspective, because they are definitely pushing the envelope on things that I think will be challenged legally in the next year or two.

[00:46:08] Paul Roetzer: And they're unapologetic about it. So they're a really interesting company. I could see them being painted as the villain in a number of cases moving forward, but they don't seem to care. Emad, the CEO, actually seems to sort of relish the role they're currently playing of challenging the norm.

[00:46:33] Paul Roetzer: I'm not condoning it; I'm just stating an observation that they're a company you're going to hear a lot more about, for a lot of different reasons.

[00:46:44] Mike Kaput: Well, it really seems like, with our topics today, legal action and controversy are a theme running through here, because our last topic today is, I'm just going to throw this out here, AI Drake.

[00:46:56] Paul Roetzer: I've seen it spelled like Drake with the AI in it: D-R-A-I-K-E.

[00:47:02] Mike Kaput: So an anonymous TikTok user, someone that was not some big person with a big following, used AI to generate a fake song called Heart on My Sleeve, and it is a jaw-dropping, realistic, completely AI-generated song between a simulated version of the rapper Drake and the artist The Weeknd.

[00:47:24] Mike Kaput: This song got to 10-plus million views very, very quickly before being taken down across a variety of platforms, because the song drew a very negative response, both from Drake, who posted on Instagram about it, and his record label, Universal Music Group (UMG), which is one of the biggest record labels out there.

[00:47:47] Mike Kaput: In addition to getting the song taken down, UMG is now asking Spotify and Apple Music to block AI companies from training models on their catalogs. Now, as of right this second, UMG has not taken formal legal action here; I don't even know if they know the person responsible. But this seemed like a pretty big deal.

[00:48:08] Mike Kaput: It did. It was one of those stories: because it's Drake, it got everyone paying attention. The response was very visceral, very immediate. What were your thoughts when you saw this?

[00:48:17] Paul Roetzer: Inevitable. I mean, it was obvious we were going to land here very quickly, and now we're here. The backlash is shocking to me, even though I expected it.

[00:48:31] Paul Roetzer: The backlash from the people who are upset that they're restricting other artists from, you know, taking other people's stuff and building these synthetic versions. I don't know; I'm going to be really interested to see how this one plays out. I mean, again, from someone with a PR background, there's the PR side of this for Drake, and the risk that a lot of people, especially the younger generation, see nothing wrong with this and maybe thought the song was awesome and want access to it, and then Drake kept them from getting access to it. Does that actually hurt him at all?

[00:49:10] Paul Roetzer: From an audience perspective, I don't know. I haven't had a lot of time to think about this one and the ramifications downstream, but certainly from a legal perspective it's like, okay, here we go. These are the kinds of things it's going to take to set legal precedent around the taking of copyrighted material.

[00:49:31] Paul Roetzer: And so I think that's why we wanted to make sure we at least gave this a nod in the rapid fire this week. It's a developing story, and I could definitely see it being a topic we come back to again and again, because I could see it really accelerating the legal cases around some of the copyright issues we've been talking about on the show for the last couple months.

[00:49:56] Mike Kaput: Yeah, definitely. Kind of a weird and fascinating story. I think most of us who are old enough remember the legal battles over Napster and piracy sites in the early two thousands. But this is just a whole different animal, with completely new creative coming out from artists.

[00:50:14] Paul Roetzer: Did you listen to the song, by the way? You and I are both hip hop fans. Did you?

[00:50:18] Mike Kaput: I listened to parts of it, and I've also seen on TikTok, quite often now, they're doing, like, Tupac and Biggie covering songs, doing new songs. And I was like, this is so crazy.

[00:50:30] Paul Roetzer: It's going to blow up for both of them. It's going to be a Whac-A-Mole game.

[00:50:33] Paul Roetzer: They're going to knock out, like, the Drake one, and you're going to have a thousand other ones. It is going to be such an interesting time.

[00:50:43] Mike Kaput: And we will continue to cover it, because that's all we've got to bring to you today, Paul, but there's plenty, plenty more going on in the world of AI.

[00:50:50] Mike Kaput: Really appreciate you, as always, kind of unpacking everything for us. Thanks again.

[00:50:55] Paul Roetzer: Yeah. And thanks to all of our listeners. Again, kind of like I mentioned with the book at the beginning, if you're loving the podcast (it's amazing, all the people that reach out to me on LinkedIn every week, podcast listeners I don't know personally but am getting to know through the podcast community), if you have a chance, leave a five-star review on Apple or Spotify.

[00:51:15] Paul Roetzer: We'd love to have your support in building this podcast audience and continuing to deliver value there. And just spread the word if you're enjoying it. We hope we're bringing up a lot of important conversations that maybe aren't happening otherwise. And the more people on the corporate side, in business, maybe nonprofit, wherever, whatever your career path or business is, the more we can get these conversations seeded within those organizations, the better chance we have of really advancing AI in a positive way in the business world and society.

[00:51:42] Paul Roetzer: So when people ask me what they can do: give a review, give a rating, and help us get this podcast further discovered, so we can spread the word and all work in the same direction toward the responsible application of AI.

[00:51:59] Paul Roetzer: So thanks to everybody for listening. We will talk to you again next week.

[00:52:04] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:52:25] Paul Roetzer: Until next time, stay curious and explore AI.
