This week’s Marketing AI Show focuses in large part on the 2023 State of Marketing AI Report, published Thursday, August 24, 2023. Paul and Mike break down key findings and discuss their takeaways. Generative AI is also in the news, from pirated books in training data to fine-tuning of large language models.
Listen or watch below, and scroll down for show notes and the transcript.
This episode is brought to you by MAICON, our Marketing AI Conference. Main stage recordings are now available for purchase, and a $50 discount code is mentioned at the start of the show.
Watch the Video
00:03:52 — Pirated books are powering generative AI
00:12:14 — The 2023 State of Marketing AI Report findings
00:24:16 — GPT-3.5 Turbo fine-tuning and API updates
00:30:53 — Nvidia’s blowout earnings and its ripple effects
00:33:57 — Meta launches its own AI code-writing tool, Code Llama
00:38:16 — Google releases August 2023 broad core updates
00:43:01 — Duet AI is now available in Google Workspace
00:45:57 — Google launches TextFX
00:49:54 — Musk demonstrates Tesla FSD 12 via live stream
00:55:57 — LinkedIn introduces AI-powered post-generation feature
Pirated books are powering generative AI
The Atlantic just released a major piece of investigative journalism showing that popular large language models, like Meta’s LLaMA, were trained on pirated books—a practice previously alleged by multiple authors in multiple lawsuits against AI companies.
The article states, “Upwards of 170,000 books, the majority published in the past 20 years, are in LLaMA’s training data. . . . These books are part of a dataset called “Books3,” and its use has not been limited to LLaMA. Books3 was also used to train Bloomberg’s BloombergGPT, EleutherAI’s GPT-J—a popular open-source model—and likely other generative-AI programs now embedded in websites across the internet.”
According to an interview in the story, Books3 appears to have been created with altruistic intentions. The article’s author, Reisner, spoke with Shawn Presser, the independent developer behind the dataset, who said he created it to give independent developers “OpenAI-grade training data,” fearing that large AI companies would otherwise hold a monopoly over generative AI tools.
The 2023 State of Marketing AI Report findings
Marketing AI Institute, in partnership with Drift, just released our third annual State of Marketing AI Report. The 2023 State of Marketing AI Report contains responses from 900+ marketers on AI understanding, usage, and adoption. In it, we’ve got tons of insights on how marketers buy AI technology, the top outcomes marketers want from AI, the top barriers they face when adopting AI, how the industry feels about AI's impact on jobs and society, who owns AI within companies, and much more. Paul and Mike talk about some of the most interesting findings from the data.
OpenAI announces the ability to fine-tune GPT-3.5 Turbo
OpenAI just announced a big update: You can now fine-tune GPT-3.5 Turbo for your own use cases. This means you can customize the base GPT-3.5 Turbo model so it performs much better on tasks specific to your organization. For instance, you might fine-tune GPT-3.5 Turbo to better understand text that’s highly specific to your industry or business. You might also fine-tune models to sound more like your brand in their outputs, or to remember specific examples or preferences when producing outputs, so you don’t have to spend resources and bandwidth on highly complex prompts every time you use a model. Notably, OpenAI says: “Early tests have shown a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks.” They also note that fine-tuning for GPT-4 will be coming this fall.
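For readers curious what this looks like in practice, fine-tuning starts with preparing training examples in OpenAI's chat-format JSONL file. The sketch below is illustrative only: the brand name, example text, and file path are made up, and the SDK calls for uploading the file and launching the job (shown in comments) follow the Python SDK at the time of the announcement, so check OpenAI's fine-tuning docs for the current interface.

```python
import json

# Each training example is a chat-format record: a system message defining
# the desired behavior, a user prompt, and the ideal assistant reply.
# (The brand voice and example text below are made up for illustration.)
def build_example(system, user, assistant):
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

def write_training_file(examples, path="training_data.jsonl"):
    # Fine-tuning data is uploaded as JSON Lines: one JSON object per line.
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
    return path

brand_voice = (
    "You are a marketing assistant that writes in Acme Corp's brand voice: "
    "short, direct, no jargon."
)
examples = [
    build_example(
        brand_voice,
        "Draft a one-line teaser for our new analytics dashboard.",
        "See every customer signal in one place. Meet Acme Insights.",
    ),
    build_example(
        brand_voice,
        "Summarize our refund policy in one sentence.",
        "Not happy? Tell us within 30 days and we'll refund you, no questions asked.",
    ),
]
path = write_training_file(examples)
print(f"Wrote {len(examples)} examples to {path}")

# Uploading the file and starting the job goes through OpenAI's API
# (sketch only; call names follow the SDK at the time of the announcement):
#   file = openai.File.create(file=open(path, "rb"), purpose="fine-tune")
#   job = openai.FineTuningJob.create(training_file=file.id,
#                                     model="gpt-3.5-turbo")
```

In a real project you would supply dozens to hundreds of such examples, since the point of fine-tuning is to bake the brand voice and preferences into the model so prompts can stay short.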
Plus…the rapid-fire topics this week are interesting, so stick around for the full episode.
Links Referenced in the Show
- The Authors Whose Pirated Books Are Powering Generative AI
- The 2023 State of Marketing AI Report
- GPT-3.5 Turbo fine-tuning and API updates
- Nvidia's blowout earnings ripple across tech, highlighting winners and questions
- Elon Musk demonstrates Tesla FSD 12 in a live stream on X, not a single line of code is used to build this non-beta version of Autopilot
- Meta launches own AI code-writing tool: Code Llama - The Verge
- Google releases August 2023 broad core update
- Duet AI in Google Workspace
- TextFX with Google
- LinkedIn introduces AI-powered post generation feature to help save time | Technology News - The Indian Express
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: It's amazing how they do what they do. And so when you start to look at this and you realize the science behind what rappers do and what poets do, it's actually intriguing. And so as someone who I would consider a relatively creative writer, I wouldn't think to do this. And so for me it's more about, like we've talked about, AI's potential as a true
[00:00:21] Paul Roetzer: augmentation of human ability. This is the kind of thing I want to see more of.
[00:00:27] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.
[00:00:47] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.
[00:00:57] Paul Roetzer: Welcome to episode 61 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. Good morning, Mike. Afternoon. Morning. Afternoon. I left vacation early to come and do this. I don't even know what day it is or what time it is. Yeah, it is Monday, August 28th. Mike and I are recording around noon Eastern time.
[00:01:20] Paul Roetzer: That's more for my own personal reference as to where we are in time. So we are back for our weekly episode with our three big topics and a bunch of rapid fire. This episode is brought to us by MAICON 2023, which happened already. If you missed it, end of July, you can now get all the main stage and some featured breakout sessions.
[00:01:42] Paul Roetzer: 17 sessions in total on demand. So if you haven't checked that out, or if you were there and want to relive it, it is MAICON.ai, m-a-i-c-o-n.ai. There is a button that says Buy MAICON 2023 on demand. And while you're there, you can get your tickets for MAICON 2024, which is going to be September 10th to the 12th, 2024 in Cleveland.
[00:02:07] Paul Roetzer: And tickets are already selling at quite a brisk pace. I'm pleasantly surprised with how much interest there already is in the 2024 event, which is awesome to see. So, if you grab the MAICON on demand from 2023, AIPOD50 is your promo code. It'll get you $50 off. So again, it's MAICON.ai and you can check out all of the events from this past year and look forward to next year's.
[00:02:33] Paul Roetzer: All right, Mike. Let's get started. We got some good stuff today, some interesting topics today to cover in our main topics as well as the rapid fire. So let's get going. All right,
[00:02:42] Mike Kaput: So first up, the Atlantic just released a major investigative piece that proves that popular large language models like Meta's Llama have been using pirated
[00:02:54] Mike Kaput: books to train their models. Now, this is something that has been alleged by many different authors in several different lawsuits against AI companies, but it looks like we now have proof that this has been happening. So the article says, quote, upwards of 170,000 books, the majority published in the last 20 years, are in Llama's training data.
[00:03:16] Mike Kaput: The books are part of a dataset called Books3, and its use has not been limited to Llama. Books3 was also used to train Bloomberg's BloombergGPT, EleutherAI's GPT-J, which is a popular open source model, and likely other generative AI programs now embedded in websites across the internet.
[00:03:38] Mike Kaput: What's really fascinating is they also interviewed the creator of this Books3 dataset of pirated books, and it appears, at least according to the founder, that Books3 was created with altruistic intentions. So they interviewed the independent developer named Shawn Presser who created this dataset, and he said he created it
[00:04:01] Mike Kaput: to give independent developers OpenAI-grade training data. He's afraid that large AI companies are going to develop a monopoly over generative AI tools. So he says he created the dataset in order to give the little guy ways to train their own gen AI tools. Now, obviously this doesn't alleviate the concern that these hundreds of thousands of books are under copyright claims.
[00:04:26] Mike Kaput: So Paul, as we dive more into this topic, do these revelations materially impact the copyright concerns and the lawsuits currently being levied by authors against these major AI companies?
[00:04:41] Paul Roetzer: I don't know. I mean, the one journalist who did this, it was quite an impressive feat of investigative journalism.
[00:04:50] Paul Roetzer: So if you haven't, read the article to see how the author did this, like the lengths they had to go to to discover what books were in there. Because what you realize is that they either went to great lengths to hide the fact that these books were there, or somehow the way that these things are generated into the dataset
[00:05:12] Paul Roetzer: stripped out a bunch of identifying information. But he had to go through this very specific process to actually compile this list and figure out which books were in there, which authors were in there. Again, either by design or by accident, it was made to appear as though it was covered up, how this was done.
[00:05:30] Paul Roetzer: So pretty crazy story, even just the investigative journalism side of this. There was a quote in there, though, to talk about the copyright side. Rebecca Tushnet, a law professor at Harvard, stated that the law is unsettled when it comes to fair use involving unauthorized material, with previous cases giving little indication of how a judge might rule in the future.
[00:05:53] Paul Roetzer: So the article wasn't about, like, this is going to sway one way or the other. It was really just presenting data and facts and kind of addressing the fact that we just don't know. And the key argument a lot of these companies are going to make is the fair use argument, from my understanding of it. And it doesn't seem like it's clear whether that'll be a justifiable argument or not. But again, it's just weird that the companies seem to have gone to such great lengths to not disclose that they used this data. They know they used it, and if they felt
[00:06:29] Paul Roetzer: okay about it morally and ethically, I don't think they would've hidden it so much. But the other thing that came to mind for me when I was reading this was, like, you and I have talked about this before. If GPT-4 and Llama and Claude and all these models train on just the internet, they go and consume all this content.
[00:06:48] Paul Roetzer: All of our, you know, marketing branded content, our, you know, our company's blogs, they go and consume Wikipedia and CNN and all these sources. But there's a bunch of really junky content on the internet too. So maybe it's reading stuff on Reddit that isn't well written or it's reading all these sources.
[00:07:08] Paul Roetzer: And if you think about, like, all this text on the internet, the average of it probably isn't that great. Like, there's probably a lot of subpar writing on the internet. So how did it learn to write so well? And so if you start thinking about the fact that, you know, some of these models obviously fed it really well done books.
[00:07:32] Paul Roetzer: Then if you're developing the models and you are weighting professionally published books heavier than you are Reddit community boards, it starts to make sense where you train these models to write like the best of the human writers. And so to do that, you need the best content. And so if you take 170,000 published books, you're going to assume that that writing, that training set, is above standard for the rest of the content you find on the internet.
[00:08:04] Paul Roetzer: And it starts to make sense how they could build these things to actually write like the best humans. And I'm not saying GPT-4 used this data. Maybe they did, maybe they didn't. I don't know. But it just starts to make more sense how these things learned to be so good at writing.
[00:08:24] Mike Kaput: Where do we even go from here?
[00:08:26] Mike Kaput: Like, is it possible to resolve this? I mean, we've proven 170,000 mostly copyrighted books were used. There have been many high profile authors, including people like Stephen King, who are incensed that their books have been used to train these models. There are lawsuits happening right now. The genie is kind of out of the bottle, though, when it comes to the models being trained and out in the wild.
[00:08:49] Paul Roetzer: Right. I think so. I mean, again, I keep coming back to: it seems like a lot of these models used content that was questionable at best, whether they were allowed to use it. They may claim fair use moving forward, but they seemed to be going out of their way to hide whatever it was they used, because they either know it was illegal or they think it might have been illegal.
[00:09:15] Paul Roetzer: So it seems like, if nothing else, these companies were very aggressive in using stuff that might not have been allowed to be used. And I just think that the future ones won't do that. Like, they know they're being watched closely now and that future laws and regulations may catch up to them.
[00:09:40] Paul Roetzer: And I think the play moving forward is to just try and license the stuff. So we talked in past episodes about the licensing issues with the New York Times, and OpenAI is trying to negotiate with them. Google already has the same thing. I was saying before, like, imagine if
[00:09:58] Paul Roetzer: you can tune these models to heavily weight examples of content from the New York Times. It's exceptional writing. These are professional journalists. So if your models learn to write like the authors of the New York Times, it's going to be better than just teaching it on some general content on the internet, like corporate blogs and stuff, for example.
[00:10:16] Paul Roetzer: So I assume that the play moving forward is to try and license the best examples of writing possible, including books, I would imagine, and then to train 'em, you know, in a more ethically responsible way moving forward. So I don't know, though. Like, I mean, I just kind of assume this article is probably going to pop up in a lawsuit somewhere as a reference of, we know what you did and here's the examples of what you did.
[00:10:42] Paul Roetzer: So I don't know, it's just a space that's going to be so fascinating to watch. We're not IP attorneys. We've said this a million times. I've even put stuff like this up on LinkedIn, and people who are IP attorneys will comment on it and they offer wonderful insights, but it's really clear still that no one knows how these cases will play out.
[00:10:59] Paul Roetzer: It may just depend on which judges rule on the case. I don't know. So I'm curious, with
[00:11:06] Mike Kaput: you being an author yourself, like, how do you feel about the tension here between the fact that, you know, these data sets have created useful generative AI tools that we all use, but also the fact that your books are probably part of it?
[00:11:23] Paul Roetzer: I honestly don't know how I feel personally about this. Is this one of those topics where we just comment on it, and I don't step back and say, like, would I actually be upset? 'Cause someone did ask me on LinkedIn, just curious, like, was your book in it? It's like, I have no idea if my book was in it or not.
[00:11:37] Paul Roetzer: Or any of my books, or either of your books. Like, I don't know. And I don't have a strong opinion one way or the other, honestly, at this point. Probably 'cause I haven't studied the law enough to know if fair use is a viable argument here or if it's going to be upheld as a viable argument.
[00:11:57] Paul Roetzer: So I don't want to like take the easy way out here, but I actually don't have a strong feeling one way or the other right now on this. I just, I'm more observing it and very curious about how this plays out.
[00:12:12] Mike Kaput: So next up, Marketing AI Institute just partnered with Drift to release our third annual State of Marketing AI Report.
[00:12:21] Mike Kaput: So the 2023 State of Marketing AI Report is now out, and it contains never-before-seen data from 900-plus marketers on how they understand, use, and adopt AI. And we're really pleased with the report, because we've got tons of insights now on things like how marketers are actually buying AI technology, the top outcomes that they're looking to get from AI, the top barriers preventing them from adopting AI.
[00:12:50] Mike Kaput: How people in the industry feel about AI's impact on jobs and society, who owns AI within companies, and much, much more. So we wanted to devote one of the topics today to actually talking through some of the most interesting findings from our data. Now, we won't go through every single data point. We'd encourage you to go to stateofmarketingai.com to download the report.
[00:13:14] Mike Kaput: That link will also be in the show notes. But we did want to cover some of the highlights that jumped out to us, given that this year far and away had the most people that have ever taken it over the three years we've done it, and it's been such a huge 12-month period for artificial intelligence.
[00:13:32] Mike Kaput: So Paul, first up, what findings jumped out at you the most from this research?
[00:13:37] Paul Roetzer: This is always one of my favorite pieces of content we put out each year, because I'm always so intrigued to see how the responses change. And certainly with ChatGPT, we expected some pretty dramatic shifts in the responses, and we certainly did see that in some cases.
[00:13:52] Paul Roetzer: So, a few that jumped out to me. The first was 64% of marketers say AI is either very important or critically important to their success over the next 12 months. This is a question we have been asking for three years now. And this was one where we saw a large change in sentiment. So in 2022, only 51% said that.
[00:14:12] Paul Roetzer: So a 13-point shift in that response was very notable. So we're definitely seeing more urgency for marketers. The next one was, we asked about what are the benefits that they're seeking, and 77% say reducing time spent on repetitive tasks is the top outcome they want to achieve. Pretty much,
[00:14:35] Paul Roetzer: I guess, what we assumed, but to see that significant number, I think it was larger than the next closest one by like 20 points. Yeah. Then 98% of all marketers say they're already personally using AI. Makes sense. Now, again, we always have to be transparent with the bias of our sample, which is:
[00:14:54] Paul Roetzer: These are largely people who are subscribers or followers of Marketing AI Institute. So the way we promoted the survey from April to July, when it was open, is through our newsletter, through popups on our site, through our webinars, podcasts. We mentioned it a few times. So the people who were taking this are already predisposed to be interested in AI because they're following the institute in some way.
[00:15:18] Paul Roetzer: So 98% may not hold up across the 11 million marketers worldwide, but our experience is it's probably a pretty good representation. When we do our Intro to AI class, we've asked previously, like, how many of you have experimented with ChatGPT? And that number two months ago was in the 90th percentile.
[00:15:36] Paul Roetzer: So I think, if you account for ChatGPT being a part of this question, it's pretty reasonable to assume it's in the 90%. I mean, if you haven't experimented with AI at some capacity yet and you're a marketer, I'm not really sure what you've been doing for the last nine months.
[00:15:53] Paul Roetzer: But at 98%, we then asked, like, we were trying to kind of categorize that. So I thought it was interesting. 45% was the largest percentage; they said they're experimenting with it. 29% say it's infused into their daily workflows. So I've said this before on this: like, I would put myself into the experimenting category.
[00:16:11] Paul Roetzer: I have not changed any of my personal workflows because of AI. But I do use it all the time to test and see if it can enhance what I'm doing. So that one I liked. The other couple that jumped out to me was, 78% still say they don't have internal AI-focused education and training. Like, this has been the number one finding for the last couple years, is the lack of education and training.
[00:16:38] Paul Roetzer: And then it's always the number one answer for what are the biggest obstacles to AI adoption: lack of education and training. This to us is always, like, for three years running, the urgency and the critical nature of education and training just continues to get reinforced, even though we saw it improve a little bit this year.
[00:16:57] Paul Roetzer: More have AI training and education in development. But still, the vast majority of organizations have no training. And this one, like, expands beyond just marketing, 'cause we're asking marketers, does your organization have training, not just the marketing team? And they're saying, no, we don't have it.
[00:17:14] Paul Roetzer: And then the last one that kind of plays along with that: only 22% of organizations have generative AI policies, which we've talked about many times on this show before about being an important first step people can take. You can do that today, like, set parameters for your employees of what they're allowed and not allowed to do. And then only 21% have AI ethics policies or responsible AI principles.
[00:17:37] Paul Roetzer: That was a new question this year; we haven't previously asked that, so we don't have benchmark data for it. Again, so much work to be done. The nice thing, I think, with that final one around generative AI and responsible AI is we can fix that. Like, as an industry, we can start solving for that next week, like, start putting those things in place.
[00:17:58] Paul Roetzer: So I really hope, like, when we start looking out to next year, I would love to see way more people say, yes, we have internal AI-focused education and training, and I think we will see a big shift in that. And I would love to see 50% plus saying, yeah, we have generative AI policies and responsible AI principles.
[00:18:16] Paul Roetzer: Like if we do our job over the next 12 months, hopefully we can really help move the needle on those two things in particular because those are very important to us.
[00:18:26] Mike Kaput: So we're looking at the vast majority of people lacking training, and this support essentially in the form of policies to help guide them as they're trying to use AI in their career.
[00:18:38] Mike Kaput: And what really jumped out as well from the data is that when we asked who owns AI within the organization, the two overwhelming roles, either on their own or together with joint ownership, were CEOs and CMOs overwhelmingly owning
[00:19:04] Mike Kaput: artificial intelligence. Now, I'm curious about your take on the fact that we also found CMOs lag behind other C-suite roles when it comes to AI. So we asked all these questions about AI understanding, confidence, evaluating the technology to buy it, and using it yourself. As you mentioned, among the C-suite, CMOs were most likely to say they had a beginner's understanding of AI. They were least likely to say they had an advanced understanding.
[00:19:21] Mike Kaput: They were also least likely to have high or very high degrees of confidence in evaluating AI. And compared to other C-suite roles we surveyed, they were the least likely to be infusing AI into their daily workflows. So what was your takeaway here, seeing
[00:19:42] Mike Kaput: how important CMOs are as a piece of this puzzle to get that formal training and adoption and strategy in place, but then at the same time, they're struggling like many other people to understand AI?
[00:19:48] Paul Roetzer: CMOs have a lot on their plate. It doesn't really surprise me, because they've got a lot of things to deal with and they don't have a lot of time to be figuring this stuff out. So, you know, I think we've seen it over and over again, just with leaders in general at organizations.
[00:20:04] Paul Roetzer: Again, we've, you know, talked with universities, we talk with corporations like larger enterprises, we talk with small and mid-size businesses. People have full-time jobs, and AI in some ways feels like something you really need to set aside time to learn. I mean, you and I have the luxury that this is what we get to do for a living now; we get to think about this 24 hours a day, basically.
[00:20:26] Paul Roetzer: And sometimes I think we forget that a lot of other people don't have that luxury, that the time they get to think about AI is, like, the once a week they get to listen to our podcast, and they're trying to, like, soak it all in and hear all the key things, but then they gotta go back and, like, solve for today and tomorrow.
[00:20:42] Paul Roetzer: And a lot of times for their job, that does not mean getting to watch an online course or read an AI book, things like that. So I think this is, I don't know how you solve this really quickly, because CMOs are pulled in so many different directions. It's a hard thing for them to really be able to dedicate energy to.
[00:21:03] Paul Roetzer: So I think for a lot of CMOs, this is going to take a very intentional effort over the next few months to start carving out an hour, two hours a week, whatever it's going to be, where they can really level up their knowledge and their confidence with AI so that they can more confidently lead moving forward.
[00:21:21] Paul Roetzer: But I'm just empathetic to the fact that it's not easy. Like, we talk about the need for this and the urgency of it. Then you get to the reality that people's schedules are crazy and they have a lot of competing interests for their time and their resources. And it's just unrealistic to think everyone's just going to be able to drop everything they're doing and figure this stuff out right away.
[00:21:43] Paul Roetzer: So hopefully what we're doing helps move this along. And if the podcast is people's, like, window into this each week, great. But I think that moving forward, right now maybe it hasn't surfaced to the top, but I feel like this is going to become a priority for a lot of CMOs, whether, you know, they're ready for it to be or not.
[00:22:05] Mike Kaput: So we actually, you and I, presented the findings on a webinar last week, which people can find by going to the site, going to our resources and then webinar section, and, you know, watching it on demand. But a really popular part of that presentation was the advice you gave to leaders at the end, these five essential steps to begin thinking about taking.
[00:22:24] Mike Kaput: And I'd highly encourage anyone listening to go watch the full webinar to get the full impact of these. But could you quickly walk us through what those five essential steps are?
[00:22:35] Paul Roetzer: Yeah, any regular listeners have heard us say these five steps before, but I think they always bear repeating. So the first is education and training individually and for your team and for your company.
[00:22:46] Paul Roetzer: A lot of times marketers are going to have to be the ones to lead on this within their organization, you know, to drive the communications around the impact and importance of AI. So education and training. Creating an internal AI council. We've talked about that many times. We're seeing more and more organizations. Every week I'm having conversations with people who are telling me, oh, we started the AI council.
[00:23:04] Paul Roetzer: You know, thanks for the recommendation. And it's really cool just to hear the different ways people are approaching that. There's no one way to do an AI council. Basic premise is get people together who are interested in solving for this in the company. Could be two people, could be 20 people, but start there.
[00:23:20] Paul Roetzer: Responsible AI principles, generative AI policies. We talked about that as a key finding; only like 21 to 22% of organizations have done this to date. So you can really get out ahead of things by doing that. The one that's a little more complicated, we'll try and share more guidance on how to do this step moving forward.
[00:23:38] Paul Roetzer: But conducting an AI impact or exposure assessment for your people, for your teams: how AI is going to change their job over the next 12 to 24 months, and start being proactive in preparing them and reskilling them and upskilling them. And then building an AI roadmap that prioritizes use cases and overall campaigns and strategies around building a more intelligent company.
[00:23:58] Paul Roetzer: So those are the five things we generally talk about as key for people. And it doesn't matter if you're a small business with five employees or, you know, an enterprise with 50,000 employees. Those steps are relevant to you. So another big
[00:24:14] Mike Kaput: topic of discussion this week is that OpenAI just announced a big update, and you can now fine-tune GPT-3.5 Turbo to your own use cases.
[00:24:25] Mike Kaput: So Turbo being kind of a variation on the GPT-3.5 base model. This basically means you can customize this model to your own needs so that it performs much, much better on use cases that may be custom to your company's specific needs or desires of how you want to use some of these large language models.
[00:24:44] Mike Kaput: For example, you might fine-tune GPT-3.5 Turbo to better understand text that's highly specific to your industry or business. You could also fine-tune models to sound more like your brand when they create their outputs, or you might even have it remember specific examples or preferences when producing outputs, so you don't have to spend all these resources and bandwidth creating highly complex prompts every single time you're using this model.
[00:25:13] Mike Kaput: OpenAI actually says that early tests have shown a fine-tuned version of GPT-3.5 Turbo can match or even outperform base GPT-4-level capabilities on certain narrow tasks. They also note in the announcement that fine-tuning for GPT-4 will be coming in the fall. So Paul, first off, why is the ability to fine-tune models like this such a big
[00:25:43] Paul Roetzer: deal?
[00:25:45] Paul Roetzer: So the models themselves are kind of general or horizontal is the way to think about it. Like they're trained on the, you know, the same data sets that everybody has access to. So again, like go back to the books three example. Let's say all the major models used books three, they're all trained on that.
[00:26:00] Paul Roetzer: Now think of taking a foundation model and being able to train it just on your specific data, just on the way you want things done, just on your proprietary access around customers or data sets within your organization. So now you can really make them much more personalized for your organization.
[00:26:18] Paul Roetzer: I will say, as a marketer or a CEO of a company, you are not going to go do this yourself. This isn't like a feature in ChatGPT where you turn it on and now you can just start doing this. This is more for the developer audience. Basically, if you access their API, you're now able to do a whole bunch more fine-tuning.
[00:26:38] Paul Roetzer: And supposedly your data stays your data, according to what they're saying. So this was something that was a really big deal for the developer group last week. You saw a lot of conversation around this, and the ability to do it for GPT-4 in the fall is a huge deal, because these models are already impressive.
[00:26:57] Paul Roetzer: When you can give them proprietary data, they seem to get much more valuable in terms of what they can do for your organization. So I think, as we've talked about before, this ability to create vertical versions of these models, ones that are personalized to your organization, is sort of the next unlock for them.
[00:27:16] Paul Roetzer: And I think we're going to be racing toward that kind of capability, moving into, you know, the second half of 2023. So if I'm a
[00:27:25] Mike Kaput: business leader or marketing leader, I'm not fine tuning models myself, but as I'm thinking about bigger AI strategy, what are some of the top use cases in marketing and business that I might have for fine tuning large language models in, you know, concert with a developer or hiring someone to help me
[00:27:43] Paul Roetzer: do this?
[00:27:44] Paul Roetzer: I mean, a good way to think about this is if you go into GPT-4 and ask it to write an email for you, it's going to write a really good email. Or if you ask it to write a letter, you know, from your CEO, it's going to write something that seems like a decent letter. Or if you ask it to do like a landing page for your company, it's going to develop a decent landing page.
[00:28:00] Paul Roetzer: But now imagine you could train it on your 20 best-performing emails, or the last five letters that your CEO actually wrote, or video scripts that you know have performed really well, or the top-performing landing pages. So now what you're able to do is not just train it on data, but on performance-based data and things that are specific to your organization.
[00:28:23] Paul Roetzer: That's, that's the difference here. That's what we're talking about, is the ability to have these things really start to learn based on things that have previously worked or things that are specific to your organization. That starts to differentiate the content it's going to create. Do it in certain tones, styles, things like that.
[00:28:40] Paul Roetzer: That's what we're looking at here: when you start to look at these marketing or business use cases where language is created, it's actually tailored to the kinds of content your organization wants to put out. Because if we all go and give the same prompt to GPT-4 to write a letter and give it the exact same thing,
[00:28:57] Paul Roetzer: It's going to sound roughly the same. The words aren't going to be exactly the same, but it's going to roughly look the same. But now if you say, write it like these 20 emails and use this tone, that's where you start to get differentiated with the content that it outputs.
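For readers who want a concrete picture of the workflow Paul describes, here is a minimal sketch of how a developer might prepare that "20 best-performing emails" training data. The brand name, recipient, and email text below are invented for illustration, and the actual upload and job-creation steps would follow OpenAI's fine-tuning documentation; this only shows the chat-format JSONL file that fine-tuning expects.

```python
import json

# Each fine-tuning record is one JSON line in chat format: a system message
# (brand voice), a user prompt, and the ideal assistant reply -- for example,
# one of your proven, best-performing emails. "Acme Corp" and "Jordan" are
# hypothetical placeholders.
examples = [
    {
        "messages": [
            {"role": "system",
             "content": "You write emails in Acme Corp's voice: concise, warm, no jargon."},
            {"role": "user",
             "content": "Draft a renewal reminder for a customer whose plan ends Friday."},
            {"role": "assistant",
             "content": "Hi Jordan, quick heads-up: your plan wraps up this Friday..."},
        ]
    },
]

def write_jsonl(records, path):
    """Serialize records as one JSON object per line (JSONL)."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

write_jsonl(examples, "training_data.jsonl")

# Sanity check: every line parses and carries the three expected roles.
with open("training_data.jsonl") as f:
    for line in f:
        roles = [m["role"] for m in json.loads(line)["messages"]]
        assert roles == ["system", "user", "assistant"]
```

From there, per OpenAI's announcement, a developer would upload the file through the API and start a fine-tuning job against gpt-3.5-turbo; the exact calls are in OpenAI's fine-tuning guide.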
[00:29:13] Mike Kaput: Do you think that having this ability to fine-tune some of the OpenAI models will lead to, say, bigger enterprises getting more involved with building on top of OpenAI's ecosystem of
[00:29:24] Paul Roetzer: models?
[00:29:25] Paul Roetzer: And that's the great question we've seen before: there's so many options now with these models. You can go to an open-source one like Llama 2, or you can go to Amazon and get access to Claude and Cohere and Amazon Titan. So whether you build with proprietary closed models or you build with open-source models,
[00:29:46] Paul Roetzer: That's the debate right now in a lot of these enterprises: what do we do? And generally, what I keep seeing more and more of is you're likely going to have a symphony of models. Mm. It is probably not going to be a single model for every company. You may have a customer service language model that's trained on customer service data.
[00:30:04] Paul Roetzer: You may have one that's specific to the healthcare industry, that's more for the operations side of the business, the medical practice side. This is the part where we're not certain what this landscape looks like. It seems like we're moving in a direction of multiple language models that are kind of tuned for the organization and the use cases, and then ideally they access the same databases, so they have access to proprietary data that feeds them and continues to build their capabilities for your organization and for you individually.
[00:30:33] Paul Roetzer: Like there may be a language model that learns you, and Inflection's Pi is the example there, where it starts to learn the conversations you individually have. So I think there's a personal aspect to this, and then there's the business aspect. Gotcha.
[00:30:47] Mike Kaput: All right, let's dive into some serious rapid fire topics.
[00:30:52] Mike Kaput: So first up, Nvidia, which is a leader in making chips that power AI software, reported an astonishing 101% year-over-year revenue growth to $13.51 billion in its latest earnings. So this is very, very significant, not only because of the success of Nvidia, but because Nvidia has become kind of a bellwether to judge how sustainable the AI boom is.
[00:31:20] Mike Kaput: So a lot of people were looking very closely at its latest earnings to see if they would exceed expectations, and they frankly crushed it. Companies that partner with Nvidia, like Apple, Amazon, and Microsoft, and we just saw a major partnership announced with our friends at VMware as well, all these companies are seeing knock-on effects of this in a positive way as their stocks are rising.
[00:31:45] Mike Kaput: The massive demand that NVIDIA is seeing from AI chips is also forcing everyone with data centers to really revamp their infrastructure to support the growth of these chips being sold and coming online. So I'm curious, why is Nvidia such a bellwether for where AI is going and the sustainability of the AI boom we're currently seeing?
[00:32:09] Paul Roetzer: I mean, the simplest way to think about it is these language models, the computer vision models, all of the generative AI space that has really captured everyone's imagination with this stuff, it's powered by Nvidia. So none of this happens without them; it's how it's able to do what it does.
[00:32:25] Paul Roetzer: They are by far the largest stakeholder in this space. You know, they control the vast majority of the market share of the chip world. So they're just the thing that makes it possible. Like you really can't train these models without Nvidia's chips. So everyone's lining up. We'll talk about Tesla in a minute, but I think Elon Musk placed like a $4 billion order for Nvidia chips or something crazy like that.
[00:32:50] Paul Roetzer: It's all a race to see who can build the biggest cluster of these chips together. And interestingly, I just started reading Chip War by Chris Miller, which sort of tells the story of the chips and how they're made and, you know, the battle for them. So it's just a fascinating space. And, we're not giving investing advice on this show, but I personally started investing in Nvidia like seven or eight years ago.
[00:33:13] Paul Roetzer: I think I started buying the stock when it was at $25. So it's been a fun ride to watch the world realize what Nvidia is doing and wake up to it. It's just an amazing company. And if you go back at the history of them, this isn't what they started to do. They were largely used in video games in the early two thousands.
[00:33:35] Paul Roetzer: But the company I think was formed in like the mid-to-late nineties. It's not like this is some AI company that just popped up three years ago. It's just a brilliantly run, very strategic company that saw a massive opportunity and now is in this insane position where they're just pumping out chips and money.
[00:33:57] Paul Roetzer: So in other
[00:33:58] Mike Kaput: big AI player news, Meta has released Code Llama, which is a new AI tool built on top of its Llama 2 language model. And this tool can generate new code and debug existing code to boost programmer efficiency. Like Llama 2, Code Llama is open source and free for you to commercially use in your own projects or retool or customize in your own products.
[00:34:25] Mike Kaput: Now, according to The Verge, Meta claims Code Llama performed better than publicly available large language models based on benchmark testing, but they did not specifically name which models it was tested against. Meta also said that Code Llama scored 53.7% on the coding benchmark HumanEval and was able to accurately write code based on a text description.
[00:34:50] Mike Kaput: So this isn't the first programming copilot tool we've seen out there, but how significant is it that we now have one supported by a major AI player that is fully
[00:35:01] Paul Roetzer: open source? I mean, obviously what Meta's done with Llama really shook things up in the language model space. Putting out Llama 2, which in many cases seemed to be on par with like a GPT-3.5, but making it open source, was a big deal.
[00:35:17] Paul Roetzer: It appears as though, at the moment, accelerating coding, making coding way more efficient, increasing the productivity of coders, is probably the most valuable thing these models have done so far. So, you know, we hear about writing efficiency improving, but coding efficiency, the numbers you see are crazy in terms of what people are now able to produce.
[00:35:41] Paul Roetzer: And this to me is one of those spaces where they can always make more. By unlocking, say, 50% efficiency gains, you can write more code. We can see more innovation, more new companies built, because now it's to the point where almost anybody can start building things that they can imagine, because these tools, in many cases, are allowing human language, just natural language, to be the way you build stuff.
[00:36:09] Paul Roetzer: So I think that's where we're going. And we've talked about Replit on this show before, R-E-P-L-I-T. As a company, their mission is to create a billion developers, basically. What's happening right now with these rapid advancements in AI's ability to write and debug and improve code is that people like you and me who aren't coders will be able to build apps.
[00:36:32] Paul Roetzer: Like, we'll be able to imagine something and just go into a tool and start explaining what we want to create. So when we start thinking about later this year and into next year, the kinds of disruption that can happen in different industries. What used to happen is we would sit around and have an idea like, oh, it'd be cool if somebody built this in the legal industry, or it'd be fun if we built this thing for the golf industry.
[00:36:52] Paul Roetzer: And that was it. It would end there. Like, yeah, that'd be cool; we have no ability to do that. I think what's going to happen in the very near future is we're going to be able to say, oh, let's pop in wherever it is, like Code Llama or whatever, and let's build an MVP of that. Let's build a minimum viable product of that idea.
[00:37:07] Paul Roetzer: While you're sitting at happy hour having the idea, you also start building the MVP with no coding ability, just telling it what you want it to do. And then we're going to have like a sample app. And I think that's the thing a lot of people don't realize: the rate of innovation and entrepreneurship that's about to happen, because we're democratizing the ability to build things.
[00:37:29] Paul Roetzer: And so Code Llama, it's fascinating immediately to developers, but it's more intriguing for what it might open up the possibilities to do, where anyone can basically build anything they can imagine. And it seems like that's a world that's actually within reach right now of what these things are going to make possible, and that's exciting and terrifying, honestly.
[00:37:54] Paul Roetzer: Like, as someone who's had lots of ideas of things to build through the years and no ability to build them myself, it's really, really cool to think that we might be able to just build stuff on the fly. I mean, that's awesome. But I could also sit back and think of ways that that could go really wrong.
[00:38:14] Mike Kaput: Okay, so we also have some big updates from Google that are a bit AI-adjacent. They have indicated that they have an August 2023 core update rolling out right now, and it may take up to two weeks to complete. So these core updates happen relatively regularly, and they have significant impacts on search engine rankings for marketers and businesses.
[00:38:40] Mike Kaput: Now, there's not a lot of detail about what the core update involves. Search Engine Land reminds us that there aren't really specific actions you can take to recover if you see a negative rankings impact, and that a drop also may not actually signify anything is wrong with your site. Google may just be updating its preferences in how it ranks and prioritizes pages.
[00:39:03] Mike Kaput: It's really good to remember as these types of updates roll out, 'cause I still get questions about this: as you know, Google isn't outright punishing AI-generated content. They are, however, saying continually and often that content must be helpful, authentic, and user-centric, no matter how it's created.
[00:39:23] Mike Kaput: So they're not saying, just because AI created it, we're going to ding you in search rankings. But you do have to avoid this tendency for some people to use AI to generate low-quality content. So as we see more updates like this, Paul, what advice would you give to marketers specifically creating content using AI?
[00:39:43] Paul Roetzer: I mean, I don't know that it changes. I think you summarized it: create useful, helpful content that's meant to actually benefit people, and not your search rankings. So keep an eye on your search, see if you've gotten dinged. But you know, I think that the desire, or the attraction, is going to be increasingly there for brands to take shortcuts and use AI to just create a ton of content.
[00:40:10] Paul Roetzer: And I think over and over again, we're going to find it just doesn't work. I have to assume at some point they're going to be able to better tell what was written with AI. I would imagine, especially if you're using like Google Workspace or Google Bard, Google's going to probably be able to tell better than not if you use their tools to write stuff.
[00:40:30] Paul Roetzer: I don't know. I just keep going back to: build an authentic content strategy that's meant to benefit people, and don't worry about this stuff. So for us, it's like we're just going to show up and we're going to talk. Each week we're going to do a podcast, we're going to turn it into videos, we're going to turn the transcript into summaries of the podcast, you know, for the blog.
[00:40:50] Paul Roetzer: We're going to create some social shares to help people. We'll build some, you know, video shorts. We're just going to put information out there, and if it benefits people, wonderful. Like, that's the goal of it. If in the process our organic search rankings increase, awesome. But it is not the reason we're doing it.
[00:41:08] Paul Roetzer: Yeah. And I think what we have found, at least ourselves, and you and I did this for 16 years, you know, when I was running my agency, is just create stuff that helps people. And I get that there's a whole SEO industry that, you know, tries to find more clever ways to win at search and stuff.
[00:41:28] Paul Roetzer: And, you know, we certainly played those games plenty in our day, back in the early 2010s, when you're trying to figure out the algorithms all the time. And I think what we've always found is, if you stop trying to solve for it all the time and just create good content that helps people,
[00:41:47] Paul Roetzer: You usually come out ahead. And I know that a lot of marketers and SEO people don't necessarily want to hear that, 'cause you want to hear there's some more specific strategy to follow. And again, I mean, I'm 23 years into my career, and I've yet to find a way to beat Google at anything. I haven't spent a ton of time trying to solve how to beat Google, but what I have found is when you just do authentic stuff, it usually works.
[00:42:12] Paul Roetzer: And so that's always been my advice, even when I was running the agency and we were advising clients on content strategy. It's what we always did then. We never played the game of what's the algorithm, how do we get ahead of it. And you ran more clients than I did.
[00:42:27] Paul Roetzer: Did we ever try to beat Google? Like, here's the algorithm change, here's how we're going to adapt to it and get ahead of them until the next one? It just wasn't a game we played. Right?
[00:42:36] Mike Kaput: Yeah. You don't want to be on that hamster wheel of continually trying to keep up with random algorithm changes.
[00:42:42] Paul Roetzer: Yeah. So just create good stuff that helps people. It's such a simple philosophy, but I think it just works. Well,
[00:42:49] Mike Kaput: so super important to remember as well, now that it's so easy to click a button and create something that might sound passably good, but you really need to be thinking about the authenticity and quality of it.
[00:43:00] Mike Kaput: Yeah. So Google has another announcement. They're kind of progressively rolling out their Duet AI in Workspace, which is the suite of generative AI capabilities across Workspace apps. Duet includes a Help Me Write button in Google Docs and in Gmail that generates content for you and augments the writing process.
[00:43:24] Mike Kaput: Now, this product also has the ability to automatically generate slides in Google Slides and help you organize data in Google Sheets. It's essentially their copilot across all the Google Workspace apps. Now, it's been progressively rolling out. Paul, you said you now have access to it in your personal Google account, and you've been playing around a little bit with it.
[00:43:46] Mike Kaput: What are your thoughts on it so
[00:43:48] Paul Roetzer: far? I've only tested it a few times because, again, I don't spend a ton of time in my personal Gmail and I certainly don't create a ton of Google Docs, so I wasn't even aware I had it. And then I went in one day and I saw the Help Me Write button. I was like, oh, okay.
[00:44:00] Paul Roetzer: There it is. So I think for right now, our focus here is just to make people aware it's there. Like we've talked so much about what happens when Microsoft 365 Copilot is there for everyone, and what happens when Google Workspace has AI built in for everyone. So I would say, if you haven't experimented yet in your personal account, go get it.
[00:44:23] Paul Roetzer: Request it. Go see if it's already there and start playing around with it, because that's the capability that's going to be coming to corporations. And so now would be the time to start experimenting with it, see what it's capable of, compare it to the outputs of other AI writing tools maybe you're using on your corporate side.
[00:44:40] Paul Roetzer: What I've started doing is, when I want the button, like if I'm working on a doc for the Institute, I'll actually go into my personal workspace and start working on it there, because I have the Help Me Write button there, and then I'll bring that content back over to my Institute admin account.
[00:44:57] Paul Roetzer: I can't do it in our Workspace account for the Institute. So we have requested access. I don't know how they're deciding who gets it, but I requested it as an admin of our Institute account, and it has not turned on there yet. But yeah, it's there. You can go into Google Labs at workspace.google.com.
[00:45:17] Paul Roetzer: I think we'll put the link in for how you can request it, but go check it out, see what it does. I'd love to hear feedback. I put it up on LinkedIn, and not too many people had experimented with it, honestly. I was kind of surprised; I expected a lot more input from people, but it doesn't seem like many people are actually testing it yet.
[00:45:33] Paul Roetzer: And just to make it a little more confusing: Duet AI is what they're kind of branding it, but it's actually powered by their PaLM language model. So PaLM is kind of like their GPT-3.5. I don't know exactly what it's equivalent to, but PaLM is what's actually powering Duet, and then Duet is what's in Workspace.
[00:45:54] Paul Roetzer: Gotcha. Okay.
[00:45:57] Mike Kaput: Well, you may also be able to experiment with another cool Google AI tool, because they also launched something called the TextFX project. This is a suite of 10 creative writing tools that they say, quote, are for rappers, writers, and wordsmiths. This is just an experiment they're running in the Google Labs section of their site.
[00:46:19] Mike Kaput: It's not a finished product. It includes some really cool features that might provide you, or people you know, with a fun, accessible way to try out sophisticated AI on your own. So some of these features include something called Simile, which will create a simile about any thing or concept. The Unexpected feature makes a scene more unexpected and imaginative.
[00:46:43] Mike Kaput: Fuse will find intersections between two things. Scene will generate sensory details about a scene. And there are six more features here. Now, Paul, you were experimenting with this tool a bit in the past few days as a creative writing assistant. What kind of things did you learn about it, or what use cases should the audience be thinking about when they're exploring this?
[00:47:06] Paul Roetzer: I think a lot of writers will be pleasantly surprised by this tool. The thing that jumped out to me was it was just really well done as an experiment. 'Cause again, it's just kind of a beta, it's just in their Labs. This isn't a full-blown finished product, but it's really smartly designed.
[00:47:23] Paul Roetzer: It's a really simple user interface. I could see this being a really valuable training tool for people, like how to do creative writing, because what I found is, even just looking through the 10 tools that are offered within it, these are things that as a creative writer, and you and I are both writers by trade,
[00:47:42] Paul Roetzer: You do 'em subconsciously. And I'm sure I took creative writing classes in high school and college, and maybe you learn some of these things, but you don't think about it. And you and I are both also fans of hip-hop music, and I always admire rappers and their ability with words.
[00:47:59] Paul Roetzer: Like, it's just amazing how they do what they do. And so when you start to look at this and you realize the science behind what rappers do and what poets do, it's actually intriguing. And as someone who I would consider a relatively creative writer, I wouldn't think to do this. And so for me it's more about what we've talked about, AI's potential as a true
[00:48:23] Paul Roetzer: augmentation of human ability. This is the kind of thing I want to see more of, the things where I go in and it's not writing the thing for me, but it's teaching me how to write more creatively, and it's assisting me in that effort. And I can take as little or as much of what it outputs as I want. I could absolutely see things like this being the kinds of tools teachers use in the classroom.
[00:48:48] Paul Roetzer: Not to replace creative writing, but to have these kinds of tools and say, okay, let's play with alliteration today, and go in, here's what you're going to do, you're going to use this tool, and it's going to show you how to do it. I don't know. Within like three minutes of playing with this, I just started falling in love.
[00:49:02] Paul Roetzer: Not with this thing in particular, but with this path for AI as a true augmenting tool, with a really well-done user interface and a really smartly built product that's designed for specific audiences to help them. And you could start to envision this sort of thing being built for different careers and different professions and being used in the classroom.
[00:49:23] Paul Roetzer: And that was exciting to me. Every once in a while you come across something and say, yeah, this is what AI should be like, don't just write the article for people. And last week, you know, I was at Ohio University meeting with their business school, so I was very in tune on Friday with classrooms and what are we teaching students and what's the future of education.
[00:49:44] Paul Roetzer: And then to see something like this, you just get inspired again about what AI can be. So that was why it kind of captured my attention on Saturday.
[00:49:53] Mike Kaput: Very cool. In another example of what AI might be, Elon Musk actually took to the streets of California in his personal Tesla Model S and live streamed
[00:50:05] Mike Kaput: 45 minutes of him driving and showing off Tesla's Full Self-Driving version 12 on the streets of California. And this is self-driving that doesn't use a single line of code to actually pilot the car autonomously. So Paul, you are a Tesla owner. You follow Tesla quite a bit.
[00:50:31] Mike Kaput: And you found this livestream pretty notable from an AI perspective. Can you tell us a little bit more about what
[00:50:38] Paul Roetzer: went down? So it was weird. Again, I was on vacation over the weekend and I saw this, I think it was Friday night. You know, I get alerts for Elon Musk; he's one of the people I get alerts from on Twitter.
[00:50:50] Paul Roetzer: And so I saw he was live streaming something. I was like, what is he doing? So if you've never been in a Tesla, basically what happens is you set a destination, and when the Full Self-Driving is on, it then starts routing itself. You have to keep your hands on the wheel, but basically it'll drive itself, including on city streets, stopping for stop signs and stoplights and people. And it recognizes objects, like it'll recognize people and bicycles, and it shows you all this on the dashboard.
[00:51:14] Paul Roetzer: So it looked like any other Tesla, looked like my car. You're just seeing this stuff. And it didn't give any context; it was just a live stream. There was no, hey, we're demoing version 12 of Full Self-Driving or anything like that. So I figured, yeah, whatever, I'll come back to that later.
[00:51:30] Paul Roetzer: And then I saw a couple tweets later that night that were like, oh my God, this is game-changing. And I was like, what were they showing? And so I go back in, and you really couldn't find much information about it at all, like Friday night or even into Saturday. I guess more than 10 million people watched the live stream, but there wasn't actually much explanation of it.
[00:51:50] Paul Roetzer: So once I dug into it, I started realizing what was going on. We're not going to turn this into a main topic; I'm not going to expand on this greatly, so I'll just kind of seed this, and maybe we circle back down the road. So a couple things. Right now, the way they've been trying to do full self-driving in Tesla is they have eight outward-facing cameras.
[00:52:10] Paul Roetzer: Those cameras have computer vision, and they observe and kind of record everything that goes on around the car. So it's seeing people and objects and all these things, and that's used in the training to be able to drive itself. But there's still a ton of code. So for example, it is coded to stop at a stop sign and then to inch forward to get visibility and then to go. There's code in there that tells it to do that. Or if there's a speed bump, it's coded to slow down for the speed bump.
[00:52:42] Paul Roetzer: What they're saying version 12 is, and this would be a complete transformation of the future of self-driving, robotics, everything, is that it's just going to learn from a worldview. It's going to learn the way a teenage driver learns when you put them in a car. Basically, it's just going to observe the world around it.
[00:53:00] Paul Roetzer: It's going to watch how humans do what they do, and then it's going to do that. Almost no code will actually tell it what to do. So what they're implying is, my car has the eight outward-facing cameras. At some point, if it's not already doing it, it will start taking in all of the driving. It'll start watching everything I do, and it'll learn from everything I do.
[00:53:24] Paul Roetzer: It'll know that I stopped for a person in the road. It'll know that I stopped at a stop sign, but I didn't actually come to a full stop; I kind of inched through it, because most people don't actually stop at stop signs. So it's going to start watching this and learning. And this is why I've said all along, Tesla is not a car company.
[00:53:40] Paul Roetzer: Tesla is an AI company that has more data than anyone can fathom on driving. And that driving data can be used to train robotics and eventually AGI and all these other things. And so what's going to happen is they're going to have this fleet. They have over a million cars now; they'll have, you know, five or ten million in five or ten years.
[00:53:58] Paul Roetzer: Every one of those cars is going to observe the world around it, and it's going to learn instantly from people's decisions. And then the entire fleet will learn as it happens. So imagine like Mike and I experiencing life. Mike learns something and I learn it through Mike. That's what's going to happen except through millions of cars in real time, they're going to be learning everything about how humans drive.
[00:54:26] Paul Roetzer: Then these things will basically just program themselves to learn how to drive like a great human driver. And so it sounds crazy, but it has major implications for the way software is built in the future. It has even bigger applications, potentially, to the humanoid robot market, like Tesla's building Optimus.
[00:54:47] Paul Roetzer: We've talked about Figure before, trying to build humanoids. When these things can all of a sudden just learn from the world around them and not have to be programmed to do everything, it changes the future in very dramatic ways. And all of a sudden it makes the idea of robotaxis very real, which is one of Tesla's major plays.
[00:55:06] Paul Roetzer: Eventually there's just millions of Teslas with no drivers that just pick people up all over the place. So it's just fascinating. I would say, again, you have to separate whether you're an Elon Musk fan or not from this conversation. There are many, many things Elon Musk has been doing of late that I am not personally a fan of, but if I just remove that part of the equation and say, okay, in terms of transforming the current and future path of humanity, Elon Musk plays a major, major role in that in a lot of different ways, and this could end up being a massive breakthrough in technology and in the future of AI.
[00:55:41] Paul Roetzer: If they succeed at what it appears version 12 of this software is going to be.
[00:55:47] Mike Kaput: That's an awesome breakdown. That's certainly more extensive than some of the articles I've seen out there breaking it down. So I think it's really, really valuable to get a sense of what's going on here. Last but not least, we have seen LinkedIn roll out some interesting AI-powered features, including the ability to use AI to generate posts using its Draft
[00:56:09] Mike Kaput: with AI feature. So basically you just tell LinkedIn what you want to write about, and it'll write you an AI-powered first draft. The platform is also rolling out an AI-powered assistant to help you strengthen your profile. That appears to be just for premium users at the moment. Now Paul, you're kind of a LinkedIn power user.
[00:56:28] Mike Kaput: You use the platform often, and you've been experimenting a little bit and keeping an eye on some of these. What are your thoughts on the AI powered features from LinkedIn?
[00:56:37] Paul Roetzer: I don't know when Draft with AI was rolled out, but I got access last week. Okay. Like, I think I put it on our Zoom. Like I just took screenshots, like, when did this get here?
[00:56:45] Paul Roetzer: So yeah, if you don't have it, I don't know if everybody has it or if it's being rolled out slowly. But I have it. It's when I go to post something on LinkedIn and I click Post, it then pops up and there's a link that says, do you want to draft with AI? You click that.
[00:57:00] Paul Roetzer: And then it just says, ensure your content follows our professional community policies. There's a learn more link. Then it says, in your own words, share the main points you want to highlight in your post. You'll be able to edit the draft before you publish, and then there's a minimum of 30 words. You write whatever you're going to write, and then you hit create draft, and then it writes the draft.
[00:57:19] Paul Roetzer: And then from there you can edit it or post it. I haven't used this. Like, I've been very clear, all the stuff I put on LinkedIn I write a hundred percent myself. Like I don't use AI to write anything. But I remember 30 episodes ago we said this, like, how long until AI is infused into everything LinkedIn does, your comments, you know, your replies, your posts.
[00:57:39] Paul Roetzer: Like this was the inevitable thing, and now it appears like it's happening. And the other one that I have access to, and again, I don't know if it's universal, is if I go to my profile, yeah, there's that Enhance Your Profile option where you can use AI to actually improve your title, your subtitle, the description, all these things.
[00:57:59] Paul Roetzer: So it does seem like LinkedIn is definitely getting into the generative AI game in a big way. And if you don't have these features we're talking about, I assume they're rolling out to everybody, you know, in the coming months. So keep an eye on it.
[00:58:12] Mike Kaput: Awesome. Paul, thank you as always for rounding up what is going on this week in AI and clarifying kind of the signal from the noise of all the buzz and all the hype.
[00:58:22] Mike Kaput: We really appreciate it and I know the audience really
[00:58:24] Paul Roetzer: appreciates it as well. Yep. Always good stuff and I don't, we didn't talk about next week we'll, We might be on Wednesday next week. We'll think about this. 'cause Labor Day is Monday. Yeah. I don't know if you and I are recording on Labor Day. I wasn't planning on it, so just a heads up.
[00:58:38] Paul Roetzer: If you're a regular listener, there is a chance that next week's episode will come out on Wednesday. Although we might record it early on Friday, we'll see. But if it's not there Tuesday morning, it'll be Wednesday morning next week. Alright, thanks Mike. Thanks Paul.
[00:58:53] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.
[00:59:14] Paul Roetzer: Until next time, stay curious and explore AI.
Cathy McPhillips is the Chief Growth Officer at Marketing AI Institute.