
47 Min Read

[The Marketing AI Show Episode 66]: ChatGPT Can Now See, Hear, and Speak, Meta’s AI Assistant, Amazon’s $4 Billion Bet on Anthropic, and Spotify Clones Podcaster Voices



After travels to Miami and Austin (Paul) and Baltimore (Mike) for events and presentations, the team is back together in Cleveland to record Episode 66 of The Marketing AI Show. And the AI news didn’t slow down. The guys break down the biggest happenings and advancements, including major developments from OpenAI (ChatGPT), Meta, Anthropic, Spotify, and more.

Listen or watch below, and keep reading for the show notes and transcript.

This episode is brought to you by our sponsors:

Use BrandOps data to drive unique AI content based on what works in your industry. Many marketers use ChatGPT to create marketing content, but that's just the beginning. BrandOps offers complete views of brand marketing performance across channels. Now you can bring BrandOps data into ChatGPT to answer your toughest marketing questions.

Start winning with AiAdvertising's innovative approach to maximizing budget and performance. Use AI to optimize campaigns by gaining deep customer insights, drawing out motivations and behaviors, enabling intelligent targeting, and ensuring messages hit the mark. Stop wasting time, money, and resources. Let AiAdvertising lead while you take the credit! Visit their website to learn more!

Listen Now

Watch the Video


00:05:09 — ChatGPT can now see, hear, and speak

00:26:13 — Meta introduces AI assistant and new AI features

00:37:12 — Amazon invests $4B in Anthropic

00:45:47 — Spotify clones podcasters’ voices

00:51:21 — Writers Guild and studios reach a deal

00:54:11 — Getty trains an AI image generator on licensed images

00:57:03 — Google indexes public Bard conversations in search results

01:00:49 — Student use cases for AI


ChatGPT can now see, hear, and speak…and is connected to the internet (again)

ChatGPT can analyze and respond to images…and New York Times journalist Kevin Roose tested it out. Roose asked ChatGPT questions like:

  • “What’s this object in my junk drawer?”
  • “Summarize the front page of The New York Times,” with a photo of the paper edition spread out on a table...
  • And, “Turn these photos of items I want to sell into Facebook Marketplace listings.”

How did it do? It did pretty well on those use cases, but not so well on questions like “convert these assembly instructions into a step-by-step list” or “rank these men in order of attractiveness.”

The last one is by design: OpenAI has deliberately built the system to decline most questions about human faces. OpenAI doesn’t want people using it for facial recognition, for creepy use cases, or to prompt offensive answers about physical appearances.

ChatGPT can also now respond in spoken words like Siri or Alexa. You just speak to ChatGPT, and then it answers you verbally.

In Roose’s test, it did many different things pretty well, like reading a bedtime story and chatting about work-related stress. He says what stood out in these tests is how different talking to ChatGPT feels compared to Siri or Alexa.

Unlike Siri and Alexa, which sound limited, wooden, and flat, Roose says ChatGPT’s synthetic voice sounds fluid and natural. It was capable of having long open-ended conversations on almost any subject he tried. Plus and Enterprise users will see these changes first.

Meta introduces new AI experiences across apps and devices

Meta is releasing its new AI assistant and AI characters across WhatsApp, Messenger, and Instagram.

With similar conversational functionality to ChatGPT and Claude, the assistant will be able to provide real-time web results and generate images. 

The Verge says, “For now, Meta AI isn’t trained on public user data across Instagram and Facebook, though it sounds like that is coming. It’s easy to imagine asking it to ‘show me reels from the south of Italy’ and that being a compelling use case that other chatbots can’t replicate."

Additionally, other AI features will soon be available to users of Meta products, including AI characters that are informed by celebrities, AI-generated stickers, and AI editing features in Instagram.

Amazon invests up to $4 billion in Anthropic

Anthropic, the maker of the popular Claude AI assistant, announced that Amazon will invest up to $4 billion into the company, making Anthropic’s “safe and steerable” AI widely accessible to AWS customers.

In conjunction, AWS will also become Anthropic’s primary cloud provider for mission-critical workflows, giving Anthropic access to Amazon’s chips.

This will open up access to Claude 2 to many more organizations that are already building on AWS and want to access state-of-the-art models in safe, secure ways. Big companies are already building with Anthropic models via Amazon, including LexisNexis, Bridgewater Associates, and Lonely Planet.

There are more exciting stories in the Rapid Fire section of the podcast, including Spotify cloning podcaster voices. Listen, subscribe, and we’d love your review!

Links Referenced in the Show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: It's not like they're abandoning the metaverse completely, but I think they have made the full-on shift to really leverage all the data and AI capabilities and talent that they have at Meta, which was the right business decision.

[00:00:12] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:32] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:42] Paul Roetzer: Welcome to episode 66 of the Marketing AI Show. I'm your host, Paul Roetzer, joined as always by my co-host, Mike Kaput. Hello, Mike. Hey, Paul. We are back to our regular recording schedule. So it is Monday, October 2nd, 9 a.m. Eastern time, about. So hopefully, unlike last week where we recorded a day early and then everything started getting announced, because last Monday was a little crazy.

[00:01:10] Paul Roetzer: So we're going to get into some of the stuff that happened, not only last Monday, but really kicked off a pretty crazy week. But I was traveling last week. I was in Miami and Austin doing talks. You were where last week? I was in Baltimore last week. Yeah. Okay. I think we were actually at the airport at the same time, but I think we were, yeah.

[00:01:30] Paul Roetzer: It was a parting flight. So, while Mike and I were both off doing talks last week, the AI world just sort of changed, and it was, I remember being in the different cities, like trying to process everything that was going on and trying to decide, do I change my presentations for today? Because both, both talks I ended up weaving in news just from last week to try and explain how quickly this stuff is changing. So lots to talk about today, some really cool big topics, and definitely seems like we're still in for a pretty hectic fall of AI news.

[00:02:05] Paul Roetzer: Because it really seems like the things we've been hearing about are really just the leading edge of some of the other things that are coming in the weeks and months ahead. So buckle up, as we always say, it just keeps getting crazier. All right. So today we have, the episode's brought to us by BrandOps.

[00:02:24] Paul Roetzer: So many marketers use ChatGPT to create marketing content, but that's just the beginning. When we sat down with the BrandOps team, we were impressed by their complete views of brand marketing performance across channels. And now you can bring BrandOps data into ChatGPT to answer your toughest marketing questions.

[00:02:42] Paul Roetzer: Use BrandOps data to drive unique AI content based on what works in your industry. And learn more at brandops.io/marketingaishow. So again, that's brandops.io/marketingaishow, and you can see BrandOps in action. So thanks to BrandOps for sponsoring this episode. And then also, we have AiAdvertising. Start winning with AiAdvertising's innovative approach to maximizing budget and performance.

[00:03:10] Paul Roetzer: Use AI to optimize campaigns by gaining deep customer insights, drawing out motivations and behaviors, enabling intelligent targeting, and ensuring messages hit the mark. Stop wasting time, money, and resources. Let AiAdvertising lead while you take the credit. So that is aiadvertising.com/AIpod to learn more.

[00:03:32] Paul Roetzer: And then, just wanted to mention, we've talked about this, I think, on last week's episode, but we have the AI for Agencies Summit coming up November 2nd. That is a Marketing AI Institute event. It's a half day virtual event you can, join us for. It's going to be amazing talks. We have... Let me just pull it up so I don't forget what we've got.

[00:03:50] Paul Roetzer: We've got... I'm doing an opening keynote on AI emergent agencies. So what the future of an agency looks like. We have Sharon Torek doing balancing legal risk with opportunity. We have Robert Rose doing an amazing talk on the real opportunity for agencies advising on generative AI. Mike is doing one on AI partner programs.

[00:04:08] Paul Roetzer: Drew McClellan from Agency Management Institute is going to be looking at 2024 and what's in store for agencies. And we have a series of rapid fire case studies from agencies. So if you are an agency leader. I highly recommend checking that out. Or if you are a brand marketer and you work with agencies, I would recommend that they check it out to get up to speed on everything with AI and be able to create a ton more value for your organization moving forward.

[00:04:34] Paul Roetzer: So that is aiforagencies.com. And you can use AIPOD50 to get 50 off of that. So again, that's November 2nd. It's a half day. It runs from, I think, noon till 5 p.m. Eastern time. That will be available on demand as well. There'll be an option to add on demand at registration.

[00:04:57] Paul Roetzer: All right, so let's get into it. There is plenty to talk about this week. We've got three main topics and then we've got, I think, five rapid fire. So let's get rolling, Mike. Sounds good, Paul.

[00:05:08] Mike Kaput: So first up, ChatGPT can now see, hear, and speak thanks to two big new updates. So first, ChatGPT can now analyze and respond to images.

[00:05:20] Mike Kaput: So you just drop a photo or a picture into it, and then start prompting it to do things with that image. For instance, Kevin Roose at the New York Times tested prompts like: tell me what this object is in my junk drawer, summarize the front page of the newspaper by just giving it a photo of the paper, or write Facebook Marketplace listings for these photos I'm giving you. And he said it actually did a pretty good job doing all three of those things.

[00:05:49] Mike Kaput: The second big update is that ChatGPT can now talk to you just like Siri or Alexa. So you just speak to it via the app, and ChatGPT responds. So Roose again kind of put it through its paces and did things like have it read a bedtime story to his child. It helped him analyze a dream he had by talking it through with him.

[00:06:12] Mike Kaput: And it actually conversed with him about some work-related stress that he was having. And really notably, he said it's way better than Siri or Alexa, and it sounds much more natural and actually feels like you're having an open-ended, useful conversation out loud with the machine assistant. Now, most users don't yet have access to all these features.

[00:06:38] Mike Kaput: They've just been made available to some early testers, but they're soon going to be rolled out to ChatGPT Plus and ChatGPT Enterprise users. Somewhat related, at the same time we also got an announcement that ChatGPT is now able again to connect to the internet. Now, OpenAI had shut down this feature for some time after, I believe, ChatGPT was scraping websites it did not have any business scraping and

[00:07:08] Paul Roetzer: reading paywalls,

[00:07:09] Mike Kaput: paywalled content.

[00:07:11] Mike Kaput: But now you can, as a ChatGPT Plus user, access these features again. So first up, Paul, kind of talk to me about this ability to analyze images. How can I start using something like this to my benefit as a marketer

[00:07:27] Paul Roetzer: or a business leader? Thank you. Yeah, I mean, I think, on the back of the DALL-E 3 announcement, so if you didn't catch last week's episode or missed that news, we just had DALL-E 3, the next version of their image generation technology, is going to be infused into ChatGPT.

[00:07:45] Paul Roetzer: So you're going to be able to create images with text prompts. And then, two days later, they announce all these other capabilities where it can analyze images and it can talk to you. And I think that the broader thing to understand here is we are seeing the next generation of these language models starting to roll out through these changes.

[00:08:04] Paul Roetzer: So... Google Bard has had this capability, like you could give images to Google Bard, but as we've talked about previously, like for some reason, Google's models just aren't there yet. They're not as capable as what we're seeing with OpenAI at the moment. I expect that will change this fall, but right now the reality is ChatGPT is just an insanely powerful tool.

[00:08:27] Paul Roetzer: And you and I are out all the time doing these talks, meeting with hundreds of marketers and business leaders, and the reality is like people just don't really comprehend what ChatGPT is capable of doing. Like most people just think they go in and they kind of use it like a search engine, or they ask it to write something.

[00:08:46] Paul Roetzer: And they're not really thinking about the depth of all the capabilities it has and really pushing it and experimenting with it and trying. And I think this is a great example with this image technology. Once you have access to it, don't just go in and like give it a photo and say, analyze this for me, or like, what's going on. Really think about a series of use cases.

[00:09:08] Paul Roetzer: Kind of like when Mike, when you and I talked about, how we demoed Duet AI on Google in a past episode. And we went in with like specific use cases in mind. And so that's what I would recommend off the bat here is if you're a marketer, if you're a business leader, if you're really in any profession, really think about these tools and go into it with, all right, I've got five ideas of how I'm going to test this thing and see what it's capable of.

[00:09:35] Paul Roetzer: So, for example, with the image, so let's say you can upload any, any image to ChatGPT and it can analyze this thing. And I have seen some demos of people who have this already, and it's incredible what it's able to do already. But for example, say you gave it a Google Analytics chart or any kind of marketing data chart, say analyze what's going on in this, it can do it.

[00:09:54] Paul Roetzer: It can read the text, it can interpret the images. So you might have a way where we're going to use this as an assistant to analyze charts and data. You mentioned product descriptions, giving it photos, writing descriptions. If you run an e-commerce store, like imagine the ability to say here's a hundred products, like write the descriptions for these.

[00:10:14] Paul Roetzer: Photo captions is another one that comes to mind, whether it's social media posts, articles, like whatever you're using these and you need to write captions, like here's a photo, what's going on in it, write a caption. Explaining visual concepts, so this is one that immediately came to mind because I happen to be reading the research report that Ethan Mollick and the team did on that BCG study we were talking about a couple of weeks ago, where they analyzed 758 consultants and the increase in performance using GPT-4 or not.

[00:10:43] Paul Roetzer: And this research report had these charts in it that, I've looked at these things five times and I can't actually understand what in the world the charts are showing. And so, that would be one where I would say, okay, just throw it into ChatGPT. I'm obviously not smart enough to figure out what they're trying to represent here.

[00:10:58] Paul Roetzer: What is this chart saying? That would be an example of how I might use it. The other one that came to mind, here was customer support, customer service. So there's a few examples I've seen of companies that were built to analyze images, so in the insurance side, for example, if there's a car accident, the ability for the AI to look at the photo and analyze what happened.

[00:11:22] Paul Roetzer: the amount of damage and what the replacement cost is or what the cost to fix it is. I have something like this in my Tesla app. So if you have a defect in your car, you upload a photo through the Tesla app. And I assume they're using some form of computer vision to analyze that thing.

[00:11:37] Paul Roetzer: But right now you have to, like, probably invest millions of dollars to build a tool to be able to do that, whether it's for product defects or car accidents or insurance claims or whatever it is. But this seems like those vertical solutions may be in trouble in the not-too-distant future, because if you have this general-purpose

[00:11:56] Paul Roetzer: Model that's able to analyze pretty much anything, you start to wonder about what is the future of these, these vertical solutions. So those are kind of the things. And then like another example that kind of goes back to our past: we used to have a client that was in the roofing industry, commercial roofing.

[00:12:15] Paul Roetzer: And so one of the things they do is they look at all these flat roofs, like on Targets and Walmarts, like there's, there's these drains on top of these roofs. And some of these drains in some older buildings could be decades old, and you can't tell what the brand is anymore. So now imagine just being able to take a photo of that, that drain and it just immediately analyzes it and says, yeah, this is a drain from 1965.

[00:12:37] Paul Roetzer: It's from this manufacturer, and you can now match that drain. So I think you're going to see... we talk a lot about AI opening up new possibilities and innovation and like entirely new ideas to reinvent industries. This is the kind of tech to me that, when you put it in the hands of people who have domain expertise, they start looking at their industry differently and saying, oh my gosh, like we have been doing this manually all these years.

[00:13:04] Paul Roetzer: What if we can now just take a photo and it tells us what to do? So those are some of the things that like immediately jumped out to me as potential use cases. But again, as you highlighted, this is all going to be about experimentation. It's going to be about the people who are out on the frontier testing this stuff and finding use cases.

[00:13:22] Paul Roetzer: Kind of like last week's episode, we talked about Andy Crestodina and like him using ChatGPT for SEO. That's what needs to happen, is like these people who are experts in their field finding the use cases. Cause OpenAI is not going to, like, say, hey, marketers, here's the 25 use cases for this thing.

[00:13:40] Paul Roetzer: Like they don't care. They're just pumping out the innovations.

[00:13:44] Mike Kaput: Yeah. What also jumped out to me is, we've talked in the past about how useful tools like this can be for strategy assistance. And I think this capability takes that to just a whole new level. I mean, you mentioned here analyzing charts and explaining visual concepts. Well, how many times have you and I, like, slammed our head against a whiteboard for three hours when it just looks crazy, and we're like, all right, take a photo of it, we'll get this into a cleaner format? Like we can just do that now and have it fill in strategic gaps and say, hey, we're still trying to figure out this part of a customer journey or a business plan.

[00:14:22] Mike Kaput: That becomes really interesting to me. We could do that before with words, but it takes quite a bit of time to describe what is just there in an image. They say a picture is worth a thousand words. I think that's really interesting. I'll be very curious to see, as well, any type of web development and design.

[00:14:39] Mike Kaput: This kind of harkens back to that first, I believe it was the first GPT-4 demo, where they basically wrote on a piece of paper what a website could look like, and then it created it. This is the promise of that. We could literally sketch out in two seconds how we want the home page of our app to look and say, hey, I'd like you to attempt to code me

[00:15:00] Paul Roetzer: a prototype.

[00:15:01] Paul Roetzer: Yep. Yeah. And I actually saw demos of that, of people doing it for that kind of thing. And I agree a hundred percent. Like if you're standing at the whiteboard and you're like, okay, here's conceptually what we think needs to happen, here's how the customer journey is going to go here.

[00:15:13] Paul Roetzer: Or here's how the learning journey is going to go for this audience. And you like get a model in place, and it's, oh... cause so many times, like I'm sitting in my, my office basement at 11:30 at night on a Saturday, trying to like conceptualize something. And you just need to talk to somebody or like think it through. And I'm not going to bother you at 11:30 on a Saturday. So you're like, okay, let me just...

[00:15:31] Paul Roetzer: And I'm not going to bother you at 1130 on a Saturday. So you're okay, let me just like. Get this concept, take a picture, put it in chat, GBT, and say, here's what I'm trying to do, here's what I have so far, what, what should I be thinking about or how should I do that? And to actually like interact in a strategic way, like you're saying, that's the stuff that to me is like.

[00:15:50] Paul Roetzer: What if... I mean, it sure seems like maybe that's what we're entering into. And now you really get into the situation where the professionals who figure out how to use these tools to do that kind of stuff, how they're able to just leap past their peers in terms of like capabilities and utilization of these tools.

[00:16:11] Paul Roetzer: So, I think this fall, as I believe they said it was going to start rolling out over the next couple of weeks to ChatGPT Plus and Enterprise users, so probably by the end of October, we're going to start seeing some really cool ideas of how to apply this technology. And so I think that's the action item for everybody: one, when you get access... well, first, if you don't have ChatGPT Plus yet, pay the 20 bucks a month. Like, seriously, you need to be able to experiment with this technology.

[00:16:39] Paul Roetzer: So get your ChatGPT Plus account if you don't have it. And then once you get this technology, start experimenting with a series of use cases in mind. Like you're gonna get more use cases as you're going and testing it, but like have some ideas in mind. And then start paying attention to the people who are out on the frontier really pushing the limits of what these things can do, and getting inspiration for your own ways to apply this to your career.

[00:17:06] Paul Roetzer: So talk

[00:17:07] Mike Kaput: to me a bit about possible use cases and your thoughts on this voice assistant capability for ChatGPT. I was pretty blown away by how strongly Kevin Roose in the New York Times was, like, yeah, this thing blows Siri and Alexa out of the water.

[00:17:22] Paul Roetzer: Yeah, I think we, for the last, 5, 10 years have been waiting for voice assistants to truly be assistants and not just like give you the weather and, sports scores and like those fundamental things.

[00:17:35] Paul Roetzer: Like I always complain. A lot of times in the morning, like my kids will ask me questions. My kids are fifth and sixth grade. So like on the, on the ride to school, we'll have conversations about like totally random stuff, like black holes and just like things they'll ask me about. And it's like, how far is the moon from the earth?

[00:17:49] Paul Roetzer: If you see the moon in the morning, like things like that. And you like ask Siri, because it's the obvious thing while I'm driving, hey Siri. And it'll like say, I found three links. And it's like, that's not the point. I'm driving. I wouldn't have asked you if I wasn't driving. I can't click on the three links.

[00:18:04] Paul Roetzer: So this idea that whether it's Apple solves it finally, or whether it's Pi, because like Inflection's Pi, you already have the ability to have voice conversations. Or if it's ChatGPT, like someone has to solve an actual assistant that you can have a conversation with and learn things from.

[00:18:21] Paul Roetzer: And we know that OpenAI's Whisper technology is incredible, their transcription technology, their voice to text, like the ability for their technology to understand human voice and transcribe it is incredible. And if that's the basis for this, which I believe it is, then it's pretty interesting. And it really starts to lead down this path of, what, what do Amazon and, and Apple do? Are they improving their own technology? We're going to talk about Amazon's investment in Anthropic. I saw an article from Rob Toews in Forbes, I think it was, where he was like theorizing who buys who, and one of them was, well, maybe Apple buys Inflection and then just uses Pi to replace Siri.

[00:19:10] Paul Roetzer: So I think what we're going to see is a, a rapid escalation of competition in this space, because everybody's trying to do... we're going to talk about Meta next. Meta's got these capabilities. Apple has them. Amazon has them. These startups are building them. So I think maybe 2024 is the year where voice finally

[00:19:31] Paul Roetzer: Like starts to become what we've thought it was going to be all these years and maybe that's something we need to start thinking and talking more about is where, where voice is going and what are the implications to businesses. So zooming

[00:19:46] Mike Kaput: out as we wrap this topic up, the fancy term for these is multimodal assistants.

[00:19:52] Mike Kaput: They work in multiple modalities: they're able to interact through text, through images, through voice. What are the bigger implications here on, like, my career or my company having access to something like this?

[00:20:06] Paul Roetzer: I really think we're at the very beginning and we do know that the next iterations of these models are going to be multimodal.

[00:20:14] Paul Roetzer: The word is that GPT-5 will be multimodal from the ground up. So what we're seeing right now is a language model built in a traditional way to do text, and they're layering in these other abilities on top of it. The rumor is that the next version of GPT from OpenAI will be built multimodal from day one.

[00:20:37] Paul Roetzer: My assumption is that is what Google is doing with Gemini, their next model that's supposedly going to be more powerful than GPT-4. So we've known that multimodal, as you called out, is the future of these things. That was like the given of what happens next. There's always this, well, where do we go next with this technology?

[00:20:55] Paul Roetzer: Multimodal, the ability to take in and output images, video, audio, text, code, all of these things. That's what these models are going to be capable of. So what are the implications to us in business? That's the trillion-dollar question. What does this do to the way we work? What does it mean to different industries?

[00:21:18] Paul Roetzer: You really have to sit down and start projecting out on an individual company or industry level. It's very hard for us to prognosticate, like broadly speaking, horizontally across all industries, to make some blanket statement about the impact that's going to have. But I think what it is is.

[00:21:34] Paul Roetzer: The leaders in different industries who get there first and test this technology, they're going to find the ways to reinvent their own industries, whether it's accounting or law or agencies, marketing agencies, HR, finance, whatever it is. That's what I think 2024 is really going to be, is not only these multimodal models being available to us, but innovators, entrepreneurs, finding ways to reinvent industries with this technology mixed with.

[00:22:05] Paul Roetzer: AI agents, which we've talked about before about these machines being able to not only output things, but then take actions. So the example you gave of we need to build a site or we need to build this journey, not only can it help us visualize the concept. So we start with an idea as the human. We put that idea into the engine.

[00:22:22] Paul Roetzer: It comes back to help us hone that idea. Then once the idea is, more fully baked, say, okay, build it for us, generate the code to create this page, this site, this campaign. Okay. Now go in and build that campaign. Like that's what we're going to have in the next 12 to 18 months is not only a strategic assistant or an ideation engine, but we're going to have agents that can help us do the work.

[00:22:46] Paul Roetzer: And that's the part, again, we keep talking about: the business world isn't ready. They're absolutely not ready for this multimodal world where the agents can take actions on our behalf. When you talk about that stuff on stage... I don't know if you've done this, but like in the last couple of weeks, I've said this stuff on stage.

[00:23:03] Paul Roetzer: I've shown examples and you can hear a pin drop in the room. And then when you say, like, this isn't sci-fi, this isn't five years from now where I'm like theorizing this could be. This is tech today. We're going to have these capabilities, and people just like stare dumbfounded at the screen. And then you get the people who come up afterwards.

[00:23:22] Paul Roetzer: They're like, how is this possible? So that's what I think we just can't stress enough on this show: we're not talking about three to five years out, like guessing. This is pretty predictable stuff that's going to come next, because everyone's working on it. So, just like we always say, get through the fear and the uncertainty and the anxiety of it and just go experiment.

[00:23:44] Paul Roetzer: The only thing we can really do is experiment and then connect the dots yourself.

[00:23:50] Mike Kaput: And like we've mentioned, I realize, and I very much sympathize with the fact, that people are so overwhelmed, generally, but also in this domain. Yes, even go experiment, but here's your next step: go buy ChatGPT, plain and simple, and go spend an hour a day getting very, very good with it over the next 30 to 60 to 90 days.

[00:24:11] Paul Roetzer: That's your step. And I'm not gonna lie, like I was... so I did a talk, well it was Tuesday morning, I was in Miami. And then I flew to Austin Tuesday night, and then I had a buffer day Wednesday in Austin. And then I had a talk Thursday, and then I flew home like late Thursday night. Wednesday night, I actually hit a point where I was like, this is too much.

[00:24:34] Paul Roetzer: Like I really was personally starting to feel like I just couldn't even process everything that had happened. Like I mentioned earlier, like I wasn't even sure if I should change my decks. I wasn't sure, how do I even... the Meta stuff, I'm like, oh my god, that's what we're going to talk about next.

[00:24:47] Paul Roetzer: That was the part that started putting me over the edge, it's like, I can't even process all of this. So just know, I feel it too. Maybe it sounds like we've got this all figured out because we have a show about this stuff, but it's a lot. And I sympathize greatly with the people we present to and talk to and meet with, because...

[00:25:08] Paul Roetzer: I felt overwhelmed last week, and I live this stuff 24/7. Most people listening to this show have full-time jobs with responsibilities related to things that aren't AI, and they're trying to moonlight and solve for this. So, Mike and I get it, totally. And yeah, when we're talking about this stuff, just know, we're trying to figure it out ourselves, but it's a lot even for us.

[00:25:34] Paul Roetzer: So if you ever are feeling overwhelmed, welcome to the club. And the only thing I can figure out is, one, try and synthesize it. Break it down into parts that are digestible. And then, as we keep saying, just play with the tools, pick one. Like we talk about all this stuff. We're going to talk about Meta next.

[00:25:50] Paul Roetzer: Meta just announced a whole bunch of stuff. If you only have a few hours a week, just go hard with ChatGPT. That's my number one advice. When people ask where they should start, I always tell them: just get good at ChatGPT. It'll train you to get good at everything else that's coming online. If all you can do is experiment a little bit, then just...

[00:26:09] Mike Kaput: Just do that.

[00:26:12] Mike Kaput: Alright, so let's talk about Meta, because they are not sleeping on AI. They just dropped some major AI updates, and the biggest one is they're releasing an AI assistant. So think of this as ChatGPT within Instagram, Messenger, and WhatsApp. This assistant has some pretty interesting features that are worth noting.

[00:26:35] Mike Kaput: First, it's conversational, just like ChatGPT or Claude or any of the others. It's going to connect to the internet via Microsoft Bing, and it will also generate images like Midjourney or DALL-E. Now here's the kicker though: right now it is not trained on any public user data from Meta's different social networks and apps, but reporters at The Verge have hinted, based on their interviews with the company, that it very well could be trained on that data very soon.

[00:27:08] Mike Kaput: And I think this is really interesting if they're right, because think about all the data Meta is sitting on. We're talking things like billions of Facebook posts, messages, comments, Instagram photos, feeds, and Reels, and of course WhatsApp conversations. I mean, an AI tool that can interact with all this starts to unlock some really, really fascinating AI use cases across social media, and probably business and life at large.

[00:27:38] Mike Kaput: Now, in addition to this assistant, Meta announced some other AI features of note. You'll soon be able to create AI stickers. Stickers are existing functionality right now across some of the platforms; AI will be able to help you generate those. You'll be able to edit images with AI on Instagram.

[00:27:58] Mike Kaput: Interestingly, they've noted that these images will be tagged to indicate that AI was used on them. And they're also releasing what they're calling AI characters, which are AI chatbots that have personalities informed, in some cases, by celebrities like MrBeast, Snoop Dogg, and Paris Hilton.

[00:28:17] Mike Kaput: So first up, Paul, how important are these updates for users at large, and also for businesses that are marketing on Facebook or Instagram?

[00:28:27] Paul Roetzer: Yeah. First, the thing that jumped out to me is I think we've officially shifted from metaverse to AI at Meta. So if you recall, they changed their name to Meta.

[00:28:40] Paul Roetzer: Facebook is still the social platform, but they were all in, and rumor was they'd spent like 10 billion on the metaverse and it just hadn't delivered. And from the outside, you look and it's like, but they have this amazing research lab run by Yann LeCun and others. We knew that they had all this AI capability, but Zuckerberg was very much focused on the metaverse.

[00:29:03] Paul Roetzer: And I think the ChatGPT moment woke Zuckerberg up as well, and they are all in now. It's not like they're abandoning the metaverse. Just this week, Lex Fridman had the podcast with Zuckerberg where they were virtually in these very photorealistic versions of each other.

[00:29:22] Paul Roetzer: Interacting. So if you haven't seen that, Lex Fridman's a podcaster, and he interviewed Zuckerberg in the metaverse with the most advanced form of the technology they have. So it's not like they're abandoning the metaverse completely, but I think they have made the full-on shift to really leverage all the data and AI capabilities and talent that they have at Meta, which was the right business decision.

[00:29:44] Paul Roetzer: The other thing I found fascinating is they are all of a sudden very friendly with Microsoft. So the fact that Meta's technology is going to find its way into Microsoft is interesting on a lot of levels. One, just that they're collaborating, which is interesting competitively for Google, that those two are working together.

[00:30:02] Paul Roetzer: For OpenAI, who obviously Microsoft put 10 billion into, it seems like Microsoft is really starting to hedge their bets, not against OpenAI, but so that they're not reliant solely on OpenAI. So now you're going to have Meta AI being able to be infused into there. And then rumor is that Microsoft is also starting to build their own smaller models to compete with GPT-4, so

[00:30:24] Paul Roetzer: they're not as reliant on it from a cost standpoint. It costs a lot of money to use GPT-4. So anyway, that was interesting to me. We also didn't address this, but in these announcements they also talk about Emu, which is their image generation technology. So that'll compete with DALL-E 3 and Midjourney.

[00:30:43] Paul Roetzer: That's another element of it, but I think the biggest thing from a business perspective is their distribution. They have these applications that are so essential, so interwoven into our consumer lives and our business lives. You start to step back, and this is where I was saying, like last Wednesday my mind was on overload, when you start to look at the implications for all the different platforms they have, and then the training data they have and what they can do with that.

[00:31:12] Paul Roetzer: So for example, think about Inflection's Pi, which they want to be more of a personal assistant, to get to know you and to talk to you in a more informal way. We just talked about ChatGPT and its voice capabilities and being able to have conversations. Like you highlighted, Mike, if you have all of that data on how people actually communicate with each other, how they talk and comment and in videos, and you can train these models on that stuff, that's where you start to imagine, wow, maybe Meta takes the lead on this capability if they build the right things.

[00:31:46] Paul Roetzer: And then their play to do it through characters, like familiar characters or influencers, celebrities. That's where it becomes really interesting, also very concerning. Like, I start to think about the negative ramifications of this and how these things go off the guardrails. And that's why I said it was just a lot. Last week for me was personally a lot to process.

[00:32:09] Paul Roetzer: The other thing I'll say real quickly is Character.ai, I think you mentioned them. That company came out of... so in 2017, we've talked about the Attention Is All You Need paper from Google, which invented the transformer architecture. That's the foundation of GPT, generative pre-trained transformer.

[00:32:27] Paul Roetzer: One of the authors of that paper started Character.ai. So a former Google person, and that's what Character.ai does. You can go in and interact with all these different celebrities, experts, thought leaders, entrepreneurs. And I think that Rob Toews article I mentioned brought up that maybe Meta buys Character.ai.

[00:32:48] Paul Roetzer: So lots of interesting ramifications on this one.

[00:32:52] Mike Kaput: Yeah, I'm glad you mentioned Character.ai, because people don't know, we don't talk about it a ton, but it's one of the most used AI tools out there according to some of the data, in terms of daily and monthly active users. And people apparently love having in-depth conversations with chatbots that have personalities that mimic either real people, alive or dead, or fictional characters. It sounds like that's the direction.

[00:33:21] Mike Kaput: One of the directions Meta is also going. Do you think we could see a significant portion of social media conversations happening with AI moving forward, instead of people?

[00:33:33] Paul Roetzer: I mean, it's sure possible. I got asked the question in one of my talks in the last two weeks about people having relationships with their AI.

[00:33:42] Paul Roetzer: I was like, yeah, that's happening already. And I certainly think you could get to the point where you just start to trust these interactions. And I think that's what Meta is going to enable: you can build whatever you want. Like, build me an AI agent that's an expert in business law and has a background in finance, because as an entrepreneur, I just need somebody to talk to about this stuff.

[00:34:09] Paul Roetzer: And I would imagine you're going to be able to kind of create your own characters, if that's what they're going to call them. I don't know if that's the name that's going to stick. But again, you look at the technology and the advancements on the surface and it's like, oh, that's interesting.

[00:34:24] Paul Roetzer: Here are the ways we could use it. But the more interesting thing is to actually stop and ponder what are the innovative ways this will be applied. And that's what I start to think about. And then you get into licensing deals, like Steve Jobs. What if you want Steve Jobs as an actual advisor? I know on Character.ai I think you can go interact with Steve Jobs, but what if his estate licenses

[00:34:41] Paul Roetzer: the ability to build these things, like these influencers have done with Meta? I assume there's a licensing deal that's allowing them to use their character and personality and things within these tools.

[00:34:57] Paul Roetzer: So, okay, that's why I say, for me, this is so broad in terms of its implications. The ability to build stickers, the ability to talk to these 20 people, that's interesting. But that's just a prelude of the much broader way this stuff is going to be used. And that's the stuff where we keep saying, just figure out AI, the fundamentals of it.

[00:35:20] Paul Roetzer: And if you're a domain expert, the future is constantly connecting the dots. As new things emerge, it's: what does that mean to you? What does it mean to your company? What does it mean to your industry? And so I think this can be really exciting. It can be overwhelming, and it's okay to have the overwhelming days.

[00:35:37] Paul Roetzer: But, there's just so much potential with stuff like this.

[00:35:41] Mike Kaput: Yeah. And I think it's tempting sometimes for some people to say, okay, maybe in one way or another I'm not super interested in Meta. Like, I don't use Facebook that much anymore, I'm not very active on Instagram, though plenty of people are. Or, oh, I don't really use

[00:35:58] Mike Kaput: WhatsApp. Well, billions of people still use these services. Instagram obviously, but WhatsApp is wildly popular and like a linchpin of communication in a lot of non-US countries, both for personal communication and business, like customer service. So globally, there are some incredible implications here.

[00:36:18] Mike Kaput: But also, if you're just a business owner, it's like, I'm not active on these platforms, why should I care? Well, maybe you should ask your marketing person: have you explored these features, right? I mean, even just that question can be really valuable. Are you using Instagram's AI photo editing so we don't have to spend as much time on that, or what have you?

[00:36:36] Paul Roetzer: Yeah. And to your point, you may not use it. Like for me, I don't really use their tools that much, but your consumers do, so you still got to understand it, right? Yeah. And we didn't even get into their glasses, like the Ray-Bans that are going to record the world around you. And so,

[00:36:55] Paul Roetzer: Six months from now, if you see people wearing black Ray-Bans, you've got to wonder, are they recording me right now? There's a pretty good chance they are. Like I said, you can pick any one of these and just play out the good and the bad of each of these announcements.

[00:37:09] Paul Roetzer: Yeah.

[00:37:12] Mike Kaput: So, another huge news story in AI this past week: Anthropic, the maker of the popular Claude AI assistant, announced that Amazon is investing up to $4 billion in the company. And Anthropic says this will make, quote, safe and steerable AI widely accessible to AWS customers. AWS will also become Anthropic's primary cloud provider for mission-critical workflows.

[00:37:40] Mike Kaput: And importantly, that gives Anthropic access to Amazon's chips for developing further AI models. It will also open up access to Claude 2 to many more organizations who are already building on AWS and want to access state-of-the-art models in safe and secure ways. What's cool about this announcement also is that Anthropic cited how some major companies are already building with Anthropic models via Amazon.

[00:38:09] Mike Kaput: For instance, LexisNexis Legal and Professional is using a custom version of Claude 2 to summarize and draft legal content. The famous investing firm Bridgewater Associates is using Claude 2 to generate charts and summarize data. And the travel company Lonely Planet is using Claude 2 to drop the cost of generating their travel itineraries by up to 80%, using all the decades of travel content they have available.

[00:38:39] Mike Kaput: So first up, what does this mean for Anthropic and its competitive position in the AI landscape?

[00:38:47] Paul Roetzer: So this stuff is really hard to keep track of, but my first reaction when I saw this was, wait a second, didn't Google just put a bunch of money into Anthropic? So if we go back to the technology infrastructure at play here, we have the infrastructure companies like Nvidia that make the chips, and then you have Google.

[00:39:08] Paul Roetzer: With their cloud, you have AWS from Amazon, and you have Microsoft Azure. Those are the cloud companies where all the data lives, where these models connect to the data and do things. Then you have the companies building the models, like Anthropic and Inflection and Cohere and all these players.

[00:39:26] Paul Roetzer: And then you have the application-layer companies, the software companies building on top of them. So what we're seeing is the cloud companies, Google, Microsoft, and Amazon, are doing deals with the language model companies, like the Anthropic deal here with AWS. So I went back and checked my notes, and it's like, oh yeah, totally.

[00:39:44] Paul Roetzer: So in February, there was a report, the Financial Times said that in late 2022, Google invested around 300 million in Anthropic. In return, Google got a 10 percent stake in the company. Separately, Anthropic announced that week in February that Google Cloud is its preferred cloud provider and the companies will co-develop AI computing systems.

[00:40:10] Paul Roetzer: So, what is that, seven months ago? Anthropic raises 300 million from Google and basically says they're partners, like, we're going to work with Google. And now we have up to four billion from Amazon seven months later. The word is it was like 1.3 billion upfront, maybe in processing power, maybe in money, I don't know.

[00:40:30] Paul Roetzer: And then up to 4 billion based on whatever the terms of the agreement are. So what it shows is the competition is insane right now for these language model companies. So we have Microsoft, that we know put like 10 billion into OpenAI for access to GPT-4. But just like we talked about previously, they're doing deals with Meta now, who has Llama 2, the biggest open source model.

[00:40:53] Paul Roetzer: And they were the lead investor in Inflection; that was earlier this year. Then you have Google, who's invested in Anthropic and Cohere that we know of, and probably others. And then you have Amazon, that did their deal with Anthropic, but they also put money into Cohere, and they're doing deals with Meta, and they have Stability AI now.

[00:41:12] Paul Roetzer: It is mind-boggling how this space is playing out right now. And then keep in mind, Dario Amodei, who's the co-founder of Anthropic, was the vice president of research at OpenAI, and prior to that he was a research leader at Google. We've had a few episodes where we talk about how all these people are connected; they all worked together at different times, they all came from the same company trees.

[00:41:38] Paul Roetzer: Yeah, I mean, it's a huge deal what they're doing. We had a whole episode where we talked about Dario and the interview he did about what Anthropic's building. So that would be a good one to go back and listen to, to get more context around Anthropic. But they are a major player.

[00:41:54] Paul Roetzer: They're investing, obviously, billions in building their next-generation models, which will be multimodal. Dario, the founder of Anthropic, was the guy who I think led the development of GPT at OpenAI. So yeah, if you weren't paying attention to Anthropic and Claude before, now would be a good time to start. And again, if you're not familiar, they have the largest context window at the moment.

[00:42:20] Paul Roetzer: So what that means is, if you want to do a prompt in Claude, you can give it up to like 75,000 words as part of the prompt. You could give it a whole novel. You could give it the last hundred customer service transcripts. You could give it a thousand sample emails. You could give it all this stuff as, like, training data, basically, to drive the output.
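
To make that concrete, here's a rough sketch of how you might check whether a long document fits in a context window of that size before prompting Claude with it. The ~100,000-token window, the four-characters-per-token ratio, and the commented-out `claude-2` call are illustrative assumptions, not exact Anthropic numbers:

```python
# Rough sketch: will a long document fit in Claude's large context window?
# The 4-characters-per-token ratio is a common rough heuristic, not an exact count.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return len(text) // 4

def fits_in_context(text: str, max_tokens: int = 100_000,
                    reserve_for_output: int = 2_000) -> bool:
    """Check whether a prompt still leaves room for the model's reply."""
    return estimate_tokens(text) + reserve_for_output <= max_tokens

novel = "word " * 75_000  # roughly a novel's worth of text
print(fits_in_context(novel))  # → True

# If it fits, the call itself (hypothetical, requires an Anthropic API key)
# might look roughly like:
# import anthropic
# client = anthropic.Anthropic()
# response = client.completions.create(
#     model="claude-2",
#     max_tokens_to_sample=1000,
#     prompt=f"{anthropic.HUMAN_PROMPT} Summarize this: {novel}{anthropic.AI_PROMPT}",
# )
```

The point of the reserve parameter is simply that the prompt and the response share the same window, so you can't spend every token on input.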

[00:42:42] Paul Roetzer: So, major player. Crazy, the amount of money that's getting thrown around to these language model companies. You gotta think at some point somebody buys someone. I mean, I assume that in all these deals there is some sort of right of first refusal on acquiring the companies. I don't know. It's wild.

[00:43:04] Mike Kaput: So what does this mean for me if I'm an AWS customer today?

[00:43:08] Paul Roetzer: I think they said that it's actually widely available already. If your data lives in AWS, you can now start training Claude models on your data, safely, securely, already through procurement. Like, it's already there. So I think that's the biggest implication:

[00:43:26] Paul Roetzer: Claude 2 is now baked right into your AWS account, and you're able to start using your data that's already living there to build things with Claude.
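
As a rough illustration of what "baked into your AWS account" looks like for a developer, here's a minimal sketch of calling Claude 2 through AWS Bedrock. The model ID, the Human/Assistant body format, and the example prompt are assumptions drawn from Bedrock's documented pattern for Anthropic models; the actual invoke call needs AWS credentials and Bedrock access, so it's shown commented out:

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 500) -> str:
    """Assemble the JSON body Bedrock expects for Anthropic models
    (Human/Assistant prompt format; treat the exact shape as an assumption)."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

body = build_claude_request("Summarize our last 100 support tickets.")
print(json.loads(body)["max_tokens_to_sample"])  # → 500

# With credentials and model access in place, the call would look roughly like:
# import boto3
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
# print(json.loads(response["body"].read())["completion"])
```

The appeal of this route, as discussed above, is that the data never has to leave the cloud account where it already lives.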

[00:43:35] Mike Kaput: So obviously, as we know in some of the work we do, enterprise adoption and development of this stuff is complicated. But at a general level, if you are an enterprise trying to build your own model or customize one, and you're an AWS customer, you probably need to be looking into one of the models they offer.

[00:43:54] Paul Roetzer: Yeah, having your CIO, like your data team, involved. This is one where, if you're a marketer listening to this, you're probably going to need some help. You're probably not going into AWS and starting to build your own models. But you're bringing business cases for ideas to build some pilot models, things like that.

[00:44:10] Paul Roetzer: So knowing it's possible, and being able to go have a conversation with the people in your company you need to talk to to start testing it out, is really the key there.

[00:44:19] Mike Kaput: So if I'm not an AWS customer, what kind of lessons or takeaways should I be thinking about in light of this development?

[00:44:26] Paul Roetzer: Well, if you're on Google or Microsoft, you have similar capabilities with other language models.

[00:44:32] Paul Roetzer: Like, this is the future. So again, the cloud companies, there's three major players. If your data lives in those, all three of them are trying to build the ability for you to develop language models in your company based on these other foundation models. So maybe you're building with Anthropic, maybe you're building with Cohere, maybe it's Meta's Llama 2, maybe it's Stability AI.

[00:44:57] Paul Roetzer: Maybe it's OpenAI's GPT-4, whatever. These are the models that you're going to build on. So your data's got to live in a cloud, and then you use one of these models to basically train custom versions for your company. Maybe there's a customer service version, and a marketing version, and a sales version, and an HR version.

[00:45:13] Paul Roetzer: That's the assumption we're under: you're probably going to have a collection of these models in your company that are tuned specifically for different applications or different business functions. And maybe you're an AWS customer and one of them is built on Cohere because Cohere is best at marketing.

[00:45:31] Paul Roetzer: And maybe another one is built on Anthropic because it's best at handling HR and legal. That's the unknown right now. And that's why consultants who can help solve these strategies are going to make a lot of money in the coming years.

[00:45:44] Mike Kaput: All right, let's dive into a bunch of rapid-fire topics. First up: what if you could translate your favorite podcast into another language instantly?

[00:45:54] Mike Kaput: This is soon going to be a reality for at least some podcasts, because Spotify is partnering with OpenAI to launch a new AI voice translation feature for podcasts. This tool will let podcasters create versions of their shows translated into other languages. And this isn't just standard translation.

[00:46:16] Mike Kaput: Spotify's AI will actually create a synthesized version of the podcaster's own voice in the new language. And they're starting to roll this out with high-profile podcasters like Dax Shepard and Lex Fridman. The tool uses OpenAI's Whisper technology for transcription and translation.
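
Since the tool is built on Whisper, it's worth sketching what that layer can and can't do. One caveat worth flagging: Whisper's built-in "translate" task only translates *into* English, so going from English into Spanish, French, or German, as Spotify describes, implies an additional translation step, and the voice cloning itself is separate, unreleased technology. The helper below and the file name are illustrative assumptions:

```python
# Sketch of the transcription layer of such a pipeline, using OpenAI's
# open-source whisper package. Whisper transcribes many languages, but its
# "translate" task only produces English text.

def choose_whisper_task(source_language: str) -> str:
    """Pick Whisper's task: transcribe English as-is, translate other
    languages into English (its only translation direction)."""
    return "transcribe" if source_language == "en" else "translate"

print(choose_whisper_task("es"))  # → translate

# The actual calls (requires `pip install openai-whisper` plus ffmpeg):
# import whisper
# model = whisper.load_model("base")
# result = model.transcribe("episode.mp3", task=choose_whisper_task("es"))
# print(result["text"])  # English text from Spanish audio
```

So Spotify's English-to-Spanish flow presumably layers a separate translation model and its voice-cloning system on top of Whisper's transcript.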

[00:46:37] Mike Kaput: And it uses new audio replication technology to clone the voices. Right now, the initial languages they're supporting are translating into Spanish, French, and German from English. It's actually unclear how widely Spotify plans to roll this out. At the moment, there are some concerns that, rolled out in the wrong way,

[00:47:00] Mike Kaput: this could lead to misuse synthesizing people's voices. So...


[00:47:08] Mike Kaput: It definitely sounds to me, Paul, like we now have the ability to just immediately translate podcasts into different languages, whether Spotify rolls forward with this or not. How should I be thinking about a podcast strategy to prepare for this?

[00:47:25] Paul Roetzer: I just got lost for a second in how easy it is to scrape and steal text online, to publish stuff and make it appear as if it's your own. And now I'm wondering if it's going to be that easy with podcasting. Because if you can synthesize our voices, which you can, and you can write a script with Claude or GPT-4, which you can, then you can publish a podcast under our voices with us saying stuff we've never said.

[00:47:55] Paul Roetzer: You can do that. That's doable right now; there's no advancement in technology needed. When I first saw this, I thought the next unlock is not only more languages, which Meta and Google are actually the leaders in, in terms of the number of languages they can translate to, so I could see them playing in this space. But the next is going to be deepfaking of the mouth to move with the translation.

[00:48:17] Paul Roetzer: Yeah. Because right now I think their technology is just voice synthesis and translation, but the words don't match the mouth. But it is not out of the realm of possibility in the next 12 months that you could actually deepfake our entire thing, and we could have a Spanish edition of each podcast episode.

[00:48:35] Paul Roetzer: You could watch it on YouTube and it looks like you and I are speaking in Spanish. This tech scares me, honestly. It's fascinating from a business perspective, and there's no turning back; I'm not saying we should stop this. It's coming. It's already here in many cases. But I worry about this kind of stuff a lot.

[00:48:56] Paul Roetzer: It's going to be so easy to replicate anyone's voice and have them say whatever you want them to say and create audio. And as far as I know, we're still trying to figure out how to tell if text or images are AI-generated. I haven't heard anything about the ability to analyze whether or not someone actually said something, right?

[00:49:19] Paul Roetzer: I don't know if there's AI detection technology for audio yet. I would assume someone's working on it, maybe the US government; I would imagine DARPA has something for this. But I haven't seen any research papers on that: the spread of synthetic audio, and how are we supposed to know if you and I said what we said, kind of thing.

[00:49:39] Paul Roetzer: I know this is supposed to be like a cool podcast example, but man, this one worries me a lot. Cool tech, though. I guess the implication is you're going to be able to unlock more markets, which is on the pro side. Our podcast, our content, is for English-speaking audiences only, and that's not the only language spoken in the world.

[00:50:02] Paul Roetzer: So it'd be really nice if we could open up to other audiences, and especially if you have valuable content to share and to help people, it'd be nice to know you can spread that further. I don't know what the checks and balances are, like, I don't know who verifies that the translation is correct.

[00:50:19] Paul Roetzer: This goes back to, I assume you still need a human in the loop to make sure that the accuracy is there for these translations. Or are we just going to say, whatever, we trust the tech, it's 97 percent accurate, and let's just have every brand start putting out all these translations of audio and text with no verification of whether or not they're accurately translated?

[00:50:42] Paul Roetzer: I don't know. Again, we've said it before, I think 2024 is going to be the year of audio, too. Video and audio are going to be huge next year, as well as AI agents. But it's not a space I've studied a ton, and I don't know that there have been nearly as many AI research papers that have come out on the audio side.

[00:51:02] Paul Roetzer: It's like we're just starting. We're where image generation was in spring of 2022. I feel like that's where we are now with audio: it's just starting to have its moment. And now we'll start getting a ton more innovation and research in this space.

[00:51:20] Mike Kaput: So in some other news, the Writers Guild of America reached a tentative deal to end its 146-day strike.

[00:51:28] Mike Kaput: This marks some major progress toward resolving Hollywood's labor disputes. But what's really interesting is that AI played a starring role here. The deal includes gains for writers, like giving them more compensation from streaming, limiting TV writers' rooms, and restricting how studios use AI without consent.

[00:51:51] Mike Kaput: I mean, AI acted as kind of a lightning rod issue during this strike. It wasn't the only thing they were striking over, but it did galvanize a lot of other actors and the public at large to respond positively to this idea that writers were trying to protect their livelihoods from being impacted by artificial intelligence.

[00:52:10] Mike Kaput: And according to some reporting at The Guardian, apparently under the new terms, studios cannot use AI to write scripts or to edit scripts that have already been written by a writer. It also prevents studios from treating AI-generated content as source material, like a novel or a stage play, that screenwriters could then be assigned to adapt for a lower fee and less credit than a fully original script.

[00:52:39] Mike Kaput: So Paul, what did you take away from the AI-specific provisions of this deal?

[00:52:46] Paul Roetzer: Everything I saw was that the writers won here and set an important precedent. And I think we're going to see a similar follow-on in other industries. And the basic takeaway, it appears, as you were saying, is that the writers are allowed to use AI to assist in scriptwriting, but the studios aren't allowed to replace the writers, in essence.

[00:53:10] Paul Roetzer: So the studios can't use AI to do the scripts or to improve them. So the main thing is that it appears as though the writers got what they wanted here, which is the ability to use it while protecting themselves from being replaced by it. So I would expect we'll see some other industries very quickly use this as precedent to protect other humans.

[00:53:37] Paul Roetzer: And it's a tricky one, because in reality, as we've talked about before, AI probably could write scripts with the tech we have today. It probably could do it. And as it gets multimodal, it could probably even build storyboards as it's writing. So I think it was important that the writers did what they did now.

[00:53:54] Paul Roetzer: Because I feel like if they didn't negotiate this here, they would have lost the chance. This was the one moment where they could get these concessions, because the tech is going to make possible the things they were afraid it would be able to do.

[00:54:10] Mike Kaput: So in other news, Getty Images is partnering with Nvidia to launch Generative AI by Getty Images, which is a new AI image generation tool trained only on Getty's licensed photo library.

[00:54:25] Mike Kaput: It gives users full copyright protection when they publish images commercially created by the tool. The Verge actually tested out the tool and found that it generated pretty high-quality, realistic photos from text prompts, just like any other image generation tool. And these were based on actual stock photos that Getty owns.

[00:54:48] Mike Kaput: They did say that when they tried to generate illustrations, those were a little less sophisticated, but in terms of actual stock photos created by AI, they were quite good. Getty has also said it's going to pay creators if it uses the images they create to train the current and future versions of the model.

[00:55:10] Mike Kaput: And it is also limiting images of real people, so that you can't start manipulating photos to look like public figures and spread misinformation. So I guess when I'm reading this, I'm thinking to myself: is Getty now a viable option for me if I'm a business that wants to avoid all these possible legal issues

[00:55:31] Mike Kaput: with using generative AI?

[00:55:34] Paul Roetzer: Yeah, I definitely think for enterprises that are more risk averse, or worried about being sued for using, like, a Midjourney, which maybe used some illegal images to train its models and eventually you get sued for it in some way, this seems like a safer route.

[00:55:55] Paul Roetzer: It's not going to be as good. That's, like, right up front, we just acknowledge: Midjourney, DALL-E 3, they're going to blow this out of the water most likely, because they don't have as many guardrails and limitations on how they were trained, and therefore on the quality of what they're able to output.

[00:56:09] Paul Roetzer: But for corporations that want a safer route, this is probably it. Now, just to clarify, it says it gives users full copyright protection when publishing images. What I would say there is: I think what they're implying is that copyrights weren't infringed upon to train the model. That does not mean you can own a copyright or a trademark on something you output from the model, because that's not currently allowed under U.S. copyright law.

[00:56:37] Paul Roetzer: So just to clarify there, just as something to consider. So yeah, I would say this is probably a more conservative play, probably a safer play for bigger enterprises that have to worry about, or have legal challenges to, using a tool like DALL-E or Midjourney, where they may get questioned about the

[00:56:54] Paul Roetzer: legality of how the models were trained.

[00:57:00] Mike Kaput: So in a pretty major issue in AI this week, it looks like Google has been indexing at least some of the conversations that users are having with Bard, meaning they were starting to appear in search results. Now, they've since clarified the issue: on Thursday, Jack Krawczyk, a senior product director at Google who works on Bard, tweeted that

[00:57:27] Mike Kaput: only a subset of chats were actually indexed, and these were chats that users had already agreed to publicly share based on Bard's terms, and they were also chats that had been linked to publicly elsewhere online. So I believe that's where the issue came from: Google actually indexed those and started displaying them in certain search results. But he also said that those chats have now been removed from Google Search.

[00:57:52] Mike Kaput: So it doesn't sound like this became a widespread issue, and it doesn't sound like it's something you necessarily need to worry about, that suddenly all your private chats with Bard are going to be in the public sphere. That's all really good. But I kind of have to ask the question that might be on the minds of other people in the audience:

[00:58:11] Mike Kaput: Is Google losing a step? We've seen a series of kind of underwhelming AI releases from them, and now stuff like this.

[00:58:19] Paul Roetzer: I don't know that I'd read too much into it. I think they got caught. So basically, the way it worked, just to clarify even further: if you went in and got an output in Bard, like you gave it a prompt and it gave you something, and then let's say I did that and I wanted to share it with Mike, so I chose "share link" with Mike.

[00:58:42] Paul Roetzer: That sharing of the link with Mike, where, as the user, I would assume only Mike is going to get that link and be able to access it, meant that link was then being indexed. So my choice to share a link with another user was making it indexable in Google, and now able to show up in results. So maybe it's not embarrassing stuff.

[00:59:04] Paul Roetzer: Maybe it's just standard stuff, whatever. And maybe their terms clarified that. But realistically, I think the expectation as a user is: if I share something with Mike, I don't expect it to be the number one result on the search engine results page. And so they came out and basically said, "Oh, our bad."

[00:59:23] Paul Roetzer: When they said "our bad," they didn't say it wasn't intentional. They just basically got caught that it was happening, and users didn't seem to understand that that's what they were agreeing to. The other side of this, from a marketing perspective, is that I think it was an SEO guy who figured it out, and he tweeted it out.

[00:59:41] Paul Roetzer: Because from an SEO perspective, they saw, "Oh my gosh, Google's indexing Bard chats. Let's go optimize for Bard chats and start sharing the links out, and now we can get listings at the top of Google results." Google obviously doesn't want SEO people gaming the system, so it probably became more of, "Oh, we've got to turn this off."

[01:00:02] Paul Roetzer: I think it was an intentional choice that they were allowing them to be indexed. It's not like they came out and said, "Oh, we forgot." No, they didn't forget that; if they did, someone got fired. That's a pretty big thing to forget. So I think the moral of the story, though, is what we've talked about before: trust in these systems, and knowing what you're putting into them, what they're keeping, and what they're sharing.

[01:00:24] Paul Roetzer: And so if Google's doing stuff like this, you've got to really read closely on all these other tools you're going to use and know how they're using your data. And that might mean taking the time to read their terms of service, or taking their terms of service and putting them into ChatGPT or Claude and saying, "What items in here should I be paying attention to?"

[01:00:43] Paul Roetzer: Like, "What should I make sure I read thoroughly?" kind of thing.

[01:00:49] Mike Kaput: So in our last topic for today: Ethan Mollick, who we've talked about many times on the podcast, is one of the top AI experts out there and a professor at Wharton. He just released a series of articles that explain how students can use AI to improve their learning.

[01:01:04] Mike Kaput: And this series covers use cases such as using AI as a feedback generator, a personal tutor, a team coach to improve your group work, and as a learner itself, so students can learn a subject better by teaching it to AI tools. It also offers practical advice for students and their teachers on how to properly understand large language models and the limitations they have, and it offers some best practices for using them.

[01:01:35] Mike Kaput: So I found this really fascinating, useful material. If I'm an educator or a parent, Paul, what is the quickest and best way for me to use information like this?

[01:01:44] Paul Roetzer: Experiment. And this is what we were saying earlier: pick some use cases that you actually want to try, and then go test them.

[01:01:50] Paul Roetzer: So I think there are four great ones here. And the beauty of this is, they give you the sample prompts. So the AI tutor one starts with "You are"; this is the prompt it's teaching you to give: "You are an upbeat, encouraging tutor who helps students understand concepts by explaining ideas and asking questions."

[01:02:06] Paul Roetzer: "Start by introducing yourself to the student as their tutor who's happy to help them. Only ask one question at a time; never move on until the student responds." So then it's this very in-depth prompt. But this is the perfect example: if you don't know how to have ChatGPT help your child, this is a great way to go in and do it.

[01:02:25] Paul Roetzer: It's Oh, that's how you write a great prompt. And then just that one example of prompt teaches you to start. Prompting differently. And as we've said before, like prompting still matters, especially with chat, GPT in these language models, your ability to give it guidance, set guardrails, tell it to go step by step.

[01:02:42] Paul Roetzer: So you understand, these are all really essential ways to get the most value out of these systems. So I think stuff like this is great. And for teachers and students who really need to figure out how to use these things responsibly, as quickly as possible, I'm happy to see they're sharing this stuff out.

[01:03:00] Mike Kaput: Well, that's a wrap on another packed week in AI. Paul, thank you as always for decoding what's going on in AI this week. I appreciate it, and I know our audience appreciates it.

[01:03:11] Paul Roetzer: Yeah, and a quick note: if you don't already subscribe to the newsletter, we have a newsletter on Marketing AI Institute's site.

[01:03:16] Paul Roetzer: It comes out every Tuesday morning, and what Mike's done is we've reformatted it. So if you're a previous subscriber, you'll notice something different this week. And if you haven't subscribed, this is how it's going to work: we're actually going to take all the key topics and rapid-fire items, as well as some additional topics that don't make the cut each week.

[01:03:34] Paul Roetzer: And we're going to build all of those right into the newsletter. So it'll be kind of like a recap of everything, and you'll be able to click deeper into it. We put all the links in the show notes for the podcast, but now you'll be able to get quick access to them in the weekly newsletter as well. So we'll put the link to the newsletter in the show notes, but you can just go to marketingaiinstitute.com,

[01:03:52] Paul Roetzer: and under Resources is the newsletter. So that's a good way to keep up to speed as well. So, Mike, thanks as always for curating everything and leading us through the conversation. And we'll talk with everyone again next week.

Mike Kaput: Great. Thanks, Paul.

[01:04:07] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com.

[01:04:20] Paul Roetzer: Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:04:28] Paul Roetzer: Until next time, stay curious and explore AI.
