
44 Min Read

[The Marketing AI Show Episode 67]: Testing ChatGPT's Vision Capabilities, Rewind's Controversial Wearable Pendant, Enhanced Personal Support from Assistant with Bard


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


There was no shortage of AI product updates this week! We are back with lots of exciting AI news for Episode 67 of The Marketing AI Show. Paul and Mike break down their experiences with ChatGPT Vision, delve into AI tech controversies, discuss a more personalized AI assistant with Bard, and more.

Listen or watch below, and find the show notes and transcript further down.

This episode is brought to you by our sponsors:

Use BrandOps data to drive unique AI content based on what works in your industry. Many marketers use ChatGPT to create marketing content, but that's just the beginning. BrandOps offers complete views of brand marketing performance across channels. Now you can bring BrandOps data into ChatGPT to answer your toughest marketing questions.

The AI for Agencies Summit is a virtual half-day summit happening on November 2. The AI for Agencies Summit is designed for marketing agency practitioners and leaders who are ready to reinvent what’s possible in their business and embrace smarter technologies to accelerate transformation and value creation. To register, go to AIforAgencies.com and use the code AIPOD50 to get $50 off your ticket.

Listen Now

Watch the Video

Timestamps

00:04:14 — A hands-on test of ChatGPT’s Vision capabilities

00:23:14 — Rewind introduces controversial wearable AI pendant

00:35:41 — Google announces Assistant with Bard

00:43:06 — Adobe aims to improve AI image editing with Project Stardust announcement

00:45:30 — Canva begins Magic Studio rollout

00:48:41 — Zoom expands features of AI companion

00:51:55 — Asana introduces new AI features

00:54:45 — Tom Hanks warns followers against synthetic AI media

00:58:47 — Anthropic in talks to raise at least $2 billion more in funding

Summary

A hands-on test of ChatGPT's Vision capabilities

As we covered in last week’s episode, ChatGPT can now see. This means ChatGPT Vision can now understand what is happening in any image you give it and do tons of valuable things using these images.

We’ve been blown away by the capabilities so far, so in this episode we wanted to cover our experiences in our initial tests and talk about the specific use cases for marketers and business leaders.
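For readers who want to experiment with vision programmatically rather than in the ChatGPT interface, here is a minimal sketch of how an image-plus-text question can be structured for OpenAI's Chat Completions API. The model name reflects the vision preview available at the time of this episode, and the image URL and prompt are placeholders; sending the request assumes the `openai` Python package and a configured API key.

```python
def build_vision_request(prompt, image_url, model="gpt-4-vision-preview"):
    """Assemble a Chat Completions payload pairing one text prompt with one image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # Vision requests use a list of content parts: text plus image_url.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "Best guess: how many people are in the stands? Try counting by section.",
    "https://example.com/stadium-photo.jpg",  # placeholder image URL
)

# With a configured client, you would send it like:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(**request)
#   print(reply.choices[0].message.content)
```

This mirrors the stadium-counting test Paul describes later in the episode: the same follow-up prompting he did in the ChatGPT interface can be done by appending further messages to the conversation.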

Rewind introduces controversial wearable AI pendant

A new AI product is creating a firestorm online, and that is putting it lightly. The company in question is Rewind, and they primarily sell an app that records everything you do on your computer or phone. The app then uses AI to help you surface insights from all the information gathered, doing things like automatically capturing and indexing meeting notes, finding specific messages quickly, and creating automatic summaries and breakdowns of what happened throughout your day. Rewind is not some fly-by-night operation; they've raised $33M from backers like Sam Altman and Andreessen Horowitz.

Now, the company is making waves with a new product release called Pendant. Pendant takes this all one step further. It’s a wearable device, like a necklace, that captures everything you hear and say in the real world. The possibility that everyone could soon be wearing a device that records everyone else is…controversial.

On one hand, the company has thousands of preorders. CEO and co-founder Dan Siroker has been vocal on X about all the possible benefits of this technology; for instance, you will never forget what someone did or said ever again in any interaction. On the other hand, many people online are pointing out how quickly this could go very wrong by invading the privacy of others and turning every interaction into an on-camera and on-mic affair.

Google announces Assistant with Bard

Google just introduced Assistant with Bard, a personal assistant powered by generative AI. This AI assistant will act as a personalized helper across Google apps like Gmail and Docs. For instance, you could ask it things like “Help me catch up on important emails from last week” or “Where is the birthday party I’m attending today?” and it will leverage your information across Google apps to help you achieve valuable tasks.

Says Google: “It combines Bard’s generative and reasoning capabilities with Assistant’s personalized help. You can interact with it through text, voice, or images — and it can even help take actions for you.”

It’ll be rolling out on Android and iOS in the coming months.

There are more exciting technology and AI updates in the Rapid Fire section of the podcast, including Adobe and Canva’s new design features. Listen, subscribe, and we’d love your review!

Links Referenced in the Show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

Paul Roetzer: [00:00:00] If these things become capable of doing almost everything, what do I need to go buy all these separate software products for? And I think this is something we're going to all be grappling with in 2024, is how capable are these models?

Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

Welcome to Episode 67 of The Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. Hello, Mike. Hey, Paul. We are doing this one Friday morning, Eastern time, October 6th, [00:01:00] because, as seems to be the trend lately, Mike and I are both traveling next week.

So, neither of us is going to be able to do this on our usual Monday morning spot. So, if anything groundbreaking happens Friday afternoon or Monday morning and we're not talking about it, you now know why we're not talking about it. And we will catch up on it on the next show. So, episode 67 is brought to us by Brand Ops.

Many marketers use ChatGPT to create marketing content, but that's just the beginning. When we talked with the BrandOps team, we were impressed by their complete views of brand marketing performance across channels. You can now bring BrandOps data into ChatGPT to answer your toughest marketing questions.

Use BrandOps data to drive unique AI content based on what works in your industry. Check out BrandOps.io/MarketingAIShow to learn more and see BrandOps in action. And this episode is also brought to us by [00:02:00] Marketing AI Institute's AI for Agencies Summit, which is coming up in like three weeks-ish.

I really gotta figure out... the deck. This is like an ongoing theme. I mean, if you listen to the show, you know it's so hard to build. Like, I'm doing the AI-emergent agency, that's my opening keynote for it, and you just try and think of what is going to happen between now and November 2nd when this summit happens. But yeah, so this is a virtual summit.

It's a half day virtual summit that's built for marketing agency practitioners and leaders who are ready to reinvent what's possible in their business and embrace smarter technologies to accelerate transformation and value creation. So it's going to go from, I think it's noon to five on November 2nd.

We've got, I don't know, there's probably like 12 to 15 presenters. We're going to talk about legal implications. We're going to talk about building out services. We're going to talk about how to drive profitability and efficiency. We're going to go through five case studies of agencies doing cool things.

So, if you [00:03:00] are a marketing agency practitioner or leader, definitely check that out. You can go to AIforAgencies.com, that's F-O-R, AIforAgencies.com. And AIPOD50 will get you $50 off. So again, the AI for Agencies Summit is coming up on November 2nd. And quick background, I think most people who listen to the show know this, but I owned a marketing agency for 16 years.

We were HubSpot's first partner back in '07. I sold the agency in 2021, and Mike worked with me at the agency for nine years, Mike, was it? Yeah, I think it was about nine years. Yeah, so Mike and I have a pretty significant background in the agency world, actually. And I was at an agency five years prior to starting my own.

So I spent the first 21 years of my career in the agency world. Okay, so with that, let's get going. We have some fascinating stuff this week. I would say we didn't necessarily have as many groundbreaking things this week, but some really cool advancements [00:04:00] in technology, some new features coming from some very well-known companies that many of you probably use in your tech stack, and a few big topics to talk about to start us off.

So let's go ahead and jump into the main topics for the day, Mike.

Mike Kaput: Sounds good. So first up, we covered last week that ChatGPT can now see. It has vision capabilities, meaning it can now understand what is happening in basically any image you give it and do tons of valuable things with images that you feed into the system.

Now, we all just got access in the past week, and we've been pretty blown away, I would say, by the capabilities so far. So in this segment, we wanted to cover our experiences in very initial tests and try to drill into some specific use cases for marketers and business leaders, because I feel like while many, many people are quite impressed with this tool, not enough people are

gaming out [00:05:00] really what's truly possible when you add vision capabilities to a powerful, you know, AI assistant like ChatGPT. So to kick things off, Paul, I wanted to maybe ask you about anything that jumped out to you as you've tried it out, or that you've seen other people doing. And then I'll kind of share a few of the initial tests I've done as well.

Paul Roetzer: Yeah, I mean, we did talk a little bit at length last week about it, but to actually see it working is, it's pretty hard to comprehend. So I keep going back to this idea that these models are supposedly just predicting words. And yet, I got it, I think, Tuesday night, maybe, it got turned on in my ChatGPT account.

When you give it images and you say, analyze this, it's like, how is it doing this? I still struggle to understand how it's actually doing it. And so, I got it like 11 o'clock at night, and so I went in and just scanned my photo album. I was trying to find [00:06:00] something that could be challenging for it just to see.

And so the first thing I gave it was actually from a Cleveland Browns football game my wife and I were at a few weeks back. We got there pretty early, and so I had a photo of the field where you could see maybe half of the stands, and it was pretty scattered in terms of attendance at that point. We were still an hour-plus early. So I put it in and I said, how many people are in the stands?

That was my prompt to it. And its first answer was, it's difficult to provide an exact number just by looking at the image, but the stadium appears to be partially filled. You'd need an exact count or more detailed method to get a precise number of attendees. So I replied and said, best guess, question mark, try counting by section.

So I just further prompted it. And then it came back with, all right, I'll give it a go. Based on the image, I'll start by estimating the number of people in a small section, then I'll try to extrapolate for the visible sections of the stadium. From the image, a smaller, clearer section might have [00:07:00] around 50 to 70 people.

Given the number of sections visible and the varying density of attendees in each, we might be seeing about 30 to 40 sections in this image. So if we average the people per section to about 60 and multiply it by the number of sections, 60 people per section times 40 sections equals 2,400 people, approximately.

Remember, this is a very rough estimate for only the visible section of the image. The actual number could be higher or lower, and the entire stadium would obviously hold many more people. So that's what I'm saying: it obviously has way more advanced capability, the ability to reason, to follow a chain of thought, to follow a step-by-step process to do things.

And whether that number is right or wrong, I don't know. I didn't actually like zoom in and count and verify. But for me, right now, it's all about the possibilities. And, you know, I'm going to hear some of the examples you've done, because I know you've actually experimented with it a little bit more than I [00:08:00] have.

You can't use this and not have your head swimming with what's going to be possible in the very near future. And so I think that's the thing I was left with. And like we've said on the show many times, this is the least capable form of this we're ever going to see. Google's going to come out with Gemini sometime this fall, most likely.

That is, I assume, multimodal from the ground up. They're building it this way where you can feed it images and maybe videos and text, obviously. GPT-5, whatever that is going to be, whenever it's going to be, is likely going to be multimodal from the ground up. So what I think we're seeing is a glimpse into the very near future of what these things are going to be able to do.

And they're not being trained in specific verticals. They're being trained with this horizontal capability where you can just start to imagine all the applications. So what are some of the things you used it for? Cause I think, like I said, you kind of explored some more use cases than I did.

Mike Kaput: So I'm going to share first a fun example and then [00:09:00] get into the business and marketing ones, because the fun example hints at exactly what you just said, that it looks like this thing is doing some type of really interesting reasoning. So for instance, I was traveling, like you mentioned, for a talk this past week, and I took a very terrible photo out of an Uber of the city disappearing behind me.

It is literally just a small portion of the city's buildings. And it is a city that does not have a super recognizable skyline in this photo. Like, you could look at this and be like, this could be any city in America, I have no idea. So I said, hey, given the limited information you've got here, can you tell me what city I'm in?

And ChatGPT comes back and says, look, this is going to be really hard because there's limited information, no distinct landmarks. However, I'm going to give it my best guess. It says, the skyline suggests it could be any midsize city in the U.S. And it says that, given the broad possibilities here, it looked at, hey, it's [00:10:00] actually got modern building structures, and given the broad expanse around the city, I'm guessing it's either somewhere in the southwestern or western United States, like Phoenix or Denver. And I was driving away from Denver.

So I realized that many people, when I've posted about this so far online, have said things like, oh, that's really amazing, but it's like making stuff up, and it's very confident and often wrong. I'm like, I get all that. I understand the limitations of these tools. I'm not saying just because it says something confidently that it is right. However, you cannot read something like this and not say, this is a hundred percent better than I would ever guess in my entire life, even if it was wrong half the time.

It's right half the time here, and it is incredible to me what it is able to do based on very limited information. So that was kind of a fun one, but I've already started using it in some very simple and [00:11:00] initial ways. I look really forward over the next couple months to just going completely down the rabbit hole with this. But to start, one example that I had ready to go the day before I got access to this:

I had been taking some screenshots of some issues we were having with a contact record in HubSpot. In HubSpot, you can dive into the history of changes made to all the properties on a contact record. We were trying to figure out, why did something change that should not have changed? One of the screenshots had no sensitive information or anything proprietary that we wouldn't care to share, like I would send it to HubSpot support or whoever.

So I shared it, and it was literally just three contact property changes. And I said, can you help me diagnose this? This person's contact info changed, and I don't know why. It gave me an incredibly solid analysis that said, look, here are the three things that could be going on based [00:12:00] on what I'm seeing. It looks like these are the most likely reasons this contact profile changed.

By the way, here are some suggestions on what to do about it, and based on the owners of those properties, who to talk to in your organization about who made these changes. To your point at the beginning of this episode about our agency background, it is 100 percent a question a client would have come to us as HubSpot experts to ask.

It would have taken us 10 times as long to diagnose it. I know because I am reasonably competent in HubSpot, and it took me 10 times longer to figure out what was going on. It was not some big rocket science analysis, but it didn't have to be. It would have immediately cut down by 10X the amount of time spent on it.

And more importantly, I didn't have to spend a bunch of valuable brain power slamming my head against a wall for a HubSpot issue. So that was a big one where I was like, oh my God, this is [00:13:00] where we're starting to get into the implications here. I've also used it so far to do things like, what was really cool,

I don't know if I could do this at scale, but it's super helpful for editing slides. You know, when you're staring at something that's visual, whether it's a slide or a finished design of a report, editing the words is a real pain because you're looking at everything all at once. I intentionally put a typo into a slide, took a photo, and said, hey, can you just vet that there are no typos? And it found it easily.

So that was super helpful. And you know, when you're not in a word processing program, it can be hard to make those edits, when you're in PowerPoint or something. I also gave it a chart of my LinkedIn follower growth over the last 90 days, just the ups and downs visually. I said, look, can you tell me anything interesting about this?

What should I be thinking about to grow my followers? It gave pretty good suggestions [00:14:00] that identified, you know, the peaks and valleys, and said, here, you might want to go look into those and see what you posted that did really well on that day. I literally took a photo of someone's LinkedIn profile and I said, can you give me the pros and cons of how well the copywriting has been done on this page?

Like, break it down for me. It did it really well. And I did the same thing for an ad in the airport. I said, what works about this ad? Why is this here? What might have been the thinking behind placing this ad in an airport? It's incredible. It broke it down perfectly.

Paul Roetzer: Yeah, I, again, this is like two days of access to the stuff we're talking about.

So you can imagine, though, the couple of things that jumped out to me since Tuesday. This is just the initial version of this. The accuracy, the reasoning capability, the problem-solving capability, it's only going to improve. And we're seeing, you know, [00:15:00] restricted versions of this released, like they've put a bunch of guardrails in place. Which then leads me to, wow, what happens as soon as the leading open source models also have this kind of capability? And again, keep in mind, OpenAI is going to be ahead of a lot of those players, but arguably they're maybe six months to a year behind whatever OpenAI is doing right now. You're going to have access to this within these open source models, and some of them aren't going to have

the restrictions put on them that these models do. Like right now, I think one of the things is if you put in an image of people, maybe it won't analyze faces or something, I think there's some stuff around that. The other thing that I think is just really intriguing is the cascading effect on these SaaS companies.

So you just mentioned HubSpot as an example. To do what you were just asking, we would have had to have called customer support, and it's hit or miss if the person you talk to has a clue how to solve the thing. Usually what they do is go look in the FAQs themselves [00:16:00] or the knowledge base and try to find it for you.

Yep. So, if these foundation models that OpenAI and Microsoft and Google and Anthropic are building have these apparent vertical capabilities where they're able to seemingly do anything: analyze this, tell me how to put this Ikea thing together, tell me how to fix this bike,

just count this thing, analyze this chart. Then what do you build software for? If these things become capable of doing almost everything, what do I need to go buy all these separate software products for? And I think this is something we're all going to be grappling with in 2024, is how capable are these models?

And that's why I think all this experimentation and testing is really critical. Because, you know, go back to our agency days, we would build these monthly reports where we would literally take screenshots out of HubSpot and Google Analytics. [00:17:00] Here's what happened, drop it onto the slide, and then the account person would go through and say, this is what happened, here's what we did this month, it's up, it's down, whatever.

And you're spending all this time just looking at these charts. If these things can do that, it's a significant change. And now I start to question, well, do I need separate analytics software? Do I need to be buying other things if it can just do this on the fly? And as we know, it's going to be able to build charts as well.

Like it already does that with Code Interpreter / Advanced Data Analysis. So I think one of the biggest unknowns going into 2024 is: how far are these capabilities going to go with these multimodal models? When, again, Google Gemini we assume is coming, when we know we already have these capabilities in GPT-4V, and we assume they are a precursor to GPT-5.

Like, that's an unknown for us as marketers and business [00:18:00] people. It's a huge unknown for venture capitalists who are making these bets on software companies that are building these vertical solutions for language and image and video, when maybe we don't need those things. Maybe we just literally need OpenAI and Microsoft and Google.

So that's the stuff I find: the use cases are super intriguing, and experimenting with these things is fun and fascinating, but my mind goes bigger, macro. Oh my gosh, what are the implications of this for business, for software, for where these companies go? Are we truly going to have three or four winners and everybody else is just irrelevant in 12 months?

I don't know, but it's certainly a possibility that we don't end up needing a bunch of vertical solutions. These things are just trained to somehow do everything, general purpose. It's that whole, was it Microsoft? I forget who published the paper after GPT-4 came out. "GPTs are GPTs," like generative pre-trained transformers are general purpose technologies.

Like they have [00:19:00] this wide-scale capability, and because there's no user guide when they release this stuff, it's up to all of us users to realize what it's actually capable of doing. And right now it seems to be really capable of a lot of very intriguing things in marketing and business.

Mike Kaput: Yeah, that highlights a point that I keep coming back to. And again, I understand why people are sometimes skeptical of technologies like this.

I understand that they have vast limitations, and people are like, okay, I'm tired of hearing the hype around ChatGPT. You can tell people are getting burnt out. But I really need to emphasize to people, both when you're thinking about this tool and the implications, we have to think much, much bigger.

People are not thinking nearly creatively or big enough when it comes to what this means. This is, functionally, technology that can see and then think about what it sees. That's a really broad statement, but it is [00:20:00] intentionally so. It is going to change everything that has anything to do with looking at things and thinking about them, which, I don't know if you realize, is probably your entire life.

Paul Roetzer: Yeah, and whenever, you know, we're doing talks and stuff, or teaching an intro class, what I always tell people is: the only way to solve for this is comprehension and competency. You have to understand what this stuff is capable of. And then the competency comes from experimentation. Listening to this podcast is great.

It'll help you solve some of this stuff, or at least give you inspiration to go figure things out. Taking some classes, reading some books, all that's great. But until you actually go in and say, okay, I'm going to pick five things I do every week, and it seems like GPT-4V might be able to help me with those,

let me see if it actually does. You have to go in and try your own stuff and start to really experiment. I saw one, our friends at Stream Creative, I think they put an example up where they [00:21:00] took a flowchart from their whiteboard and basically had it analyze that flowchart. That's the kind of stuff where you have known things you already do, but once you start to see its capabilities, you're like, I wonder if it could do this.

Now you start to think about strategic ideas and bigger, innovative ideas. And that's to me where a lot of the really cool stuff is going to happen: once people get the fundamentals of what it's able to do, then they go and start pushing the limits. And it does not at this point appear that we're even scratching the surface of what the limits and capabilities are.

Mike Kaput: Just one more comment, then I'll get us onto the next topic. It does just strike me right now, talking: I cannot believe that this was less than a year ago. We were in a world that did not have this tool, and look how far we have come. It's incredible.

Paul Roetzer: October 6th. So it came out November 30th. GPT-4 was introduced in March, right?

Like, [00:22:00] middle of March of this year. So yeah. And when you think about it, so many people feel like their organization is so far behind and isn't solving for this stuff yet. But I always have to put that in historical context of, yeah, this is like 11 months old. Most of the world has only even known this stuff was possible for that long.

And then this form of it that we're now interacting with is, what, six months old? So yeah, it's really hard to think about how good it has gotten in that time period. That's always the question: what do that exponential growth curve and those exponential advancements in technology look and feel like?

It's a really hard concept for people to wrap their minds around. And then you look forward and say, wow, what does a year from now even look like? How do you even project that and start to think about it? You have to assume we have GPT-5 and we have open source multimodal models. You have AI agents that are actually able to take actions on your behalf.

And [00:23:00] it's, I don't know. I mean, at some point we should probably sit down and do a "what do we think is coming in 2024" episode or something. That'd be great, yeah. Yeah, it's fascinating stuff.

Mike Kaput: So, as our next topic shows, though, maybe not all advancements are good advancements, because there is a new AI product that is creating a firestorm online, and that's kind of putting it lightly.

The company behind this product is called Rewind, and until now they have primarily sold an app that records everything you do on your computer or phone, everything in your digital life, then uses AI to help you surface insights from all of that data. So it'll do things like automatically capturing and indexing meeting notes.

You're able to quickly find specific information or messages. And you can kind of get automatic summaries and breakdowns of literally everything that happened in your digital life, day in, day out. It's kind of like having the perfect [00:24:00] memory. They're also not like some crazy side project or startup.

They've raised 33 million from people like Sam Altman at OpenAI and Andreessen Horowitz. But their new product is really what's getting people going a little crazy. It is called Pendant, and Pendant takes everything I just talked about many steps further. It is a wearable device, like a necklace, that actually captures everything that you hear and say in the real world.

And this possibility that, oh my gosh, everyone could soon be wearing a device and walking around recording everyone else is controversial, to say the least. So on one hand, the company has thousands of pre-orders, and just a week or so after launching this, the CEO and co-founder Dan Siroker has been pretty vocal on X about all the possible benefits of the technology.

I mean, there's a lot of good things you can do with the ability to never ever forget an interaction [00:25:00] ever again. On the other hand, many, many people online are pointing out this could go really wrong. Obviously it invades the privacy of others, turns every interaction into something that's, you know, on camera and on mic.

Interestingly, noted investor Jason Calacanis, you know, of the All-In podcast, said, quote, only a deranged lunatic would want to live in a world where their every word is recorded and summarized for all time by a tiny covert pendant. He even said to the founder online, not saying you're a deranged lunatic, Dan, but you're going to cancel this project and regret it, 100 percent guaranteed.

Now, Paul, I don't have a lot of questions here. I want to just give you the floor, because I know you had some strong reactions to this and you engaged a little bit with Rewind's CEO online. Walk us through what you thought of this.

Paul Roetzer: Yeah. I mean, I try. I try really hard to be neutral in a lot of this stuff and like I've said before on the podcast, I think oftentimes we're just trying to present the information and let people [00:26:00] make their own decisions.

There are just some categories and some ideas that I feel pretty passionately shouldn't exist, and this would definitely be one of them. So I kind of kept my mouth shut for a couple days. I saw this thing and, you know, I was watching the tweets. I follow Dan, I follow the company, mainly because they were part of the inspiration for the Law of Uneven AI Distribution I wrote earlier this year.

So when I created like the three variables that would determine the benefits of AI, of understanding, access to, and then acceptance of, they were the product I had in mind when I thought about the acceptance of. Meaning, Mike may love this thing. You may love this thing as the listener. Somebody else may love this thing.

Andreessen Horowitz may love it. Sam Altman may love it. And they're going to get the benefit of it. I will never, ever, ever use this product. And so for me, I won't have the benefit of it, because I am not willing to accept what I would have to give up to get that benefit. So, all that being said, Brian Halligan, [00:27:00] co-founder and chairman of HubSpot, a friend of mine,

tweeted out the other night, like, cool or creepy, I think was all he said basically, and he included an image of the pendant. And I replied, 100 percent creepy, like this product category should not exist. The AI wearables that are always listening, like, recording, are just not something that we should have in society.

So that was kind of it for a moment. And then I saw Dan, the founder, tweeted out, basically he's been getting a lot of pushback from Jason and others, and he keeps throwing in people's faces, hey, well, we've got over 3 million views and 3,000 pre-orders and whatever. So, you know, screw the haters kind of thing.

So he posted, though, a video of how they're dealing with this number one pushback of privacy invasion. And so I was like, all right, I'll play the game. I clicked to see what it was. So here is what they say. So again, imagine a device that maybe I see you wearing. Maybe I don't. And you are recording everything, every [00:28:00] conversation that's happening.

You're on the train, you're recording everything around you. You're in a coffee shop. You and I are talking on the phone, we're talking over coffee. Like, everywhere, you're just recording literally everything in your life. So this is how they are currently positioning that they're going to solve for not invading people's privacy.

There's this FAQ on their site: How can we prevent people from being recorded without their consent? This is what it says. People loved that we planned to offer features to prevent people from being recorded without their consent. But many were curious what specifically we were thinking. So I'll share two of our ideas.

Now keep in mind, they are launching a product they are selling right now, that he claims they have over 3,000 orders for, and they have ideas of how they're going to prevent people from being recorded without their permission. So these are their ideas. One, only store recordings of the user and anyone else who has verbally opted in. Using voice fingerprints and speaker, I don't even know what this word is, diarization, it's possible to tell who said what. [00:29:00] If a person hasn't previously said, quote, sure, you can record me, then don't store anything they say, and it is as if they never said a thing. So idea number one is people are going to opt in or out of being recorded.

Number two is only store text summaries of what they said, not verbatim transcripts and recordings. Summaries are basically what a fantastic note-taker would have written down manually, but in this case, it is done for you automatically. These are just two of many ideas that will help us achieve our goal of making sure no one is recorded without their consent.
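As an aside, the mechanism described in that first idea, filtering a diarized recording down to only the speakers who have verbally opted in, can be sketched in a few lines. This is purely a hypothetical illustration: Rewind has not published an implementation, and the segment format, consent phrase, and function names here are invented.

```python
# Hypothetical sketch (NOT Rewind's actual implementation) of consent-based
# filtering: store only words from speakers who have verbally opted in.
# Each segment is a (speaker_id, text) pair; the speaker_id would come from a
# voice-fingerprinting / diarization step, which is not shown here.

CONSENT_PHRASE = "sure, you can record me"

def update_consent(consented, segment):
    """Add a speaker to the consent set if this segment contains the opt-in phrase."""
    speaker, text = segment
    if CONSENT_PHRASE in text.lower():
        consented.add(speaker)

def filter_segments(segments, consented):
    """Keep only segments from consented speakers; drop everyone else's words,
    'as if they never said a thing.'"""
    kept = []
    for segment in segments:
        update_consent(consented, segment)
        speaker, _ = segment
        if speaker in consented:
            kept.append(segment)
    return kept

segments = [
    ("alice", "Sure, you can record me."),
    ("bob", "Let's keep this off the record."),
    ("alice", "Great, let's talk business."),
]
print(filter_segments(segments, set()))
# Only Alice's segments survive; Bob never opted in, so his line is dropped.
```

The hard part in reality would be reliable diarization and voice fingerprinting, which this sketch simply assumes, and, as the discussion that follows makes clear, getting anyone to say the opt-in phrase in the first place.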

These are exactly the kind of features the device depicted in that one Black Mirror episode should have had. So I was like, okay, I can't not do this. So I replied to him on Twitter. I said, so in order to get value from the use cases featured on the site, the person wearing the device will have to constantly ask everyone they interact with for permission to record them?

That seems clunky, awkward, and unrealistic at [00:30:00] best. Which Dan, to his credit, replied to. He said, fair criticism for people who meet for the first time, but it will work seamlessly, really well, for people you meet often who only need to opt in once, like my spouse. So I will just read what I replied.

I said, I really appreciate the entrepreneurial spirit here and the willingness to take and respond to critique. I think it's best to think through practical situations. So, first time I see my buddy with one of these on, he asks for permission to record our coffee meeting. Maybe it's business-related. I hesitantly say sure to avoid friction, even though I find it very odd he's asking.

Next time we're together is at a bar, just hanging out in a more personal and casual setting, talking about friends, family, work, and other personal stuff. He's wearing it again, the pendant, and I assume recording everything we say, since I have given him permission previously. Do I need to now ask him to stop?

How does he make it stop? Do I even trust the company that makes it, and that [00:31:00] everything my friend's recording really is private and secure? Because maybe I have no idea who Rewind is or what that thing on his neck is. So I said, I just think the product is missing a few steps on the consideration of all the people who don't want to be recorded and/or won't trust these devices.

And then I concluded by saying, if my buddy told me his iPhone was recording everything we say, I'd ask him to turn it off 100 out of 100 times, no matter the setting. And I trust Apple more than any company on earth. And the reason I say that is because I literally have their watch and their phone in my pocket and on my wrist, everywhere I go. So if I didn't implicitly trust Apple, then I wouldn't be doing that.

And I would still, Mike, if you showed up and said, hey, my phone's now recording my life, I would be like, dude, turn that off. Come on, man. You don't need to be recording, right? So then my final comment, which Dan didn't acknowledge or reply to: I said, not sure if the team has any ethicists on staff or as [00:32:00] part of an advisory board supporting product development, but I feel like their perspectives could be very valuable to proactively consider and solve for complex scenarios around this product concept. So again, this is not meant to bash entrepreneurship, Dan's efforts, like, what Rewind is doing. Again, I have total respect for people who are trying to build on top of AI and do cool things.

It was the fact that they replied and said they have ideas of how to protect privacy. I am sorry, but when you're invading the privacy of everyone around you, you are asking everyone to make massive leaps of trust in that company, which no one knows, and in technology that they don't understand. And I just feel like not only is it probably going to be illegal in many places, I feel that it is highly unethical.

And to me, these are the kinds of advancements AI allows that should not exist. I probably feel the same way, honestly, about Meta's Ray-Ban [00:33:00] glasses. I don't know if there's going to be some light that tells you it's recording everything, but I just don't like the idea that we're going to enter a part of society where everyone is just literally walking around recording everything.

And I get the pushback of, oh, everybody's got their phones anyway. Like, it's different. Holding your phone up where you're recording, okay, I get it. And I guess you could covertly record on your phone and not tell somebody, but that's unethical and immoral too. So you can't really stand on that ground.

So, yeah, I just, I don't know. Again, respect to Dan for commenting and replying to people pushing back on it. I do hope Jason is right in that they cancel the product, and I hope we don't actually have to deal with this product existing.

Mike Kaput: One thing that jumped out to me in the conversation, too, is there's a fair amount of people who are like, oh, what are you so upset about? Wait until you find out tech companies have been doing the same thing forever. And it's like, well, that's an intellectually ridiculous argument. I'm [00:34:00] sorry, it just is. You're wrong. A, people have been furious about that for over a decade. B, there have been lawsuits over it for over a decade. And C, even if nothing bad happened to these companies for doing it, everyone's pissed. So we're just supposed to say it's happened before, let's make it even worse?

That's an insane argument to make.

Paul Roetzer: Yeah, we can't give up our ethics and morals just because people have done it before, or the government doesn't stop it. It's just... I don't know. I mean, again, this goes to what we've talked about many times on this podcast: this whole idea of responsible AI principles, like every organization should have them.

Not every organization is going to have ethicists on staff. And even when they do, they don't necessarily listen to them, as happened with Google and other companies previously. But I think you have to at least put the effort in, and you have to have diverse perspectives from people, and you need people asking the hard questions of the product team, and apparently of the founders [00:35:00] sometimes, to challenge the ideas. Like, it can't just all be about there's money to be made and you have some altruistic use case in mind, for people with Alzheimer's or whatever the altruistic use case is here that they're using to justify this. At some point, it's about what you have to give up, and the sacrifices you have to make to people's privacy and the well-being of the larger, you know, society.

It's just not worth it. And yet, unfortunately, that's not how tech works. They sometimes just race forward and build things because they can, and they don't want to hear that maybe they shouldn't.

Mike Kaput: So in our third main topic today, Google just announced something called Assistant with Bard, which is a personal assistant powered by AI.

And this is basically an AI assistant that will act as a personalized helper across Google apps like Gmail and Docs. So for instance, you could [00:36:00] ask it things like, help me catch up on important emails from last week in Gmail, or, where is the birthday party I'm attending today? And, you know, it can look at your calendar and your emails to help you leverage information across Google apps and do valuable tasks with that information to make your life easier. According to Google, quote, it combines Bard's generative and reasoning capabilities with Assistant's personalized help.

You can interact with it through text, voice, or images, and it can even help take actions for you. Google says this will be rolling out on Android and iOS in the coming months. So Paul, this actually seems like a pretty significant update from Google. Could you give us a sense of its importance and put that into context with what Google's released AI-wise in the past?

Paul Roetzer: Yeah, it just seems like the obvious next iteration. We've talked many times about, like, if Siri just became what Siri was supposed to [00:37:00] be, this is basically it. I think we're moving into this phase, and again, 2024 is probably going to be the year. These personal assistants, and in these cases, like, voice assistants like Alexa and Siri and Google Assistant, basically people use them to get the weather and maybe some sports scores and ask some basic questions, but they really seem like they've struggled to ever move beyond just that basic answering.

This appears to be a prelude to a truly valuable personal assistant that is connected to all this data, and you can ask questions of it, and it can help you do things and eventually take actions on your behalf. And so, you know, we talked a couple of weeks ago about, I think it was Ajax, maybe, is the rumor of what Apple's building as their next model, which I assume is a multimodal model of some sort.

Gemini from Google would make this even smarter. Bard right now, I think, is based on PaLM 2, and so the general experience [00:38:00] so far with Bard is it's just... not good. But I assume the next version will be much better. And so, you know, if you think forward 3, 6, 12 months from now, and all of a sudden we have Siri and Google Assistant and Alexa that are all truly conversational agents and able to, you know, see and hear and speak and all these things.

That's where we start to actually see behavior change, like consumption of information, how you take actions. So, you know, we look out into next year and you assume other models will do the same thing, other companies will do the same thing. I think it's really a precursor to it. And just like we were talking about with GPT-4V, the vision capability, I think you can start to see what's going to be possible with this kind of technology.

And, you know, if it does what it's potentially able to do, it's transformative stuff.

Mike Kaput: So we're [00:39:00] saying we might be looking at a pretty near future where every one of us has one or more AI assistants, essentially doing things for us and acting on our behalf.

Paul Roetzer: Yeah, and I think it goes back to the question earlier of do you have a single, horizontal model?

Do you just have OpenAI's ChatGPT Enterprise, and it just does everything, and I don't need AI writing tools, and I don't need, you know, analytics tools? It's basically, I just use ChatGPT for everything. I think the same question comes up here. Is Siri just going to be trained to do everything?

Is it going to become my life coach, my business coach, my athletic trainer? Am I going to be able to just interact with Siri, and it's just going to be this almost infinite storehouse of knowledge and capabilities? Or am I going to have Siri for, like, the stuff that's on my device, and I'm going to have Inflection's Pi for the life stuff that I'm dealing with?

That's the part that everyone's trying to figure out: are we going to have multiple language models in our company? Are we going to have multiple personal assistants that specialize in [00:40:00] stuff? And that horizontal versus vertical discussion, we don't know. Like, I haven't read anything yet that seems to confidently state how this plays out, because I think we just don't know yet how good these foundational models are going to be.

So that's, again, when I look forward to next year, one of the things I'm really curious about is how powerful do these models become, and how much other software am I going to need if they're really good? You know, the whole premise of AGI, artificial general intelligence, is being, like, human level or beyond at most cognitive tasks.

If you can build single models that start approaching that ideal, that it's pretty much at human level at anything I want it to do, then what other software do I need? And I don't know. Again, the best AI researchers in the world don't really seem to know the answers to that yet. So [00:41:00] you as a listener, as a business person, as a, you know, a leader, a practitioner, you're not expected to know this either.

But I think it's something to pay close attention to.

Mike Kaput: So, yeah, talk to me a little bit more about that. As a marketer or a business leader, obviously there's this bigger picture here, but what are some short-term questions or considerations I should be thinking through in the next, say, three to six months?

Paul Roetzer: You need to... I mean, education is key.

Like we always talk about education and training, you have to stay on top of this stuff because it is moving pretty quickly. But I think you have to build dynamic roadmaps internally for AI. Don't make two-year bets. Like, if you're looking at tech, think about, you know, 3, 6, 12 months out.

So if you're looking at, let's just say, an AI writing tool, you're thinking about marketing, sales, and service. We want to go get an AI writing tool or a language model company to help with all these use cases in marketing and social media and email and blog post writing and website development and logo design and, like, all these things.

Look out and say, okay, [00:42:00] the best tech today is this, but we know that's likely changing in six months. So let's solve for it today. We're not going to wait six months to do this. Let's start doing this, but let's assign a couple of people whose job is partially to stay on top of where these models go and constantly be checking to make sure that there isn't new technology or new capabilities we're not aware of.

But I mean, I really think that if you just take ChatGPT Plus and really focus energy on understanding all of its capabilities and use cases, and just take advantage of that, especially for small and mid-sized businesses, that's probably enough. You can do a lot of stuff with that. And then mix in some of the new tech we're going to talk about in a minute from some other companies that you may already have in your tech stack.

And all of a sudden, you don't got to go finding a bunch of AI companies. You're just better leveraging the tech you already have and using some of the AI capabilities they're introducing.

Mike Kaput: So let's dive into some of those product updates. As part of our [00:43:00] rapid fire topics, we've got a lot of really interesting product updates and a couple other compelling topics to get through.

So first up, we have Adobe teasing a new AI photo editing tool that it claims will, quote, revolutionize its products. They'll announce this at their Adobe Max event, which starts on October 10th, I believe. This new tool is dubbed Project Stardust, and apparently it will make it even easier to edit images using AI within Adobe products.

And they say it is, quote, object aware, which means it's able to automatically identify objects in images and in turn allow those images to be easily edited. So basically, it sounds like a really powerful way to use AI prompts to seamlessly do really sophisticated editing work using really simple commands.

Now we'll talk about this more when the release actually happens. It's a teaser right now, but Paul, what did you make of this? And like [00:44:00] maybe give us some sense of how Adobe is innovating with AI in their particular domain.

Paul Roetzer: Yeah, Adobe's definitely very aggressively doing this. Adobe's been investing in AI for years, but they certainly seem to be releasing things at a much faster pace.

I think, you know, in the initial preview I saw of this, my reaction was: great, developers, designers, people who are really good in Adobe are going to become way more efficient and productive. So the example I saw was an image of someone resting their hand on an umbrella. And the object-aware part of this means it recognizes every object in there.

You don't even need the selection tool. Like, you used to have to use the lasso and outline the umbrella. Now you just grab the umbrella, yank it out, and the umbrella is out of there. And it backfills everything that was behind the umbrella, so it looks seamless. And then you just say, oh, let's put them holding flowers instead. And it, you know, creates flowers and drops them in, and you just drag them into their hand.

And it you know, create flowers and it drops it in and you just drag it into their hand. So every individual object is already able. It's almost like imagine like a million layers of this [00:45:00] thing and everything you grab, you just move out. So if I wanted to like in my background, get rid of that poster, I would just like.

Click it, drag it out and drop a different poster in behind me. So the people who are already good at Adobe are going to get superpowers. People like me who have no capabilities in there could probably go in now and play in Adobe and get value out of it. So I think it's just going to, you know, it's going to level up everybody who doesn't have the capabilities and give them the capabilities and people who are already familiar are just going to be able to do things at super levels.

Mike Kaput: So, also in the design space, Canva has made some pretty big announcements. They have announced what they're calling Magic Studio. This is a suite of AI features right within Canva that lets you do various AI image and video generation functions. It also has some AI editing capabilities now. Canva says you can use Magic Studio features to, quote, describe what you'd like to see, or upload your media, and Magic Design uses AI to craft professional [00:46:00] presentations, videos, and social posts. At the same time, Canva and Runway, which is a really popular AI image and video generation tool, have announced a partnership that will make Runway's video generation features directly accessible within Canva.

So how big a deal are these new features? Can Canva keep up with what Adobe's doing?

Paul Roetzer: I mean, I think Canva's got a pretty large, loyal base of users. I'm pretty sure we use Canva, I think our team uses Canva within, you know, the Institute. So, yeah, I mean, if you're a Canva user, I don't know that you're switching from Adobe to Canva because of these, because they're all going to have them.

It's just kind of a race now. Like, everybody knows what's possible with AI, and now you're just building all these capabilities in. So I think the moral here is, if you're a Canva user, it's going to get smarter, it's going to be cooler to use, it's going to make you more efficient and give you new creative capabilities.

I will note, Runway just this week, and I haven't had a chance to test it myself yet, but Gen-2 from Runway, which we've talked about, the text-to-video model, they apparently [00:47:00] rolled out an update that's, like, really, really good in terms of the quality of the output. So if you haven't tested Gen-2 from Runway recently, the text-to-video, give it a test.

Sounds like it's pretty impressive what they've done in the last week or so.

Mike Kaput: Yeah, that's maybe a good reminder, too. If you're a marketer or a business person who's like, oh, okay, we use Canva or any of these tools, it might be worth just sitting down and reimagining what's possible. You don't just have to use Canva to do the same blog images faster.

You might say, oh, we actually have really cool abilities now within these tools to make better visuals or to do different things visually.

Paul Roetzer: Yeah, that's one of the ways we teach, you know, organizations how to select pilot projects: you can look at, you know, your job description, and for each of the things you do, ask if there's a way for AI to assist.

You can look at your standard workflows, like the repetitive workflows, and say, are there ways to infuse AI into this? The other thing you can do is go look at your core tech stack and say, what has HubSpot added? What has Canva added? What has Zoom added? And just [00:48:00] look at the tech you already use, that your team's already familiar with, that you already have licenses for.

And just say, are there smarter features in these that we're not using? And then just put the practitioners in charge of it who are in those tools daily. Like, we do this with Descript all the time. Descript releases new stuff that we use for our webinars and podcasts, and so Cathy keeps up with it. It's like, oh, wow, they introduced this new feature.

That's going to save me 10 minutes a week on the podcast. Awesome. And so, you know, if that's all you do moving into 2024, just those three things: job description, roles and responsibilities; repetitive workflows; and taking advantage of all the AI features the companies you already work with are introducing, you're going to get ahead next year.

Mike Kaput: So another major product that released some AI updates is Zoom. They've announced some new AI products and features as part of their Zoomtopia event. These include something called Zoom Docs, which is an AI-powered workspace to collaborate on documents. [00:49:00] They've also expanded features of their existing AI Companion, which is an AI assistant that they have.

They will now use that companion to help you answer chats and review the content of meetings. You can now use it to generate a digital whiteboard to organize ideas. And they say the companion will also soon include conversation analysis features and the ability to, quote, automatically detect meeting intent in Zoom Team Chat messages and display a scheduling button to help streamline the scheduling process. Now, Zoom also has some customer service products, Zoom Contact Center and Zoom Virtual Assistant.

These are also getting AI updates. One of them is called AI Expert Assist, where the AI will actually serve up knowledge base articles and insights during live engagements between an agent and customers. Zoom [00:50:00] Docs is coming in 2024. Some of the generative AI features in the Zoom AI Companion are available already.

And the customer service updates are, quote, coming soon, whatever that means. Did any of these features jump out to you, Paul?

Paul Roetzer: This fits in the category of, we've been using Zoom the same way for, like, five or six years. And I'll see that they'll add other stuff, and it's like, I don't even look at it.

So, this for me, and that's not necessarily the right approach, I'm not advising you, you know, do that, but Zoom to me has, like, these very specific uses for us. And when I see all these other things they're adding, it's like, I don't know if that's for us or not. But I think it's enough where you probably say, all right, let's have Tracy, our COO, you know, go spend a couple hours exploring what they're launching.

See if any of this is worthwhile or redundant to things we're already doing. Because I think that's what we're going to start to see: as all these software products that you already use [00:51:00] start adding dozens of AI features, you're going to get into a situation where there are a bunch of redundancies, and, like, you don't really need all these things they just added, because you already have this other vendor over here that's doing that for you. So I don't know. I mean, for example, with transcription, you know, is Zoom better at transcribing our podcast than Descript is? And if we've already decided Descript's better, or OpenAI's Whisper is better, then, I don't know, like, whatever, it's such a pain to keep switching. We're happy already with the way it is.

Like this is, we're happy already with the way it is. So I don't know, again, if you're a Zoom shop, like we are, it's probably worth checking out. There was nothing I saw in the initial go where it's Oh, that's going to change our life. Like I really need that product. It was just okay, that's cool.

They're doing some stuff. So I wasn't super excited about it, but we like Zoom. It's a good company.

Mike Kaput: So, another set of product updates: project management software Asana, which we [00:52:00] also use, just announced some new AI features. These include things like Smart Fields, which auto-generate custom fields; Smart Editor, which generates notes; and Smart Summaries, which produce highlights from task descriptions and comments and create key action items to work on. They also teased some upcoming AI features to be released in the future. One is a Smart Status tool that will use real-time work data to create comprehensive status updates for ongoing projects. It will actually highlight roadblocks, open questions, and more.

They're also using that same technology to provide Smart Answers to questions about specific projects. So you can basically just ask questions about where a project's at, identify blockers, and determine the next steps. Asana has also teased AI features that will eventually generate goals for teams, help you plan for different [00:53:00] scenarios, and adjust resources.

Now, it sounds like according to VentureBeat, most of these will be available either later in the year or in 2024. So, Paul, what did you think of these given that we use Asana pretty regularly for our own project management?

Paul Roetzer: Similar reaction to the one I had to the Zoom stuff. I love Asana. Like, I couldn't live without Asana.

Like, we've been using it for years. It literally manages my personal and my professional life. But there wasn't a single one of these I looked at where it's like, oh, I would use that. I mean, maybe we don't use Asana in the advanced way some bigger enterprises would, where these would be, like, super relevant.

But the way I use Asana already drives massive efficiency. There's no AI in the way I use Asana, and that's fine. Like, I love it. I'll gladly keep paying them whatever we're paying them every month. And again, I would probably throw this to Tracy and say, hey, you know, while you're looking at Zoom's AI capabilities, go ahead and take a look at Asana and see if it could improve our [00:54:00] workflows and processes in a reasonable way where it's worth it. I would assume you have to upgrade, you know, the payments each month to do it, so do you think it'd be worthwhile?

So again, what I would always say is, have a point person whose job is to think about this stuff. If, as the CEO, I look at this and I'm like, yeah, I don't really see it, I'm not going to go take an hour of my Saturday to investigate this myself. Then I'll say, hey, Trace, over the next 30 days, let's take a look at it and see if there's anything there for us. If not, cool. Let's just keep doing what we're doing. But Asana, again, great company. They've actually had a vision for AI for a while.

I've been excited to see what they do. But without digging further into these, none of these are life-changing for me personally or for our company.

Mike Kaput: So, in a pretty crazy story this week, actor Tom Hanks has warned his followers on social media that there is a scam using an AI version of his likeness, trying to get them [00:55:00] to believe he has endorsed a product. And this is in response to an advertisement for a dental plan, of all things, that he has nothing to do with, and that features an artificial intelligence version of him promoting it.

So he actually wrote on Instagram: Beware, there's a video out there promoting some dental plan with an AI version of me. I have nothing to do with it. Paul, you mentioned on LinkedIn this week that this whole subject worried you. Can you break down for us why these types of scams are such a concern for you?

Paul Roetzer: Yeah, I'll just read real quick what I put on LinkedIn, and we'll probably, like, follow up on this topic, I guess we're in the fall, so sometime later this fall we'll maybe explore it a little deeper. But this is really worrying to me. So what I said on LinkedIn is that I get asked all the time what I worry most about.

And so I said it's threefold. Number one, AI is now capable of generating synthetic media, including text, images, video, and [00:56:00] audio, that is nearly indistinguishable from real content, and the technology, as we've talked about, is only going to get better. Number two, the average person has no idea this capability exists and therefore is unaware that content they see online may not be real.

And we're not talking about, like, Photoshop, we could have been doing this all along. It's like, give me a break. Only so many people could scale Photoshop abilities. We're talking about anybody being able to create anything they can imagine, at any time, and spread it online. Third, and this is a really important one: there is no existing way for social media platforms, or news outlets for that matter, to quickly and reliably verify the authenticity of content. It is a race of AI to create the content versus AI to try and catch the content. But the catching of the content is losing, because they can't actually do it.

So, I mean, the government has been working on this for years. Like, deepfakes have been a problem for a while, and at the moment we still don't have a solution to this. [00:57:00] So I said, we're going to see this problem run rampant in elections. Brands will have to battle it to maintain trust. Celebrities and thought leaders are going to have to battle synthetic versions of themselves.

And maybe people like us, who don't have the resources to fight this stuff. If fake versions of you just start spreading all over the place, what are you going to do about it? And then, unfortunately, the one I do worry about a lot, and I have two younger kids and I talk with high schools and colleges, is the cyberbullying that is going to happen because of these capabilities, because any school-aged kid can go to some open source model and create anything of anyone.

And it's happening. I've heard from kids that it's happening. So this is what really worries me, and I think we're going to see endless stories about synthetic media and the impact it's having on people, on elections, on democracy, because preventing it isn't possible. All we can do right now is AI literacy: make sure that as many people as possible know that this can be done and that [00:58:00] you can't trust what you see online without verifying it with trusted sources.

Again, we can expand on this at another time, but I just encourage all of our listeners: do your part to make sure that friends, family, and coworkers know that this stuff is possible and that they have to be more cautious about what they spread online, on Facebook, on Twitter, on Instagram, whatever, and that they're not just retweeting and sharing things that are sensational without verifying them, because it's going to be next to impossible to know if they're real or not.

Mike Kaput: So we talked on last week's episode about how AI company Anthropic just received commitments from Amazon to raise billions in funding, and they are very quickly back at it again. According to The Information, Anthropic is in talks with investors to raise at least $2 billion [00:59:00] in new funding. What's more, Anthropic apparently has indicated that Google, which has previously invested in the company, is expected to invest in this new round

if it goes ahead. So Paul, can you unpack what's going on here? In the AI arms race, aren't Google and Amazon competitors? They're both investing in the same company. What's up here?

Paul Roetzer: Yeah, they're all investing in these companies. I've known for a while that Anthropic, or however you actually pronounce it, was really important, but it appears they may be more important than we initially gave them credit for.

So yes, there's the up to $4 billion from Amazon last week, which was weird, because Anthropic took a bunch of money from Google in like February or March and had some preferential contract with Google at that time. And then it was, wait, did Amazon just step into Google's territory? And then Tuesday night, it's, oh wait, Google's putting in another $2 billion, maybe leading the round or whatever it is. It's a lot of money being poured into one of these foundation model companies.

[01:00:00] And again, if you want to play around with Anthropic, it's Claude, C-L-A-U-D-E, at anthropic.com. You can go test their technology. And I actually saw something from them earlier this week that I've read like four times and still don't really comprehend: they found a way to drill into the individual neurons within these language models and try to figure out why they're doing what they're doing.

One of the challenges of language models is we don't know why they do what they do. You can't, you know, go in and do an autopsy on the thing and ask what happened, why it made these choices, or do an X-ray or whatever. So they had this theory of a way to do it, and they released a new paper.

That paper seems to show they're on the right track to being able to explain why the models do what they do, and therefore to build more safety restrictions into them. There's a really fascinating chain of thought in how they're pursuing it. And maybe once I understand it a little better, we'll dive into it in a future [01:01:00] episode.

But they are an extremely important company that is going to be a major player in what happens next. So just keep them on your radar.

Mike Kaput: All right, Paul, that is all we've got this week in AI. Thank you so much. We really appreciate you breaking it all down for us. I mean, you know, in just a week, we've only talked about eight different sci-fi concepts.

Paul Roetzer: So, you know, we appreciate it.

This week, as a listener, go check out Claude if you haven't yet played around with it. Go get GPT-4V, get the ChatGPT Plus account, and start playing around with the vision capabilities. That's the beauty now: we're talking about all this crazy stuff, but you can actually go play with it.

So go check it out and keep experimenting. It's going to be a really interesting fall. So thanks, Mike, as always. Happy travels. I think we're both on the road again very soon, so one of these days we'll be in the office at the same time. I actually saw [01:02:00] Cathy in Boston yesterday. I hadn't seen Cathy in person in like three weeks.

She and I were at MarketingProfs together yesterday. So yeah, we go to Boston to see each other. It's funny.

Mike Kaput: Awesome, well, thank you, Paul. Talk to everyone soon.

Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com.

Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

Until next time, stay curious and explore AI.
