
47 Min Read

[The Marketing AI Show Episode 72]: Our Hands-On Experiments with GPTs, Is AGI Coming Soon?, and New AI Wearable From Ex-Apple Veterans



Last week's episode of The Marketing AI Show covered the recent GPT announcement, and this week we're taking it a step further with insights from our hands-on testing. Join us in Episode 72 as we explore the latest capabilities of GPTs, dig into predictions about the rapid approach of AGI, and share our thoughts on the newly released AI wearable from former Apple experts. Stay tuned for an in-depth analysis and much more in this episode!

Listen or watch below, and keep scrolling for show notes and the transcript.

This episode is brought to you by our sponsor:

Meet Akkio, the generative business intelligence platform that lets agencies add AI-powered analytics and predictive modeling to their service offering. Akkio lets your customers chat with their data, create real-time visualizations, and make predictions. Just connect your data, add your logo, and embed an AI analytics service to your site or Slack. Get your free trial at akkio.com/aipod.

Listen Now

Watch the Video


00:01:49 — Our hands-on testing with GPTs

00:23:31 — A new paper was released that proposes a framework for classifying AGI

00:36:30 — A wearable 'AI Pin' was launched by Humane

00:44:48 — Bill Gates claims AI is going to completely change how you use computers

00:47:59 — The Actors Strike in Hollywood has come to an end

00:50:28 — Meta to require advertisers to disclose AI content in political ads

00:54:13 — Microsoft announces five steps to protect electoral processes in 2024

00:56:52 — Amazon is training a new large language model, Olympus

00:59:43 — Google AI features introduced across Performance Max campaigns within Google Ads


Hands-on with GPTs: everyone can now be a product manager

Mike and Paul share their initial experimentation learning the interface, the features, and their first projects.

This included Paul creating a generative AI policy builder using GPTs in just under 15 minutes and Mike building a personalized daily assistant to plan his day.

In both experiments, Paul and Mike found GPT capabilities powerful. They basically allow you to code with words, quickly and easily empowering non-programmers to build tools that share knowledge and automate existing processes and frameworks.

While GPTs are still very early—and have the same flaws as ChatGPT—they're already impressive and unlock tons of potential for knowledge workers and businesses interested in building smarter tools to improve productivity and performance.

The Levels of AGI

A new paper is out that proposes a framework for classifying the capabilities and behavior of artificial general intelligence (AGI).

It’s notable not only for the topic but also because it’s co-authored by Shane Legg, one of the co-founders of DeepMind, a major AI lab.

DeepMind is now part of Google after being acquired in 2014. Legg’s X profile lists him now as the “Chief AGI Scientist” at Google DeepMind.

The paper’s topic is important because people like Legg believe it is not only possible to achieve AGI, or AI that is as smart as or smarter than humans across a wide range of tasks, but that it is coming soon.

In a recent interview on the Dwarkesh Podcast, he said he expects AGI to be achieved as soon as 2028.

The paper offers one possible system for evaluating just how advanced a possible AGI system is when we get it, and classifying its capabilities much like we might evaluate just how “self-driving” a self-driving car system is.

This is not speculative or science-fiction. It’s a serious attempt to classify and evaluate broadly better-than-human systems—systems that people like Legg believe are coming very, very soon.
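The paper's performance levels can be sketched in a few lines. The level names and percentile thresholds below are our paraphrase of the paper as we understand it, so treat the exact cutoffs as approximate rather than authoritative:

```python
from enum import IntEnum

class AGIPerformanceLevel(IntEnum):
    """Performance axis from the 'Levels of AGI' paper (our paraphrase)."""
    NO_AI = 0       # e.g., a calculator: no learned intelligence
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile of skilled adults
    VIRTUOSO = 4    # at least 99th percentile of skilled adults
    SUPERHUMAN = 5  # outperforms all humans

def classify(percentile_of_skilled_adults: float) -> AGIPerformanceLevel:
    """Map a benchmark percentile to a level (illustrative only)."""
    if percentile_of_skilled_adults >= 100:
        return AGIPerformanceLevel.SUPERHUMAN
    if percentile_of_skilled_adults >= 99:
        return AGIPerformanceLevel.VIRTUOSO
    if percentile_of_skilled_adults >= 90:
        return AGIPerformanceLevel.EXPERT
    if percentile_of_skilled_adults >= 50:
        return AGIPerformanceLevel.COMPETENT
    return AGIPerformanceLevel.EMERGING
```

The paper also scores systems on a generality axis (narrow vs. general), so a single performance level alone does not make a system AGI.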

Wearable 'AI Pin' launched by Humane, backed by ex-Apple execs and Microsoft

AI startup Humane just released the “AI Pin,” a wearable device that clips to the lapel of your shirt and allows you to talk to a virtual assistant powered by OpenAI and Microsoft.

The release is getting a ton of buzz because Humane was founded by ex-Apple veterans, including people who’ve worked on the iPhone.

The company has also raised $241 million from Sam Altman and Microsoft, among others.

The Pin can do things like compose messages in your tone of voice, summarize your email inbox, and take pictures, which its AI can then scan with computer vision in order to perform tasks. (Like telling you the nutritional content of a food item you just snapped a picture of.)

The AI Pin will be available in the US starting November 16 and will cost $699.


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: Everyone will be a developer in the future. And I think this is probably the first contact with what that looks like to people like you and me, where this idea that we could literally just build anything we want...

[00:00:12] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:32] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:41] Paul Roetzer: Welcome to episode 72 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my cohost, Mike Kaput. We are here on Monday, November 13th, about 10 AM. I always timestamp this because you never know what's going to happen like an hour after we record this thing. We are [00:01:00] a week into the existence of GPTs in the world.

[00:01:03] Paul Roetzer: So we are going to get into that a little bit more today now that we have had a chance to play around with them. But first, this episode is brought to us by Akkio, the generative business intelligence platform that lets agencies add AI-powered analytics and predictive modeling to their service offering.

[00:01:20] Paul Roetzer: Akkio lets customers chat with their data, create real-time visualizations, and make predictions. Just connect your data, add your logo, and embed an AI analytics service to your site or Slack. Get your free trial at Akkio, that's Akkio.com/aipod. All right, Mike, I know we got a lot to cover today, including the world of GPTs and some AGI.

[00:01:46] Paul Roetzer: And, I don't know, my head's kind of swimming with this one. I was like playing around building GPTs this morning. So, I'll let you kind of guide us through this one. Go, go for it.

[00:01:49] Hands-on with GPTs

[00:01:57] Mike Kaput: Yeah, absolutely. So this week, [00:02:00] Paul, we're talking about hands on experiments with GPTs. Last week, we talked about the introduction of GPTs from OpenAI.

[00:02:09] Mike Kaput: As a reminder, these are essentially custom versions of ChatGPT that anyone can create simply using a natural language conversational interface provided in ChatGPT Plus. Eventually there will be a GPT store where you can start downloading and sharing the most popular creations that you or others have created.

[00:02:33] Mike Kaput: So basically you can create a GPT to do anything you can imagine that you would do with an AI assistant. We have both been experimenting with GPTs and trying to build some things. It is still very early days, but I wanted us to walk through what we have been working on to give our audience a sense of what some of these GPTs can do, [00:03:00] how we're thinking about them in marketing, business, and

[00:03:03] Mike Kaput: our personal lives. I've already found these to be extremely useful and possibly game-changing in terms of how I go about my own day. So I wanted to kick it over to you, Paul, and ask: how exactly are you thinking about building with GPTs? What have you built so far? And what have your results looked like already?

[00:03:25] Paul Roetzer: Yeah, so we got access, what, like Wednesday afternoon? I think it was Wednesday or Thursday afternoon. And I definitely had that immediate, oh my God, I have to jump in and start building some stuff. But my week didn't accommodate the ability for me to just jump in and build some stuff. Separate story for another time.

[00:03:44] Paul Roetzer: There are some things we're working on at the institute, some things we're building that I won't get into right now, but it took away the four days I was hoping to build some GPTs, though in a good way. So more to come on that stuff. I think first we have [00:04:00] to address that these things still have flaws.

[00:04:02] Paul Roetzer: So, you know, I think just even within the first few days of people building with them, you're starting to already see some of the limitations they have. Some of the challenges, like these things still hallucinate, like no matter how much example data you give them, they'll still make stuff up, like they're flawed, like ChatGPT is flawed.

[00:04:20] Paul Roetzer: So just know that, I think for the most part, you should think of them as experimentation, you know, low-risk-level uses. It is not the case that you train this thing on your company's data and it is going to be 100 percent accurate. You have to use it the way you would use ChatGPT. So whatever guardrails and precautions you take with ChatGPT, take those with these GPTs, including what data you give to it and the files you upload. All the same precautions still need to be taken.

[00:04:53] Paul Roetzer: So, with that being said, what I've seen is, you know, over the weekend, even a lot of people [00:05:00] in my network (now, granted, I kind of live in this AI bubble with my network in a lot of cases) were putting all kinds of cool examples of GPTs up. So I think one thing for people to do is pay attention to what people are building as examples.

[00:05:14] Paul Roetzer: The other thing you can do is, when you go into ChatGPT, they have the explore tab now where you can go see, I don't know, it was like 15 or so OpenAI examples. And one of the first things I did, honestly, was go in and start just looking at the OpenAI examples. What are they building?

[00:05:30] Paul Roetzer: How are they using it? Because I think, to your point, Mike, it is all about trying to find the functional use of these things for you. And so the way I started to look at it was: what are the processes and frameworks that we either use ourselves internally or that we recommend to people externally?

[00:05:51] Paul Roetzer: So like when we go out and give talks, I'll always end with: here are the five essential steps every company needs to take in AI. And two of the steps that I recommend are responsible [00:06:00] AI principles and generative AI policies. So when I was trying to think of something to build first, I thought, okay, well, let me think of something that would be valuable or that I could demo in a talk.

[00:06:10] Paul Roetzer: So when we're going through and I'm saying, hey, build generative AI policies for your company, here's an example from Wired Magazine that I like to show, and then they're on their own. And I thought, oh, that'd be pretty cool if I could build a generative AI policy builder that would take what we recommend and actually let someone just go and do it.

[00:06:27] Paul Roetzer: So for me, it is that beautiful world. And like you and I dealt with this when we were at the agency all those years: you have these ideas, but we can't build stuff. I would have to go get, I don't know, whatever survey tool we'd have to customize, like Typeform; we'd have to turn it into this; we'd have to go get some massive expensive subscription to some tool-building thing, just so us non-developers could actually build something.

[00:06:51] Paul Roetzer: So for me, this is this freeing thing of, oh, I can actually go in and start playing around with this and building it. So the first thing I actually tried to do [00:07:00] was this idea of a gen AI policy builder. And so I'll share what I got, where I got, and I'll kind of give a tip of where I want to go with this.

[00:07:11] Paul Roetzer: So when you're building the generative AI policies in your company that dictate how your employees are allowed to use text generation, image generation, audio, code, and video (those are kind of the five main categories we always talk about), there are these layers people don't think about. And so I'm trying to find ways to build all of this in.

[00:07:28] Paul Roetzer: Maybe next week's episode, I'll share what I build. But here are the things you want to go through. Permission levels: if I'm going to use an AI tool to do something, what level of permission is required to use AI for that intended application? Depending on what kind of company you're in, there may be no restrictions on my use.

[00:07:47] Paul Roetzer: So you can use ChatGPT however you want, it doesn't matter, do whatever you want with it. There may be some restrictions, where you can use it but can't put any proprietary confidential information in the inputs. Or every use case requires [00:08:00] permission. And I have talked with plenty of company leaders where that is the case, whether they're in nonprofits or financial services companies or healthcare companies; they need permission from IT and legal for every single use case, every narrow use case that they do.

[00:08:15] Paul Roetzer: So you think about permission levels, you think about disclosure level. What are we telling external and internal audiences about our use of this? Are we disclosing it? Is it full disclosure, partial disclosure, no disclosure? Confidential information risk. What's the risk of information getting disclosed in the system?

[00:08:32] Paul Roetzer: Legal risks. You have to look at intellectual property, data privacy, liability, bias and discrimination, regulatory compliance. There's all these things that go into it. And then the other one that people don't really think about is what is the importance of the accuracy of the output? So if you're using these tools for like ideation and like brainstorming, it is not that critical.

[00:08:49] Paul Roetzer: Like if it makes a mistake, it makes a mistake; if it hallucinates on some data, it is no big deal. You're really just using it for internal, like, inspiration. But if you're using this to write a report [00:09:00] based on a spreadsheet, it better be correct. Like if you're turning that in to someone, if you're giving it to the CEO, or if you're using it, you know, for some other external use.

[00:09:09] Paul Roetzer: So I started thinking about all these like challenges and then I kind of worked back and said, okay, let me just try and simplify this. So again, this isn't publicly available. Maybe, you know, at some point next week, I'll kind of finish what I'm doing, but I honestly built this in like 15 minutes this morning.

[00:09:23] Paul Roetzer: But what I did is I went into the builder. And so again, if you haven't done it yet, you can go in and just tell it what you want to build, or you can go in and click the configure button. So I went into configure and named it Gen AI Policy Builder. The description was: define policies to guide your team's responsible use of generative AI text,

[00:09:43] Paul Roetzer: by the way, responsible use is like the key phrase there, of generative AI text, image, video, audio, and code tools. And then for the instructions, I actually followed the process that Ethan Mollick recommends, we'll put his blog post in the show notes, but [00:10:00] he's got what he calls structured prompting. And so he gives the different things you're supposed to do as part of a structured prompt.

[00:10:06] Paul Roetzer: And so real quick, I'll walk through those, just for background. Role and goal: you tell the AI its role and what the goal is. You give it step-by-step instructions. You give it expertise, so here's what your background is. You set up constraints. And you give personalization, you know, of the output.

[00:10:25] Paul Roetzer: And so those were kind of the key things I started looking at. And then what I did is I went in and I said, so I'll just, I'll read you the instructions. Cause I think it gives a sense of kind of like how to think about building these. You are a business leader charged with defining policies that guide your company's use of generative AI text, image, video, audio, and code tools.

[00:10:45] Paul Roetzer: You will go through a series of questions to determine guidelines for both internal and external applications. Again, I think sometimes people just race to use these tools and they think, oh, I'm going to write articles and I'm going to disclose it or not. Well, what about internal uses, [00:11:00] when you're using it for emails and proposals and reports?

[00:11:02] Paul Roetzer: And so internal and external is real important here. So then I gave it text generation is the first category. And here's the questions I had to go through. Are employees allowed to use AI tools to generate external text based content such as blog posts, social shares, and ads? Are employees allowed to use AI tools to generate internal text based content, such as emails, reports, and presentations?

[00:11:26] Paul Roetzer: Are employees allowed to use AI tools to edit text-based content? And then the last two: do employees have to disclose their use of AI in final outputs for internal audiences? Do employees have to disclose their use of AI in final outputs for external audiences? So I used those same questions for text, and then I adapted them for images, but it is basically the same premise.

[00:11:46] Paul Roetzer: You could then do the same for video, audio, and code. I haven't written those ones out yet. And then you give it conversation starters. So I just chose AI text tool policies, AI image tool policies. And so then when you go into the builder and [00:12:00] you're using it, you could just say, I want AI text tool policies for my company.
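The Configure fields Paul walks through (name, description, instructions built from role-and-goal plus per-category questions, and conversation starters) can be sketched as a simple structure. This is our illustrative shorthand, not an official OpenAI schema, and the field names are our own:

```python
# Hypothetical sketch of the GPT "Configure" setup described in the episode.
# Only the text-generation category is filled in, mirroring the transcript.

POLICY_QUESTIONS = {
    "text generation": [
        "Are employees allowed to use AI tools to generate external "
        "text-based content such as blog posts, social shares, and ads?",
        "Are employees allowed to use AI tools to generate internal "
        "text-based content such as emails, reports, and presentations?",
        "Are employees allowed to use AI tools to edit text-based content?",
        "Do employees have to disclose their use of AI in final outputs "
        "for internal audiences?",
        "Do employees have to disclose their use of AI in final outputs "
        "for external audiences?",
    ],
}

def build_instructions(questions_by_category: dict) -> str:
    """Assemble structured-prompt instructions: role and goal first,
    then the step-by-step questions for each content category."""
    lines = [
        "You are a business leader charged with defining policies that guide "
        "your company's use of generative AI text, image, video, audio, and "
        "code tools.",
        "You will go through a series of questions to determine guidelines "
        "for both internal and external applications.",
    ]
    for category, questions in questions_by_category.items():
        lines.append(f"Category: {category}")
        lines.extend(f"- {q}" for q in questions)
    return "\n".join(lines)

gpt_config = {
    "name": "Gen AI Policy Builder",
    "description": "Define policies to guide your team's responsible use of "
                   "generative AI text, image, video, audio, and code tools.",
    "instructions": build_instructions(POLICY_QUESTIONS),
    "conversation_starters": ["AI text tool policies", "AI image tool policies"],
}
```

In the actual GPT builder this all happens through the Configure form, so the structure above is just a way to see how the pieces of a structured prompt fit together before you paste them in.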

[00:12:04] Paul Roetzer: And in my initial testing, this actually worked really well. It did exactly what I instructed it to do. When I said, you know, give me the text tool policies, it went through and said: to develop AI text tool policies for our company, let's address each question in the text generation section. Use of AI for external text-based content: overview. So it actually went through and did this.

[00:12:24] Paul Roetzer: So my initial reaction is I think this could be really helpful, especially for these internal processes where you're still going to edit this. But what changes now is, as a speaker, as an educator, as someone who's trying to drive AI literacy and responsible adoption: we teach so many things and we give steps for people, but there's still a lot of work

[00:12:51] Paul Roetzer: to execute those things. And so I start looking at these as, wow. Anytime I say in a talk, like I have a talk tomorrow [00:13:00] in Boston, when I put up, hey, build generative AI policies for your company, rather than me just saying, here's the Wired magazine example, I can say, here's a public gen AI policy builder, you can use this tool for free.

[00:13:13] Paul Roetzer: And it'll actually walk you through how to do this. That's how I start to see these tools. So again, the ones that I was excited about, and I'm not going to tell everybody everything I wanted to build, but: generative AI policy, responsible AI principles. I already mentioned you and I, Mike, run AI strategy workshops all the time.

[00:13:32] Paul Roetzer: So we teach the use case model that's in our book. I thought there's probably a way to build a tool to do that pretty easily. I'm running a strategic AI leader workshop on Friday this week, I think. And so my mind is racing, like how could I build some tools to make that workshop more interactive so people can actually go through and build these things.

[00:13:52] Paul Roetzer: Course builder is another one I thought of: how do we build curriculum for online courses that follows specific standards of what a [00:14:00] good course looks like and, you know, how to build learning objectives? So I'm immediately just thinking about all of the repetitive, generative, data-driven things we do all the time, where we're trying to teach someone else a process.

[00:14:15] Paul Roetzer: And I'm really excited about the potential. So I'm not looking at these as necessarily any kind of replacement for anything we're doing. I mean, I think podcast summaries might be an interesting one, like writing podcast summaries, because we do that already. Building AI roadmap audits... there are some cool things, but they're all enhancements to what we're doing.

[00:14:38] Paul Roetzer: They create efficiencies for other people and they really help us advance our mission of AI literacy, I think. And so that's how I'm doing it. So having now built one, again, very early, but understanding now how it does what it does and seeing, you know, an early output, I'm really excited about taking a lot of these processes,

[00:14:58] Paul Roetzer: both external-facing and [00:15:00] internal-facing, and developing some tools where there isn't risk of us giving confidential information away. We're not uploading files or proprietary data that is going to find its way into open foundation models. These are just fundamental tools where the knowledge is already out there publicly, and I think we could create a lot of value for people, where they could go in and apply these things and get value immediately out of them.

[00:15:24] Mike Kaput: I said this on last week's episode. This is maybe the holy grail of tools. If you are a consultant or strategist, in my opinion, you need to be looking at these as aids to education and knowledge. If you do anything related to that for a living.

[00:15:41] Paul Roetzer: Yeah. And it, I think the thing we have to keep stressing is this is, this is like an alpha release.

[00:15:48] Paul Roetzer: Like this is just a prelude to more intelligent agents that can take actions and do things at a much higher level of accuracy and stuff like that. So I know you played around a little bit, [00:16:00] Mike, did you have any initial use cases or responses or outputs that you thought were fascinating?

[00:16:05] Mike Kaput: I was very similar to you in my reaction of being incredibly impressed with what I was able to do in a very short amount of time.

[00:16:13] Mike Kaput: So the way, at least initially, I look at these is: I want to clone myself and my thought processes on my best day, for anything that I do. Because I'm not always having my best day in terms of being the sharpest or most on point, especially early in the morning, right? So I naturally gravitated towards:

[00:16:38] Mike Kaput: okay, what's the first, you know, cognitively intensive thing I do in my day? And it is planning my day. I won't get into all the details, but I have quite a few goals, habits, and systems that I use at the beginning of my day, the first 30 or 60 minutes even, to really carefully plan my day to get [00:17:00] 

[00:17:00] Mike Kaput: as much out of it as possible. And that actually takes a fair amount of work. So that's where I started: building a daily planning assistant. And I have a lot of work to do. But in probably 15 minutes, like you mentioned, I had a really solid assistant where I say, hey, go ahead and plan my day, and it knows.

[00:17:21] Mike Kaput: Because I've given it a pretty extensive knowledge document. I'm lucky enough to have been a weirdo and documented all this stuff already: the steps that I take to think about my day, which things are most important, which are least important, what kind of time commitment, a lot of things, like what my daily schedule that I found really works for me looks like. For instance, I tend to try to block off at least, you know, four hours a day of really intensive deep work.

[00:17:51] Mike Kaput: So no distractions, phone on do not disturb, actually focusing. I found that to be really, really helpful. Not everyone does that, but that's how I [00:18:00] structure a lot of my most important priorities. It knows that, and I got very quickly to an hour-by-hour schedule for my day that reflects very closely what takes me about 30 or 60 minutes to come up with every morning.

[00:18:14] Mike Kaput: And it is extremely valuable time. If I never shortened that by a second, it is still the most important thing I do in a day. But it took 30 seconds using this tool. And most importantly, all this recurring stuff that I have to think about and organize, often right when I wake up, I now don't have to bother with as much.

[00:18:36] Mike Kaput: And so I can reallocate that limited bandwidth to other things that require more creativity or could be solved better if I had more bandwidth in a day. So a couple of things about that experiment did jump out to me. And the first is that it really felt like everyone is now just a product manager.

[00:18:56] Mike Kaput: I really had to sit down and almost create, [00:19:00] like, a brief describing the end product I wanted to create. It wasn't super extensive, but it really helped me organize my thinking around what I wanted to see in this, instead of just willy-nilly kind of prompting it to try to get results. And another thing that jumped out is it really did just feel like I am coding using language. And that was exciting to me as someone who's always been technologically minded but never

[00:19:31] Mike Kaput: went very deep on learning how to program beyond some basics. It is an incredible opportunity for a writer or a strategist to be able to sit down, and the one thing I can do with ease is write pages and pages and pages of documentation about what I want this thing to do and how I might be thinking about the problems, and refine it over time.

[00:19:51] Mike Kaput: And it really just produces incredible results doing that. So I found this to be just stunning already. [00:20:00] Obviously, it has limitations. You have to invest time to figure out the best ways to get the results you want. I'm still refining and learning. This is just day one, in my opinion, of experimenting with this technology, but I was pretty blown away.

[00:20:17] Mike Kaput: And I've already found half a dozen ways in my personal life that I think I could be creating a few different GPTs to save myself immense amounts of time.

[00:20:28] Paul Roetzer: I mean, it really is. And again, I don't get too hyped on AI advancements. Like sometimes there's stuff that's just really impressive, but I try and kind of stay pretty level about all this stuff.

[00:20:40] Paul Roetzer: You know, this is one where I just don't think it is overhyped. Like I really think these are just the very, very early days of the tech. I mean, just as an entrepreneur, as a business leader, as an educator, there are so many ways to think about this. And I think you [00:21:00] touched on it, but you know, a lot of times as a business leader, you're trying to convey processes and, like, chains of thought and ways for people to think about things, whether it is problem solving or ways to go about producing, you know, producing an assignment.

[00:21:16] Paul Roetzer: And so much of what we do is just: listen, here are the 10 steps, go through these 10 steps, or ask yourself these seven questions before you do this strategy doc. And to be able to just codify that into a simple tool is very exciting to me. Because to take the way your mind works and be able to train a tool on it, we have just never had that ability before.

[00:21:41] Paul Roetzer: Like I said, without, you know... we have a tool we built together, but I mean, we pay like $25,000 a year for the tool, and you have to go through a masterclass to build anything in it. Like to build a simple form is crazy. And [00:22:00] here we have this ability. So I think it also is a preview; we have talked about how everyone will be a developer in the future.

[00:22:07] Paul Roetzer: And I think this is probably the first contact with what that looks like to people like you and me, where this idea that we could literally just build anything we want. It might be six months from now, a year from now, but I think, like, Replit and their mission of a billion developers, and, you know, OpenAI putting these kinds of tools in people's hands, I really think it is just a preview of

[00:22:29] Paul Roetzer: how we're going to be able to build whatever we can imagine in the future. And it is just, yeah, I mean, it is really cool to sort of start to sit down and think about. So I know with the Thanksgiving holiday coming up, I'm going to take some time off, and I could see myself just playing around with these things and building some fun ones for the kids and building some cool things for the company and for myself.

[00:22:52] Paul Roetzer: So yeah, again, if you haven't done it yet, obviously Mike and I are both [00:23:00] pretty passionate that this is something worth spending some time experimenting with, and looking at some of the cool things other people are building for inspiration. I think you'll be pleasantly surprised at what you can do with it.

[00:23:10] Paul Roetzer: Yeah, and as a

[00:23:11] Mike Kaput: final note here, given our backgrounds, Paul, I think if you're an agency owner, you need to be on this yesterday, because I would have just killed, to your point, to have One of these we had trained on all of our processes, strategic thinking, ability to get new hires on boarded and actually have them almost have like a scalable version of us as an assistant would have been worth its weight in gold.

[00:23:31]  The Levels of AGI

[00:23:43] Mike Kaput: All right. So our second big topic today is about a new paper that's out that actually proposes a framework for classifying the capabilities and the behavior of what we call artificial general intelligence. And we'll talk about the definition there in a [00:24:00] second, but we're really talking about, broadly, AI systems that are smarter than humans

[00:24:08] Mike Kaput: at a wide variety of tasks. Now, this is notable not only for the topic around AGI, but because this paper is co-authored by Shane Legg, who is one of the co-founders of DeepMind, a major AI lab. Now, DeepMind is part of Google after being acquired way back in 2014. And Legg's profile on X actually lists him as the Chief AGI Scientist at Google

[00:24:36] Mike Kaput: DeepMind. Now, the paper's topic is important because people like Legg believe it is not only possible to achieve AGI, but it also could be coming very soon. Legg just gave an interview on the Dwarkesh Podcast where he said he expects AGI to be achieved as soon as 2028. [00:25:00] So this paper goes through a possible system for evaluating just how advanced an AGI system could be when we get it, and classifying the capabilities of current systems

[00:25:13] Mike Kaput: Basically, on a scale like you might use to evaluate a self-driving car: what level of intelligence and autonomy do current AI systems have? So in the minds of people like Legg, this is not speculative science fiction. It is a serious attempt to classify and evaluate better-than-human systems, systems that we could have very, very soon.

[00:25:39] Mike Kaput: So, Paul, to kick this off, why are people like Legg trying to build this framework to evaluate AGI systems?

[00:25:49] Paul Roetzer: Yeah, I mean, this is a topic I've been fascinated by for a long time. It was a topic that I would say was a little bit fringe, even taboo, in the AI [00:26:00] research world until a few years ago.

[00:26:02] Paul Roetzer: And it was a topic that I generally avoided bringing up within our stuff, because I just didn't think people were ready for it. Until people experienced any form of AI, the idea of AGI was just too science fiction for everyone, I think. So the reason it matters is, as you touched on, Shane is a major player here.

[00:26:26] Paul Roetzer: He coined the term AGI back around 2006, 2007, for a book that was put together from a series of papers about AI. And he founded DeepMind with Demis Hassabis, who we have talked about many times and who's the CEO of Google DeepMind, and Mustafa Suleyman, who is now the co-founder and CEO of Inflection AI. So, Shane is a major player in AI.

[00:26:52] Paul Roetzer: We don't talk about him as much as some of the other founders, but he is a key player here. So, I think what's happening is [00:27:00] AGI is becoming much more mainstream, and you're going to hear that term all the time. If you watched the OpenAI event where they introduced GPTs and everything else last Monday, Satya Nadella, the CEO of Microsoft, came on stage and did his somewhat awkward two minutes on stage with Sam.

[00:27:21] Paul Roetzer: When Satya went to leave, Sam said to him, I look forward to building AGI together. So all the major research labs have been pursuing AGI for years, and they seem to generally agree that it is within reach. For us, the challenge has been trying to explain AGI in a cohesive way when none of these labs seem to agree on what it is.

[00:27:49] Paul Roetzer: So, the way I have explained it, and I was kind of going through their paper today to see whether they're on the same page as we have been, is this: [00:28:00] right now, AI is capable of very specific things. It is trained to do very specific tasks, and in some cases it is just better than humans. So, AI is better than humans at chess.

[00:28:13] Paul Roetzer: So it is a narrow application of AI, but it is superhuman at that application. And when we look at the future impact of AI on knowledge work in particular, we think about the idea of cognitive tasks: tasks that require thinking, reasoning, and understanding language. And so AGI is the idea that you have these general-purpose AIs that are near or at human level in many cognitive tasks.

[00:28:45] Paul Roetzer: So you have a single AI that can beat the grandmaster at chess, and it can also maybe write a research paper, then jump over and do a Sudoku puzzle, then take a medical exam, then go get, you know, a 1600 [00:29:00] on the SAT. So a single AI that is generally really, really good, or above human level, at almost every cognitive task.

[00:29:09] Paul Roetzer: That's how I've thought about AGI for years, and it seems like that jives with what they're talking about. So in the paper, and I know we'll dig into their different levels of AGI, they say AGI is an important and sometimes controversial concept in computing research, used to describe an AI system that is at least as capable as a human at most tasks.

[00:29:31] Paul Roetzer: So the challenge I have seen, and again, I've listened to probably every interview that Demis, Mustafa, Shane, and Dario Amodei have given, because I listen to podcasts nonstop where there's interviews with these people, is that anytime you ask them to define AGI, I have yet to hear one of them say in like 10 words, this is what it is.

[00:29:50] Paul Roetzer: They're always like, oh, it is a tricky thing to define. And so I think the value of this paper is to start to put some hopefully [00:30:00] universal guardrails around what exactly it is, so that we can measure our progress toward it. I listened to an interview, I think it was Sam Altman just a week or two ago, where it was the same deal.

[00:30:12] Paul Roetzer: It was like, well, what are the milestones we should watch for? And I think generally what you should expect is that there isn't going to be a single moment. And actually, the Dwarkesh Podcast episode with Shane is phenomenal; I would highly recommend listening to it. What seems universally accepted at this point is that there isn't going to be a moment where a press conference is held that says, we did it.

[00:30:35] Paul Roetzer: We achieved AGI because X happened. They see it as a progressive thing where, over time, you'll start to look at a bunch of signals and say, I think it is here. I think you mentioned the Sparks of AGI paper from Microsoft after GPT-4 came out, where they said it seems like there are some elements of AGI within ChatGPT.

[00:30:58] Paul Roetzer: Like, we're starting to see [00:31:00] GPT-4 showing signs of this. But from the outside, what does showing signs of it mean? Does it mean we're almost there? Does it mean we're not there? So I think the significance here is that AGI is going to play a major role in the future of humanity, the future of knowledge work, the future of business and society. What it is, and when we get there, are really important questions for us to prepare for.

[00:31:25] Paul Roetzer: So I've advised a number of technology companies, and some other companies, that they really need an AGI horizons team within their organization, because I think you need some people who are looking beyond the next one to two years of what generative AI is going to be when we have GPT-5 or Google Gemini. They need to start asking, what's the world going to look like?

[00:31:46] Paul Roetzer: What is our business going to look like? What's our industry going to look like when AGI is here? And Shane and others seem to think it is within this decade; OpenAI would certainly fall under the umbrella of thinking it is possible within this decade. [00:32:00] So it does seem more sci-fi, because it kind of is, but that doesn't mean it is any less real, or that the probability of it being achieved is any less, just because it seems really abstract at the moment.

[00:32:15] Paul Roetzer: If we rewound two years and I said, hey, AI is going to be able to write everything, and you're going to be able to build your own tools, you'd have thought I was crazy. And yet that's exactly where we are. And I think AGI is going to follow a similar pattern. Right now it seems really weird and abstract to think about, but a year from now, two years from now, it may be a very real thing we're dealing with in society.

[00:32:40] Mike Kaput: I'd encourage everyone to take a stab, perhaps with AI assistance, at reading the full paper. It is about 19 pages, much less once you remove the citations. It is actually dense, but not super technical; it is understandable for anyone. And on page six of the paper, [00:33:00] they have a nifty table that lays out their levels of AGI.

[00:33:04] Mike Kaput: Now, I'd recommend you go check it out for yourself, but really briefly, they lay it out as rows and columns. Imagine rows for the level of performance they rate AI systems on: Level 0, No AI; Level 1, Emerging, which is equal to or somewhat better than an unskilled human.

[00:33:23] Mike Kaput: And they go up multiple levels all the way to Level 5, Superhuman, which outperforms 100 percent of humans. Now, the key here is what's in the columns. They split AI systems into narrow systems, which handle clearly scoped tasks or sets of tasks, so the chess-playing AI you talked about would fall into the narrow category, and general AI, which is able to perform a wide range of non-physical tasks, including metacognitive abilities like learning new skills.

[00:33:57] Mike Kaput: Now, on this rubric, [00:34:00] Legg and the team have really only identified emerging AGI at Level 1, so equal to or somewhat better than an unskilled human, and they would classify things like ChatGPT, Bard, and Llama 2 in that category. For the other performance levels, two through five, they say, look, we do not have AGI today.

[00:34:26] Mike Kaput: Those levels are Competent, better than the 50th percentile of skilled adults; Expert, 90th percentile of skilled adults; Virtuoso, 99th percentile; and Superhuman, which outperforms 100 percent of people. So it does seem, Paul, like they have made an effort here to demystify a little bit of what we're talking about when we talk about AGI. When you look at this table, do you start to see, oh my gosh, I understand now why they're saying they think we could get to some pretty significant AGI in the next, say, [00:35:00] 10 years?

[00:35:01] Paul Roetzer: Yeah, you know, and again, download the doc and go look at it. It'll make a lot more sense when you're looking at it. The more time I spend with this chart, the more I like it, because I think it actually makes it really understandable. So at Level 3, Expert narrow AI, Grammarly would be an example.

[00:35:16] Paul Roetzer: Grammarly is better at spelling and grammar checking than 90 percent of humans who do that task for a living. But when we look at general intelligence at that level, there's nothing there. If you've watched the movie AlphaGo, which is the documentary about DeepMind's AlphaGo system beating the world Go champion...

[00:35:33] Paul Roetzer: They put that at Level 4, Virtuoso, meaning it is better than 99 percent of people who are skilled at that task. And then at Level 5 they have AlphaFold, which predicts the folding of proteins at a superhuman level. So I love that they show examples of what they mean at each level.

[00:35:48] Paul Roetzer: Like, what would be an example of Level 3? So I think this is really good, and I think it will be very helpful moving forward. And I do believe that by this [00:36:00] time next year, AGI will be a much more widely understood concept, and that in your business life and your personal life it'll start having much more meaning and application. Because from everything I've read and learned and heard,

[00:36:20] Paul Roetzer: I don't see any reason why AGI isn't a viable thing by the end of this decade. It just does seem like we're on that path.
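For readers who like to think in code, the performance-by-scope rubric Mike and Paul walk through above could be sketched as a simple lookup table. This is purely an illustrative encoding; the level names and example systems are as discussed in the episode, and nothing here comes from the paper's own notation:

```python
# Hypothetical sketch of the "Levels of AGI" rubric: performance levels as rows,
# narrow vs. general scope as columns, with the example systems from the episode.

LEVELS = {
    0: "No AI",
    1: "Emerging (equal to or somewhat better than an unskilled human)",
    2: "Competent (at least 50th percentile of skilled adults)",
    3: "Expert (at least 90th percentile of skilled adults)",
    4: "Virtuoso (at least 99th percentile of skilled adults)",
    5: "Superhuman (outperforms 100% of humans)",
}

# (scope, level) -> example systems mentioned in the discussion
EXAMPLES = {
    ("narrow", 3): ["Grammarly"],
    ("narrow", 4): ["AlphaGo"],
    ("narrow", 5): ["AlphaFold"],
    ("general", 1): ["ChatGPT", "Bard", "Llama 2"],
}

def classify(scope: str, level: int) -> str:
    # Describe one cell of the rubric, listing any example systems
    systems = EXAMPLES.get((scope, level), [])
    examples = ", ".join(systems) if systems else "none identified yet"
    return f"Level {level} {scope} AI ({LEVELS[level]}): {examples}"
```

Calling `classify("general", 2)` through `classify("general", 5)` all come back with "none identified yet," which is exactly the paper's point: today's frontier systems only register at emerging general AI.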

[00:36:30] Wearable 'AI Pin' launched by Humane

[00:36:30] Mike Kaput: So in our third big topic today. An AI startup called Humane just released what it is calling the AI Pin, which is a wearable device that clips to the lapel of your shirt and allows you to talk to a virtual AI assistant powered by OpenAI and Microsoft's technology.

[00:36:48] Mike Kaput: Now, this release is getting a ton of buzz because Humane was actually founded by ex-Apple veterans, including some people who worked on a somewhat popular [00:37:00] consumer device called the iPhone. The company has raised $241 million from investors like Sam Altman of OpenAI and Microsoft, among others.

[00:37:11] Mike Kaput: And their goal here is to create an AI wearable. This wearable pin can do things like compose messages in your tone of voice, summarize your email inbox, and take pictures, which Humane's AI can then scan with computer vision in order to perform tasks. One example they give: you could snap a picture of the food in front of you and have your AI assistant give you its nutritional content.

[00:37:40] Mike Kaput: Now, the AI Pin is available in the U.S. starting this week, November 16th, and it will cost $699. So, Paul, first up, what was your initial impression of this product?

[00:37:56] Paul Roetzer: I try to be neutral on these things, [00:38:00] and this is another one where I just can't. Anybody who listened to our episode a few back heard us talk about the Rewind Pendant, a similar idea: it is a wearable you wear around your neck, and...

[00:38:10] Paul Roetzer: People around you have no idea what it is, or why you're wearing it, or whether it is recording anything, and there are all these ethical questions related to the product. I just felt the same way about this. So I was actually shocked, knowing who these founders are, major players from Apple, and knowing their investors and how much funding they have, that that's how they launched this product.

[00:38:40] Paul Roetzer: As a marketing and communications professional, it just hurt to see. It was maybe the worst product launch video I've ever seen. So my initial take was that they needed to hire an ethicist and a communications team, because how did they decide to do that? I just don't get it. If you watch the video, it was like [00:39:00] 11 minutes or something.

[00:39:00] Paul Roetzer: It may have been a little longer; I couldn't make it all the way through, so I don't know how long it actually was. But they led with all these technical specs that mean nothing to the average user. Unless they're only selling this thing to tech geeks, I don't even know what they were saying.

[00:39:18] Paul Roetzer: They're just talking about the features, and there's no answer to, why would I use this thing? Then they're showing examples that were just the most ridiculous examples, with no value to me as a consumer. So, I don't know. I just don't like the AI wearables category. Not like Fitbit.

[00:39:37] Paul Roetzer: Fitbit is great. Watches are great. I'm not talking about that. I'm talking about things that are supposed to be observing and recording the world around you when people may or may not know what's happening. I'm not a fan of the category, and my initial take on this product was, I don't understand why I would buy this when I [00:40:00] have a phone and a watch.

[00:40:01] Paul Roetzer: It is all redundant. It does nothing other than record the world around me that my phone and my watch don't already do. And I don't want to record the world around me, because I feel like it is unethical to just be recording people. And then there's the form factor: you have got to take it off and put it on...

[00:40:23] Paul Roetzer: I don't know. It just seemed like a Saturday Night Live skit, honestly. That was my initial take: this could just be a Saturday Night Live skit. They don't even have to redo it; just put this on Saturday Night Live. So I was having a back and forth with our friend Tim Hayden on Twitter about it.

[00:40:41] Paul Roetzer: And I ended up talking to Tim, and Tim's got an amazing perspective on this. He sees a little bit out into the future of where this category could go, so his feedback was really valuable. My comments, which I made in a series of threads: one was that, [00:41:00] as a consumer who loves tech,

[00:41:01] Paul Roetzer: I have zero interest in this wearables category. As an investor, I would not invest in it. As someone who watches the tech space very closely, I'm curious and I admire their entrepreneurial spirit, but I think this product is dead on arrival. The second one I said, and this was maybe a little harsh, no, actually it is not: this product feels like Alexa 1.0 with a camera, a crappy projector, and a sound bubble, whatever the sound bubble is.

[00:41:22] Paul Roetzer: Again, I could be completely wrong here. I get the concept that next-gen large language models in some wearable could be significant, but I still see a powerful large language model on my phone as the play. So I admit fully, I may just be completely wrong.

[00:41:42] Paul Roetzer: And this wearables category may be the hottest thing ever, and they may sell millions of these pins. But I just don't see it right now. There's no way in the world I'm spending anything on it; I wouldn't wear one of these things if someone [00:42:00] gave me $600 to put one on. I have zero interest in it as a product.

[00:42:04] Paul Roetzer: So again, I respect and admire their willingness to go out on the frontier and try something. I just think the category is a bad idea.

[00:42:16] Mike Kaput: And it sounds like for the category to really take off would require a pretty significant change in consumer and social behavior, which is not impossible, but it is a pretty big ask to suddenly flip a switch and we're all

[00:42:35] Mike Kaput: surveilling each other.

[00:42:35] Paul Roetzer: Yeah. And the product launch that came to mind, I may be dating myself here, is the Segway scooter. Remember how groundbreaking that thing was supposed to be? It was going to transform transportation, and it became this really interesting niche product for security guards and I don't know who else they sell the thing to.

[00:42:55] Paul Roetzer: And it is still around, but I feel like that's this. Maybe there's [00:43:00] some version of this pin or pendant or whatever that has really cool applications for medical professionals, or senior citizens, or something; I could see some sort of vertical solution to this. But as a consumer product that I assume is expected to sell millions of units? No way.

[00:43:21] Paul Roetzer: Not in this form; there's got to be something else to this. The other feeling I got was kind of like Magic Leap. We heard for years how Magic Leap was just going to change everything, and everyone was investing in it, from Disney and whoever else. And I feel like that's kind of what this is.

[00:43:37] Paul Roetzer: We were just waiting and waiting and waiting, and it is like, oh, that's it? That is what the projector looks like? I don't know, it was bad.

[00:43:49] Mike Kaput: Google Glass also comes to mind.

[00:43:52] Paul Roetzer: Yeah, but even that I could see. I get the glasses. People wear glasses, you know, the Ray-Bans with [00:44:00] Meta.

[00:44:00] Paul Roetzer: Everybody's going to still try the glasses thing: Vision Pro, Apple getting in the game on glasses. I could see glasses eventually working, because the form factor is there; it is a thing you already see people wearing. But to think that people are just going to start wearing these pins on things? I think it is a stretch. Like you said, consumer behavior would have to change so dramatically to wear these things.

[00:44:28] Paul Roetzer: And I almost feel like, I guess in a way, maybe there's a market like the GoPro market, where it is like a smaller version of that camera. I don't know. I just don't see the market. I would love to see their pitch deck:

[00:44:44] Paul Roetzer: who they're selling this to, and how.

[00:44:48] Bill Gates claims AI is going to completely change how you use computers

[00:44:48] Mike Kaput: All right, let's dive into some rapid-fire topics here. First up: Bill Gates just published an extensive article that claims, quote, AI is [00:45:00] about to completely change how you use computers. In the article, he argues that the rise of AI agents, which he defines as, quote, something that can respond to natural language and can accomplish many different tasks based on its knowledge of the user,

[00:45:16] Mike Kaput: That these agents will transform how we interact with software and with computers. He makes the bold claim that agents are not only going to change how everyone interacts with computers. They're also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.

[00:45:41] Mike Kaput: So Paul, this is a pretty notable personality to be weighing in on the future of technology. Do you see Gates as being right? Are AI agents going to change the game?

[00:45:53] Paul Roetzer: He's certainly not alone in that opinion. We have talked about AI agents a lot recently on the show. I think I said 2024 [00:46:00] is sort of the year to watch; by this time next year,

[00:46:02] Paul Roetzer: I think we're going to have very good agents that are actually doing real tasks for businesses. We'll share the link in the show notes, but Adept, which we have talked about before and which I think has raised north of $400 million, this is their thing: trying to build action transformers.

[00:46:20] Paul Roetzer: You can go and look at their experiments, the Adept Experiments, which I just joined the waitlist for, where it'll show the system working within a CRM. What you do right now is program it to take actions. So I've used the example of HubSpot: if sending an email in HubSpot is 21 clicks, I would train the AI on those 21 clicks.

[00:46:41] Paul Roetzer: And then instead of me having to go do it, I would just say, run the email workflow, and it'll go do the 21 clicks. It recognizes the buttons, it knows the slider scales, it knows the form fills; you teach it everything. And I think what is happening right now is [00:47:00] we're in a training mode, where we're having to program these things, but then their computer vision and their ability to take actions lets them go do it over and over and over again.

[00:47:06] Paul Roetzer: So you teach it once, and then it goes and does it. Eventually it'll just learn by watching you. And by eventually, I mean like next year it'll learn by watching you. So yes, I think AI agents are a massive play. If you are a software company and you're not planning for this within your product roadmap and user experience, you are going to miss it, just like you missed with generative AI.

[00:47:28] Paul Roetzer: So I think you have to be preparing for AI agents. There will be a ChatGPT-level moment, I think, with AI agents, where all of a sudden they just do this stuff, and as a software company you cannot get caught sleeping again. And as a business leader or practitioner, study the space and be ready for the fact that the way you do things is probably going to evolve within the next year.

[00:47:54] Paul Roetzer: And these things are going to become very capable.
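The teach-it-once, replay-it pattern described above, recording the 21 clicks and then just saying "run the email workflow," can be sketched roughly like this. The `Workflow` class, the action names, and the HubSpot-style steps are all hypothetical illustrations of the general idea, not Adept's actual API:

```python
# Hypothetical sketch: record a named sequence of UI actions once, then
# replay them on demand. Real agents would resolve targets visually rather
# than by exact label, but the record/replay shape is the same.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str      # e.g. "click", "type", "select"
    target: str    # e.g. a button label the agent's vision model recognizes
    value: str = ""

class Workflow:
    def __init__(self, name: str):
        self.name = name
        self.actions: list[Action] = []

    def record(self, action: Action) -> None:
        # Capture one step while the human demonstrates the task
        self.actions.append(action)

    def replay(self, execute: Callable[[Action], None]) -> int:
        # Replay every recorded step via the supplied executor
        for action in self.actions:
            execute(action)
        return len(self.actions)

# Teach it once...
wf = Workflow("email")
wf.record(Action("click", "Marketing menu"))
wf.record(Action("click", "Email"))
wf.record(Action("type", "Subject line", "Weekly newsletter"))

# ...then "run the email workflow" executes every step.
log = []
steps = wf.replay(lambda a: log.append(f"{a.kind} -> {a.target}"))
```

The interesting design point is the executor: swap the logging lambda for something that drives a real browser, and the recorded workflow becomes an automation. The "learn by watching you" step Paul predicts would replace the manual `record` calls with observation.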

[00:47:59] The Actors Strike in Hollywood has come to an end

[00:47:59] Mike Kaput: [00:48:00] So the actors strike in Hollywood has come to an end, as the Actors Guild has finally reached an agreement with the film studios. Now, what's notable about this story is that the studios' use of generative AI became a huge sticking point between actors and the studios

[00:48:18] Mike Kaput: during negotiations. Both sides went back and forth on how AI could or couldn't be used to produce digital likenesses of actors. On Tuesday, it sounds like the studios budged and agreed to adjust the AI language in their proposed deal. The final details are still not fully clear, but it does suggest that the actors

[00:48:45] Mike Kaput: did move the needle in their favor, toward increased protections around AI. Now, Paul, this is notable because it is a really interesting example of how AI is becoming a very real and present issue for [00:49:00] a lot of different types of professionals, especially in contract negotiations. Can you unpack for us the concerns that actors have about AI, and why this matters more broadly?

[00:49:13] Paul Roetzer: Yeah, the broader implication is that this is a profession looking ahead and saying, wow, AI is going to be able to do this. Take extras and background actors: if you just digitally create twins of them, you don't ever have to pay for extras again, or you pay a one-time fee and have the use of that person for eternity. There are lots of

[00:49:34] Paul Roetzer: concerns about where the tech is and where it is going. And it sounds like their union did a pretty good job of looking ahead and saying, this could be our one chance to negotiate this right; if we come back to the table three to five years from now, we could have lost all of this capability and kind of given away everything.

[00:49:52] Paul Roetzer: So I'll be really interested to read the final details. But I think we're going to see a lot more of this moving into next year, where different [00:50:00] unions step in and try to protect the future of those workers. Because again, if AGI is four or five years away, what does that mean for all of these different workers across different industries?

[00:50:15] Paul Roetzer: So, yeah, I think it'll set a precedent, and you may see some actions early next year where other unions start to try and protect their workers in a similar way.

[00:50:28] Meta to require advertisers to disclose AI content in political ads

[00:50:28] Mike Kaput: So Meta announced this week that it is going to require advertisers to disclose when AI-generated or AI-altered content is being used in political ads to depict or imagine events that never happened.

[00:50:43] Mike Kaput: AI-generated content depicting fake people is also going to need to be disclosed under these new rules. According to Meta's President of Global Affairs, Nick Clegg, quote, In the new year, advertisers who run ads about social [00:51:00] issues, elections, and politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI,

[00:51:10] Mike Kaput: to show real people doing or saying things they haven't done or said. However, AI usage for editing that has nothing to do with the claims in an ad or its message, like cropping or color correcting, does not need to be disclosed. So Paul, I know that you see deepfake election content generated by AI as a really serious near-term threat to society.

[00:51:41] Mike Kaput: So how effective do you think this kind of measure will be in counteracting that?

[00:51:46] Paul Roetzer: I'm not sure, but I'm glad they're doing it, and I think we're going to see more actions like this. We'll have to double-check the terms, but I think OpenAI doesn't allow you to use ChatGPT for political stuff. [00:52:00] So, you know, there are a couple of ways to do this.

[00:52:03] Paul Roetzer: One is at the model level, where the companies that create the image, video, and text models, like Runway, for example, in video, and Midjourney, I can't imagine Midjourney putting any guardrails in place, but the idea is that they would detect political content in the creation process and shut it down. So if you're asking for political figures, or if you're using political language, they just wouldn't allow their systems to be used for that.

[00:52:31] Paul Roetzer: Now, open source sort of ruins that. I'm sure the open source models can be used for whatever you want, which is one of the arguments for closed models: if we determine that it is dangerous for these models to be used in politics, and open source just lets everybody go at it, it doesn't really matter how many closed models shut it down.

[00:52:51] Paul Roetzer: So, yeah, I think we're going to see a lot more actions being taken, especially with the executive order from the White House. I think there's going to be more [00:53:00] pressure on these different companies: the foundation model companies, and then the social companies that allow the dissemination of that information and content.

[00:53:08] Paul Roetzer: I guess it'll probably make some impact, but I don't think it saves us from a train wreck of an election cycle. I just think people should prepare themselves for a really messy next 12 months in the United States. I personally have already stopped going into my newsfeed; for my own mental wellbeing, I can't go into my newsfeed more than about once a day right now.

[00:53:36] Paul Roetzer: And I just imagine it is going to get worse once all the fake stuff starts really emerging. So, I don't know, take care of yourselves out there. It is what it is; this is the reality we are going to be in. And I think you have to understand your limits and know when to step back from this stuff.

[00:53:59] Paul Roetzer: This is one that [00:54:00] bothers me a lot, and I do have to be very careful with how much I expose myself to it, because I know what's coming, and I'm just not at the point where I can deal with it on a day-to-day basis.

[00:54:13] Microsoft announces five steps to protect electoral processes in 2024

[00:54:13] Mike Kaput: It sounds like some other companies are waking up to this threat as well because this week Microsoft also announced what it is calling five new steps to protect electoral processes in the United States and other countries where critical elections will take place in 2024.

[00:54:30] Mike Kaput: First, Microsoft is launching Content Credentials as a Service, which is going to help candidates and campaigns maintain greater control over their content and their likenesses. This is basically a tool that allows users to digitally sign and authenticate media.

[00:54:49] Mike Kaput: Second, Microsoft is forming a team to help political campaigns navigate cybersecurity issues related to AI. Third, they're providing increased security and [00:55:00] tech support for democratic governments that encounter security issues with technology during their elections. Fourth, they're, quote, using their voice as a company to support legislative and legal changes that add to the protection of campaigns and electoral processes from deepfakes and other harmful uses

[00:55:18] Mike Kaput: of new technologies. Interestingly, that includes endorsing, in the U.S., a bipartisan bill introduced by Senators Klobuchar, Collins, Hawley, and Coons related to protecting elections from deceptive AI content. Fifth and finally, Microsoft is empowering voters with authoritative election information on Bing from what they consider credible sources.
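Conceptually, the "digitally sign and authenticate media" idea behind content credentials looks like the toy sketch below. Real systems, such as the C2PA standard that Content Credentials builds on, use certificate-based signatures and embedded manifests rather than a shared secret; the signing key and media bytes here are placeholders for illustration only:

```python
# Toy sketch of signing media so tampering can be detected. This is NOT how
# Microsoft's Content Credentials service is implemented; it only illustrates
# the sign-then-verify idea using a shared-secret HMAC.

import hashlib
import hmac

SIGNING_KEY = b"campaign-private-key"  # placeholder; real systems use certificates

def sign_media(media: bytes) -> str:
    # Compute an HMAC-SHA256 signature over the raw media bytes
    return hmac.new(SIGNING_KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, signature: str) -> bool:
    # Recompute the signature and compare in constant time
    return hmac.compare_digest(sign_media(media), signature)

original = b"...candidate photo bytes..."
sig = sign_media(original)
```

The key property: the untouched media verifies, while altering even a single byte invalidates the signature, which is what lets a platform flag content that has been modified since the campaign published it.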

[00:55:44] Mike Kaput: So Paul, which of these measures, if any, did you see as significant?

[00:55:49] Paul Roetzer: I mean, they're all very good signs. I think we need to see all the major tech companies doing things like this and hopefully [00:56:00] actually executing them and following through on them. I, I think in total, if all the big tech companies are moving in the same direction and working to protect the elections and democracy, that gives me hope.

[00:56:13] Paul Roetzer: Like I know people are thinking about this a lot. I know a lot of resources are going to it. And I just, I mean, the optimist in me wants to know this stuff's going to make a difference and an impact. And at the end of the day, it is better than no action. So I think I look at it not necessarily as one individual item, but more as just the totality of, it seems like they're taking a comprehensive approach to this, or at least on paper, they are.

[00:56:39] Paul Roetzer: And I hope more companies, you know, step up and follow similar paths to give us the best chance for this election cycle to not be a train wreck.

[00:56:52] Amazon is training a new large language model, Olympus

[00:56:52] Mike Kaput: So we have got some pretty big rumors about Amazon coming up here. Amazon is training a mammoth new large language model [00:57:00] codenamed Olympus, according to reports from Reuters.

[00:57:04] Mike Kaput: Sources familiar with this project told Reuters that the model has two trillion parameters. That would make it one of the largest models in existence today. For comparison, there's speculation that GPT-4 has one trillion parameters. Interestingly, the team running point on Olympus is reportedly led by Rohit Prasad, the former head of Alexa.

[00:57:33] Mike Kaput: So Paul, when you read this, how big a deal is it, understanding, of course, that these are still just rumors?

[00:57:38] Paul Roetzer: I mean, Amazon's not going to sit on the sidelines for this one. We talked earlier this year about Amazon Bedrock, which is like a collection of language models. So if your data is already in AWS, you can go pick one. I don't remember exactly which ones are in there.

[00:57:52] Paul Roetzer: I think Anthropic's in there, I think Cohere's in there, and their own model, I believe it's called Titan, is in there. So [00:58:00] AWS has the data that you trust, you know, it's in their cloud. Why not connect it with some language models and train on your data with the provider you already trust? And, you know, if they see

[00:58:12] Paul Roetzer: an opportunity to build a better model of their own than what's currently offered through third parties, they obviously have the bandwidth, or the compute power, to train something more powerful. So, yeah, it makes total sense. I would have been more surprised if they didn't do something like this.

[00:58:29] Paul Roetzer: And so now we can look forward, through the rest of 2023 and into 2024, to whatever OpenAI is building. I'm sure they're working on GPT-5, whatever that is and whenever it comes out. We know Google's working on Gemini. We know now Amazon's working on Olympus. We know Microsoft is working on their own model, so they're not as reliant on OpenAI.

[00:58:50] Paul Roetzer: Who am I missing? Microsoft, Google, Amazon... NVIDIA is playing in the game. Like, it's just, there's going to be... [00:59:00] Inflection, I know, is training a much larger model. Anthropic's training a larger model. Like, everyone is training massive models right now. Grok, you know, over at X. The Lex Fridman podcast had Elon on it.

[00:59:12] Paul Roetzer: I just listened to that interview, and he said they trained Grok on 8,000 GPUs from NVIDIA. The original Inflection Pi was trained on NVIDIA, too. So, I mean, we're going to get six, seven, eight of these, and there's probably a Llama 3 in training at Meta. The next generation of foundation models is coming, and they're going to be trained on massive amounts of data and massive compute power.

[00:59:36] Paul Roetzer: So 2024 is just going to be nuts. I don't think we're going to have a lack of things to talk about on this show.

[00:59:43] Google has begun to roll out AI features across Performance Max campaigns within Google Ads

[00:59:43] Mike Kaput: So in our final topic for today, Google has started to roll out some more AI features across its Performance Max campaigns within Google Ads. Within Performance Max, you can now get AI to suggest and generate headlines, [01:00:00] create descriptions, and generate images, and you can provide text prompts to generate more assets

[01:00:07] Mike Kaput: to scale up and spin off from what you've already created. You'll also now have access to AI-powered image editing right within Google Ads. As part of the announcement, Google also noted that all images created with generative AI in Google Ads, including via Performance Max, will be identified as AI-generated using the SynthID tool, which we covered in a previous episode.

[01:00:33] Mike Kaput: This basically invisibly watermarks images that are AI-generated. Now, Paul, the big question I had here is: this really seems like another case of a major platform releasing AI capabilities that essentially obsolete some of the features in third-party tools and startups.

[01:00:53] Paul Roetzer: Wasn't it last week we were talking about Amazon doing this with, like, their product listings?

[01:00:57] Paul Roetzer: Meta obviously does this already. Yeah, for sure. [01:01:00] I mean, this is just... you know, when you start mixing it with performance data, where it knows what's working specifically for your brand and your channels, yeah, this is not even a sign of things to come. This is what's happening. Like, if you're running ads on any network or through any of these platforms,

[01:01:18] Paul Roetzer: and you're not using these tools, they're there for you, and they're going to be extremely prevalent. You know, again, a year from now it'll be weird to not use these tools and just rely on the AI. So yeah, lots of cool tools rolling out. And I know we didn't have it in this week's rapid fire, but we're such huge fans of Descript, and they keep rolling out all these AI tools.

[01:01:39] Paul Roetzer: Just for our podcast alone, we had Claire on our team do a brief last week on all the AI tools within Descript. So again, when you're looking at where to start, one of the best ways to start with AI, and we'll kind of end with this, is to go look at the tech you already have and see what AI tools those vendors are rolling out that can [01:02:00] drive efficiency and creativity across your team, because they're all going to be doing it, and it's going to be a nonstop cycle moving forward.

[01:02:09] Mike Kaput: That is a great note to end on, Paul. Right before we sign off, I want to make a few very quick announcements that might create some more value for our listeners. First up, we have completely revamped our newsletter that goes out every week. It is now themed around This Week in AI, so we cover both the stories we discussed on today's episode as well as all the other news, links, and information

[01:02:35] Mike Kaput: that we weren't able to cover in this episode. It's really, really helpful if you want to stay on top of everything going on in AI. If you go to marketingaiinstitute.com/newsletter, you can subscribe to that. I also want to encourage you to subscribe to the podcast itself.

[01:02:56] Mike Kaput: If you listen regularly, you may as well get notified the [01:03:00] moment we come out with a new episode. Episodes drop on Tuesday mornings, and be sure to share the show if you're getting value out of the content. Last but not least, I want to mention for some of our newer audience members that Marketing AI Institute does a phenomenal amount of public speaking.

[01:03:21] Mike Kaput: Paul and I are usually on the road in any given week, and we do a bunch of speaking engagements to address a really profound need in the industry. The need is this: AI is about to have a huge impact on our industry and on our businesses, but too few business leaders understand how they need to start adapting to survive and thrive in the age of AI.

[01:03:44] Mike Kaput: Now, all of our talks are designed to help you do just that, helping companies build a competitive advantage with AI through highly actionable and engaging content. So if you've been looking for a speaker for your event, or you want someone to come in and [01:04:00] speak with your team about the opportunities that AI presents, please feel free to reach out to either Paul or myself on LinkedIn, or go right to our website, marketingaiinstitute.com.

[01:04:11] Mike Kaput: Go to About and click Speaking to find all the information. Paul, thank you again for breaking down this week in AI for us. We really, really appreciate it.

[01:04:23] Paul Roetzer: Oh, good stuff as always. And with that, I need to go catch a flight to Boston for a speaking engagement.

[01:04:31] Paul Roetzer: Thanks, everyone. We'll talk to you... next week is Thanksgiving, isn't it? We'll still record, we'll have an episode next week. I don't know when we're going to record it, but we'll be back for an episode next week. Thanks again, as always, everyone. We'll talk to you soon.

[01:04:47] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to [01:05:00] www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:05:08] Paul Roetzer: Until next time, stay curious and explore AI.
