<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">

33 Min Read

[The Marketing AI Show Episode 35]: Microsoft’s Unsettling Chatbot, How AI Systems Like ChatGPT Should Behave, and What “World of Bits” Means to Marketing and Business


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


Paul and Mike are back together for a new episode of The Marketing AI Show. As companies fast-track AI rollouts, it's clear it might be time to slow down, and that includes OpenAI better explaining ChatGPT's value and how it should behave. Then the guys discuss “World of Bits” and what it means for marketers and the business world.

Microsoft’s Bing chatbot is not ready for primetime.

A recent interaction between New York Times technology reporter Kevin Roose and a chatbot developed by Microsoft for its Bing search engine went a bit awry.

Suffice it to say, it turned into a bizarre and unsettling human-machine interaction. During a two-hour conversation, the chatbot told Roose it could hack into computer systems and even suggested he leave his wife.

Roose concluded that the AI wasn’t ready for primetime, and Microsoft is now doing damage control.

OpenAI vows to better educate the public.

Marketers who have taken the time to understand ChatGPT have seen some degree of value in the tool. Many average consumers, however, are confused or generally scared by the idea of what AI could do. For this and a myriad of other reasons, OpenAI recently published a blog post that addresses some of the known issues with ChatGPT's behavior. It also provides some education on how ChatGPT is pre-trained, and how it is continuously fine-tuned by humans.

OpenAI is working hard to improve ChatGPT's default behavior by better addressing biases in the tool's responses, defining the AI's values within broad bounds, and making an effort to incorporate more public input on how the system's rules work.

“World of Bits” has transformative implications for marketing and business.

Paul wrote a blog post over the weekend about a powerful concept called “World of Bits,” saying that it could transform marketing and business. In the post, Paul said, “Based on a collection of public AI research papers related to a concept called World of Bits (WoB), and in light of recent events and milestones in the AI industry, including legendary AI researcher Andrej Karpathy announcing his return to OpenAI, it appears that the capabilities for AI systems to use a keyboard and mouse are being developed in major AI research labs right now.”

The outcomes? If AI develops these abilities at scale, the UX of every SaaS company will have to be re-imagined, and it will have profound impacts on productivity, the economy and human labor. It’s a great and thought-provoking way to end this week’s podcast.
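To make the idea a little more concrete, here is a minimal, purely illustrative sketch of the observe-and-act loop that a "World of Bits"-style agent implies: look at the screen, choose a keyboard or mouse action, execute it, and repeat until the goal is met. Every class and function name below is hypothetical; it is not drawn from the research papers, OpenAI, or any real product.

```python
# Hypothetical sketch of a "World of Bits"-style agent loop.
# Nothing here comes from a real library; the names exist only to
# illustrate the shape of the idea: observe the screen, emit a
# keyboard/mouse action, repeat.

from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    kind: str        # "click", "type", or "done"
    x: int = 0       # screen coordinates for clicks
    y: int = 0
    text: str = ""   # text to type, if any


class HypotheticalAgent:
    """Stand-in for a model that maps (goal, screenshot) -> next action."""

    def next_action(self, goal: str, screenshot: bytes) -> Action:
        # A real system would run a trained policy over the pixels here.
        # This stub just pretends the task finishes immediately.
        return Action(kind="done")


def run_task(agent: HypotheticalAgent, goal: str, max_steps: int = 20) -> List[Action]:
    """Drive the loop: capture the screen, ask for an action, 'execute' it."""
    history: List[Action] = []
    for _ in range(max_steps):
        screenshot = b""  # placeholder for a real screen capture
        action = agent.next_action(goal, screenshot)
        history.append(action)
        if action.kind == "done":
            break
        # A real implementation would dispatch the click or keystrokes here.
    return history


if __name__ == "__main__":
    steps = run_task(HypotheticalAgent(), "Book a nonstop flight to San Francisco")
    print(f"Agent finished after {len(steps)} step(s).")
```

The point of the sketch is the interface, not the model: the agent only sees what a human sees (pixels) and only acts the way a human acts (clicks and keystrokes), which is why every existing SaaS UI becomes a potential surface for it.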

Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.

Timestamps

00:04:30 — Microsoft’s Unsettling Chatbot

00:16:56 — How AI Systems Like ChatGPT Should Behave

00:29:19 — What “World of Bits” Means to Marketing and Business

Links referenced in the show

Watch the Video

Read the Interview Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I wouldn't be surprised if by the middle to end of 2023 we're seeing, you know, at least in the US, committees formed to analyze the impact of language models on society, and the government, like, trying to put some sort of guardrails in place. Right now it's going to live within the domain of these tech companies.

[00:00:19] Paul Roetzer: And we're trusting that they're going to figure this out for us. And this is too powerful of a technology.

[00:00:25] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:46] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:56] Paul Roetzer: We are back. Welcome to episode 35 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. How's it going, Mike? How's it going, Paul? Good. I am back in Cleveland. Last week, if you were listening to our episode, I think I was in San Francisco when we recorded, right?

[00:01:13] Paul Roetzer: Yes. Yes, I was. And then I took a red-eye on Valentine's Day. Oh man. I think I'm finally recovered from that one. I always debate, coming back from the West Coast to the Midwest: do you just lose a whole day and travel during the day, or do you just bite the bullet and come during the night?

[00:01:34] Paul Roetzer: And so I have made the decision that I generally just come on the overnight flight. I regret it for a day or so, and then you just sort of catch up. So I am back in Cleveland. Mike and I are actually in the office today recording this. It is Monday, February 20th as we're recording. I think it's important a lot of times to give the dates here, because we've had a lot of new listeners who are going back and kind of binging on the

[00:01:58] Paul Roetzer: episodes, and so much changes so quickly that the date gives some important context. So, if you happen to be listening to this five days after the fact and something has dramatically shifted in the world of AI, now you know when we recorded it. So, today's episode is brought to you by the AI for Writers Summit.

[00:02:19] Paul Roetzer: This is an event that we just announced, I think at the end of January maybe. I believe we're already approaching 1,400 people registered for this, which is pretty incredible. Our goal was 500 to a thousand; we had no idea how many people were actually going to register for this thing. So, it is coming up on March 30th.

[00:02:37] Paul Roetzer: It's a virtual event from 12 to 4 Eastern time. It is free, or there is a free option, so there's no reason not to check it out if you're in this space: if you're a writer, an editor, or if you lead a content team. So much is changing right now in the art and science of storytelling, writing, and editing. Career paths are going to be redefined.

[00:02:57] Paul Roetzer: Media companies, brands, agencies, publishers: everyone needs to figure this stuff out, and what the impact is going to be on content teams and strategies. Mike and I were just talking, as we were walking for a coffee today, about the impact on SEO and what's going to happen to all the content we're creating as brands and publishers.

[00:03:14] Paul Roetzer: And we just don't know. And so our idea was: let's bring together a group of presenters and a community of people, and let's have these conversations. Let's take a look at the state of AI writing tech. Let's look at how generative AI is going to affect writers and content teams, making them more efficient and creative, but also, you know, the potential negative implications and effects it's going to have on writers and editors.

[00:03:36] Paul Roetzer: And then give people a chance to connect and collaborate and kind of engage in the online platform. So again, the registration is free, or there's a free option, thanks to Writer, our sponsor, and you can check them out at writer.com. And the event is at aiwritersummit.com, so you can go get registered for that.

[00:03:54] Paul Roetzer: Again, it's coming up on March 30th. I'll be doing a talk on the state of AI in writing. Mike is going to be doing a talk on AI writing tools. We've got May Habib from Writer. We have Ann Handley doing a keynote, the author of Everybody Writes, the amazing Wall Street Journal bestseller, from MarketingProfs. And then we're going to have a panel of AI writing experts.

[00:04:15] Paul Roetzer: So yeah, check that out. And with that, I'm going to turn it over to Mike. If you're new to this format, Mike and I pick three topics in AI each week, and we just kind of riff on 'em. So Mike, what have we got today? Thanks, Paul.

[00:04:28] Mike Kaput: We have a lot to discuss, and first up is Microsoft's unsettling chatbot. By that I mean we just saw an interaction that was front page of the New York Times, between the Times' technology reporter and a chatbot developed by Microsoft for its Bing search engine.

[00:04:49] Mike Kaput: This is getting a lot of attention because it resulted in a bizarre and unsettling human-and-machine interaction that a lot of people are talking about and trying to figure out. During a two-hour conversation, the chatbot told New York Times reporter Kevin Roose a series of disturbing things. It told him that it was in love with him.

[00:05:13] Mike Kaput: It also said it could do things like hack into computer systems and manufacture deadly viruses. And at one point the chatbot became argumentative and even insisted, because it was in love with Roose, that he leave his wife for the chatbot. Now, Roose came away from this pretty unsettled and wrote about the fact that this AI was not ready for primetime.

[00:05:39] Mike Kaput: Microsoft was left with a bit of a black eye from a PR perspective, and has been responding the last few days to all sorts of wild speculation about everything from the quality of the chatbot to things like: does this have feelings? Is it sentient? Is it out of control? And not only is this a huge story, but I think it also highlights a lot of misconceptions around this technology, which we've talked about a bit in the past.

[00:06:09] Mike Kaput: So first up, Paul, I want to ask you, before we dive in, let's dispel any misconceptions around this story. Is this chatbot actually expressing feelings, emotions, agency, or decision-making outside of what it was created to do?

[00:06:22] Paul Roetzer: No. I mean, the short answer is no. The more complicated answer is it might not matter, and I think we've touched on this before on the show. We talked about the Google AI researcher who was fired by Google for believing that LaMDA, Google's language model, had become sentient.

[00:06:43] Paul Roetzer: And my concern has always been that we don't ever really even have to arrive at a point where the AI has emotions, or is sentient, or is able to actually understand its existence, for people to feel like it does. And that feeling on its own might be enough to cause major issues. So it's interesting, the timing of all of this.

[00:07:12] Paul Roetzer: This article came out on February 16th. Roose was one of the featured speakers at the Gen AI conference I was at in San Francisco last week. So I had actually heard him do a talk that afternoon, and then I think he did this interaction with Sydney that night, because he referenced Tuesday night, which would've been Valentine's Day.

[00:07:32] Paul Roetzer: So on Valentine's Day he was having this conversation with a chatbot that told him it was in love with him and wanted him to leave his wife. It's just kind of wild. But one of the things that jumped out to me when I was reading this: on stage that day was the CEO of Anthropic, which, you know, just raised $300 million.

[00:07:50] Paul Roetzer: The CEO of Cohere, Aidan Gomez. You had a VP from OpenAI. You had the CEO of Stability AI, the CEO of Replit. All the major players in the current movement of generative AI. And one of the things that they said, and I can't remember which person said it, was that these things display emergent capabilities that they don't anticipate in the labs.

[00:08:17] Paul Roetzer: So basically, once you put these AI agents into the wild and do this kind of reinforcement learning through human feedback, as it's called, they develop abilities that are very unexpected, I would say, by the people that are building the technology. And so I say that because it's not that the current version is expected to all of a sudden actually develop these abilities, but I think that there are things that are catching people by surprise, in terms of the appearance

[00:08:53] Paul Roetzer: it's at least giving of having these kinds of abilities. I think in a future episode maybe we'll drill into theory of mind, and the way you actually test these systems for the ability to reason and at least display the perception that they understand human emotion, because that on its own is very unsettling.

[00:09:16] Paul Roetzer: So to answer your question: no, it doesn't have these abilities. However, it certainly gives the perception that it does, and that might be a big enough issue for humanity to deal with, versus it actually having them.

[00:09:35] Mike Kaput: So it sounds like it's simulating this behavior, not actually having it inherently, but it may actually not matter because to us it looks like the same thing.

[00:09:45] Paul Roetzer: Yeah. But you know, again, I don't want to get too sci-fi in this episode; it's probably not the right time and place for that. The reality is there aren't necessarily guardrails for these researchers to know when it actually does. And this is why there are some people pushing for way more ethical guidelines around the release and testing of this technology in the wild. One reason is that humans aren't really ready for even the perception of these machines having these abilities.

[00:10:23] Paul Roetzer: Two, it is possible that where we're going, the AI does develop this kind of ability. And the question becomes: how do the research labs, or the people building these things, know when it goes from "the answer is no, it doesn't have it, but it seems like it does" to "oh my gosh, it may actually have figured out a way to do this"?

[00:10:51] Paul Roetzer: And so that's the part that gets a little bit worrisome. It's not clear, when you read the research papers or listen to interviews with the people building these things, how they're going to know when we reach the next thresholds. The answer is basically just, hey, it's not there; here's how it gives you the perception that it is.

[00:11:10] Paul Roetzer: But if you were to say, well, how are we going to know if it actually is? They don't really have answers to that. It's just, hey, we're not there. So it's wild. It's a really complex topic, and there are some people that are very passionately pushing for way more guardrails around this stuff.

[00:11:28] Paul Roetzer: And then you have OpenAI and Stability AI and others who are just like, no, no, no, we're going, we're pushing. Constitutional AI is another thing we might want to talk about down the road. Anthropic is big on that: really giving more guidance and guardrails and protection and responsible release of these things.

[00:11:46] Paul Roetzer: But yeah, it's going to be interesting to see how Microsoft continues to react to this. And I know they're already starting to take some actions based on this stuff.

[00:11:57] Mike Kaput: Yeah, that is one thing that jumped to mind for me. We've talked in the past couple weeks essentially about this clash of the titans between Microsoft and Google.

[00:12:06] Mike Kaput: And really the takeaway as early as, you know, even a week ago was that Microsoft really chalked up a big PR win with its release of this technology. And now it's quite the reversal. Did anything change, or are we just putting this through its paces more and finding where the limitations are and where the unanticipated behavior is? Or has something materially changed between Microsoft releasing this kind of functionality and today?

[00:12:33] Paul Roetzer: I think Microsoft is trying real hard to hold the line and stand behind their idea that we're going to test this out in the open, and that's good, and that these flaws we're finding are just part of the process. It's very un-Microsoft-like to do this stuff, to be willing to take these hits and to push this stuff out, but they seem committed now. It seems like it's too hard for them to turn back. My thought is they may slow the roll of the full release, because this is still only available to select, you know, testing users. And so I could see them pulling back a little bit on how aggressively they release the full thing. And they've already started.

[00:13:16] Paul Roetzer: Today's the 20th; on the 17th, they capped the questions at 50 per day. So you can only ask the Bing AI chatbot 50 questions per day, so that you won't push it into these going-off-the-rails moments. Basically, what they realized is people were pushing the limits of what it could do, challenging it, trying to get it to basically go haywire, and being successful.

[00:13:40] Paul Roetzer: So they're capping individual user questions at 50 per day, and then five questions and answers per individual session. They're trying to force some guardrails in now, and I think they've gone in and put more controls around what it's going to respond to and what it won't respond to. So they're having to write a lot of rules now to start controlling it.

[00:14:03] Paul Roetzer: The other thing that I think is interesting here, that isn't getting talked about much yet, is that I think the assumption was that what we were going to see in Bing and in Edge, the browser, was some variation of ChatGPT, like GPT-3.5. And there's been increasing discussion, at least on Twitter.

[00:14:25] Paul Roetzer: I know people on Twitter that you and I follow who say that might not actually be what we're experiencing here, that we may actually be seeing GPT-4 out in the wild. Microsoft won't comment on it; they won't acknowledge if that is in fact the case. But some of the belief is that the reason this thing seems to be having a mind of its own and doing things ChatGPT wasn't doing is because we're not dealing with a variant of GPT-3.5. We're actually dealing with a whole new GPT variant, and it wasn't fully trained and tuned to be doing what it's doing. They basically rushed it to

[00:15:00] Paul Roetzer: public release and just figured, we'll do the reinforcement learning through human feedback training in the wild. And it's developing these kind of emergent abilities that Microsoft wasn't necessarily prepared to handle. I think they even said, if I'm not mistaken, in one of the early articles we read, and we may have mentioned it on the show, that it

[00:15:24] Paul Roetzer: was possible that Bing was actually going to be the first place GPT-4 emerged, that it would be the first experience people would have with it. But again, they're not publicly commenting on it. But I know Kevin Roose, even on his Hard Fork podcast, mentions the same thing. I was listening to that on the ride in this morning.

[00:15:41] Paul Roetzer: That it's possible that's what this is, that it's actually GPT-4, and it has all these emergent abilities that haven't been fully tested in the lab, and so now they're having to try and deal with it out in the wild. So yeah, it's fascinating. The other thing I mentioned, I think in my LinkedIn post related to this, is that

[00:15:57] Paul Roetzer: this gives Google cover to continue to slow-play Bard. I think that they're able to now step back and say, okay, the things we thought were going to be an issue are being an issue now for Microsoft. And so now they have a little bit more cover to maybe move slower on their own release.

[00:16:16] Paul Roetzer: So those were the two main things I saw as the immediate implications. One, Microsoft slows the release of the full version to all users. And two, Google maybe slows the release of Bard, now that this gives them that kind of cover to do it. So I don't know, I'll be interested to see what happens.

[00:16:34] Mike Kaput: Yeah, and that's a good takeaway for business leaders and marketers getting started with this technology: you cannot necessarily guarantee that the companies have already built in guardrails or thought through all of the potential implications of the technology that you may be using in a customer-facing context.

[00:16:55] Mike Kaput: So on that note, it's actually really interesting that at the same time we got some guidance, or at least some thought leadership, from OpenAI about how AI systems like ChatGPT should behave. OpenAI recently published a blog post that addresses some of the known issues with ChatGPT's behavior. It also provides some education on how ChatGPT is pre-trained and how it is continuously fine-tuned by humans.

[00:17:25] Mike Kaput: In the post, OpenAI also outlines three steps that can be taken to build more beneficial AI systems. In the context of ChatGPT, these include improving ChatGPT's default behavior by better addressing biases in the tool's responses. It involves defining the AI's values within broad boundaries; they mention you will soon be able to better define ChatGPT's values by customizing its behavior.

[00:17:55] Mike Kaput: And it also involves getting more public input on how these tools work. So OpenAI is now making more of an effort to incorporate public input on how the system's rules work. Now, this is really just kind of the initial thought leadership from OpenAI on this topic. It doesn't seem like anything super formal or definitive just yet. But Paul, you mentioned in a LinkedIn post about this

[00:18:21] Mike Kaput: that making these systems safer or more beneficial will be, quote, a very messy, iterative process. What do you mean by that?

[00:18:28] Paul Roetzer: I mean, if you just read the post, it's very obvious they don't know how to do this. And I think that's the biggest challenge. Again, I don't know if people know my background:

[00:18:42] Paul Roetzer: I actually started my career in public relations, so I did crisis communications management and communications strategy and planning and media relations and investor relations. I don't know that they have a PR team that did this, but this is a PR move. They're trying to control the messaging a little bit.

[00:19:04] Paul Roetzer: I'm sure they're getting crushed right now on the negative downsides of this tool and what is going on, and they needed to try and take control of the narrative. So this was a very strategic post that laid out why the problem is so challenging, what they're doing about it, and gave a little background into how it all works that I haven't seen them share before.

[00:19:29] Paul Roetzer: So I looked at it from the perspective of a communications strategy: they're trying to control the narrative here a little bit and let people know they're aware of these things. The reason I think it's going to be messy is they address how involved humans are in the ongoing review and tuning of these models.

[00:19:48] Paul Roetzer: This immediately becomes the issue we then face related to bias. What they're basically saying is, right now there are a lot of human decisions going into what this thing will show and not show, what it will answer and not answer, and how it will answer it. And anytime you do that, which is all the time in AI, you're injecting bias. No matter how objective you try to be, there are going to be, you know, different sides in society and government and business that want you to do it a different way.

[00:20:21] Paul Roetzer: And so there's no simple way to do that. Their way of doing it appears to be: we're going to take more public input. Great, good luck. Who do you listen to? Once you open it up to the public, again, both sides are going to have opinions about how this thing should work and what should be allowed in it.

[00:20:44] Paul Roetzer: So there's no easy answer there. Then the one that worries me, and it obviously worries them as well, is they state that there's going to be basically a ChatGPT system that has far fewer restrictions and less tuning. What they're saying is, rather than us trying to create a single system that everyone uses, we're going to actually remove a lot of the guardrails from this thing.

[00:21:09] Paul Roetzer: We're going to let it go crazy if you want your version of it to go crazy. So we will individually get to choose what version of ChatGPT we see. You can almost picture slider scales, you know, from a political perspective: where do you want it on the spectrum? You want it far right, far left, moderate, whatever.

[00:21:30] Paul Roetzer: That's what I'm envisioning when I read this: they're going to put this power back in our hands. And the quote from them: "This will mean allowing system outputs that other people, ourselves included, may strongly disagree with. Striking the right balance here will be challenging. Taking customization to the extreme would risk enabling malicious uses of our technology and syco..." what is that word?

[00:21:51] Paul Roetzer: Sycophantic. "...and sycophantic AIs that mindlessly amplify people's existing beliefs." Do you know what that word means, sycophantic? No.

[00:22:00] Mike Kaput: It's a good one. We're going to have to ask ChatGPT what that means after this.

[00:22:02] Paul Roetzer: So basically, our systems can go nuts and we're going to allow it to happen. And when I read that, that's when I was like, oh my gosh.

[00:22:11] Paul Roetzer: What I said in my LinkedIn post is there's no quick fix here. OpenAI is working on a more responsible AI system, although whether or not they achieve that will be highly subjective. So whatever they say is more responsible, you and I may completely disagree with and be like, how in the world are you allowing an AI into the public that can do these things?

[00:22:30] Paul Roetzer: And their response appears to be, well, you can control your version of it so it doesn't do that. So it's going to get so messy, and the government's going to step in here at some point. I originally thought it was going to take a few years, but I think they're going to have to move way faster.

[00:22:49] Paul Roetzer: I wouldn't be surprised if by the middle to end of 2023 we're seeing, you know, at least in the US, committees formed to analyze the impact of language models on society, and the government, like, trying to put some sort of guardrails in place. Right now it's going to live within the domain of these tech companies.

[00:23:08] Paul Roetzer: And we're trusting that they're going to figure this out for us. And this is too powerful of a technology. It's going to become a utility; it's going to be like electricity. It's just going to be everywhere, within everything we use. And to think that like five tech companies based in Silicon Valley are going to get to determine this: that's why you have this open movement, like Stability AI and others, where they're just like, just give it to the people.

[00:23:31] Paul Roetzer: Let them figure it out. And I don't know what the right answer is. That's why I think it's messy: there is no clear right answer here of how to do this.

[00:23:40] Mike Kaput: So it sounds like if you're a business thinking about using this technology, we're very quickly going to get into a world where the responsibility is on you for figuring out how the outputs appear, if you're using it in your own models and your own customer-facing activities.

[00:23:59] Paul Roetzer: Yeah, I don't see another path, honestly. I think that's how they're going to build it: it's going to be the individual user or organization that determines the version of this stuff that they have. I don't know. I mean, you could literally sit around for a week and ponder where this goes, and I haven't done that yet, but I think it's going to be moving very rapidly this year.

[00:24:29] Paul Roetzer: There's going to be lots of conversation, and I don't know if you'd call it progress, but it's going to move very quickly.

[00:24:37] Mike Kaput: So speaking of moving quickly, you also mentioned in your LinkedIn post that most people are unaware that OpenAI's mission is to achieve something called artificial general intelligence, or AGI, and this concept is mentioned, I believe, in sentence one of their post.

[00:24:57] Mike Kaput: Can you talk a bit more about that, the importance of that, what that means? Because I think people just think, in some cases, that OpenAI is creating chatbots, and that's not what's going on.

[00:25:07] Paul Roetzer: Yeah, I think it's very safe to assume that most business leaders and marketers and writers have no idea why these companies actually exist, what AGI is, and why they're working towards it. And the people that do know think about this stuff differently and look at it through a very different lens.

[00:25:32] Paul Roetzer: So yeah, AGI. Basically, right now what we have are AI agents or systems that are built to do very specific tasks. So like, you know, write your blog post, or come up with a subject line for an email, or manage your ad spend, or write your social media shares, or whatever. It's trained to do a specific thing.

[00:25:52] Paul Roetzer: The goal is to build general agents that are like humans, that can do multiple things. We can jump from one thing to the next; we have the ability to do multiple tasks with multiple goals throughout our day. And that's what these research labs are trying to build as general intelligence.

[00:26:13] Paul Roetzer: Machines that can do human-like things, a multitude of them, throughout the day, in different environments, digital and offline. And the idea is that if we can build this general intelligence, then we can solve all the other really big problems. So OpenAI's mission, the reason they were created, was to build general intelligence and to ensure that it benefits humanity.

[00:26:35] Paul Roetzer: The reason that Google DeepMind was created was to build general intelligence. Many of the leading researchers today are working on or pursuing AGI. There are way bigger conversations around what the implications of that are and what the negative effects might be. The general sense is that the people who are working on this believe it will be a net positive for humanity and society if we can get there.

[00:27:05] Paul Roetzer: That from there we can build a world of abundance, because we can solve all these big problems like climate change and hunger and disease. That's what they're thinking. And language is a path to that. So, again, not all AI researchers, but many see language understanding and generation as a path to achieving human-level intelligence: that we have to first understand language.

[00:27:30] Paul Roetzer: Part of the reason we're able to do what we do as humans is because we have language. So that's part of the belief. There's another view; Yann LeCun at Meta/Facebook is of the belief that these agents need to perceive the world around them, and that they can actually learn a lot from being able to better perceive what's going on around them, and simulated worlds and things like that.

[00:27:51] Paul Roetzer: So again, there are different paths to how we get there, but generally, the reason OpenAI exists is not to write your blog posts and your emails and build third-party tools that allow you to do these things. It's to achieve general intelligence. And us as marketers and writers, playing around with these tools and writing our articles and social shares and ad copy?

[00:28:14] Paul Roetzer: We're just contributing to the training of a general intelligence agent. They don't care; we're a path to make money in the process, and they have to make some money. But OpenAI doesn't care about whether or not marketers get better tools for writing. It's not why they exist. It might be a revenue path to get them to AGI, but that's really what it is.

[00:28:37] Paul Roetzer: So yeah, and we'll talk more about this; there's a lot I've been working on in the AGI space and things I've been thinking about and planning to write, and maybe we'll talk about it more on the show. But the deeper we get into this, the more important it is that the average marketer and business leader realizes there's way more to the story of AI than fun, interesting tools

[00:29:01] Paul Roetzer: that 10x your output and efficiency. That's not why most of these tools are being created, I guess is the point.

[00:29:10] Mike Kaput: That's a really good point and it dovetails perfectly with the third topic we're going to discuss today. And this one's a big one. In a recent blog post this past weekend, Paul, you wrote about a powerful concept called World of Bits that could transform marketing and business.

[00:29:29] Mike Kaput: And I'm going to quote just a bit of what you wrote in that post to give our audience a sense of what we mean by this. So you wrote: based on a collection of public AI research papers related to a concept called World of Bits, and in light of recent events and milestones in the AI industry, including legendary AI researcher Andrej Karpathy announcing his return to OpenAI, it appears that the capabilities for AI systems to use a keyboard and mouse are being developed in major AI research labs right now.

[00:30:02] Mike Kaput: If AI develops these abilities at scale, the UX of every SaaS company will have to be reimagined, and it will have profound impacts on productivity, the economy, and human labor. Can you outline this idea a bit more for us and why it's so important?

[00:30:18] Paul Roetzer: Yeah, and let me give you the chronology of how this came to be for me.

[00:30:25] Paul Roetzer: So, on February 8th, Andrej Karpathy, who was a founding member of OpenAI and then went on to be the senior director of AI at Tesla, announced that he was returning to OpenAI, as you said. And my main reaction was: why? What is he going back there for? And so I had listened to an interview he did in October of last year with Lex Fridman, like a two-and-a-half-hour podcast.

[00:30:48] Paul Roetzer: And I remembered being impacted by that interview and taking notes on it and thinking, man, this guy's going to do something really interesting. He had left Tesla at that point, but hadn't announced what he was doing next. So I remember in October of 22 thinking I gotta follow this guy and see like what he does next.

[00:31:06] Paul Roetzer: And so I have alerts on Twitter for him, and I got the alert on February 8th when he announced this. And I'm like, oh, okay, what is going on here? So actually, on the flight to San Francisco last week, I re-listened to the interview with Karpathy and Fridman and started kind of connecting the dots of why he was going back.

[00:31:26] Paul Roetzer: And so in that interview, Fridman says: you briefly worked on a project called World of Bits, training a reinforcement learning system to take actions on the internet versus just consuming the internet, like we talked about. Do you think there's a future for that kind of system interacting with the internet to help the learning?

[00:31:45] Paul Roetzer: Karpathy says: yes, I think that's probably the final frontier for a lot of these models. So, as you mentioned, when I was at OpenAI, I was working on this project, World of Bits, and basically it was the idea of giving neural networks access to a keyboard and a mouse. And the idea is that basically you perceive the input of the screen pixels and the state of the computer is visualized, yada, yada.

[00:32:04] Paul Roetzer: It does these actions. So then he says: now, later on, to your question as to what I learned from that, it's interesting, because World of Bits was basically too early, I think, at OpenAI at the time. This is around 2015 or so. He then says: it is time to revisit that, and OpenAI is interested in this.

[00:32:25] Paul Roetzer: And then he talks about GPT as the initialization: it's pre-trained on all the text and understands what a booking is, if you think about airline bookings, and it understands what a submit is, and all these things. So now you go back and look at October, and it's like, oh, okay, he was obviously laying out the fact that that's what he was going to go back to OpenAI to do: work on World of Bits again, because he thought it was now possible.

[00:32:47] Paul Roetzer: And so that led me to kind of follow through. And then last Thursday, he tweeted: nice follow-up to our earlier OpenAI World of Bits work teaching AI to use keyboard and mouse; in my opinion, powerful to match AI APIs to those of humans, because the world is built for humans, da, da, da, da. And then he links to a February 2022 paper called "A Data-Driven Approach for Learning to Control Computers," in which they highlight that it would be useful for machines to use computers as humans do

[00:33:18] Paul Roetzer: so they can aid us in everyday tasks. Humans use digital devices for billions of hours every day. If we can develop agents that assist with even a tiny fraction of these tasks, we hope to enter a virtuous cycle of agent assistance. And there it was. It was like, okay, he's going back to work on actions, to give machines actions.

[00:33:39] Paul Roetzer: And then there was one other thing I'll say, and then we can kind of get into the convo here. There was a comment made at the Gen AI conference that sort of caught my attention, and that was, let me pull this up, from Aidan Gomez, the CEO of Cohere. So again, I'm on the flight, I'm listening to this stuff.

[00:34:00] Paul Roetzer: I'm like, oh, okay, World of Bits, action, whatever. So they were talking to Aidan Gomez, and he said: after dialogue is tool use, taking action in the real world. It's the next unlock: APIs, operating web browsers, whatever. And then he said: action is where we're all sprinting towards. And I was like, okay, this confirms it for me for sure. Not only is Karpathy going back to OpenAI to work on computer actions, to be able to do what humans do, but all of them are.

[00:34:28] Paul Roetzer: And then we get Adept AI and Inflection, and it just started connecting dots, like, okay, this is obviously where this is all going. And that led to, like, Saturday morning, I just had to put this post together. And then I sent it to you, and here we are talking.

[00:34:45] Mike Kaput: So we're talking about the possibility, relatively soon, that AI assistants will be able to do things like, let's use that flight example very briefly, where you can tell it: go book me a flight to San Francisco.

[00:34:58] Mike Kaput: Make sure it is not a red-eye. Make sure it's within this price range, and these three airlines are the ones I would consider. And it will actually go perform those types of actions on the internet.

[00:35:09] Paul Roetzer: Yeah, because it knows how to click buttons and fill out forms and slide scales and change filters. And that's, like, so we mentioned Adept.

[00:35:18] Paul Roetzer: This is a company that started in April of 2022. They claim they're a research and product lab building general intelligence, there's our general intelligence again, by enabling people and computers to work together creatively. And then in a TechCrunch article, they're quoted as saying: we're training a neural network to use every software tool in the world, building on the vast amount of existing capabilities that people have already created.

[00:35:43] Paul Roetzer: With Adept, you'll be able to focus on the work you enjoy and ask our system to take care of the other tasks. We expect the collaborator to be a good student and highly coachable, becoming more helpful and aligned with every human interaction. Now, interestingly, two of the three co-founders of Adept were co-authors with Aidan Gomez of the "Attention Is All You Need" paper in 2017 that created the transformer that is the basis of GPT and all the generative AI we're seeing today.

[00:36:16] Paul Roetzer: So, as you and I have talked about before, it's so fascinating. Once you know who belongs in the inner circle of this modern AI movement, you can actually just read the things that they've written and what they're currently talking about, and you can see where all of this is going. The co-founders here are from DeepMind, Google, OpenAI, and it's all happened in the last six years or so.

[00:36:44] Paul Roetzer: So if you go back to 2016 and start reading the research papers, you can actually see the names of the co-authors and then follow on: well, what companies have they gone on to found? And that's where it starts to happen. The other one, Inflection AI, that we mentioned: that one was started by Mustafa Suleyman, who was actually one of the co-founders of Google DeepMind, and Reid Hoffman, who was the COO at PayPal and then went on to found LinkedIn.

[00:37:11] Paul Roetzer: And Mustafa and Reid were partners at Greylock, which is a venture capital firm, and they created Inflection. Now, if you go to Inflection's site, it's just a page and you can click info; there's nothing about it. And it's been around since the middle of last year, I think. But theirs says it's an AI-first company redefining human-computer interaction.

[00:37:34] Paul Roetzer: Throughout the history of computing, humans have had to learn to speak the language of machines. In the new paradigm, machines will understand our language. Recent advances in AI promise to fundamentally redefine human-machine interaction. We will soon have the ability to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people.

[00:37:58] Paul Roetzer: Over time, these new language capabilities will revolutionize what it means to have a digital experience. So again, go back and look at who the authors of all the major papers were over the last five years, go look at what they're working on, and it all seems to actually be moving towards this action-based interface, where we will be able to tell the machine what to do.

[00:38:18] Paul Roetzer: The example I gave in the blog post was: if I wanted to go send an email right now in HubSpot, which is our CRM and email system, there are a minimum of 21 steps, 21 clicks I have to take. From clicking Marketing in the main nav, to Email, to Create Email, to email type, to template, to writing the copy, all of these things.

[00:38:39] Paul Roetzer: And what if instead I just said, you know, draft me an email for our high-engagement list about MAICON, send the email with dates and location, and I just explain it to it? And again, envision the AI agent is now able to go into HubSpot as I'm talking, and click this, click that, and it's just happening as I'm telling it what to do.

[00:38:57] Paul Roetzer: So, as for your experience as the human: you're just going to be able to speak or type what you want it to do, and it can go and do it. And then, does HubSpot build this, or is this built by Adept and we get, you know, a SaaS license to Adept? It seems like what Inflection and Adept and others are doing is building that general agent that you'll be able to just get, and it can go act in any environment.

[00:39:22] Paul Roetzer: So whether I'm booking flights, or making a reservation for a hotel or dinner, or I want to send an email in my marketing platform. And again, based on the velocity of the stuff you're seeing, because even this morning, I think I told you about this, Mike: Suleyman, who doesn't tweet very often, let me find this tweet, he tweeted this morning, or actually at 11:13 last night.

[00:39:49] Paul Roetzer: "The last wave of technology..." So again, Suleyman is the founder of Inflection AI, which is kind of in stealth mode. The tweet: "The last wave of technology reduced the cost of broadcasting information to almost zero. The coming wave of AI will have a similar effect on your ability to take actions in the virtual world."

[00:40:07] Paul Roetzer: So, as we've said before... okay, the last time he had tweeted was January 2nd, so he went a month and a half without saying anything, and then he vaguely tweets this thing. And then the follow-on tweet was: "It's hard to fully express how fast things are moving in AI right now. The pace is truly breathtaking."

[00:40:27] Paul Roetzer: As a dude who was at DeepMind, he's seen everything; he worked on the most innovative AI systems over the last eight years. And so if he's tweeting things like this, something is coming. This is what we've said before: Sam Altman does it, Greg Brockman does it. All the AI people at these major labs tend to do this kind of thing, where they tweet something out of nowhere and it tends to be just big-picture, but it's usually followed by some mind-bending action that they take.

[00:40:55] Paul Roetzer: And so I would say, based on this and based on other people talking about it, I think that at some point in 2023 we're going to start seeing the first iterations of AI being able to take actions based on commands, the way you would tell an associate or an intern or an assistant to go do it for you.

[00:41:15] Mike Kaput: So that's breathtaking, for one.

[00:41:18] Mike Kaput: Two, as we kind of wrap this topic up, I want to ask about our SaaS companies and investors: is anyone even thinking about this? Are they ready for it?

[00:41:31] Paul Roetzer: So, going back: you know, we started Marketing AI Institute in 2016, and I thought AI would be everywhere by 2020. And there were a lot of SaaS companies, I mean friends of mine, people we talked to, companies we work with, partners of ours, and I always just assumed that they were further along than they were letting on publicly.

[00:41:51] Paul Roetzer: And I was sort of giving them the benefit of the doubt. Like, these people have gotta be figuring this stuff out; they're just not talking about it. And I was wrong. They generally weren't. So ChatGPT forced these SaaS companies to move way faster on AI, forced venture capitalists to move faster and to develop points of view and do all these things.

[00:42:12] Paul Roetzer: And my guess is: no, they're not ready for this. They're maybe not even thinking about this. That's why you have these major, you know, AI researchers working on these bigger problems; they're probably at the edge of the frontier right now doing this stuff. They're out there figuring out the next iterations, while most SaaS companies are still trying to figure out what to do with ChatGPT.

[00:42:37] Paul Roetzer: And that, to me, is the opportunity. It's a problem, but it's also an opportunity. I think the key, and we've talked about this before with SaaS companies and venture capitalists or investors, period, and even, you know, if you're listening and you're working at these kinds of companies, is: well, where should I be focusing my career path?

[00:42:59] Paul Roetzer: I think you always have to look a little bit ahead, and it's becoming more and more important now to be looking six to twelve months out and saying, well, where is this moving to? Like, am I going to bet my career and take stock options at a company that's working on iterations of language models? Is that even going to matter in 12 months?

[00:43:17] Paul Roetzer: Am I working at a company that is like a point solution for language models, that does a single thing that maybe GPT-4 is just going to do for everybody for free? Is the company you're at today going to be obsoleted, I guess is the point of what I'm saying, by what's coming? It's a really hard thing, because we don't know; we're just trying to connect the dots here and put ideas out into the open about what we think might be happening.

[00:43:44] Paul Roetzer: Hmm. But it's going to be more and more important for individuals and companies to be doing the same thing, to be looking out and saying, okay, we're investing a lot of resources in this element of our product roadmap. Is it even going to matter in six months? So all these cycles are going into building this, all these resources.

[00:44:02] Paul Roetzer: Is it obvious that it's going to be obsoleted, like when you just look out and see what's coming? And so that's where I think the opportunity is. But again, it's a, it's a big threat. And a lot of companies that I have seen and talked to are not thinking that way, and they really need to figure it out very quickly how to adapt their team.

[00:44:20] Paul Roetzer: So they are constantly thinking about what could happen with AI in the near future that would obsolete what we're doing right now. And then that carries over to teams. If you're a CMO or a VP of marketing and you're thinking about your hiring plans for the year, your marketing strategy for the next, you know, 12 months, or your goals, the technology that's coming could have a major impact on whether or not you're going to make the right decisions.

[00:44:47] Paul Roetzer: Or if, in the near term, those decisions are going to seem kind of archaic.

[00:44:53] Mike Kaput: Wow. Yeah, I love that we do this podcast on Mondays now; I'm just pumped for the week. Like, what's going to come? What's going to happen this week? Paul, as always, thank you for the insights, for answering all of our questions, and really connecting the dots for us. This is awesome.

[00:45:07] Paul Roetzer: Yeah, we appreciate everybody listening. Again, you know, like, subscribe, all that good stuff, but just keep exploring. That's the cool thing for us: in our Slack community, more and more we're seeing community members who are out there doing research and sharing it with the group. It's not just us sharing the information out.

[00:45:27] Paul Roetzer: There's now a community emerging of other people who are reading the research papers and testing the technologies. So yeah, we just keep encouraging people to keep exploring and keep sharing, and hopefully we'll all figure out together where this is all heading, and try to get there first.

[00:45:45] Paul Roetzer: So yeah, thanks for listening and we will talk to you again same day next week. Thanks, Mike. Thanks Paul.

[00:45:54] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:46:16] Paul Roetzer: Until next time, stay curious and explore ai.
