
48 Min Read

[The Marketing AI Show Episode 75]: Sam Altman Returns to OpenAI, Amazon Introduces Q, and Google’s AI Content Problem


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

A new week, a new update from OpenAI…

Paul Roetzer and Mike Kaput cover everything from OpenAI's official CEO announcement to Amazon's introduction of Q, discuss the potential risks associated with generative AI, and touch upon Google's recent decision to postpone their AI software release. With so much happening in the AI landscape, we're here to keep you informed. Tune in this week to stay up-to-date with the latest news!

Listen or watch below—and see below for show notes and the transcript.

This episode is brought to you by our sponsors:

Algomarketing connects ambitious B2B enterprises to the competitive advantages of the autonomous. Their workforce solutions work to unlock the power of algorithmic marketing through innovation, big data, and optimal tech stack performance. Visit Algomarketing.com/aipod and find out how Algomarketing can help you deliver deeper insights, faster executions, and streamlined operations through the power of AI.

Use BrandOps data to drive unique AI content based on what works in your industry. Many marketers use ChatGPT to create marketing content, but that's just the beginning. BrandOps offers complete views of brand marketing performance across channels. Now you can bring BrandOps data into ChatGPT to answer your toughest marketing questions.

Listen Now

Watch the Video

Timestamps

00:02:20 — OpenAI formally announces Sam Altman returning as company CEO

00:13:47 — Amazon introduces Q

00:23:19 — Mollick warns on the danger presented by AI-generated content

00:33:34 — Microsoft research indicates the power of Generative AI

00:40:54 — Google postpones big AI launch as OpenAI zooms ahead

00:43:29 — Ego, Fear, and Money: How the A.I. Fuse Was Lit

00:50:10 — More than half of generative AI adopters use unapproved tools at work

00:54:17 — Sports Illustrated Published Articles by Fake, AI-Generated Writers

00:58:07 — Apple launches personalized voice

01:01:03 — A fun experiment with ChatGPT / DALL-E 3

Summary

Sam Altman returns as CEO, OpenAI has a new initial board

OpenAI formally announced that Sam Altman is returning as CEO of the company, and also outlined some other important personnel updates.

Mira Murati is returning as CTO, and the board now consists of Bret Taylor (previously the co-CEO at Salesforce), Larry Summers (a former Treasury secretary), and Adam D'Angelo, the CEO of Quora and a member of the previous board.

In a note to the company published on its website, Altman shared some words about his return. He mentioned that he loves and respects Ilya Sutskever, a leader of the coup against him, and harbors no ill will towards him.

Altman did not mention why he was fired. He did say that the company has three immediate priorities:

“Advancing our research plan and further investing in our full-stack safety efforts…”

“Continuing to improve and deploy our products and serve our customers…”

“...building out a board of diverse perspectives, improving our governance structure and overseeing an independent review of recent events.”

OpenAI is also adding Microsoft to its board in a “non-voting observer seat” according to The Verge. (So far, Microsoft has not said who will fill the seat.)

In an interview with The Verge, Altman repeatedly avoided answering questions about why he was fired, saying “The board is going to do an independent review here. I very much welcome that. I don’t have much else to say now, but I’m looking forward to learning more.”

He also said he doesn’t feel ready to talk yet about misunderstandings between him and the board that fired him.

When asked about Q*, OpenAI’s rumored breakthrough in AI reasoning we covered last week, he said “No particular comment on that unfortunate leak.”

Amazon Introduces Q, an A.I. Chatbot for Companies

Amazon has released its own AI assistant, named Q. Q is designed to help employees with tasks at work like summarizing documents or answering questions using company data. That makes Q a competitor with tools like Microsoft Copilot and ChatGPT Enterprise.

Q is built to address enterprise concerns around security and privacy. For example, it can be set up to allow or restrict access to certain types of data within a company based on an employee’s role.
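
To make that concrete, here is a minimal sketch of the role-based restriction idea. It illustrates the concept only; the role names, document categories, and filter function are hypothetical, not Amazon's actual Q configuration or API.

```python
# Hypothetical sketch of role-based restriction: an enterprise assistant
# like Q only retrieves documents an employee's role is allowed to see.
# Role names and categories are illustrative, not Amazon's actual API.
ROLE_PERMISSIONS = {
    "marketing": {"campaigns", "brand_guidelines"},
    "finance": {"campaigns", "payroll", "forecasts"},
}

def retrievable_docs(role: str, docs: list[dict]) -> list[dict]:
    """Filter the corpus before anything reaches the model's context."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [d for d in docs if d["category"] in allowed]

# Example: a marketer asking about payroll gets nothing back.
docs = [{"category": "payroll", "text": "..."}]
print(retrievable_docs("marketing", docs))  # []
```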

The tool is built on Amazon Bedrock, which uses a variety of models—not just one—including Amazon’s Titan foundation model, as well as models from Anthropic and Meta. Q will cost $20 per user per month.
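
Developers reach those underlying models through Bedrock's runtime API, where each model is addressed by ID. Below is a minimal sketch using boto3; the model IDs and request/response shapes follow Bedrock's public documentation at the time of this episode and may change.

```python
import json

import boto3

# Minimal sketch of invoking two different foundation models through
# Amazon Bedrock's runtime API.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_claude(prompt: str) -> str:
    # Anthropic's Claude on Bedrock expects the Human/Assistant format.
    body = json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })
    resp = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(resp["body"].read())["completion"]

def ask_titan(prompt: str) -> str:
    # Amazon's Titan text model uses a different request schema.
    body = json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 300},
    })
    resp = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(resp["body"].read())["results"][0]["outputText"]
```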

However, Q is already becoming a bit controversial, according to leaked documents obtained by Platformer (platformer.news). The documents show that some employees at Amazon “are sounding alarms about accuracy and privacy issues.”

Synthetic content in Google search results, and the potential impact

AI expert Ethan Mollick is one of the people posting about a new danger caused by AI-generated content. Last week, Mollick posted on X the following:

“It isn't just AI-generated text that is starting to bleed over into search results. The main image if you do a Google search for Hawaiian singer Israel Kamakawiwoʻole (whose version of Somewhere Over the Rainbow you have probably heard) is a Midjourney creation right from Reddit.”

What he’s describing is the fact that your main Google image results are increasingly being populated with AI-generated images, many of them hyper-realistic.

Says Mollick: “Seriously, don't trust anything you see online anymore. Faking stuff is trivial. You cannot tell the difference. There are no watermarks, and watermarks can be defeated easily. This genie is not going back in the bottle.”

This has major implications across a range of businesses and use cases:

“Doing sentiment analysis on a large corpus like Reddit or Twitter to track social or political changes? Your data is forever corrupted. Assessing documents for a lawsuit assuming a human wrote them in some way? No longer true. Seeing which opinions are popular online? Mostly AI.”
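
To see why the sentiment-analysis case breaks, consider a pipeline that averages sentiment over a scraped corpus. One blunt mitigation, sketched below, is to restrict analysis to posts created before generative tools flooded the channel. The cutoff date and keyword-based scorer are illustrative placeholders, not a recommended production approach.

```python
from datetime import datetime

# Toy sentiment scorer; a real pipeline would use a proper model.
POSITIVE = {"great", "love", "amazing"}
NEGATIVE = {"terrible", "hate", "awful"}

def sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Illustrative "pre-generative-AI" boundary, echoing the mid-2023
# dividing line discussed later in the episode.
CUTOFF = datetime(2023, 6, 1)

def average_sentiment(posts: list[dict]) -> float:
    # Only score posts old enough to be presumed human-written.
    trusted = [p for p in posts if p["created_at"] < CUTOFF]
    if not trusted:
        return float("nan")
    return sum(sentiment(p["text"]) for p in trusted) / len(trusted)
```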

Links Referenced in the Show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: how much human do we really need in the loop once we do that?

[00:00:03] Paul Roetzer: And the answer is, you can't get the human out of the loop. The human still is responsible for the final output or decision that's made.

[00:00:10] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:30] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:39] Paul Roetzer: Welcome to episode 75 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We are both traveling today, but both able to record before we travel. So today is Monday, December 4th. It is 9:10 AM. I always timestamp this in case something [00:01:00] crazy happens right after we record this.

[00:01:02] Paul Roetzer: so we are back for our regular edition with our main topics and rapid fire items. First up, I want to recognize our sponsors. Today's episode is brought to us by Algo Marketing. Algo Marketing connects ambitious B2B enterprises to the competitive advantages of the autonomous. Discover our workforce solutions and unlock the power of algorithmic marketing through innovation, big data, and optimal tech stack performance.

[00:01:27] Paul Roetzer: With Algo, visit algomarketing.com/AIpod and find out how Algo Marketing can help you deliver deeper insights, faster executions, and streamlined operations through the power of AI. And we also have BrandOps. So many marketers use ChatGPT to create marketing content, but that's just the beginning.

[00:01:49] Paul Roetzer: When we talked with BrandOps team, we were impressed by their complete views of brand marketing performance across channels. Now you can bring BrandOps data into ChatGPT to answer your toughest [00:02:00] marketing questions. Use BrandOps data to drive unique AI content based on what works in your industry.

[00:02:07] Paul Roetzer: Visit brandops.io/marketingAIshow to learn more and see BrandOps in action. So thanks to Algo Marketing and BrandOps for sponsoring today's episode. Mike, it's all yours.

00:02:20 — OpenAI formally announces Sam Altman returning as company CEO

[00:02:20] Mike Kaput: Alright, Paul. The saga continues at OpenAI. OpenAI has formally announced that Sam Altman is returning as CEO of the company. That deal appears to be done.

[00:02:32] Mike Kaput: That was previously announced but now they have, you know, signed on the dotted line to bring Altman back. And they've also outlined some other important personnel updates. Mira Murati is returning as CTO and the board now consists of Bret Taylor, who was previously the co-CEO at Salesforce, Larry Summers, a former Treasury secretary, and Adam D'Angelo, who is the CEO of Quora and was a previous [00:03:00] member.

[00:03:00] Mike Kaput: of the board that tried to oust Altman. Now, in a note that was published on the company's website, Altman shared some words about his return. He mentioned that he loves and respects Ilya Sutskever, who led the coup against him. and harbors no ill will towards him. He mentioned that while Ilya no longer serves on the board, Altman and the company are discussing how, if at all, he can continue his work at OpenAI.

[00:03:29] Mike Kaput: Now, Altman did not mention why he was fired, but he did say that the company has three immediate priorities. The first is advancing our research plan and further investing in our full stack safety efforts. The second is continuing to improve and deploy our products and serve our customers. And the third is building out a board of diverse perspectives, improving our governance structure and overseeing an independent review of recent events.

[00:03:56] Mike Kaput: Also notable, OpenAI is adding [00:04:00] Microsoft to its board in what they're calling a non voting observer seat, according to The Verge. So far, Microsoft has not said who will fill the seat. In an interview with The Verge, Altman repeatedly avoided answering questions about why exactly he was fired. He said, quote, The board is going to do an independent review here.

[00:04:21] Mike Kaput: I very much welcome that. I don't have much else to say now, but I'm looking forward to learning more. He also said he doesn't feel ready to talk yet about what he categorized as misunderstandings between him and the board that fired him. And last but certainly not least, when he was asked about Q*, which is OpenAI's rumored breakthrough in AI reasoning, which we covered heavily last week, he said, quote, no particular comment on that unfortunate leak.

[00:04:54] Mike Kaput: Paul, a lot going on here. First up. Where do we stand now at OpenAI? [00:05:00] What's going to remain the same? What's going to be different moving forward? What do you predict we're going to be seeing?

[00:05:07] Paul Roetzer: Well, if anyone else is confused, like join the club, I swear, like I had to go back and look at the last two episodes of the podcast and be like, did we talk about this already?

[00:05:14] Paul Roetzer: And then it's like, oh wait, no, this is legitimately like last Wednesday, they officially put it up on their blog and they had statements from Altman. It's like, man, I feel like we're just living in the Twilight Zone right now, just on this repeating loop. So I think first thing that jumped out to me is Altman acknowledging Q* is, is a thing.

[00:05:33] Paul Roetzer: So that was obviously, I think an episode or two ago, we talked about that had come out. I think it was The Information maybe that had that, that story. So, you know, that, that is a thing. If you, if you didn't listen to last week's episode, go back and listen to that. We went pretty deep on Q* and some theories of what that could actually be.

[00:05:54] Paul Roetzer: so you can check that out. The board, you know, I think what's relevant here again, like, [00:06:00] this is a lot of just inside baseball, you know, I don't think the average marketer business leader thinks about the makeup of boards of tech companies they work with very often. But I think in this case, it's, it's really relevant to just have like a baseline understanding of it.

[00:06:16] Paul Roetzer: so first, obviously it needs more diversity, you know, having three men leading this is not where this is going to end. I believe I saw their goal was nine. I think the board is going to have nine people on it. So they obviously need to focus on diversity on the board. The one, addition so far that I found most intriguing is the addition of Larry Summers that you had mentioned.

[00:06:40] Paul Roetzer: So he was the Secretary of the Treasury under Clinton from '99 to 2001. He was the Director of the National Economic Council from 2009 to 2010 under Obama, and then he also served as the President of Harvard University from 2001 to 2006. I went back and kind of looked at some of his [00:07:00] previous conversations and interviews around AI.

[00:07:05] Paul Roetzer: he gave one last week to Bloomberg. I believe he's a contributor for Bloomberg. And he said that, OpenAI needs to, quote, needs to be a corporation with a conscience. We need to be always thinking about the multiple stakeholders in the development of this technology. And then he went on to cite one of his peers.

[00:07:22] Paul Roetzer: In terms of what he means by conscience, he said it is the knowledge that someone is watching. Kind of a weird definition of conscience, but, um, so he's, you know, he's going to push for, I don't know, I'm not, honestly I'm not even sure what that means within the current structure. He does reference in that interview the nonprofit structure overseeing the for-profit structure.

[00:07:45] Paul Roetzer: So I'm not sure which corporation within that structure is functioning with the conscience, but it's, it's what he said. He also said that OpenAI has to be prepared to cooperate with key government officials on regulatory [00:08:00] issues, on national security issues, and on development of technology issues. He also, I went back and looked at, something he put up April 7th of this year.

[00:08:11] Paul Roetzer: So that was just three weeks after GPT-4 came out. So, you know, a few months after ChatGPT overall, but, just shortly after GPT-4 was introduced and in an interview on Bloomberg TV, he said, more and more, I think ChatGPT is coming for the cognitive class. It's going to replace what doctors do, hearing symptoms and making diagnoses, before it changes what nurses do, helping patients get up and handle themselves in the hospital.

[00:08:39] Paul Roetzer: It's a great opportunity to level a lot of playing fields. So the reason I think that this is significant is You know, I'd have to look up what episode it was, but we've had a number of episodes where we talked about, my belief that AI is going to dramatically affect knowledge work. I do think it's going to be very disruptive in the near term.

[00:08:58] Paul Roetzer: I think over time, there's a [00:09:00] chance it levels out and more jobs are created through AI, but I do believe that in the next few years, we're going to see some significant disruption to knowledge work. So the idea that a leading economist is now on the board of one of the most influential AI companies. I think matters to all of us.

[00:09:17] Paul Roetzer: I think we need more economists thinking deeply about the impact of AI and being more proactive in preparing for it. I don't think OpenAI has been that to date. And I don't, I haven't seen it from Google. I haven't seen it from any of these companies. Microsoft, we'll talk a little bit about later on.

[00:09:33] Paul Roetzer: Salesforce has done some studies, but to date I think it's mostly been research. and there's been a few reports released from key economists, but I haven't, I've yet to see one that I feel confidently projects out what the next few years looks like. And so if, if a leading economist is now on the board at OpenAI, my hope is that OpenAI is then more proactive.

[00:09:55] Paul Roetzer: in addressing the potential impact on knowledge work. so that was [00:10:00] something that jumped out to me. And then the last, you addressed what happens to Ilya. So again, if you've been following along, you've heard that name a lot. If you haven't, he is one of the co founders of OpenAI. He appears to have either led or certainly gone along with the coup to get Sam Altman out because of his concerns around safety, it appears.

[00:10:18] Paul Roetzer: He is now off the board. He may stay with OpenAI. It was a very weird quote from Sam that he loves him and respects him, but that's, that certainly doesn't sound very encouraging that, you know, there's going to be a future at OpenAI. We'll see. A couple of logical things to consider here is again, Dario Amodei, who we've talked about numerous times, the CEO of Anthropic.

[00:10:43] Paul Roetzer: He left OpenAI in 2021 due to safety concerns, which seemed to be very aligned with what Ilya was talking about here. took 10 percent of the OpenAI team with him in 2021. So, you know, it's like, well, does Ilya go work with Anthropic and pursue the safety [00:11:00] mission? The one that's kind of like, just out there, and again, I'm just kind of like Throwing out stuff to think about.

[00:11:05] Paul Roetzer: Geoff Hinton, Ilya was a Geoff Hinton disciple. So, Ilya came from, if you read, there's a story we're going to talk about in a minute from the New York Times. Geoff Hinton led a team that made a breakthrough in computer vision back in 2011, 2012. Ilya was on that team. And then Geoff, Ilya, and one other PhD student went to work for Google when Geoff Hinton sold his company to Google in 2012 for $44 million.

[00:11:31] Paul Roetzer: So Ilya and Geoff Hinton are very tight and obviously came up together. So Geoff left Google because of his concerns around safety. So I don't know, there's just, again, like we've talked in the last couple episodes about how interconnected all of these people are. And so if for some reason Ilya, you know, doesn't find a future at OpenAI and they can't see past what happened.

[00:11:56] Paul Roetzer: just a couple of places where I, you know, I think that, [00:12:00] you know, it makes sense that he's going to be having some conversations and just, there's a, there's a collection of people who are very focused on AI safety moving forward. and they tend to find each other, I guess is what I'm saying.

[00:12:12] Mike Kaput: How significant is Microsoft's increased involvement on this board?

[00:12:17] Mike Kaput: It seems like this timing for the whole coup and chaos at OpenAI could not have come at a worse time for them with their rollout of Copilot and expanding this partnership. Are they trying to play a bigger role here, to be the adult in the room?

[00:12:35] Paul Roetzer: I don't think there was any way they weren't going to have some involvement on the board after everything that happened, you know, somehow to Satya's credit, Microsoft's credit overall, how they came through this with almost no negative PR at all.

[00:12:49] Paul Roetzer: Like they make this $10 billion bet on a company that, you know, goes crazy for 72 hours and nobody ever really questioned Microsoft, you know, I think they were [00:13:00] proactive. They stepped out and said, we're supporting Sam no matter what. And I don't know, Microsoft played it extremely well. I don't, like I said, I don't remember seeing a single negative headline about Microsoft.

[00:13:12] Paul Roetzer: So I think that this is a major investment for them. Yes, it's critical to what they're doing. And I don't think there was any chance they were not going to have a greater involvement and greater transparency into what was happening. You know, I think the fact that Satya got the call, like what, 20 minutes before Altman got fired, that that can't happen.

[00:13:33] Paul Roetzer: And honestly, it's just bad business overall. But in an environment like this, you can't have that. You can't have those kinds of surprises. So yeah, I mean, I would, I would have been more surprised if Microsoft didn't have some involvement in the board.

00:13:47 — Amazon introduces Q

[00:13:47] Mike Kaput: So next up, Amazon has actually now released its own AI assistant and it is named Q, the letter Q.

[00:13:55] Paul Roetzer: Not to be confused with Q* at OpenAI.

[00:13:58] Mike Kaput: Right. We need a [00:14:00] whole podcast episode on how some of these things get named because we need to get some marketers in the room. Q is not actually a consumer facing AI assistant. It's actually designed to help employees with tasks at work and in their jobs.

[00:14:15] Mike Kaput: Things like summarizing documents or answering questions using a company's data. So this makes Q much more a competitor with tools like Microsoft Copilot and ChatGPT Enterprise. Q, Amazon claims, is built to address enterprise concerns around security and privacy. For example, Amazon says it can be set up to allow or restrict access to certain types of data within a company based on your role, your permissions, your seniority, etc.

[00:14:46] Mike Kaput: Q can also access data that isn't on Amazon's servers if you want to give it access, including data from things like Slack and Gmail. Now, Q is actually built on Amazon Bedrock, which [00:15:00] uses a variety of models, not just one. And that includes Amazon's Titan foundation model, as well as some third party models from companies like Anthropic and Meta.

[00:15:11] Mike Kaput: Q is going to cost around $20 per user per month is what's being reported right now. However, Q is already becoming a little bit controversial. We have seen some leaked documents reported on that were obtained by a website called Platformer (platformer.news). The documents show that some employees at Amazon, quote, are sounding alarms about accuracy and privacy issues.

[00:15:38] Mike Kaput: They say, quote, Q is experiencing severe hallucinations and leaking confidential data, including, according to Platformer, the location of AWS data centers, internal discount programs, and unreleased features. So there's two pieces to this, Paul, I want to talk to you about. First up, can you kind of contextualize Amazon's [00:16:00] role in the AI landscape, your thoughts on how big a deal Q is, and then we can dive into that second piece here about some of these security features.

[00:16:10] Paul Roetzer: It's unlike, you know, a lot of other tech companies. Amazon wasn't asleep at the wheel with AI. I mean, Amazon's been an AI company for 15, 20 years, from the robotics in their warehouses, to predictive engines on Amazon predicting purchases, to AWS, the leading, you know, cloud provider. They're, they have AI infused into everything.

[00:16:34] Paul Roetzer: Generative AI wasn't really their domain. So again, like a lot of companies, once ChatGPT emerged last November, we started seeing a lot of these companies kind of racing to catch up. So the initial play from Amazon was this Bedrock, at least publicly facing, where they kind of did the Amazon play. They became the everything store for language models.

[00:16:57] Paul Roetzer: So this idea is you already trust [00:17:00] AWS, just like if you're a Google Cloud or Microsoft Azure, you trust them to host your corporate data, your confidential information. And so they, they built the ability to take these other models. So you mentioned there was Anthropic Claude, their own Amazon Titan, Stable Diffusion, Llama-2, and Jurassic are the ones that are in there right now, where you could go in and take any of these models and connect it to your data.

[00:17:24] Paul Roetzer: So that was their initial generative AI play. And now with the introduction of Q, they're basically trying to take advantage of the fact that you do trust them to have your data, and if they build these generative AI capabilities in, you're more likely to do this. So, you know, I think that the significance, you know, we've, we've talked with a lot of enterprises who are slow playing their decisions around generative AI because in part they are concerned around the security and privacy side of connecting their data and trusting unknown third party vendors to, to have access to [00:18:00] that data.

[00:18:01] Paul Roetzer: So it seems very logical that as we go into 2024, enterprises are going to look to Microsoft, Google, and Amazon as primary providers. Now you would think OpenAI would be in that conversation too, but I gotta think that there's some hesitancy right now, especially at bigger enterprises, to make a bet on OpenAI until everything sort of calms down.

[00:18:24] Paul Roetzer: Because if you're a CIO, or a CDO, or even a CMO, and you were watching what happened the last three weeks, and you, you were either already in with OpenAI, or you were planning on betting on OpenAI next year, and that was going to be your primary provider, like a ChatGPT Enterprise, you, you gotta like, question how good of a decision that would be right now.

[00:18:47] Paul Roetzer: And so I think what's going to happen is going into next year. You're going to have some companies that will just rely on their single vendor. Amazon, Google, Microsoft are the [00:19:00] logical ones. I think you're also going to see some enterprises that take a bit of a, a diverse approach to this, like spread their bets out a little bit, because we just don't know how it's going to play out.

[00:19:13] Paul Roetzer: and we'll talk a little bit more about like large language models, foundation models in a minute in another topic, but. You just don't know what they're going to be capable of six months from now. And so you're hesitant to make a big bet on, say, you know, a third party SaaS company that's wrapping their technology over a foundation model that isn't theirs because you don't know if the foundation models are going to be capable of doing things that you're going to be paying separate license fees for.

[00:19:39] Paul Roetzer: So Amazon Q is is interesting in that sense that it's going to give you these abilities, but you highlighted some of. the concerns that they claim is the whole reason you would use this service. So that, you know, they're saying it's security and privacy is part of the reason why, and that's what the New York Times article, you know, I think that you mentioned touches on.

[00:19:59] Paul Roetzer: And yet, [00:20:00] Within two days, you see stories that, yeah, these things still hallucinate just like other language models, like it's, it's still going to make stuff up and you can't rely on it.

[00:20:10] Mike Kaput: So maybe unpack that for us a little more because I think there's a misconception sometimes that just because you're layering on an LLM or conversational assistant over your real company data that somehow we've solved all these problems around hallucinations, around accuracy, and around how these tools operate.

[00:20:31] Paul Roetzer: Yeah, I think it's, it's just good to keep remembering that large language models and these generative AI capabilities and applications, they're, they don't function like traditional software, where you tell it to do X and it does X every time the same way. The output isn't, you know, always reliable.

[00:20:49] Paul Roetzer: So there's, I think there's going to be a natural assumption that, well, once we connect our data to it and it's looking at our knowledge base, and so if it's going to answer a [00:21:00] customer service question or an internal question about data, that because it's going and looking at our files in AWS and our data,

[00:21:08] Paul Roetzer: 100 percent of the time it's going to give us the correct answer. Or if we ask it to analyze our marketing analytics or run a CRM report for us, that it's going to be accurate a hundred percent of the time. That's a natural thing to assume. That is not how these things work. So, It's, I mean, I, we see it all the time.

[00:21:27] Paul Roetzer: You, you, you do a lot of talks. I do a lot of talks. You meet with these people after these talks, and these are the kinds of questions you get. It's like, oh, okay, so once we connect it to our data, they'll be accurate, and we can trust them. And, like, how much human do we really need in the loop once we do that?

[00:21:42] Paul Roetzer: And the answer is, you can't get the human out of the loop. You still have to have the people there. Like, we, I did a talk last week, for accounting firm leaders. And We were having these kinds of conversations where you're going to look at infusion of generative AI technology into the accounting industry.

[00:21:59] Paul Roetzer: And so you start [00:22:00] thinking about like filling out tax forms. It's a natural thing that AI is going to be able to do, but you can't remove the CPA from that equation. Like, they're still going to have to verify, they're still going to have to go through and do it. And so I think with Amazon Q, It's that kind of idea.

[00:22:17] Paul Roetzer: Like, yeah, it has all these interesting use cases. There's all these ways you could potentially apply. Like it talks about summarizing strategy docs, filling out internal support tickets, answering questions about company policy. It's going to be able to do all those things, but not with a hundred percent accuracy.

[00:22:34] Paul Roetzer: And so this goes back to what we've talked about so many times, that education and training are so essential. Because if you're a big enterprise, and right now, Amazon Q is just available in preview, but once it's rolled out for $20 a month or whatever it's going to cost, you have to teach your team that you can't just assume everything these things do.

[00:22:54] Paul Roetzer: is true and there still has to be human agency. The human still is [00:23:00] responsible for the final output or decision that's made. And so I think as we move into 2024 and we start seeing wider scale adoption of generative AI technologies, we have to, as companies, Make sure we're providing the proper education and training so that people know how to use these.

[00:23:16] Paul Roetzer: That it is not like traditional software.
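
Paul's "human in the loop" point maps to a simple integration pattern: model output goes into a review queue, and nothing ships until a person approves it. Here is a minimal sketch; `generate_draft` is a placeholder for any assistant call (Q, Copilot, ChatGPT Enterprise), not a real API.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Hypothetical holding area: nothing ships until a person approves it."""
    pending: list[str] = field(default_factory=list)

    def submit(self, draft: str) -> None:
        self.pending.append(draft)

def generate_draft(question: str) -> str:
    # Placeholder for a call to any generative assistant. The specific
    # API doesn't matter to the pattern.
    return f"DRAFT ANSWER for: {question}"

def answer_with_human_in_loop(question: str, queue: ReviewQueue) -> None:
    draft = generate_draft(question)
    # The model's output is never the final word: a person remains
    # responsible for verifying it before it reaches a customer or a filing.
    queue.submit(draft)

queue = ReviewQueue()
answer_with_human_in_loop("What is our refund policy?", queue)
print(queue.pending)  # a human reviews and approves from here
```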

00:23:19 — Mollick warns on the danger presented by AI-generated content

[00:23:21] Mike Kaput: So in our third main topic today, AI expert Ethan Mollick, who I feel like we probably reference every week because he's so good at what he does, is one of the people who is now posting about a new danger being caused by AI generated content. So last week, Mollick posted on X the following.

[00:23:41] Mike Kaput: He said, quote, It isn't just AI generated text that is starting to bleed over into search results. The main image, if you do a Google search for Hawaiian singer Israel Kamakawiwoʻole, whose name I'm sure I probably just mangled, but you probably do know him, because he does a beautiful [00:24:00] version of the song Somewhere Over the Rainbow.

[00:24:02] Mike Kaput: You've probably heard this. The main image, if you do a Google search for him right now, is a Midjourney creation, an AI generated image right from Reddit. Now, what Mollick is describing is the fact that your main Google image results are increasingly being populated with AI generated images. And many of these are hyper realistic.

[00:24:25] Mike Kaput: Mollick actually goes so far as to say, quote, Seriously, don't trust anything you see online anymore. Faking stuff is trivial. You cannot tell the difference. There are no watermarks, and watermarks can be defeated easily. This genie is not going back in the bottle. A huge amount of what you learned or think you know about how to evaluate images or texts is no longer valid.

[00:24:48] Mike Kaput: Not an exaggeration. Now, this has some major implications across a range of businesses and use cases. He kind of riffs on a few of them just off the top of his [00:25:00] head that are really interesting from a marketing and business perspective. For instance, he says, Look, if you're doing sentiment analysis on a large corpus of data like Reddit or Twitter, to track, say, social or political or business changes, your data is forever corrupted.

[00:25:16] Mike Kaput: Are you assessing documents for a lawsuit, assuming a human wrote them in some way? No longer true. Are you seeing which opinions are popular online? Mostly AI. So there's this whole piece of this danger of AI generated content making it highly suspect that the data you're using to do any type of analysis online is valid.

[00:25:38] Mike Kaput: But Paul, you also responded to Mollick with a related concern. And you said, I don't think we're talking enough about how quickly this could deteriorate the value and reliability of Google Search and what that will mean to consumer search behavior and Google's dominance in that market. Can you walk us through what you're worried about here?[00:26:00]

[00:26:00] Paul Roetzer: I was kind of like talking "we" meaning like almost you and me in some ways when I wrote that, because I haven't heard anything about this. Like, it's just not a topic, and maybe it's just not in my network, like people in my network aren't talking about this. Maybe in the SEO circles, where I don't spend a ton of time, it is being discussed more, but I don't know that I've seen a single article about this. Like, all we hear about is, you know, could ChatGPT replace Google search?

[00:26:30] Paul Roetzer: Like, are the large language models going to replace the need to use, search in a traditional way? I've personally talked on the show about how it feels obsolete now to go to Google and click on the 10 links, because it's just easier to just ask and get the narrative back. I hadn't really considered, what if, within like three months, all of Google search is just a bunch of fake stuff and Google can't identify it?

[00:26:57] Paul Roetzer: Like they can't reasonably [00:27:00] identify what was written by AI. They, they can't identify images that were generated by AI other than their own. They won't be able to identify videos that were generated by AI other than their own. So unless Google has some technology they haven't told the world about yet to identify third party AI generated content, how can they ensure the reliability of search results?

[00:27:29] Paul Roetzer: And so, you know, like I've said before, my feeling is I don't want AI generated writing. I want writing from people with an actual perspective, like, you know, to know that it's their experience, their points of view that are coming through. And so if I feel like I go into Google and not only do I have to click through links, but when I'm clicking through things, like it's not even human generated stuff anymore, I'm just going to stop using it.

[00:27:57] Paul Roetzer: And knowing how easy it is to [00:28:00] create this stuff, like we've seen some people in the SEO space or in the digital marketing space who like doubt the fact that they're writing thousands of articles with AI. So we know it's happening. We know the image stuff is going to explode because now you can do it.

[00:28:14] Paul Roetzer: We know it's going to happen with music and with voice and with video, like, none of that. By the end of 2024, if we fast forward a year from now, I mean, this stuff's just going to be everywhere. And so all of a sudden you're like, wow, is like, is Google search going to be obsoleted because all the content is synthetic?

[00:28:33] Paul Roetzer: And I just hadn't personally stopped and really pondered that, but I think that's a very near term concern. Like, I don't, I don't even think that's like a two, three year thing out. I mean, I think it's like a three to six month thing out to where people could start realizing the quality of Google search results just isn't.

[00:28:49] Paul Roetzer: what it used to be. So I mean, have you thought about that at all, Mike? Like, I don't know, interesting concept.

[00:28:55] Mike Kaput: Yeah. I mean, I just, I'm increasingly getting [00:29:00] obsessed and almost losing sleep at night over how quickly I'm seeing my own behavior change to your point about search results. I mean, I basically essentially replaced Google in my daily life with something like Perplexity AI.

[00:29:12] Mike Kaput: You still have to check your information, your answers. It's stunning to me. I honestly don't know if I've done a traditional Google search seriously for something that's not like the time of the Browns game, right? Yeah. In weeks. And that, to me, might still be such an outlier, but the fact it's even possible is crazy.

[00:29:34] Mike Kaput: And I also think we're going to run into this idea of, like Mollick says, he says that literally there's going to be a dividing line now. There's content that was created before mid 2023. and content after mid 2023, like what you are evaluating and how you behave, how you consume information online. That's the cutoff now.

[00:29:56] Mike Kaput: It's going to be different. And I don't even, I haven't [00:30:00] begun to figure out what all the second and third order effects

[00:30:02] Paul Roetzer: of that could be. Yeah. And I started like playing it out in my own head a little bit where, You know, if you start mashing up some of these things like AI agents, we've talked about where they can go and take action on your behalf.

[00:30:15] Paul Roetzer: So another example where this could become a real big issue is social media, you know, I think specifically of LinkedIn. I'm sure it's all over Twitter right now. I don't get into the Twitter comments very much, but LinkedIn, I look at comments all the time. You could easily, probably, train an AI agent, if not now, at some point in the very near future, to monitor certain profiles for posts, read the post, and draft comments.

[00:30:43] Paul Roetzer: So, you have to start wondering in the not too distant future, like, what social media comments are even real. Like, right now, you can generally tell when they're not, because they usually just pull some keywords and then repeat the keywords in the comment, and, so I see those things all the time, like, even in my own feed.[00:31:00]

[00:31:00] Paul Roetzer: But, you know, I think you start to wonder things like that, I don't know. And like you said, I think our behavior is starting to change, but we were a little bit more ahead on a lot of this stuff, like than the general public per se. but even last week, like I flew into Sarasota. I'd never been to Sarasota.

[00:31:18] Paul Roetzer: At least I don't think I've ever been to Sarasota. And so I was in an Uber going from the airport to the hotel, and I just went into ChatGPT, it's like, hey, first visit to Sarasota, what, what should I know about the city? And it just wrote me like, hey, here's the sites, here's, you know, recommended restaurants kind of thing.

[00:31:32] Paul Roetzer: And I was like, oh, I didn't realize Siesta Key was right here. Like, tell me a little bit more about Siesta Key. I've heard it's beautiful. And it was way better experience than if I'd have gone into Google and asked the same question and got five sponsored posts up front, and they were all like paid listings of things that, like.

[00:31:48] Paul Roetzer: And that's when I realized, like, wow, my behavior, to your point, is starting to change. And that wasn't even because I thought the content would be fake that I went and found on Google. Yeah. That was just like, this is just a better experience to [00:32:00] have a conversation about where I am right now. So, I don't know, between that and the fake stuff, I could totally see a massive shift in behavior next year.

[00:32:10] Mike Kaput: 100%. And I also wonder, specifically in our world, what the implications will be for things like, we already have a huge problem with fake product reviews, you know, feedback online of businesses. I just see that being a complete nightmare when it comes to synthetic generated content. I mean, we're already review, review bombing.

[00:32:32] Mike Kaput: things we don't like by, you know, marshalling an army of cheaply paid people

[00:32:36] Paul Roetzer: online. Yeah, you do AI agents to do it. Right. Yeah, I don't know. I mean, I feel like I don't want to be the one to have to do this, but I hope someone does. Like, what is the next 12 months look like? Kind of digital marketing story.

[00:32:51] Paul Roetzer: if you write it, let us know. Cause I don't have the brainpower to think about this, but I mean, you really do need to sit down and think about the impact of online reviews and [00:33:00] search and social media engagement. It's like, You know, if you think about our last 12 to 15 years in the profession, like I started my agency in 2005.

[00:33:10] Paul Roetzer: so that was right as social media was starting to emerge, or as blogging was starting to emerge, or as podcasting was starting to emerge. And that's really the world we've known. Like you create content, you engage on social media. it's how you do marketing. And like, what if What if that isn't what the future is?

[00:33:27] Paul Roetzer: I don't know. I mean, it's a really interesting thing to think about. All right, so

00:33:34 — Microsoft research indicates the power of Generative AI

[00:33:34] Mike Kaput: in other news this week, Microsoft just published some new research that appears to demonstrate just how powerful general AI systems are becoming, even when we give them very small specific domains of expertise in which to perform actions and understand information.

[00:33:52] Mike Kaput: So in this research, Microsoft looked at how OpenAI's GPT-4 model performs as a specialist, [00:34:00] not as a generalist. So instead of seeing how decent GPT-4 is at a lot of different things, Microsoft wanted to see how great it could be at one very specific thing. In this case, that one very specific thing was medicine.

[00:34:16] Mike Kaput: In their tests, Microsoft found that GPT-4 actually outperformed a leading AI model that was specifically fine tuned for medical applications. So they basically evaluated both tools based on the same medical benchmarks, and GPT-4 outperformed the specialist system by quote, a significant margin. This seems to suggest some big implications for how we use AI and where we're going because for one, it seems to align with previous assessments from Microsoft that showed GPT-4 is capable of general problem solving skills.

[00:34:53] Mike Kaput: Second, Microsoft found that by using the right prompting approach, GPT-4 on its own could become [00:35:00] very, very good at these domain specific tasks. So why does this all matter? One, because previously the way to get top performance in these really domain specific areas like medicine involved fine tuning these models on specially curated data in this domain.

[00:35:17] Mike Kaput: So getting a bunch of high quality data within medicine or within an area of medicine and teaching the model to perform tasks within those domains. Now, these findings seem to suggest that less fine tuning, or none at all, could be needed to make generalist systems like GPT-4 very, very good at these complex, specific things.

[00:35:41] Mike Kaput: Since fine tuning is so complicated and expensive, it could also speed up dramatically the development of domain specific AI capabilities. Now, Paul, what did you make of this research and its implications for marketing, business, and AI development overall?

[00:35:58] Paul Roetzer: This is back to what we were talking [00:36:00] about early on of the challenges that enterprises face going into 2024 about which vendors they work with.

[00:36:08] Paul Roetzer: So, one of the reasonable assumptions throughout 2023 has been that fine tuned models on specific data for an industry or a company will outperform these general models. you know, this would seem to imply that that may not be true. Now, we don't know this to be fact. This is one report. We've seen a lot of research in the last few months that say smaller models fine tuned on specific data outperform the big models.

[00:36:38] Paul Roetzer: This would appear to say that's not true if you prompt it correctly. So then you could have the companies that are building the smaller, fine tuned models say, well, you can't develop the perfect prompt all the time, it's, it, you know, you still need prompting. I don't know, like, we've seen leaps forward in the last two months in the ability of the [00:37:00] AI systems to prompt on our behalf.

[00:37:02] Paul Roetzer: So if you know that better prompting in the general model leads to greater outputs, then you develop the prompting ability behind the scenes for the AI to do that, so it doesn't rely on the human. So, I don't know, I mean, I think that right now it's just really important to monitor this, especially if you are a decision maker or an influencer at a company.

[00:37:26] Paul Roetzer: Who is building the generative AI strategy and making decisions around generative AI vendors going into 2024. I think you have to take all this into consideration. And again, this is with, like, they ran this, I think, over the summer. Microsoft did this research. So, If we fast forward to mid 2024 and we hopefully have the Google Gemini model, which we'll talk about in a minute and we hopefully have a GPT-5 and Anthropic 3 and whatever, like there's going to be more advanced models and you assume [00:38:00] they're only going to get better.

[00:38:01] Paul Roetzer: We talked last week about how right now, like Karpathy, when we talked about his intro to LLM video last week. Right now, the scaling laws seem to be holding up that if you keep training these big models on more data and give them more time and more, you know, processing power, they keep developing these capabilities.

[00:38:21] Paul Roetzer: So, I don't know, like, no one knows the answer right now. We don't know if the smaller open source models, fine tuned, are going to be the answer. We don't know if the big general foundation models are going to do everything you need them to do, or if it's going to be a symphony of them. You're going to have a couple models that do these things and a big model that does this thing.

[00:38:40] Paul Roetzer: The big model costs more, so sometimes you use a smaller model. Like, we just don't know. And that's the challenge that enterprises are facing right now, is they're trying to figure out where this goes. And then the other thing is like, They're trying often to rely on outside consultants to help figure this out.

[00:38:57] Paul Roetzer: Like in some cases, you're spending $10 million or whatever to get [00:39:00] McKinsey to help you. In other cases, it might be smaller, but those consultants don't really know. No one's done this for a 12 month period and proven out that it's the right strategy. So I think going into next year, we just need to be very nimble with decisions that are being made and know that it's hard to make long term bets around these technologies right now.

[00:39:22] Paul Roetzer: They're still so new.

[00:39:24] Mike Kaput: And I would just add to that, that while it's not satisfying that there's no easy answer here, I think some of the value of us discussing these topics is like these decision makers need to understand that this is something they don't know. That this is even one possible path that nobody has figured out because it's all too easy to get blinders on and say, Oh, okay, we've got this figured out.

[00:39:46] Mike Kaput: Here's the strategy. Then something like this can completely torpedo, you know, that 12, 24, 36 month plan you're trying to come up with.

[00:39:55] Paul Roetzer: Well, yeah. And the reality is like, you may be the chief marketing officer or VP [00:40:00] of marketing or head of sales or customer service, whatever your role is. And it's, there's a chance that this decision lives with the CIO or, you know, CTO or something like that.

[00:40:10] Paul Roetzer: Well, they may have deep relationships with Google, Microsoft, Amazon, whomever, and they may be convinced that this is the way they're going. And they may not take the time to step back or even be the right person to step back and say, yeah, but what about this model? Like there's a SaaS company that's built specific for what we need in marketing.

[00:40:29] Paul Roetzer: Like, we don't need to do what you're doing over there. And again, like we've sat in the room where these conversations are happening. So I know that these are things that are being debated. So, yeah, it's just important for everyone to understand, like, nobody really knows, even if they tell you confidently that they're sure, they're, they're not.
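
The "prompting instead of fine-tuning" idea in this exchange can be pictured as assembling a domain-specific prompt automatically: retrieve a few similar worked examples, add a step-by-step instruction, and hand the result to the general model. A rough sketch of that pattern follows; the helper names and retrieval step are hypothetical, not the actual method from Microsoft's paper.

```python
# Loose illustration of "prompting instead of fine-tuning": pick a few
# similar worked examples, add a step-by-step instruction, and send the
# result to a general model. Retrieval here is a stub.

def similar_examples(question: str, library: list[dict], k: int = 3) -> list[dict]:
    # Placeholder retrieval; a real system would rank by embedding similarity.
    return library[:k]

def build_specialist_prompt(question: str, library: list[dict]) -> str:
    shots = similar_examples(question, library)
    demos = "\n\n".join(
        f"Q: {ex['q']}\nReasoning: {ex['rationale']}\nA: {ex['a']}"
        for ex in shots
    )
    return (
        "Answer the medical question. Think step by step before answering.\n\n"
        f"{demos}\n\nQ: {question}\nReasoning:"
    )

library = [{"q": "Example question", "rationale": "Example reasoning", "a": "Example answer"}]
print(build_specialist_prompt("A new question", library))
```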

[00:40:50] Mike Kaput: All right, so, I just want to tell our audience, we are not trying to, you know, kick

00:40:54 — Google postpones big AI launch as OpenAI zooms ahead

 Google while they're down in this podcast, but they are [00:41:00] giving us material that we need to discuss. They just reportedly delayed the public release of Gemini, their long awaited conversational AI tool and ChatGPT competitor.

[00:41:13] Mike Kaput: According to The Information, Google CEO Sundar Pichai canceled a number of Gemini events, quote, after the company found the AI didn't reliably handle some non English queries. These events had not really been publicized, but one of them was intended to showcase the technology to policymakers and politicians.

[00:41:34] Mike Kaput: Now, Paul, we've talked time and time again that Google obviously is one of the top players in the world in AI, but it just seems like there are story after story about how things are getting delayed or falling flat. What did you make of this latest development?

[00:41:51] Paul Roetzer: Yeah, this is another one where you may feel like, didn't we talk about this last week?

[00:41:54] Paul Roetzer: We probably did, but this article was new and there was new information in this article. So what we [00:42:00] previously talked about on the podcast was that Google had told their corporate customers November 2023 was when they would like start rolling out Gemini. And then it had been delayed till sometime first quarter of next year.

[00:42:13] Paul Roetzer: So this article said that they had actually planned events in three cities for this week, including I think it was D. C., L. A., and New York, maybe, where they were going to be rolling out this technology. Now, the article didn't say when those three events were pulled. They made it sound like it was a very recent, last minute decision that they were not going to announce these things.

[00:42:34] Paul Roetzer: and the only one they explained, the only reason they gave was this non English query thing. I gotta guess it's way more than that, way more of a complex decision. But yeah, it just It just keeps getting delayed and everybody just kind of keeps waiting around and you know, the funny thing is within AI circles, it's always like, well, they just want to be confident.

[00:42:56] Paul Roetzer: It's better than GPT-4 at pretty much everything. Like they want to beat [00:43:00] GPT-4. Well, GPT-4 came out in March of 2023. It was six and a half months old when it came out, which means it's already well over a year old technology. So if you're Google and you're releasing Gemini in spring of 2024, beating GPT-4 isn't what you need to do.

[00:43:19] Paul Roetzer: You need to beat GPT-5, whatever that's going to be. So I don't know how it's going to play out, but we'll, we'll keep you posted as we hear more about Gemini. The mythical Gemini.

00:43:29 — Ego, Fear and Money: How the A.I. Fuse Was Lit

[00:43:30] Mike Kaput: Yes, mythical as of today, yeah. So, the New York Times just released a really interesting in depth report titled Ego, Fear, and Money, How the AI Fuse Was Lit.

[00:43:43] Mike Kaput: And it basically looks at the history of the friction and competition between some of the individuals in the world of AI, including Elon Musk, Larry Page of Google fame, Sam Altman at OpenAI, and many others. And the Times basically [00:44:00] spoke with more than 80 executives, scientists, and entrepreneurs to kind of tell this story of the ambition, fear, and money that's been involved for almost a decade.

[00:44:10] Mike Kaput: in this race to develop AI and develop it safely. Now, Paul, this jumped out to you as particularly notable in terms of the ground it covers and the people involved. Could you maybe unpack for us why this is important?

[00:44:25] Paul Roetzer: I, so I often recommend Genius Makers by Cade Metz, as a book to understand the last like 12 years of AI and some of the progress that's been made.

[00:44:33] Paul Roetzer: He is one of four authors on this article. So four New York Times writers, teamed up to do this one. so, I don't know, I mean, there was a couple of things I found interesting in this one. I would just recommend go read it. It's a really good, like, cliff notes version of what's been going on for the last 12 years and how we've arrived at the moment we're in.

[00:44:56] Paul Roetzer: I wasn't aware. I haven't read the Elon Musk bio [00:45:00] yet. So you may have known the story. Maybe that's where this came from, but we'll often hear Elon say that part of the reason he decided to create OpenAI with Sam Altman and others was because of Larry Page at Google calling him a speciesist, meaning he was pro-human.

[00:45:18] Paul Roetzer: And I didn't know where that story came from. Like he says it all the time in interviews. But apparently it was at his birthday party that he and Larry Page were sitting at a bonfire and like talking about the future of AI, and Elon expressed his concern around things going wrong. And that's when he called him a speciesist, like, basically he was pro-human, and that, you know, Page's feeling was we should let the cards fall where they may.

[00:45:43] Paul Roetzer: If another intelligence takes over the world, so be it. So that was like his main last straw as to why he created OpenAI. So that was interesting. I think it, it, again, there's, this article touches on like 17 different [00:46:00] stories in a matter of 5,000 words. It's truly just like a rapid fire of what happened.

[00:46:05] Paul Roetzer: But I thought one of the important points it brought up is at the heart of the competition, what it says is the brain stretching paradox. The people who say they're most worried about AI are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep AI from endangering the world.

[00:46:24] Paul Roetzer: So I think it's really important that people understand the perspective of these tech leaders who are the ones who are actually making these decisions. They quoted Altman and said, There is disagreement, mistrust, and egos, thus the headline of the article. The closer people are to being pointed in the same direction, the more contentious the disagreements are.

[00:46:45] Paul Roetzer: You see this in sects and religious orders, there are bitter fights between the closest people, which is what's happening in, in AI right now. The other one that I thought was, was interesting was a story about DeepMind. I hadn't heard it, or I'd forgotten if I had heard [00:47:00] it. So again, DeepMind, created by Demis Hassabis and Shane Legg, a London AI research lab that got funding from Peter Thiel and Elon Musk in the early days, sold to Google, formed an ethics board that was then disbanded.

[00:47:15] Paul Roetzer: and it is now the leading AI research lab at Google. But it said, the occasion was the first meeting of DeepMind's ethics board, on August 14th, 2015. The board had been set up at the insistence of the startup's founders to ensure that their technology did no harm after the sale. The members convened in the conference room just outside of Mr.

[00:47:35] Paul Roetzer: Musk's office at SpaceX. So again, Elon Musk is involved in DeepMind deeply at this point, with a window looking into his rocket factory. But that's where Mr. Musk's control ended. When Google bought DeepMind, it bought the whole thing. Mr. Musk was out. Financially, he had come out ahead, but was unhappy.

[00:47:52] Paul Roetzer: Three Google executives were now firmly in control of DeepMind: Page, who Musk was concerned about, Sergey Brin, [00:48:00] and Eric Schmidt, who was Google's chairman. The board included others too, like Reid Hoffman, who co-founded Inflection AI years later with Mustafa Suleyman, and some guy named Toby Ord, who I'd never heard of.

[00:48:14] Paul Roetzer: But this is the part I'd never heard: DeepMind's founders were increasingly worried about what Google would do with their inventions. In 2017, they tried to break away from the company. Google responded by increasing the salaries and stock award packages of the DeepMind founders and their staff.

[00:48:32] Paul Roetzer: They stayed put. The ethics board never had a second meeting. And then the article ends with: OpenAI had beaten the effective altruists at Anthropic. So Anthropic was the one that split off in 2021. Mr. Page's optimists at Google scurried to release their own chatbot, Bard, but were widely perceived to have lost the race to OpenAI.

[00:48:53] Paul Roetzer: Three months after ChatGPT's release, Google's stock was down 11%. Mr. Musk was nowhere to be found. But it [00:49:00] was just the beginning. So this article reads like the introduction to an AI book. I don't know if that's what it is, but just go read it. There's a ton in there, and it's a fascinating, pretty fast read about the last 12 years of AI development.

[00:49:16] Mike Kaput: Yeah, and we say it time and time again: it's so important to just follow and understand the handful of people driving all of this innovation forward. The bigger picture is important, but you can learn a lot and get real insight just by looking at what a few people are doing in this space.

[00:49:34] Paul Roetzer: Yeah. Oh, and I think it was the New York Times that released this, like, Who's Who in AI. I'm not even going to put it in the show notes because there literally wasn't a single female on the list. It was just a joke. Some of the people on that list are very important to AI; it just wasn't a diverse enough list, so we won't share it.

[00:49:52] Paul Roetzer: But yes, there are, like, seven to 10 people who are largely driving the future of humanity. And so we [00:50:00] will continue to cover those people on this podcast, because they are essential not only to business, but to the broader impacts on humanity and society.

00:50:10 — More than half of Generative AI adopters use unapproved tools at work

[00:50:11] Mike Kaput: So, there's some surprising new research out from Salesforce that finds more than half of the employees using generative AI at work are doing so with unapproved tools.

[00:50:23] Mike Kaput: Salesforce polled more than 14,000 workers across 14 countries about how they use generative AI at work. As part of this research, they found that 28 percent of employees report using generative AI at work, but over half of them don't have formal approval from their employers to do so. Not to mention, says Salesforce, quote, users are also engaging in additional ethically questionable activities at work when using generative AI.

[00:50:54] Mike Kaput: They found that a whopping 64 percent have passed off generative AI work as their [00:51:00] own, and 41 percent say they would consider overstating their generative AI skills to secure a work opportunity. Now, Salesforce also found that nearly 7 in 10 workers have never completed or received training on how to use generative AI safely and ethically at work, and 79 percent of workers say they do not have clearly defined policies around AI usage, whether that means the policies don't exist at all, they exist but are poorly defined, or the worker surveyed doesn't know whether they exist.

[00:51:33] Mike Kaput: Now, Paul, there were multiple parts of this where I was like, oh no, this is a problem. But what's interesting is that the policy percentage almost directly mirrors the one in our 2023 State of Marketing AI industry report, where almost 80 percent of people say they do not have any type of generative AI policy.

[00:51:57] Paul Roetzer: This is how I end every keynote I give: [00:52:00] the steps companies can take. We say education and training, first and foremost. You have to teach people. If they're going to use the tech anyway, you have to teach them. Form an AI council that responsibly applies this stuff, figures out the gaps, and thinks about the impact on talent and tech strategy.

[00:52:17] Paul Roetzer: Then generative AI policies and responsible AI principles. If you haven't done these things, this is your homework assignment. Your company has to have these going into 2024. And then we talk about an impact assessment on people. That's more of a leadership thing, where they're looking at the roles within their company to figure out how AI is going to impact them.

[00:52:36] Paul Roetzer: The last thing is an AI roadmap: where are we going to prioritize pilot projects? But none of this matters if your people don't understand what this tech is and how to use it. Until you put guardrails in place and give them ways to apply it responsibly, they're going to freelance and do this stuff on their own.

[00:52:50] Paul Roetzer: And they may do it irresponsibly or unethically, or they may just be doing it, turning in work as though it was theirs, and not even [00:53:00] realizing the negative effects of that. So, yeah, any reports we can share with you that further reinforce the importance of taking the steps we outline all the time, we will share them.

[00:53:12] Paul Roetzer: And this is a really good one. It is interesting that it jibes with what we found in ours on generative AI policies and responsible AI principles. So, again, you don't need advanced AI knowledge to do these things. Just find the people in your company who are willing to sponsor or support education and training, an AI council, generative AI policies, responsible AI principles, and an AI roadmap.

[00:53:35] Paul Roetzer: Like, you gotta do them.

[00:53:37] Mike Kaput: And Paul, I don't know about you, but in the speaking engagements and workshops I've done, it's become so apparent that how you talk about this stuff publicly to employees matters so much. You want to give them the space and the permission to talk about their usage of these tools, because it may upset you, but they are using them.

[00:53:57] Mike Kaput: So, you want to make sure they feel comfortable telling [00:54:00] you about that.

[00:54:00] Paul Roetzer: Yeah, and we've seen the same in school systems, where you have teachers who are using the tools but aren't sure they're allowed to even talk about it. So they're using them to help in their classrooms, but they don't talk to their peers about it.

[00:54:10] Paul Roetzer: They don't talk to the leaders of their schools about it, because no one's told them what they're allowed and not allowed to do. So they're just kind of doing it on their own.

00:54:17 — Sports Illustrated Published Articles by Fake, AI-Generated Writers

[00:54:18] Mike Kaput: So we just found a bombshell article from the website Futurism that alleges that Sports Illustrated, the sports publication, has been publishing AI-generated articles from fake, AI-generated writers, and has disclosed none of this.

[00:54:34] Mike Kaput: So, Futurism found an author profile on Sports Illustrated's website of a man who, it turned out, had no social media presence, and whose headshot was for sale on a website that sells fake, AI-generated headshots. The fake author's writing also sounded, in the words of Futurism, quote, alien, which led them to suspect it was AI-generated.

[00:54:58] Mike Kaput: Futurism then [00:55:00] found anonymous sources at Sports Illustrated who confirmed the company was creating fake authors and generating AI content without disclosing it. The company that owns Sports Illustrated, the Arena Group, denied the allegations. Right after that, the AI-generated author profile that Futurism found disappeared and redirected to the profile of another author.

[00:55:23] Mike Kaput: She also turned out to have a headshot for sale from the same fake headshot website, which led Futurism to suspect that one AI author had been swapped for another. Futurism also says this author profile, along with what they suspected was all of the AI-generated content, was deleted from Sports Illustrated's site after they began investigating this story.

[00:55:44] Mike Kaput: It also appears that every time an AI author was swapped out, all the articles the previous author had written were switched to be attributed to the new author, with no explanation. Futurism also [00:56:00] claims this type of behavior happened at another publication owned by the Arena Group,

[00:56:04] Mike Kaput: a publication called The Street, which covers financial topics. Man, Paul, this isn't the first story we've seen like this, but it's pretty stunning to see a major website with this kind of pedigree and brand engage in this type of behavior. What did you make of this?

[00:56:22] Paul Roetzer: I just think this is going to be rampant.

[00:56:25] Paul Roetzer: It probably already is; we just don't hear about it all the time. It's just next-gen content farms. Again, in the early days of my agency, you know, when you joined, what, 2011, 2012, I think? By that time you could go buy words for a penny or two per word, and people were just pumping out junk content all the time.

[00:56:47] Paul Roetzer: People look for shortcuts to make money. That is the history of society. They will continue to do that. And if they can do it with AI-generated people and content, they're [00:57:00] going to do it. And if the penalty is you get caught and you get some bad PR for a little while, they don't care. They're just going to keep doing it.

[00:57:06] Paul Roetzer: So it is disappointing that a brand like Sports Illustrated would do this, either directly or through this third party. But it's going to be everywhere. Don't do it would be my advice. If you want to maintain a legitimate, trustworthy brand, don't take these shortcuts. This also further underscores the issue we raised earlier about what's even real in Google search results, because Sports Illustrated has a really strong domain.

[00:57:31] Paul Roetzer: So there's probably a pretty good chance that this AI-generated content from AI-generated avatars was showing up in Google search somewhere. Yeah, it's just disappointing, but it's just the reality of where this is going to go.

[00:57:46] Mike Kaput: Yeah. And to the point we were talking about before, I think it's becoming apparent that this whole system Google is the backbone of is maybe a little more fragile than we think. Until there's a major [00:58:00] algorithm change or action on Google's part, this is going to keep happening.

[00:58:03] Paul Roetzer: Yeah, and I'm sure Google has been aware of that way longer than we have.

00:58:07 — Apple launches personalized voice

[00:58:08] Mike Kaput: All right. Next up, Apple has released a feature called Personal Voice, which allows you to create a synthesized voice that sounds like your own. This came out as part of iOS 17 in September 2023. You can use this voice to communicate across FaceTime, phone calls, and even in-person conversations.

[00:58:27] Mike Kaput: Now, the reason we're talking about it is that Apple CEO Tim Cook recently posted on X about a really interesting and novel use case for this feature, saying that it's designed, quote, for those at risk of speech loss, so they can keep communicating even when dealing with some type of speech issue. He also touted the privacy of this new feature.

[00:58:49] Mike Kaput: Personal Voice is encrypted and stored securely on your device so that only you can access it. It can also only be used with apps that you personally give access to the [00:59:00] feature. Now, Paul, one of the responses to Tim Cook's post on this subject kind of made me laugh out loud for how spot-on it seems.

[00:59:07] Mike Kaput: So a user, Nathan W. Chan, said, quote, Apple continues to ship AI without calling it AI. Now, you're a longtime Apple fan and watcher. What did you make of this?

[00:59:19] Paul Roetzer: I do laugh at that, because I've said that like a hundred times. They just won't do it. They won't call anything AI. I think they're going to change that.

[00:59:28] Paul Roetzer: Like, I feel IBM did that for a long time too, and now that's all IBM talks about. They called it cognitive computing; they tried to get that phrase to stick in the early days. So yeah, it is funny. Apple has tons of AI in, like, everything; they just won't call it that. But this one was interesting to me from a use case standpoint.

[00:59:49] Paul Roetzer: So I think I've told the story before: when Mike and I wrote Marketing Artificial Intelligence, the book that came out in summer of 2022, we looked into using AI to synthesize my [01:00:00] voice, because I read the audiobook. Just as an example, I wasn't going to do the whole book that way, but I wanted to do part of it and say, oh, you know, we used AI to do this.

[01:00:08] Paul Roetzer: And at that time it was almost impossible to do. We would have had to do a deal with Google, you needed like 40 hours of training data, and it just wasn't possible in spring of 2022. So here we are now in, I guess, December 4th, 2023, and what the Apple page says is: to create a Personal Voice, read a series of randomly chosen text prompts to record 15 minutes of audio.

[01:00:31] Paul Roetzer: Your speech is processed securely on-device overnight, and the next morning you wake up and you have a synthesized voice. So we'll put the Apple support article on how to do this in the show notes. You literally just go on your phone, do this thing, and read for 15 minutes. You can also do this in Descript, like we've talked about for marketing purposes; Descript has this capability.

[01:00:51] Paul Roetzer: I think you read, like, 30 seconds of text, and then it synthesizes your voice. So voice synthesis has become a thing. This is a [01:01:00] here-and-now technology. It has a whole bunch of potential misuses, but for marketers, for business people, for sales, for customer service, you're going to be able to create voices for whatever you want and use them in whatever application you want next year.

[01:01:13] Paul Roetzer: This technology is probably not perfected, but it's really close to being very, very good and applicable to all kinds of applications.
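For readers who want a feel for how little code speech synthesis now takes, here is a minimal sketch using OpenAI's text-to-speech API, which shipped in November 2023. To be clear, this is not the voice cloning Paul describes; Apple's Personal Voice and Descript's feature are product capabilities, not this API. The sketch uses one of OpenAI's stock voices, and the output filename and sample text are illustrative.

```python
# Minimal sketch of programmatic speech synthesis with the OpenAI
# Python SDK (v1.x). This uses a stock voice ("alloy"), not a clone
# of your own voice; it only shows how accessible TTS has become.
# Requires: pip install openai, plus an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",      # OpenAI's real-time text-to-speech model
    voice="alloy",      # one of the built-in stock voices
    input="Voice synthesis is a here-and-now technology.",
)

# Save the returned MP3 audio to disk.
response.stream_to_file("sample.mp3")
```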

01:01:23 — A fun experiment with ChatGPT / DALL-E 3

[01:01:24] Mike Kaput: All right, last but not least on the docket today: Paul, you posted about a really fun and instructive experiment you did with DALL-E 3, which is OpenAI's image generation tool built right into ChatGPT.

[01:01:36] Mike Kaput: And this experiment shows off DALL-E 3's ability to make an image, quote, more of a particular quality simply through further prompting: more beautiful, more serious, and so on. Could you walk us through what you did and what it means for people experimenting with these tools?

[01:01:52] Paul Roetzer: Yeah, it was just a fun experiment, but you know, it was the "make it more" trend.

[01:01:56] Paul Roetzer: I think it was on TikTok and Twitter; I saw it on Twitter. [01:02:00] But in essence, you just pick an image with an adjective and then keep asking it to make it more, either by adding -er to the end, like brighter, faster, taller, smarter, or by putting more before the adjective: more fun, more magical, more sophisticated, more amazing.

[01:02:13] Paul Roetzer: So to show it to my daughter, I did an image of an adorable unicorn, and then I just kept saying, make it more adorable. And the eyes would get bigger, it would add fuzzy animals into the thing, it added rainbows. It just did it. So it's a fun trick, but you can start to see it in business when you apply this to writing: make this more professional, make it more concise.

[01:02:38] Paul Roetzer: You're going to be able to do this with videos: make it more powerful, more impactful. So it's just an example of what these generative AI technologies are already capable of. And when you start thinking about all the ways we're going to use generative AI, this idea of just being able to tell it to do more, and that it understands what that means...

[01:02:59] Paul Roetzer: That has big [01:03:00] implications for how we're going to use this technology moving into the new year.
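If you want to try a rough version of the "make it more" trend outside of ChatGPT, here is a minimal sketch against OpenAI's images API with DALL-E 3. One caveat: unlike ChatGPT, the API keeps no conversational context, so this sketch approximates the trend by stacking the escalation instruction into the prompt on each round. The subject, adjective, and number of rounds are illustrative choices.

```python
# A rough approximation of the "make it more" trend using the
# OpenAI images API (DALL-E 3). The API has no conversation memory,
# so we escalate by growing the prompt itself on every round.
# Requires: pip install openai, plus an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

subject = "an adorable unicorn"   # illustrative subject
adjective = "adorable"            # illustrative adjective
rounds = 4                        # illustrative number of rounds

prompt = f"A picture of {subject}."
for i in range(1, rounds + 1):
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,  # DALL-E 3 generates one image per request
    )
    print(f"Round {i}: {response.data[0].url}")
    # Escalate: ask for an even more exaggerated version next time.
    prompt += f" Make it much, much more {adjective} than before."
```

The same pattern works for text: feed a draft to a chat model and iterate with instructions like "make it more professional" or "make it more concise."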

[01:03:05] Mike Kaput: All right, Paul. Thank you so much, as always, for breaking down what's going on in AI this week. I would encourage anyone listening to also check out the Marketing AI newsletter at marketingaiinstitute.com/newsletter.

[01:03:18] Mike Kaput: We also include in there This Week in AI: all the topics we covered today, as well as other topics we didn't get to. So between that and the podcast, you are set every single week with the most important stuff happening in artificial intelligence. So, Paul, thanks again.

[01:03:37] Paul Roetzer: Yeah, thanks everyone for listening. We'll be back next week with another regular episode heading into the holiday season; as of now, we're planning on being here every week. And as a reminder, you can check out the YouTube channel if you want to watch the videos.

[01:03:50] Paul Roetzer: The videos are up there, and if you're not aware, we also take each segment and cut it into its own video. So if you ever want to see or share a [01:04:00] specific segment, you can do that right from the YouTube channel as well. So thanks for listening, and Mike, thanks for curating as always.

[01:04:06] Paul Roetzer: We'll talk to you all next week.

[01:04:08] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingAIinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:04:30] Paul Roetzer: Until next time, stay curious and explore AI.
