AI leaders say slow down, Italy blocks ChatGPT, and the United Nations calls for a global framework. But other leaders keep finding ways to integrate ChatGPT, and new companies are launched. This dichotomy makes for an interesting episode. Paul and Mike break it all down.
“The Letter” heard ’round the world made waves, but what does it really mean?
In an open letter published by the nonprofit Future of Life Institute, a number of well-known AI researchers and tech figures, including Elon Musk and Steve Wozniak, have called on all AI labs to pause the development of large-scale AI systems for at least 6 months due to fears over the profound risks to society and humanity that they pose.
The letter notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems that no one can understand, predict, or reliably control.
The signatories call for a public and verifiable pause and for the development of shared safety protocols for advanced AI design and development.
What does it mean, will other countries follow suit, is it a PR play, and at this point, does it even matter? Are we thinking about misinformation and job loss the right way?
At the same time, moves are being made internationally: UNESCO (United Nations Educational, Scientific and Cultural Organization) is calling for the immediate implementation of its Recommendation on the Ethics of Artificial Intelligence, a global framework for the ethical use of AI.
And, in a bold move, Italy has become the first Western country to block OpenAI's chatbot ChatGPT, citing privacy concerns. The Italian data protection authority said it would block the service and investigate OpenAI with immediate effect, following a data breach involving user conversations and payment information. Will other countries follow suit?
Prompt engineering - a job, a function, or a skill?
Paul recently wrote about one possible future he’s seeing for prompt engineering on LinkedIn, saying: “How soon until we have a Prompt Copilot that helps users write far more effective and optimized generative AI prompts? Think of it as a prompting assistant that improves and expands your prompts as you type them.”
He also talked about how the quality of human user prompts is crucial for the effectiveness and value of generative AI software—and that companies are motivated to reduce the friction in their products and speed up time to value for all users.
The development of a prompting assistant that helps users write more effective and optimized prompts using AI seems like an obvious and achievable innovation to solve this problem and could render prompting as a career path or human skill less important beyond 2023.
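As a rough illustration of how such a prompt copilot might work, here is a minimal sketch in Python. Everything in it — the function name, the heuristics, the suggestion wording — is hypothetical; it simply checks a draft prompt for ingredients that prompt engineering guides commonly recommend (a role, an output format, enough context) and suggests what's missing:

```python
# Hypothetical sketch of a "prompt copilot": inspect a draft prompt and
# suggest missing ingredients before it is sent to a generative AI model.
# The heuristics and suggestion text are illustrative only.

def suggest_improvements(prompt: str) -> list[str]:
    """Return suggestions for making a draft prompt more effective."""
    suggestions = []
    lowered = prompt.lower()
    # A role ("Act as...") helps the model adopt the right perspective.
    if "act as" not in lowered and "you are" not in lowered:
        suggestions.append("Add a role, e.g. 'Act as a senior copywriter.'")
    # An explicit output format reduces trial and error.
    if not any(word in lowered for word in ("format", "bullet", "table", "list")):
        suggestions.append("Specify an output format, e.g. 'as a bulleted list'.")
    # Very short prompts usually lack context the model needs.
    if len(prompt.split()) < 8:
        suggestions.append("Add context: audience, goal, and constraints.")
    return suggestions

for tip in suggest_improvements("Write a blog post about AI"):
    print(tip)
```

A production assistant would use a language model to rewrite the prompt itself rather than hand-written rules, but the interaction — suggestions appearing as you type — would look much the same.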
Will it become a must-know in any career path?
BloombergGPT is announced
Bloomberg has announced the development of a new large-scale generative AI model specifically trained on a wide range of financial data to support natural language processing tasks within the financial industry.
The model, called BloombergGPT, represents the first step in the development of a domain-specific model to tackle the complexity and unique terminology of the financial domain.
The new model will enable Bloomberg to improve existing financial NLP tasks such as sentiment analysis, named entity recognition, news classification, and question answering while bringing the full potential of AI to the financial domain.
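To make one of those tasks concrete, here is a toy, rule-based stand-in for financial sentiment analysis in Python. BloombergGPT learns this behavior from decades of financial text rather than from keyword lists, so treat this strictly as a sketch of the task, not of the model:

```python
# Toy, rule-based stand-in for one task a domain model targets:
# sentiment analysis of financial headlines. A real model learns this
# from data; the keyword lists here are illustrative only.

BULLISH = {"beats", "surges", "upgrade", "record", "growth"}
BEARISH = {"misses", "plunges", "downgrade", "default", "layoffs"}

def headline_sentiment(headline: str) -> str:
    """Classify a headline as positive, negative, or neutral."""
    words = set(headline.lower().replace(",", " ").split())
    score = len(words & BULLISH) - len(words & BEARISH)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(headline_sentiment("Acme beats estimates, announces record growth"))   # positive
print(headline_sentiment("Acme misses targets, shares fall on downgrade"))   # negative
```

Named entity recognition and news classification are analogous: mapping financial text to labels, where a domain-trained model handles terminology (tickers, instruments, filings) that general-purpose models often miss.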
On top of this, Seth Godin and David Sacks are using ChatGPT. What’s next?
Rapid-fire topics include the All-In podcast, a Redditor who loses his love for his career because of AI, Replit teaming up with Google Cloud, Sam Altman chatting with Lex Fridman, Sam Altman launching Worldcoin, and more.
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
This episode is brought to you by BrandOps, built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.
00:03:35 — Elon Musk, Steve Wozniak, and others publish a letter asking for a pause in AI developments
00:09:51 — UNESCO wants responsible AI frameworks implemented
00:11:09 — Italy bans ChatGPT
00:13:53 — Increased attention on prompt engineering
00:20:10 — BloombergGPT is the latest partnership
00:25:27 — Seth Godin gets involved
00:28:35 — The All-In podcast
00:29:48 — AI causes a Redditor to lose his love for his job
00:35:39 — Replit teams up with Google Cloud
00:37:11 — Sam Altman sits down again with Lex Fridman
Links referenced in the show
- Main Topics
- Elon Musk, Steve Wozniak, and Others Sign Letter to Pause AI
- Italy Bans ChatGPT
- UNESCO Calls on Governments to Implement Ethical AI Framework
- The Future of Prompt Engineering
- Seth Godin Trains Custom Model on His Blog
- Rapid Fire
- David Sacks Uses ChatGPT for Blog Writing
- Redditor Loses Love for Career Thanks to AI
- Replit Teams Up With Google Cloud
- Sam Altman / Lex Fridman Interview
Watch the Video
Read the Interview Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: I'm just glad people are talking about issues related to AI because it is not all a bunch of fun, fancy tools that save us a bunch of time and energy. It has a real impact on people, has a real impact on society, and we need more conversations about it. And if the letter does that, then great.
[00:00:15] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.
[00:00:35] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.
[00:00:44] Paul Roetzer: Welcome to episode 41 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput, Chief Content Officer at Marketing AI Institute, and my co-author of our book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business, which, by the way, if you don't have a copy, get a copy. It came out in July of last year, so a little bit before ChatGPT.
[00:01:06] Paul Roetzer: But it's a great starting point if you're just trying to figure this stuff out. All right. Today's episode is brought to you, one, by my raspy voice. I'm not sure I'll be able to do this today. My voice just disappeared over the weekend after our AI for Writers Summit last week, and like 10 talks and a little bit of a cold.
[00:01:23] Paul Roetzer: But we'll see if we can make it through without a coughing fit. So this episode is brought to you by BrandOps, and we appreciate them supporting the show. BrandOps is built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.
[00:01:41] Paul Roetzer: Leaders use it to know which messages and activities will most effectively improve results. BrandOps also improves your generative marketing. With BrandOps, your content is more original, relevant to your audience, and connected to your business. To find out more and get a special listener offer, visit brandops.io/marketingaishow.
[00:01:59] Paul Roetzer: That's brandops.io/marketingaishow. Man, I should have had you do these reads to save my voice. And then, we also have our fourth annual Marketing AI Conference, or MAICON, returning to Cleveland, Ohio this summer. Join us July 26th to the 28th for our largest and most exciting event
[00:02:21] Paul Roetzer: yet. The conference brings together hundreds of professionals to explore AI and marketing, experience AI technologies, and engage with other forward-thinking marketers and business leaders. You'll leave MAICON prepared for the next phase in your AI journey with a clearer vision and a near-term strategy you can implement immediately.
[00:02:41] Paul Roetzer: Hurry, prices go up on April 14th. You can save $400 on any pass. Learn more and register at MAICON.ai. That is M-A-I-C-O-N dot ai. Can't wait to see you. I am working on that agenda as we speak. Well, not literally, like, over the next 30 minutes, but today we're trying to get that agenda, you know, in a really good place.
[00:03:04] Paul Roetzer: And we'd love to see you in Cleveland at the convention center. Okay, I'm going to take a drink, get my voice settled, and Mike, lay the groundwork for us today.
[00:03:13] Mike Kaput: Sounds good. Just to remind anyone who is new to the audience, we try to cover kind of three main topics happening in the world of AI. You know, every week, I think hundreds of things probably happen in the world of AI.
[00:03:25] Mike Kaput: We try to pick the most relevant, most important ones, and then we do some rapid-fire topics, and we have a ton to cover today. So we're going to move fast and we are going to get through it. First up, the internet is basically on fire about this open letter that has been published by the nonprofit Future of Life Institute, which has been signed by a number of well-known AI researchers and tech figures, including Elon Musk and Steve Wozniak of Apple co-founding fame, and they have called on all AI labs to pause the development of large-scale AI systems for at least six months
[00:04:03] Mike Kaput: due to fears over the profound risks to society and humanity that they believe advancements in AI pose today. Now, the letter notes that AI labs are currently locked in a quote "out-of-control race" to develop and deploy machine learning systems that no one can understand, predict, or reliably control. And the signatories, of which there are, I believe, about 1,200 or so, include many other famous computer science researchers.
[00:04:34] Mike Kaput: They call for a public and verifiable pause on the development of AI moving forward for at least six months, and they want the development of shared safety protocols for advanced AI design and development. So this has caused what I would call a major debate in the AI community and the world at large.
[00:04:58] Mike Kaput: How legitimate do you find the concerns expressed in this letter, Paul? And would you say this kind of proposed six-month pause on AI development is advisable or realistic?
[00:05:10] Paul Roetzer: Listen, it's not going to happen. Like, there's no ban coming. So when I read this, like, so Elon Musk signs it, Yoshua Bengio, there's some big names, Wozniak, as you mentioned.
[00:05:25] Paul Roetzer: I looked at it and I thought, okay, this is interesting. So there are these two very extreme sides. Like, Yann LeCun at Meta is actually like, no, this is ridiculous. Like, we're not going to ban the technology, we're not going to slow it down. So he is very, very strongly in the camp of, this is absurd, these language models are not a massive threat.
[00:05:46] Paul Roetzer: We will figure this out. Like, just keep going. And then you have the other people. Although a lot of the people that signed it are like, yeah, I don't really agree with everything in it, but we think it's important that we have this conversation. So I looked at it, I did sign it, but I didn't sign it, honestly, under the assumption that they're actually going to do this six month thing.
[00:06:05] Paul Roetzer: I have more concerns with the near term impact on society and the workforce that aren't being talked about enough. And so I looked at this letter as like, well, at least this will bring it to the mainstream, like at least we will now. Get some of these really important issues that we've talked about on this show before, including misinformation, disinformation, propaganda, which is going to be insane in the next election cycle in the United States.
[00:06:29] Paul Roetzer: Like, I am getting to the point where I don't even know that I want to go online for the six to 12 months before the election in the US. The ability to tell what's real and what's not is almost gone. Like, look at Midjourney, you know, v5, and the things that are being created there. You're not going to be able to trust photos unless they're coming from a trusted source.
[00:06:48] Paul Roetzer: Like nothing online is going to be real, like you're just not going to know whether the article's real. So we're going to have to very quickly move to an awareness about how much misinformation and truly fake content is going to be everywhere. And so if it takes a letter like this to get the mainstream media talking about very real problems in ai, great.
[00:07:11] Paul Roetzer: I don't think enough people are talking about job loss. Like, we try and look at the positive side of AI, and then, you know, the net positive long-term impact, and we'll create new jobs, and it's like, we'll find a path forward. I'm not so sure that that's how it's going to play out over the next six to 18 months.
[00:07:30] Paul Roetzer: I think there's going to be a lot of pain. AI's coming for knowledge work way faster than anybody is ready for. And I think you're going to see a lot of negative impact that people just aren't ready for. And so I looked at this and thought, all right, I get it. The people on the one side are like, this is ridiculous.
[00:07:50] Paul Roetzer: Language models aren't AGI. You don't need to be worrying about this crazy thing about AGI taking over the world. And I understand their perspective and I see where Yann LeCun's coming from, and I can agree with that. And I can also see the perspective of people like an Elon Musk. Now, Elon's, you know, certainly far to the edge of, like, you know, this is a very real danger to the world.
[00:08:14] Paul Roetzer: And I understand why he thinks that. And so I'm kind of like sitting in the middle, looking and saying, listen, most of what we do is, what does the next 12 months look like? Like, I'm trying to constantly figure out what is the impact on real people over the next 12 months. And like, I mean, I'm getting texts from my mother-in-law about this stuff, and that's when I know it's become a thing. Like, everybody is talking about this and trying to understand this.
[00:08:42] Paul Roetzer: And so I feel like, on one extreme, this letter was a PR stunt. And I think that some people in AI feel that's all it was, just a PR stunt by some people who can benefit from this positioning. I get it. And I think in other cases these people have very real concerns about AI. They may not agree with everything in the letter, like I don't, but at a minimum, I'm just glad people are talking about issues related to AI, because it is not all a bunch
[00:09:10] Paul Roetzer: of fun, fancy tools that save us a bunch of time and energy. It has a real impact on people, has a real impact on society, and we need more conversations about it. And if the letter does that, then great.
[00:09:22] Mike Kaput: Yeah, it really seems like we are at this inflection point, both in terms of the technology itself, like we've discussed, since ChatGPT, which feels like years ago even though it was a few months, and GPT-4. But it also seems like we're at an inflection point with broader awareness of the risks
[00:09:40] Mike Kaput: that could potentially evolve from this technology. And I think we're seeing that in a couple of related stories as well. So, two big things came out right around the same time as the letter. First, UNESCO, which is a UN body, has actually called on governments to implement an ethical AI framework that it developed and that all member countries in the UN actually signed onto.
[00:10:05] Mike Kaput: And it is coming out and saying that countries need to start implementing this framework, or at least some type of ethical AI guidelines and regulations, ASAP. Essentially, the director-general at UNESCO is actually quoted as saying, regarding this initiative, quote, the world needs stronger ethical rules for artificial intelligence.
[00:10:26] Mike Kaput: This is the challenge of our time. At the same time, whether you agree with it or not, Italy just straight up banned ChatGPT until OpenAI can prove that some of the data it used to train the tool and the models was not in violation of things like GDPR. So when you see this increased regulatory and government interest in AI risks, and you see this letter come out, are you anticipating a new wave of possible restrictions or guidelines around the technology?
[00:11:07] Paul Roetzer: I dunno how fast it's going to move. I mean, Italy moved fast obviously, and interestingly enough, I'm going to Italy in June to do a talk about,
[00:11:17] Paul Roetzer: Yeah, it should be interesting. I think that, you know, as we've talked about previously on this show, the government has to get involved. Like, it has to. And even OpenAI is calling for that. Like, the AI researchers who don't want the Future of Life Institute letter, who don't think that's necessary, they still are calling for the government to get involved because they, you know, they see the power of this stuff and they need regulation.
[00:11:42] Paul Roetzer: They need some sort of guidance here. I just don't know how much we can rely on it. But, like, now I'm afraid we're just going to see a bunch of politicians jumping on the bandwagon here with no understanding of the technology. It's, you know, it's bound to happen where all of a sudden AI's going to become a flashpoint within elections.
[00:12:02] Paul Roetzer: Like, oh my gosh, now we're going to truly sensationalize this thing on both extremes. So, yeah, I just, I don't know. I think it's really important that people get educated on this stuff. I love that the listenership, the audience, you know, for this show has been growing so much, because so many times we're not
[00:12:21] Paul Roetzer: telling you the answers. Like, we don't have the answers. We're just trying to surface the things that are really important for you to be thinking about critically and challenging yourself on. And I think, moving forward, that's what's going to be essential: that we have a lot of really smart people thinking about these issues, thinking about the impact they have in their region, in their country, in their industry, and starting to figure out how to move forward. Because we cannot rely on the government to do it, but they're going to get involved one way or the other.
[00:12:48] Paul Roetzer: And the better that society understands the issues at hand, the better they'll be able to determine what the politicians are saying that actually matters versus what's just straight up politics. So I, again, I think, I think it's good. I think it's what needs to happen. The government needs to get involved, they need to be looking at this stuff, and they need to do it quickly.
[00:13:07] Paul Roetzer: But yeah, whether these, you know, these specific initiatives play out and become a huge, ongoing factor. I don't know. It's too early to say this is all kind of happening pretty quickly.
[00:13:18] Mike Kaput: Yeah, and I think one last note that's probably important to make here is that, you know, especially if you're really new to AI, we're not talking about the letter or Italy banning ChatGPT because we are sitting here saying you gotta be
[00:13:30] Mike Kaput: really careful about whether or not you use AI. Your company will use AI. Yeah, so you need to figure out, as best as you can using the imperfect information available, how to chart a safe course through some of these regulations and anticipate what could happen next, because your option is not to sit on the sidelines if you want to stay in business.
[00:13:53] Mike Kaput: So another really hot topic these days, especially with the increased awareness of artificial intelligence and more and more non-technical or non-AI people using the technology, is prompt engineering: the art and science of telling a machine like ChatGPT exactly what you want it to do and getting really high-quality outputs.
[00:14:14] Mike Kaput: Now, there's plenty of guides out there on the internet about prompt engineering, but Paul, you actually wrote about a really interesting possible future that you see for prompt engineering, and you posted this initially on LinkedIn, saying: how soon until we have a prompt copilot that helps users write far more effective and optimized generative AI prompts?
[00:14:36] Mike Kaput: Think of it as a prompting assistant that improves and expands your prompts as you type them. So you're kind of raising this issue that it's possible, with all the emphasis and importance we place today on prompt engineering, that it may actually not be that important of a career path or a skillset to necessarily develop, you know, on a long enough timeline.
[00:14:58] Mike Kaput: So have you learned of any companies since posting that are building a prompt co-pilot to help out with prompts? Do you expect someone to build this soon?
[00:15:08] Paul Roetzer: I do expect it to be built really soon. If I had the ability, I would build it myself. But I don't think it's going to matter, because I think it's going to be essential within any of these application companies and the language model companies.
[00:15:19] Paul Roetzer: So the point I was making, this is sort of something I thought about Friday as I was doing a talk. It just kind of arrived, like, oh wait, that's probably what's going to have to happen. Like, I was explaining prompting to these people. So, yeah, there's all this talk, and all this onus put on the user right now.
[00:15:36] Paul Roetzer: So if you think about these application companies, these AI writing tools, image generation, video generation companies: as a user, it's amazing technology, but your ability to get value from 'em is actually largely dependent upon your ability to develop a prompt. And if you're on Twitter anywhere these days, everybody's got these threads about how to do prompting, and it's really impressive stuff. Like, they're really going deep on how to get crazy value out of these generative AI tools.
[00:16:03] Paul Roetzer: But the vast majority of people are never going to take the time to do that. And so if you're an application company, if you're building a generative AI tool, the value to market for your company, the ability to scale that company, is dependent upon the ability of the users to properly prompt the system.
[00:16:22] Paul Roetzer: That's a major friction point. So there is massive incentive for the SaaS companies building it, and for the language model companies that are developing these models they're built on, like Cohere, OpenAI, and Anthropic, to not rely on the user to get good at prompting. Because otherwise you're never going to scale the company the way you could.
[00:16:41] Paul Roetzer: So it only seems obvious that you would have this. And so I've known this for a while, like I've thought about like, okay, prompting at some point won't be as important. It was actually one of the points I made in my keynote on Thursday for the AI Writer's Summit, but I hadn't thought about the copilot thing.
[00:16:55] Paul Roetzer: I was like, oh wait, this is actually infinitely solvable right now. So if you think about the way Google's Smart Compose works, where it just, like, finishes your sentences, it's kind of predicting the words in the sentence. And that's the basic thing that we're doing with any AI writing tool. So one of the most advanced areas right now in generative AI is coding.
[00:17:12] Paul Roetzer: So the ability, like in GitHub Copilot and Replit Ghostwriter, and we'll talk about Replit in a couple minutes, they're completing code as you're going. It's like taking its knowledge base of coding and being able to truly assist you in creating these things. Well, there's no way that that doesn't happen with prompting, because a prompt is basically just a set of instructions.
[00:17:33] Paul Roetzer: But the system can learn what a great prompt looks like and help you build it out. So once it knows you and it knows what you're doing, it can naturally start auto-completing or assisting you as you're going in this thing. So, yeah, it was one more of those, like, I just threw it out there, like, hey, I'm thinking out loud here on LinkedIn.
[00:17:49] Paul Roetzer: But isn't this something that seems obvious, and is anyone building it? To your question about building it, Megan from Jasper, I think, commented that they had a version of it in there, almost like a prompt recommender in the tool.
[00:18:03] Mike Kaput: jasper.ai.
[00:18:04] Paul Roetzer: Yeah. Which seemed like it was heading in that direction, where it was, like, giving you improvements to your prompt.
[00:18:10] Paul Roetzer: Basically, it's not exactly what I was envisioning, but I know I'm not the first person to think of this, and I'm sure there are developers working on it. But no, I haven't seen anything yet where someone's like, yep, built it, here it is, here's the Chrome extension, go plug it in. But it seems inevitable that it's going to be baked into these applications and into the language models themselves.
[00:18:29] Mike Kaput: So we kind of talk about prompting, as cool as it is, as still this friction point for businesses, right? Yeah. It's like limiting our possible output even if you're okay at it, and especially as we roll these tools out to people that have no real idea what prompting or prompt engineering is. What happens when we remove that friction point?
[00:18:49] Mike Kaput: Like, what are some of the benefits? How does that change things?
[00:18:52] Paul Roetzer: I think more people realize the power of these models way faster, because, again, like the example I gave: if I go into DALL-E right now, or Midjourney even, and I try and create something in it, I'm not a designer. Like, I'm not going to get the same value out of it that a designer would.
[00:19:08] Paul Roetzer: And so, that to me is, if they make it so that I can prompt at the same level as a great designer, well, now I'm all over that tool, because now I can use it with much greater immediate value to me as a user. And so I think you're going to see adoption rates skyrocket for these tools, and also the value people get out of the tools will become greater. But it'll also then trigger the impact on the workforce
[00:19:32] Paul Roetzer: and knowledge workers and creative workers much, much faster, which goes back to the importance of having these really hard conversations and getting the government involved, because it's going to have a really quick impact. But I think that's the major takeaway for me: if these companies can build this capability in there, then the utilization rates will skyrocket for the tools and people will get way more value from 'em.
[00:19:54] Paul Roetzer: That's awesome.
[00:19:56] Mike Kaput: So, our third big topic today is one that, you know, I think we both agree is absolutely critical to understanding what's possible with AI, but I think has maybe flown a little bit under the radar in terms of the implications here. Bloomberg actually just announced they're developing a new large-scale generative AI model that is specifically trained on a wide range of financial data.
[00:20:21] Mike Kaput: So basically, they're layering ChatGPT over Bloomberg's proprietary data. And for anyone who's not familiar with financial services, Bloomberg terminals cost like 25 grand per user per year to license. They sit on top of a large amount of proprietary data that Bloomberg has been collecting and refining for almost
[00:20:43] Mike Kaput: probably 30 years now. Forty years, they said. Yeah. So basically, Bloomberg has one of the best financial data sets in the world, and now they're layering on top of it robust models like ChatGPT and GPT-4-derived models to actually create essentially a financial copilot for people in financial services.
[00:21:08] Mike Kaput: This is a really interesting and notable example of applying existing models and tools and customizing them to your own proprietary and custom data. Why does that
[00:21:20] Paul Roetzer: matter? This is, we've talked about this one before, too: this idea that these customized, personalized models are where this was all going to go.
[00:21:27] Paul Roetzer: And so BloombergGPT is just one of the best examples I've seen, done at a large scale. I think that what it represents is what's going to happen in every industry. You're going to have these verticals that, you know, I said in the thing, like, imagine the same idea applied to manufacturing, healthcare, insurance, law, retail, education.
[00:21:46] Paul Roetzer: Like, if you have a proprietary data set, you have a chance to build a customized language model. So if you think about GPT-4, it's a general language model. It's trained on a corpus of knowledge from the internet and whatever their data sources are. But your data at your insurance company or your healthcare system, or whatever it is, that's private data. They don't have access to that.
[00:22:06] Paul Roetzer: It cannot train on that information. So the organizations that have massive amounts of proprietary data that's organized in a way that can be used to train or tune these models, no one else is going to have that. And so if you have that kind of data and you can build a model that is trained on it, then not only can you build a much more interactive and engaging internal knowledge base.
[00:22:30] Paul Roetzer: So almost imagine, like, think about some use cases here. You know, if you build a knowledge base for all your like FAQs for your sales team, your service team, your marketing team, your ops team, like it's all just living on a server somewhere. And they have to go query it and they have to search for it.
[00:22:43] Paul Roetzer: And then they gotta find an article and they gotta read an article. Like, it's the traditional way of finding and consuming information. Now imagine all of that private data has been used to train a language model that you can interact with the same way you would interact with, like, a ChatGPT, where you just go in and query it and you say, hey, what happened with this client in September of 2022?
[00:23:03] Paul Roetzer: Who was on that team? What was the issue that came up? Like, anything you can think of. And it's all there. It's all been trained into this model. And you can now ask it whatever you want about analytics, about customer service, about operations in the organization. And it gives you a narrative response, versus sending you off to go find links and click back and forth.
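The workflow Paul describes, querying private company data in natural language instead of searching for and reading documents, can be sketched with a simple retrieval step. The documents, names, and word-overlap scoring below are hypothetical stand-ins; real systems typically embed the documents and have a language model compose the narrative answer from whatever is retrieved:

```python
# Toy sketch of querying an internal knowledge base in natural language.
# Each private document is scored by word overlap with the question and
# the best match is returned. Everything here is a stand-in; production
# systems use embeddings plus a language model to write the answer.

DOCS = {
    "client-acme-2022-09": "September 2022: Acme renewal at risk, billing "
                           "dispute escalated. Jordan and Priya on the team.",
    "ops-faq": "Refund requests over 500 dollars require manager approval.",
}

def answer(question: str) -> str:
    """Return the most relevant document for a natural-language question."""
    q_words = set(question.lower().split())
    def overlap(text: str) -> int:
        return len(q_words & set(text.lower().split()))
    best_id = max(DOCS, key=lambda doc_id: overlap(DOCS[doc_id]))
    return f"From {best_id}: {DOCS[best_id]}"

print(answer("What happened with the Acme client in September 2022?"))
```

The point of the sketch is the shape of the interaction: one question in, one narrative answer out, with the proprietary data doing the work.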
[00:23:24] Paul Roetzer: And that's the kind of thing we're looking at here. So Bloomberg talked about, you know, what I said they're going to use it for: improving existing financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification, and question answering, among others. They talked about how there's going to be internal uses for this thing.
[00:23:41] Paul Roetzer: There's going to be some value to customers that could, you know, benefit from this. So you can think about taking all of your knowledge, all of your data, and having it trained on a specific model that only your team can access, or an interface you could create that could be public-facing for people to interact with, you know, data you're willing to make public.
[00:24:01] Paul Roetzer: But the point is, you own that data. No one else can build a model like yours. And that, I think, is the real key here. You're seeing Bloomberg as a very high-profile example of a company that knew they had a ton of data, they had the right team in place to envision what was possible with it, and they built their own language model using that data that no one else has.
[00:24:21] Paul Roetzer: And I think we're going to see a flood of these things this year. You're going to see massive amounts of this. So it's just something to keep an eye on. And then the final thing I said in that post was, if you work at an organization that has unique and valuable data sets, you should be racing to explore this.
[00:24:36] Paul Roetzer: . So if you work at a big enterprise, you should be talking to the cio, or if you are the cio o you should be like talking to people who know what to do with this stuff. Go get a language model company like Cohere and bring 'em in and say like, what do we do? So, yeah, I just, I think this is going to be a huge story moving forward this year.
[00:24:52] Paul Roetzer: We're going to see a lot of examples like this.
[00:24:53] Mike Kaput: Yeah. So we're not even talking here about eventually having FinancialServicesGPT or InsuranceGPT. It's insert-your-company-name-here GPT.
[00:25:04] Paul Roetzer: Yeah, I think it'll be both. I think you're going to have vertical-specific models, where people may curate a bunch of data sources and license that data to train a model.
[00:25:16] Paul Roetzer: So you could see groups or associations play in that space. Or you're going to see just big enterprises build their own models, and it might be both. Like, you're just going to have language models everywhere.
[00:25:27] Mike Kaput: A really interesting example of this, to hammer home the value of custom data, is Seth Godin, who is a legendary marketer. I mean, everyone in the marketing industry is familiar with Seth Godin's work.
[00:25:41] Mike Kaput: For 20 years he's written a mega popular blog with his thoughts on marketing and sales. He actually announced the other day that he has trained a version of ChatGPT on all 5 million words of his blog. So you can go into the tool and essentially ask questions of Seth, or at least Seth's public writing.
[00:26:05] Paul Roetzer: So is that SethGPT?
[00:26:07] Paul Roetzer: Yes. Seth's got to call it that.
[00:26:10] Mike Kaput: And he is treating it as an experiment. I'm not sure there's anything beyond it being an experiment, but it really proves out this idea that custom training models on your own data is going to be the way of the future. Do you see that happening for individuals as well?
[00:26:29] Paul Roetzer: Oh yeah. I mean, I think every journalist, every author, every podcaster, anybody who creates a corpus of knowledge of their own information. Right now we're doing this in text, but just imagine being able to feed it video as well, or the transcripts. So yeah, I mean, I have tons of friends in the marketing industry who are authors and podcasters who've been doing it for 20 years.
[00:26:53] Paul Roetzer: I mean, like Seth, you have 5 million words. So yeah, you can train really cool chat models that are just trained on your data. Now, the trick here is, if all of Seth's data was already publicly available, there's a pretty decent chance that OpenAI sucked that up when they were training GPT-4. So is it really
[00:27:12] Paul Roetzer: any more valuable than, you know, just going into ChatGPT and asking a question? I don't know. I haven't tested it. But I think that, again, it drives home the point of proprietary data that people don't have access to train on, versus data that is just out on the open web and was probably already ingested into training data.
[00:27:29] Paul Roetzer: But yes, I think corporations will do this, and I think individuals will do this, too, where you have these personal bots, and either they're used for your own internal sake, like you're just trying to query. I mean, imagine if you and I had this. I've written three books that combined have, what, 150,000 words.
[00:27:46] Paul Roetzer: We've written a thousand blog posts. We've done, what is this, the 41st episode of the podcast? 400 videos. Like, if I could just dump all that, just hit upload on all that stuff and load it all in, then I could just start talking to that. Like, what did I say when I was here?
[00:28:04] Paul Roetzer: What did I say about this topic? Or what have I said previously? That would have such utility to me. Like, I would use that all the time. And I think that we're heading there. Like, I'm fairly confident that that's a 2023 thing for a lot of organizations. You'll be able to build that kind of stuff.
[00:28:19] Paul Roetzer: Like this.
[00:28:21] Mike Kaput: You heard it here first. PaulGPT is coming.
[00:28:23] Paul Roetzer: Yeah, gotta get that. Uh, oh man, I was going to say the URL. Don't buy that URL. I'm going to go buy it before we publish this, just in case.
[00:28:30] Mike Kaput: Yeah, good call. So this actually does segue really well into some of our rapid-fire topics, because another high-profile individual who's experimenting with ChatGPT is a guy named David Sacks, a well-known VC, investor, and entrepreneur.
[00:28:39] Mike Kaput: He's been involved with some major tech companies over the years. And notably, he is one of the four co-hosts of the All-In Podcast, which is one of the top business podcasts in the world. It has several billionaires, or at least hundred-millionaires, who are investors, VCs, and some of the top people in business today, kind of riffing on the week's topics. And this past episode, which I believe dropped on Friday,
[00:29:10] Mike Kaput: was actually almost all about artificial intelligence. And so they talked about David Sacks's experiments with ChatGPT. He was using it to cut down the time it took him to write a post from, he said, about a week to about a day. And it's a very in-depth post on startup advice that he used ChatGPT to help write.
[00:29:32] Mike Kaput: So that was quite impressive as a use case. But I think another thing that came up during that conversation was this crazy Reddit post about AI and careers that is making the rounds, and unfortunately it's not positive. So the All-In hosts talked about this Redditor, who is a 3D artist at a small games company, and I found the post this past week.
[00:29:59] Mike Kaput: And in it he says the following: My job is different now. Since Midjourney version five came out last week, I am not an artist anymore, nor a 3D artist. Right now, all I do is prompting, Photoshopping, and implementing good-looking pictures. The reason I wanted to be a 3D artist in the first place is gone.
[00:30:19] Mike Kaput: I wanted to create form in 3D space, sculpt, create with my own creativity, with my own imagination. It came overnight for me. I had no choice, and my boss also had no choice. I am now able to create, rig, and animate a character spit out from Midjourney in two to three days. Before, it took us several weeks in 3D.
[00:30:37] Mike Kaput: The difference is, I care, he does not, meaning his boss. For my boss, it's just a huge time and money saver. Now, this person also mentioned that one of their colleagues, who in their estimation produces slightly lower quality work than them, has embraced the technology and as a result is producing comparable work now, and is getting all the praise from their superiors for using AI.
[00:31:01] Mike Kaput: What did you think about seeing this and hearing the commentary on All-In around it? There were definitely some differing perspectives on whether this is eventually positive or negative for the artist. What were your thoughts?
[00:31:14] Paul Roetzer: It's inevitable. I mean, this is what we've talked about for the last year since DALL-E came out: you're taking away the thing that makes people feel fulfilled in their life.
[00:31:26] Paul Roetzer: Like, artists don't necessarily want efficiency. They want to create. They want to, like this person's saying, use their own creativity and imagination. It's not about how many characters can I build for this video game in two days. It's the process of building and imagining that makes them love what they do.
[00:31:48] Paul Roetzer: One perspective on this is, well, now you can do 10x the work. Like, you can make 10 video games in the time it would take to make one. And the artist is going to say, yeah, but I'm not actually making anything. I'm just prompting the AI to make the thing. So the thing I enjoyed about this is gone. And even at our writers' summit last week,
[00:32:09] Paul Roetzer: I ended my keynote with that idea. For every industry, every profession: what is going to be lost? What is going to be gained? And when? And in this case, what was lost is the thing that this person loved. What is gained is productivity. But is that enough for this person? That's the question.
[00:32:27] Paul Roetzer: and when is like right now, like it happened overnight as this person said, this is hard. Like it's, it's really hard. I think I said in, you know, my keynote last week, like, I wish I. Better guidance for people, or I had better answers for people. I think we're all going to have to figure this out together, and I don't know exactly what this person's career path becomes like.
[00:32:55] Paul Roetzer: I don't know. If you don't accept the technology and what it does, and that your role is evolving into this prompting role, I'm not sure what that means yet. And so I just think we're going to hear lots and lots of stories like this. We're going to hear from writers, designers, illustrators, video producers, architects. Like, it's going to come really fast.
[00:33:19] Paul Roetzer: . and I, that's why I said, like, I want to believe that the net positive is going to be there in the end and like, The All In podcast guys talked about every time there's this like, you know, major innovation, that new jobs emerge and it happens. And I believe that, and I am, you know, if you look back at history, every major technological innovation, disruptive innovation that reset, you know, industries, they talk about farming as an example.
[00:33:44] Paul Roetzer: . People found other jobs like job market continued to grow. I just don't know that there's ever. A technological revolution that happened in four months. And yes, I know AI is 80 years old and like this has been kind of progressing, but realistically for most people, they had no idea what was going on with AI up until November 30th of last year.
[00:34:07] Paul Roetzer: And now all of a sudden this stuff's coming for knowledge work and creative work, and nobody was ready for it. And so I just, I don't know. I mean, we need a lot more dialogue. We need a lot more thinking about this, because there's no clear path. And most technologists just assume the world will do what it always has done, which is find new jobs for people and, you know, new roles will emerge.
[00:34:34] Paul Roetzer: And again, I truly want to believe that. I just don't know how quickly it's going to happen.
[00:34:43] Mike Kaput: Definitely an important topic, and one we'll certainly address further on future episodes as this kind of evolves, and hopefully give people some good questions to ask and some good paths to go down as they explore this on their own.
[00:34:57] Mike Kaput: Yeah,
[00:34:57] Paul Roetzer: and we'll, we'll start like at some point we're going to probably start doing, I don't want to like overcommit us, but we'll probably start adding like a second episode of this podcast each week where we start bringing in people to talk, go deeper on topics. Because again, Mike and I aren't experts on every one of these topics we're talking about.
[00:35:12] Paul Roetzer: A lot of times we're just surfacing information, providing a perspective, hopefully an insightful perspective, for you to help start forming your own opinions. But some of these topics just need people to come in. I mean, that's why we did the AI for Writers Summit. It's like, I don't know, we gotta go deep on this one.
[00:35:27] Paul Roetzer: And we needed lots of perspectives and so we think we'll do more stuff like that to try and keep this conversation going and help people find some answers. Amen.
[00:35:39] Mike Kaput: So next up is that we actually heard that Replit, which creates a cloud software development platform and is also the creator of Replit Ghostwriter, an AI copilot for developers, is actually teaming up with Google Cloud.
[00:35:54] Mike Kaput: So this seems like a pretty direct response to the fact that Microsoft owns GitHub, which has its own Copilot that is popular among developers and being used to actually generate code and increase developer productivity. What does this mean to you for the market at large? I mean, are we going to kind of see this programming copilot arms race start developing?
[00:36:17] Paul Roetzer: Yeah, developer jobs are going to go away fast. Like, it's going to be crazy. It's getting really good. Replit's a company to watch. I think more than anything, it's just a company to get to know. I met Amjad, the CEO and co-founder, about a month or so ago in February. Really smart guy, driven, been at this since 2012 trying to do this, and the company's taken off.
[00:36:39] Paul Roetzer: I would just keep an eye on Replit. It's a really interesting company.
[00:36:43] Mike Kaput: Yeah. And kind of back to our employment conversation, it's crazy how a decade ago every headline was screaming at you that the best job to go into was computer programming, in terms of earnings, in terms of, you know, future career potential. Not to say that'll all go away, but boy, it's changing.
[00:37:00] Paul Roetzer: That's for sure. It's going to change faster than design and writing. I mean, I think coding's going to be the first one that's going to get just massive impact from this. It's going to happen really, really fast.
[00:37:11] Mike Kaput: All right. Last but not least, on March 25th, Lex Fridman, the host of the mega popular Lex Fridman podcast, of which I know we are both big fans, Paul.
[00:37:21] Mike Kaput: . He interviewed OpenAI, CEO, Sam Altman, so they had a almost two and a half hour conversation. They talked about everything from Agi I to Elon Musk's, developing beef with OpenAI to the company's work to build powerful but safe, ideally, AI systems. It was a really interesting glimpse into kind of how Altman's mind works, how he thinks about this stuff, and how, at least his position on some of the opportunities and challenges ahead regarding ai.
[00:37:49] Mike Kaput: Especially now that we have the letter, now that we have a lot of these concerns. I know you had some thoughts on this conversation. What were your takeaways? What jumped out at you?
[00:38:00] Paul Roetzer: One, I think everybody should listen to it. I mean, it's two hours, but it's worth it. Again, we've talked about Sam, we've talked about the importance of OpenAI to the future of society and business, and I think people need to understand where he's coming from, agree with him or not.
[00:38:14] Paul Roetzer: There's going to be lots you don't agree with. There's going to be some stuff you do. Two things jumped out to me. One: Lex asked, is GPT-4 AGI? Like, is it an early form of AGI? And I'm not so sure that Sam knows. Like, that's the thing that came across to me first, because he actually turned it on Lex.
[00:38:31] Paul Roetzer: He's like, well, do you think it is? And I think part of it is he was just curious about Lex's opinion, but he's not super clear on what they actually think they've created. And it's just fascinating to hear him explain the guardrails they've put in place, why they put them in place, why they're concerned about AGI.
[00:38:53] Paul Roetzer: I just think it's good for people to listen to and form their own perspective. The part that I rewound and listened to three different times, though, was him describing his role as the CEO of the company. So if we accept that Sam is this really influential person who's dictating the future of AI and potentially society and the economy and everything else.
[00:39:11] Paul Roetzer: He said, I'm not so sure I'm the right person for this. And Lex said, like, why? And he's like, well, there are just things, you know, I have flaws. And he explained a couple, but the one that really stuck out to me is he said, I think I'm pretty disconnected from the reality of life for most people. Trying to not just empathize with but internalize the impact AGI is going to have,
[00:39:31] Paul Roetzer: I probably feel that less than other people in the world would. And so just the fact that this guy, who is pivotal in this whether he wants to be or not, doesn't feel he's able to empathize with people and with the role it's going to have in taking their jobs and changing society, is a terrifying thought to me.
[00:39:51] Paul Roetzer: And all I could think is, I really, really, really hope that he has leaders around him who do have that ability. Because if these decisions are being made in a vacuum by people who can't relate to society and the average person, then we've got big problems. And I'm more bullish on the need for the government to get involved, knowing that, than I was before.
[00:40:15] Paul Roetzer: And so I would say, just listen to that. And Cade Metz, our friend at the New York Times, did a phenomenal profile on Sam also last week. So you can really start to get a feel for Sam. And I think it's important that people keep a close eye on him and OpenAI, because it's going to affect where this all goes.
[00:40:34] Mike Kaput: Wow. And on that note, Paul, I want to thank you for the awesome analysis, as always. There's a lot going on.
[00:40:40] Paul Roetzer: No coughing fits either, we made it through. Thank you, everyone, for dealing with my raspy voice. All right, well, we'll be back. We're going to actually record a little bit early because I'm on vacation next week.
[00:40:51] Paul Roetzer: So Mike and I are going to do one at the end of the week. We will be back, we will have one for you next Tuesday, as always. So yeah, everybody have a great week, and we'll talk to you next time.
[00:41:01] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.
[00:41:23] Paul Roetzer: Until next time, stay curious and explore AI.