<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">


[The Marketing AI Show Episode 49]: Google AI Ads, Microsoft AI Copilots, Cities and Schools Embrace AI, Top VC’s Best AI Resources, Fake AI Pentagon Explosion Picture, and NVIDIA’s Stock Soars


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


Paul and Mike sat down on the Friday before Memorial Day weekend to record this week's episode of The Marketing AI Show. Only seven days pass between recordings, but AI is moving so fast that this episode still delivers a remarkable volume of news, and a reminder of how quickly things continue to change.

Listen or watch below, then keep scrolling for the show notes and the transcript.

This episode is brought to you by MAICON, our 4th annual Marketing AI Conference, taking place July 26-28, 2023, in Cleveland, OH.

Listen Now

Watch the Video

Timestamps

00:04:44 — Google Introduces AI-Powered Ads

00:09:47 — Microsoft Rolls Out AI Copilots and AI Plugins

00:17:05 — Cities and Schools Embrace Generative AI

00:22:31 — AI Resources from Andreessen Horowitz

00:25:49 — DeepMind’s AI Risk Early Warning System

00:30:15 — OpenAI’s Thoughts on the Governance of Superintelligence

00:36:20 — White House Takes New Steps to Advance Responsible AI

00:40:08 — Fake Image of Pentagon Explosion Causes Dip in the Stock Market

00:44:01 — Meta’s Massively Multilingual Speech Project

00:46:18 — Anthropic Raises $450M Series C

00:48:39 — Figure Raises $70M Series A

00:50:30 — Sam Altman’s Worldcoin Raises $115M

00:54:07 — NVIDIA Stock Soars

Summary

Google Introduces AI-Powered Ads

Google just announced brand-new AI features within Google Ads, from landing page summarization to generative AI that helps create relevant, effective keywords, headlines, descriptions, images, and other assets for your campaign. Conversational AI will be able to help with strategy and improving ad performance. Paul and Mike also discuss Google's new Search Generative Experience (SGE) and the company's continued focus on AI principles.

Microsoft Rolls Out AI Copilots and AI Plugins

Two years ago, Microsoft rolled out its first AI "copilot," or assistant, to make knowledge workers more productive. That copilot paired with human programmers on GitHub to assist them in writing code. This year, Microsoft introduced other copilots across core products and services, including AI-powered chat in Bing, Microsoft 365 Copilot (which offers AI assistance in popular business products like Word and Excel), and others across products like Microsoft Dynamics and Microsoft Security. Now, the company has announced Windows Copilot, with availability in Windows 11 starting in June.

Cities and Schools Embrace Generative AI

We see some very encouraging action from schools and cities regarding generative AI. According to Wired, New York City Schools have announced they will reverse their ban on ChatGPT and generative AI, citing “the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” Additionally, the City of Boston's chief information officer sent guidelines to every city official encouraging them to start using generative AI to understand its potential. The city also turned on Google Bard as part of the Google Workspace tools that all city employees have access to. It’s being termed a “responsible experimentation approach,” and it is the first policy of its kind in the US.

AI Resources from Andreessen Horowitz

Andreessen Horowitz recently shared a curated list of resources, their “AI Canon,” they’ve relied on to get smarter about modern AI. It includes papers, blog posts, courses, and guides that have had an outsized impact on the field over the past several years.

DeepMind’s AI Risk Early Warning System

In DeepMind's latest paper, the team introduces a framework for evaluating novel threats (such as misleading statements, biased decisions, or repeating copyrighted content), co-authored with colleagues from the University of Cambridge, University of Oxford, University of Toronto, Université de Montréal, OpenAI, Anthropic, the Alignment Research Center, the Centre for Long-Term Resilience, and the Centre for the Governance of AI. DeepMind's team is looking ahead: "as the AI community builds and deploys increasingly powerful AI, we must expand the evaluation portfolio to include the possibility of extreme risks from general-purpose AI models that have strong skills in manipulation, deception, cyber-offense, or other dangerous capabilities."

OpenAI’s Thoughts on the Governance of Superintelligence

Sam Altman, Greg Brockman, and Ilya Sutskever recently published their thoughts on the governance of superintelligence. They say it's a good time to start thinking about it, since it's not inconceivable that we'll see superintelligence within the next ten years. They argue that proactively mitigating risk is critical, and that superintelligence will require special treatment and coordination.

White House Takes New Steps to Advance Responsible AI

Last week, the Biden-Harris Administration announced new efforts that “will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals’ rights and safety and delivers results for the American people.” This includes an updated roadmap to focus federal investments in AI research and development (R&D), a new request for public input on critical AI issues, and a new report on the risks and opportunities related to AI in education. In addition to these new announcements, the White House hosted a listening session with workers last week to hear firsthand experiences with employers’ use of automated technologies.

Fake Image of Pentagon Explosion Causes Dip in the Stock Market

A fake image purporting to show an explosion near the Pentagon was shared by multiple verified Twitter accounts on Monday, causing confusion and leading to a brief dip in the stock market. Local officials later confirmed no such incident had occurred. The image, which bears all the hallmarks of being generated by artificial intelligence, was shared by numerous verified accounts with blue check marks, including one that falsely claimed it was associated with Bloomberg News. Based on the actions and reactions of the day, are we unprepared for this technology?

Meta’s Massively Multilingual Speech Project

Meta announced its Massively Multilingual Speech (MMS) project, which combines self-supervised learning with a new dataset that provides labeled data for over 1,100 languages and unlabeled data for nearly 4,000 languages. Meta is also publicly sharing its models and code so that others in the research community can build upon the work. Meta says, "Through this work, we hope to make a small contribution to preserve the incredible language diversity of the world."
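If you want to experiment with the released models, here is a minimal sketch of MMS speech-to-text, assuming the Hugging Face transformers port of Meta's checkpoints; the model ID, audio file name, and decoding details below are assumptions based on the standard Wav2Vec2 API, not code from Meta's announcement:

```python
# pip install transformers torch torchaudio
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"  # assumed checkpoint name for illustration
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a local recording and resample to the 16 kHz mono audio
# that Wav2Vec2-style models expect.
waveform, sample_rate = torchaudio.load("speech.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
waveform = waveform.mean(dim=0)  # downmix to mono if stereo

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding back to text
predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```

The released checkpoints reportedly cover their 1,100-plus languages through per-language adapters, so in practice you would also select a target language; the sketch above assumes the default English configuration.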

Anthropic Raises $450M Series C

Anthropic raised $450 million in Series C funding led by Spark Capital with participation from Google, Salesforce Ventures, Sound Ventures, Zoom Ventures, and others. The funding will support Anthropic’s continued work developing helpful, harmless, and honest AI systems—including Claude, an AI assistant that can perform a wide variety of conversational and text-processing tasks.

Figure Raises $70M Series A

Figure plans to use the $70M Series A to accelerate robot development, fund manufacturing, design an end-to-end AI data engine, and drive commercial progress.

Sam Altman’s Worldcoin Raises $115M

OpenAI Chief Executive Sam Altman has raised $115 million in a Series C funding round led by Blockchain Capital for a cryptocurrency project he co-founded. The project, Worldcoin, aims to distribute a crypto token to people "just for being a unique individual." The project uses a device to scan people's irises to confirm their identity, after which they receive the tokens for free.

NVIDIA Stock Soars on Historic Earnings Report

Nvidia's stock had already more than doubled this year as the AI boom took off, but the company blew past already-high expectations last Wednesday in its earnings report. Dependency on Nvidia is so widespread that Big Tech companies have been working on developing their own competing chips, much as Apple spent years developing its own chips so it could avoid relying on, and paying, other companies to outfit its devices. Google has been building its own "Tensor Processing Units" for several years, and both Microsoft and Amazon have programs to design their own as well.

As you can see, last week was a busy week in the world of AI! Tune in to this lively and fast-paced episode of The Marketing AI Show. Find it on your favorite podcast player and be sure to explore the links below.

Links referenced in the show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I do believe that he believes that what they build next is going to have a major impact on society. And I believe he truly is trying to prepare society for this. And so I want to assume that what he is doing is largely truly for the good of humanity and society. And so I think when he's saying these things that I don't know that he really has too many underlying motives other than he actually really believes this is very important that we get this right.

[00:00:30] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:50] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:59] Paul Roetzer: Welcome to episode 48 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host as always, Mike Kaput, Chief Content Officer at Marketing AI Institute and co-author of our book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business. Today's episode focuses on AI

[00:01:19] Paul Roetzer: government regulation and oversight. We have a lot to get into, man. This episode is brought to us by the Marketing AI Conference, July 26th to the 28th in Cleveland. Tickets are selling fast, so join us in Cleveland. It is going to be the biggest MAICON by far, based on current ticket sales. We're going to explore AI and marketing, experience AI technologies, and engage with other forward-thinking marketers and business leaders.

[00:01:49] Paul Roetzer: And really give you a chance to kind of dive in and accelerate your AI learning journey. So hopefully you can join us in Cleveland. It is at the Cleveland Convention Center, right across from the Rock and Roll Hall of Fame, Cleveland Browns Stadium, and Lake Erie. And we'd love to see you. So it's maicon.ai.

[00:02:05] Paul Roetzer: It's M-A-I-C-O-N dot ai. We cannot wait to see you there. And today we have, I guess, kind of a special edition show. There was a lot going on in Washington last week, with meetings on Capitol Hill, Senate meetings, and other hearings. And we are going to try and dissect this the best we can for you. Our three main topics are all going to be in this area, because there was a lot going on.

[00:02:34] Paul Roetzer: So, Mike, I'm going to turn it over to you, and let's see if we can get through this in a reasonable amount of time.

[00:02:38] Mike Kaput: Sounds great, Paul. Yeah, like you mentioned, this past week artificial intelligence came to Washington in a big way. So first up, OpenAI CEO Sam Altman appeared before Congress. It was his first-ever testimony in front of Congress, and he spoke at a hearing called by Senators Richard Blumenthal and Josh Hawley. The topic was how to oversee and establish safeguards for artificial intelligence.

[00:03:07] Mike Kaput: So this hearing lasted nearly three hours, and it did focus largely on Altman and OpenAI, though IBM executive Christina Montgomery was there, as well as Gary Marcus, a leading AI expert, academic, and entrepreneur; they both also testified. Now, during the hearing, Altman covered a ton of different topics, including a discussion of different risks posed by AI and what should be done to address those risks,

[00:03:35] Mike Kaput: as well as how companies should be developing AI technology. And what was really interesting is Altman even suggested that AI companies be regulated, possibly through the creation of one or more federal agencies or, controversially, some type of licensing requirement. Now, this hearing, like most things in our politics today, was divisive.

[00:03:59] Mike Kaput: Some of the experts applauded what they saw as much-needed urgency from the federal government in tackling these important safety issues with AI. Others, however, criticized the hearing for being way too friendly, and they cited some worries that companies like OpenAI are now angling to have undue influence over the regulatory and legislative process.

[00:04:24] Mike Kaput: Now, we should also note, if you're unfamiliar with congressional hearings in the United States, this hearing just appeared to be informational in nature. It wasn't called because OpenAI is in any sort of trouble. And it appears to be just the first of many such hearings and committee meetings on AI happening moving forward.

[00:04:44] Mike Kaput: So Paul, like you mentioned, we're going to do something slightly different in this episode. We're going to tackle this hearing from three different angles as our three main topics today, and we're also going to talk through a series of lower-profile but very important government meetings on AI that occurred at the same time.

[00:05:05] Mike Kaput: So first we'll kind of deep dive into what actually happened at the Altman hearing, what was discussed, and what that means for marketers and business leaders. We're then going to take a closer look at some big issues in AI safety that were discussed during the hearing. And last but not least, we'll talk through the regulatory measures

[00:05:25] Mike Kaput: being considered and mentioned during the hearing, and what dangers there are, if any, of AI companies kind of tilting the regulatory process in their favor. And as part of that, we'll also run through exactly what went down in these other meetings on AI that were held at the federal government level last week.

[00:05:45] Mike Kaput: So Paul, before we dive into the details of the Altman hearing, can you contextualize how significant this hearing was?

[00:05:54] Paul Roetzer: Yeah, I'll preface this by saying Mike and I are not experts on this stuff. Like, this is above our pay grade in terms of how the government bodies work, how the laws of the land work.

[00:06:05] Paul Roetzer: And really, we want to dedicate this episode to raising awareness about this, offering some perspective, and trying to give some context to what's going on, based on our perception, knowing the players involved, and different things like that. But this is a really important area, and I do think part of our effort here is to surface it for everyone and make sure everyone is paying attention, and that you find the people who are the true experts in the different related areas here and follow along as this develops, because it's going to impact all of us.

[00:06:38] Paul Roetzer: So, all that being said: in earlier episodes, and I'd have to go back and share which episodes, I remember saying multiple times that Altman's going to have his day in front of the Senate. Like, he'll have a Zuckerberg moment, I think is what I called it. And here we go. Here we are. It was like two months later.

[00:06:54] Paul Roetzer: So it came a little faster than I expected. My overall take is I would not expect much action in the near term as a result of these hearings. I think what's happening, and this is not meant to be cynical, I think this is realistic, is that both sides of the political spectrum in the United States right now are trying to figure out

[00:07:16] Paul Roetzer: what is going on, trying to understand this technology, and trying to figure out how the public will react to these different elements, because they have to win votes next year. And so they're trying to decide: is AI a hot-button issue in the election next year, and what do our voters care about?

[00:07:36] Paul Roetzer: And so is it jobs? Is it safety? You know, what are the elements of AI that they need to really dig into and kind of pull that thread on so that they can win votes? So I do believe that there are altruistic reasons why these hearings are happening right now, but I also think they're probably outweighed by political posturing. And regardless of why they're happening, this is important and noteworthy and newsworthy.

[00:08:03] Paul Roetzer: But I do think that these are probably more for show, and for exploration to figure out how this is going to play in the election cycle, than about turning this into new laws in the next, you know, 12 to 18 months.

[00:08:21] Mike Kaput: Gotcha. So there was a lot of ground covered during this hearing, and I would highly recommend people go read, via the links in the show notes, either transcripts or summaries from news outlets.

[00:08:33] Mike Kaput: But in your mind, what were kind of the main takeaways from the actual content of the hearing?

[00:08:41] Paul Roetzer: The government knows it's a major issue. Like, that does become obvious. So again, even if this is for, you know, political posturing and votes and the 2024 election cycle, it's obvious that they are investing a lot of time and energy trying to figure out this topic.

[00:08:57] Paul Roetzer: It also is clear that the tech companies, or at least Sam Altman representing the tech companies, believe they need oversight. Or, again, the cynical take on this is that they know oversight is coming, and they might as well try and take a leadership role in getting that oversight to be in the best interest possible of the tech companies.

[00:09:20] Paul Roetzer: So I think that they're very well aware that, whether they want it or not, it will likely come in some form. So I think they're just pushing for the government to get involved now, before the AI gets way more advanced. I do believe... I don't know Sam Altman personally. I've listened to a lot of interviews with Sam, and he seems like a relatively complicated guy.

[00:09:44] Paul Roetzer: But I do believe that he believes that what they build next is going to have a major impact on society. And I believe he truly is trying to prepare society for this. And so I want to assume that what he is doing is largely, truly for the good of humanity and society. And so when he's saying these things, I don't know that he really has too many underlying motives other than that he actually, really believes it is very important that we get this right.

[00:10:17] Paul Roetzer: So, you know, those kind of jumped out at me. And then the thing I wanted to do was go through a few quick opening thoughts from each of the three players, because I think it helps set the stage. So again, as you mentioned, there were three main people there: Sam Altman, CEO and co-founder of OpenAI; Christina Montgomery, the Chief Privacy and Trust Officer at IBM; and then Gary Marcus,

[00:10:42] Paul Roetzer: who's a professor and author and kind of the antagonist to Yann LeCun on Twitter. Like, these two are at each other every day. It's kind of funny. So, Altman, just a couple of key points. He said the US government might consider a combination of licensing and testing for development and release of AI models above thresholds of capabilities, ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures,

[00:11:11] Paul Roetzer: and examining opportunities for global coordination. So he did come to the table with some specific ideas around what he thought was needed. Christina Montgomery took a slightly different approach. She said IBM urges Congress to adopt a precision regulation approach to AI.

[00:11:29] Paul Roetzer: This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. She went on to say businesses also play a critical role in ensuring the responsible deployment of AI. Companies active in developing or using AI must have strong internal governance, including, among other things, designating a lead AI ethics official responsible for an organization's trustworthy AI strategy, and standing up an ethics board or a similar function as a centralized clearinghouse for resources

[00:12:05] Paul Roetzer: to help guide implementation of that strategy. And then she also mentioned it's a pivotal moment: clear, reasonable policy and sound guardrails are critical. And then Gary Marcus, again, sort of the antagonist to the tech companies, had a few key points here. He said there are benefits to AI, obviously, but we don't know whether they will outweigh the risks.

[00:12:26] Paul Roetzer: Fundamentally, these new systems are going to be destabilizing. He offered some very specific instances where he saw this could occur. He said we want, for example, for our systems to be transparent, to protect our privacy, to be free of bias and above all else to be safe. But current systems are not in line with these values.

[00:12:44] Paul Roetzer: This was interesting because he's basically taking shots at OpenAI while sitting two feet from Sam Altman. He said the current systems are not in line with these values. Current systems are not transparent, they do not adequately protect our privacy, and they continue to perpetuate bias. And even their makers don't entirely understand how they work, which is true.

[00:13:05] Paul Roetzer: Most of all, we cannot remotely guarantee that they're safe, and hope here is not enough. The big tech companies' preferred plan boils down to "trust us." But why should we? The sums of money at stake are mind-boggling. He talks about OpenAI kind of causing this by forcing things out into the market, and Microsoft as well being to blame.

[00:13:25] Paul Roetzer: And then he says that, in turn, forced Alphabet to rush out products and deemphasize safety. Humanity has taken a backseat. AI is moving incredibly fast, with lots of potential but also lots of risks. We obviously need government involved, and we need the tech companies involved, both big and small, but we also need independent scientists, which is kind of his thing.

[00:13:45] Paul Roetzer: So again, the perspectives here were varying. It was an interesting mix of people. I don't know how they picked those three people out of all the people that could have been there. But I think, again, it was just helpful to get the context of what the three main people were saying in their opening statements, which then led to the rest of the hearing.

[00:14:06] Paul Roetzer: So does

[00:14:06] Mike Kaput: this hearing after you've kind of studied it and reviewed it, give you any confidence or any more confidence that we'll see timely and sensible AI regulations from the US government?

[00:14:20] Paul Roetzer: No, I mean, I don't think this hearing does anything for that. It's helpful, hopefully, that the senators were listening.

[00:14:29] Paul Roetzer: You know, I've watched enough Senate hearings in my day to know half the time they're not even in the room when the key questions are being asked. But this one seemed relatively nonpartisan. So, I don't know. There's a part of me that wants to think it's going to do something, but overall, I just don't think this hearing was much more than a starting point.

[00:14:48] Paul Roetzer: I don't think it's going to accelerate anything. But at the end of this conversation today, we're going to talk about the other three hearings that were going on, and those do give me hope that there's actually maybe way more happening behind the scenes than most of us were aware of prior to this.

[00:15:03] Mike Kaput: So there was a big emphasis, like you mentioned, in some of those opening comments on AI safety.

[00:15:10] Mike Kaput: And at one point during this hearing, Sam Altman even said, quote, "My worst fear is we cause significant harm to the world," when he was talking about what can go wrong here. And lawmakers and the AI experts at the hearing cited several different AI safety risks that they're kind of losing sleep over. So there were a handful of common issues that everyone seemed to be concerned about.

[00:15:33] Mike Kaput: And I'm going to list out a few of the main ones and then get your take on these, because they're all important issues in their own right. The first is AI's ability to produce misinformation, generally, but also specifically during election season. Being able to create fake text, images, video, and audio at scale is a huge concern, as well as the ability to emotionally manipulate people consuming this content.

[00:15:59] Mike Kaput: And so there's fears that this could influence the outcome of elections in a negative way, including the upcoming 2024 presidential election in the US. Now, another huge concern is job disruption, or the possibility that AI will cause significant and rapid unemployment. Also discussed were concerns around copyright and licensing: the fear that AI models are being trained on material that is legally owned by other parties and often used without their consent.

[00:16:31] Mike Kaput: We also are worried generally about harmful or dangerous content. So it's not just misinformation, but also generative AI systems producing outputs that actually harm human users. A couple of ways this could happen include hallucination, where the model makes up information and misleads you, or a lack of what we would call alignment, where generative AI is not well trained enough and gives users information that they can use to harm others or themselves.

[00:17:00] Mike Kaput: So, AI that is not aligned with the most beneficial interests of humanity first. Now, underlying all of this is a big overall fear about the pace and scale of AI innovation and our ability to control it. The experts and lawmakers in the hearing do fear, it seems, that without proper guardrails, AI development could move so fast that we release potentially harmful technology

[00:17:30] Mike Kaput: into the world that can't be adequately controlled. Or, you know, in some of the more extreme opinions out there, we might actually create machines far smarter than us that we don't control. That's what's often broadly called artificial general intelligence, or AGI. So Paul, if I'm looking into this hearing and hearing all of the conversation around AI risks and just getting up to speed.

[00:17:56] Mike Kaput: Honestly, I think I'd be having a bit of a panic attack. I might be having a

[00:17:59] Paul Roetzer: panic attack.

[00:18:02] Mike Kaput: Everything I just listed seems like a very significant concern. Now, could you kind of put these in context for us? Like, which ones are the actual clear and present dangers, and which ones are more hypothetical,

[00:18:16] Mike Kaput: things that are concerning but not as immediately impactful right now?

[00:18:21] Paul Roetzer: As I've said on recent episodes, this whole "AI is going to destroy humanity" thing... I mean, I get it. I understand that it makes for great headlines in the media, and it, you know, drives a lot of clicks and views, and I know why the mainstream media would run with these kind of more abstract, long-termist approaches.

[00:18:45] Paul Roetzer: And it makes sense. To me, it's kind of like saying an asteroid could hit Earth and destroy humanity, and it might happen in a hundred million years, or 10 million years, or a million years. It's like, yes, okay, that's good. Like, I'm glad there are scientists at the frontiers solving for asteroids coming. But the reality is, on Earth, we got real problems today.

[00:19:12] Paul Roetzer: Like, we have climate change, we have hunger, we have disease, we have contagions. Like, we have things that I really want scientists working on, and that's kind of how I feel about what is going on here. It's like, yes, okay, I'm glad that Geoffrey Hinton is talking about existential threats to humanity, and some people are thinking about these long-termist views.

[00:19:35] Paul Roetzer: But I would really much rather know that the majority of scientists and the majority of lawmakers are focusing on the things that you just outlined. These are very real. And so I would think about these on a timeline, almost like an x-y axis of the time when they will impact us, when it will occur, and the significance of the impact.

[00:19:58] Paul Roetzer: And so when I look at that, election interference is right at the forefront. I mean, that is at our doorstep. It's already happening, and it's going to get really bad. And that's going to occur over the next, what do we got, you know, 14, 15, 16 months or whatever before the November election in the US.

[00:20:15] Paul Roetzer: So it's going to be, you know, kicking into high gear. So that's real. I think job loss is real in the next six to 12 months. I think we're going to start seeing that impact; we had a whole episode dedicated to that. Disruption to the education system: you know, I think administrators, teachers, professors, we're going to have this summer to kind of regroup

[00:20:39] Paul Roetzer: and figure out what this means going into the next school year. Because it's happening. I'm hearing one-offs from friends whose kids are using it or hearing about it. You're hearing stories about entire classes being failed because the teacher thinks they used AI to do the work. So this is happening, and now we've gotta

[00:20:58] Paul Roetzer: regroup over the summer and figure out how to go into the next school year, the '23-'24 school year, and solve for this. We just saw some great efforts, covered in Wired magazine on Friday, I think it was. I read that the New York City school system sort of pulled back on its ban of ChatGPT, and then the city of Boston came out with this incredible, you know, guidance on generative AI, encouraging agencies and schools to try this stuff.

[00:21:24] Paul Roetzer: So I think that's really important. Bias and discrimination have been there for years, like, you know, in terms of lending, job applications. So that's happening; it's just happening below the radar for a lot of people. And then the thing I think is going to be just a massive issue moving forward is this deceptive and synthetic content.

[00:21:43] Paul Roetzer: I shared this past weekend on LinkedIn a TED Talk with the guy from Metaphysic, I think it is. Is that the name of the company? We profiled them. The Tom Cruise deepfake guys. Yep. And it was a very disturbing talk, honestly. Like, crazy technology. But I mean, how good that tech is getting, how fast, mm-hmm,

[00:22:04] Paul Roetzer: I just really worry about it. So I think the ones you outlined are very real. They're all relatively near term, and no advancements in the technology are needed for all of those things to happen. So again, we're talking about today's technology creating these issues. If we jump ahead a year, two years, three years from now, and the technology is basically doubling in its capabilities every year, it becomes a really overwhelming thing to think about. Which is why it's so important that, whether the government does anything immediately or not, at least they're talking about these things and focusing on the issues I consider the very real near-term ones.

[00:22:47] Mike Kaput: So in the next topic, we're going to discuss some more of the regulatory measures being suggested for AI. But I'm curious: with all the issues we just outlined, are AI companies today doing anything to address them? Like, is that part of the reason for this hearing?

[00:23:10] Mike Kaput: We,

[00:23:10] Paul Roetzer: we've, we've covered these a little bit on the show before, but certainly the tech companies are aware of these dangers and they've had ethical AI teams. Unfortunately, as we've discussed, those ethical AI teams probably aren't playing as much of a role right now. Given the competitive nature of what's going on and the rate of innovation that's occurring, the ethical concerns seem to be putting, becoming secondary within some of these tech companies.

[00:23:37] Paul Roetzer: But, you know, we know that GPT-4, when it came out, was, I think Sam said, about six and a half, seven months old, meaning they spent those months on safety, alignment, red teaming, you know, trying to find the flaws within it, trying to find the harm it could do. They have ethics teams.

[00:23:56] Paul Roetzer: There's Google avoiding releasing in the EU because they don't adhere to some of the EU laws, or they're trying to prevent some new EU laws from going into place. So certainly these organizations are doing things. And again, you want to assume they have the best interest of society in mind, but you can't always believe that, because of competition and capitalism.

[00:24:24] Paul Roetzer: They're not incentivized to prevent this technology from getting into the world. They're basically encouraged to do it, and they're rewarded for it from a stock price standpoint. OpenAI, obviously, isn't publicly traded, but the same applies from a financial perspective. So I just don't know that we can rely on the tech companies.

[00:24:46] Paul Roetzer: I don't think it's enough to assume and to trust these, you know, five to 10 major tech companies in the world who are basically driving AI innovation right now to police themselves. I don't think that's realistic.

[00:25:00] Mike Kaput: So, I am curious: if you had to pick one of these issues or fears to be most concerned about in the near future, which would it be?

[00:25:09] Mike Kaput: And kind of, why would you pick that one? And how does your choice affect, you know, business leaders and professionals?

[00:25:16] Paul Roetzer: I would initially say job loss, because it's the one I've thought most deeply about and have the most conviction around, like my view of what I think is going to happen.

[00:25:30] Paul Roetzer: But now that I'm looking at these things and thinking out loud, election interference is like a threat to democracy. I just really, really worry about it. And this is kind of the catch-22 for politicians: they want to use this technology to win elections, but

[00:25:50] Paul Roetzer: they want to also control it to some degree. But interestingly enough, I think it was last week, OpenAI actually has in their terms that you cannot use this stuff for certain elements of political campaigns and things. And I think they actually caught somebody doing it and shut them down from using the technology for that.

[00:26:09] Paul Roetzer: It was one of, like, the big either agencies or PACs that works for one of the politicians or something; they were using it, and OpenAI shut it down. So, yeah, I don't know. It's going to be interesting, but I do worry greatly about the elections. Yeah.

[00:26:28] Mike Kaput: So as part of the hearing, kind of last but not least, they discussed at length

[00:26:34] Mike Kaput: hypothetical or possible regulatory actions that might be taken. And this conversation actually raised some tough questions. Senate Judiciary Chair Senator Dick Durbin suggested the need for a new agency to oversee the development of AI, and possibly an international agency. One example cited as a model is the International Atomic Energy Agency,

[00:26:59] Mike Kaput: which promotes and enforces the safe use of nuclear technology. Gary Marcus said there should be a safety review to vet AI systems before they are widely deployed, similar to what is used by the FDA before you're allowed to release a drug. He also advocated for what he called a nimble monitoring agency.

[00:27:20] Mike Kaput: And interestingly, kind of on the subject of government agencies, Senator Blumenthal, who has, you know, chaired or been involved in the creation of some of these agencies, cautioned that any agency has to have adequate resources, both money and the appropriate experts on staff, because, he warned, an agency

[00:27:39] Mike Kaput: without those is one that AI companies would, quote, "run circles around." And as part of this overall regulatory discussion, there was a fair share of controversy as well, because at one point Sam Altman suggested having some type of licensing requirements for the development of AI technology. Some of the observers I saw at other AI companies were immediately crying foul over this, because they saw it as a transparent move to engage in what is called, in the industry, regulatory capture.

[00:28:12] Mike Kaput: So that's when, you know, well-funded, powerful incumbents end up influencing laws and regulations in their favor, and also use them to stifle competitors. So it's kind of a tactic, not an altruistic thing. Some other people commenting on the hearing remarked on how cordial it seemed. It was a very far cry from when social media executives went in front of Congress. And they said that some senators appear ready and willing to kind of allow OpenAI to play

[00:28:44] Mike Kaput: a pretty big role in its own regulation. And indeed, you know, Altman met with about 60 lawmakers at a private dinner in the days before the hearing, and he has been engaged for several months in what some have called a charm offensive with lawmakers. So Paul, as you're looking at the proposed regulatory solutions, licensing, possible agencies,

[00:29:06] Mike Kaput: do any of these seem reasonable or feasible to you?

[00:29:11] Paul Roetzer: I could hear any of these being potentially viable. I mean, honestly, depending on who's saying it, it's like, oh, okay, that makes a lot of sense. And then you look at the other side, and it's like, okay, yeah, I understand why that would be problematic.

[00:29:24] Paul Roetzer: One of the things I thought was interesting, I forget which senator asked the question of Altman, but it was something like, would you come and lead it? And he said, I love what I'm doing, sir. Because I think that's one of the challenges here: all of this sounds great, the, you know, create-an-agency idea.

[00:29:39] Paul Roetzer: I've seen the arguments that, yeah, it needs its own agency. And then I've seen other arguments that say, what do we need more agencies for? Let's just administer the laws we already have and apply them to AI. And it's like, oh, okay, yeah, both of those actually make sense. So I would say, for me, it's really too early to form a true

[00:29:57] Paul Roetzer: point of view on this and say, these are the three things I think need to happen. I don't know. Like, I'm just like all of you, kind of processing this information, listening to both sides. You understand everyone has their own agenda, whether it's political or business-wise, and so you always have to take it with a grain of salt:

[00:30:16] Paul Roetzer: who's saying what, and why are they saying it? And then try and kind of filter through. I would say that Aaron Levie, the CEO of Box, who we've mentioned before, tweeted something out, and I thought it captured this pretty well. He said AI regulation will be one of the most complicated and critical areas of policy in the 21st century.

[00:30:33] Paul Roetzer: Move too fast or regulate the wrong aspect, and you squelch innovation or anoint winners too early; move too slow, and inevitable risks emerge. Wild times ahead. That's kind of how I feel: they've gotta do something. I don't know what the answer is. I don't think they're going to find, like, a magic bullet to just put all this in place in the next two years and we're good to go.

[00:30:57] Paul Roetzer: But, I don't know. There are a lot of interesting ideas that I think are worth exploring further, and I just like that they're listening right now. And I think they need to keep listening to the independent scientists, the tech leaders, the ethicists. Like, they really need a lot of diverse perspective.

[00:31:17] Paul Roetzer: And then we need people leading these government committees who we are confident actually understand the technology. And if nothing else, it seems like they're investing a lot of time to try and figure it out.

[00:31:31] Mike Kaput: Yeah, it's pretty easy to dunk on Congress, and often they deserve it. But there were a couple comments during the hearing

[00:31:37] Mike Kaput: where it sounded like they realized they kind of got burned on social media and got caught flat-footed with that type of technology regulation and understanding. So it is heartening, at least, to your point, to see intelligent conversations happening about this. I want to talk really quick about Altman's licensing comments specifically.

[00:31:57] Mike Kaput: Those are getting a ton of attention in kind of the world of AI. Do you see that as a good-faith effort to find a regulatory solution, or is it just as self-interested as some of the critics say

[00:32:10] Paul Roetzer: it is? This is one where I actually believe Altman. Like, I feel like he's genuine here. And again,

[00:32:19] Paul Roetzer: you have to take a lot of things in context to evaluate these. So this is a guy who came from leading Y Combinator. He is a startup champion through and through. Like, he believes in the importance of startups as an economic driver. He believes in entrepreneurship and building companies. Like, that's his background.

[00:32:37] Paul Roetzer: Then he's built this company as a capped-profit company underneath a nonprofit. He's paying himself enough to, like, cover his health insurance. I don't understand that one, but for whatever reason, he's barely even taking a paycheck. He doesn't own any equity in OpenAI.

[00:32:56] Paul Roetzer: Like, there's a lot of things that say this guy is truly trying to solve for this. Like, he has more money than he needs in his life, and probably for generations. Like, he's already good. So if he makes another billion or whatever, it's not going to change his life. And so I want to believe what he's saying at face value. And I think what he was trying to get across with this licensing idea was misconstrued, but he had a follow-up tweet that I thought summarized it pretty well.

[00:33:26] Paul Roetzer: He said: AGI safety is really important, and frontier models should be regulated. Regulatory capture is bad, and we shouldn't mess with models below the threshold. Open source models and small startups are obviously important. So he's basically saying, like, we shouldn't crown the winners now. It shouldn't be Google and Microsoft and OpenAI and a few others and Meta, whatever, and that's it, and nobody else can get in.

[00:33:50] Paul Roetzer: But I do really think that he is not worried about today. He believes they are going to get to AGI in the near future, and he is trying to prepare society and the government for what he believes to be an inevitable outcome. And so it's really hard for all of us to judge what they're trying to do and the ideas they have, because he's seeing years ahead of what we know to be true.

[00:34:19] Paul Roetzer: And he's trying to help put things in place to protect us when that occurs. And so with all of that context, again, I want to believe what he is doing, what OpenAI is doing, is truly an altruistic thing. And I just hope the government gets it right. Mm.

[00:34:40] Mike Kaput: So on that note, I know you've been a bit skeptical of how quickly we'll actually get useful AI regulations. Given everything we've discussed, and some of the other things going on from a regulatory perspective, do you still feel that way?

[00:34:56] Paul Roetzer: Well, again, I don't know that this one is going to do anything, but this is probably a good point to talk about those other hearings that were going on last week. So maybe I'll take a moment and walk through a couple of key points from what else was happening, because these are the things that kind of give me hope that maybe there's way more going on than we're aware of, and maybe things are moving along a little quicker.

[00:35:18] Paul Roetzer: So, the same day as the hearing we've been talking about, there was actually another hearing upstairs in the Senate building. This comes from a Politico article, and honestly, like the other three we're going to talk about, there wasn't much out there about them. Like, we had to do some digging to try and

[00:35:33] Paul Roetzer: figure out what was even talked about in these, so there were very limited resources; we'll link to the few articles that we mention here. But this was the Senate Committee on Homeland Security and Governmental Affairs, and the hearing brought together current and former government officials, academia, and civil society to discuss a bunch of ideas on how the federal government should channel its immense budget toward incorporating AI systems while guarding against unfairness and violations of privacy.

[00:36:01] Paul Roetzer: So it gets into some specific things like supercharging the federal AI workforce, shining a light on federal use of automated systems, investing in public-facing computing infrastructure, and steering the government's billions of dollars in tech toward responsible AI tools. So this is interesting. This is one that jumps out to me.

[00:36:20] Paul Roetzer: Even if there aren't rules and regulations, the government is a major buyer of technology. They can very simply put in place requirements for you to be a vendor to the government. Then, even without laws, it's like, well, we have to abide by the responsible AI guidelines of the government for X, Y, and Z.

[00:36:40] Paul Roetzer: So that's where the government can actually have a much quicker effect. It says Lynn Parker, former assistant director for AI at the White House Office of Science and Technology Policy, suggested each agency should tap one official to be a chief AI officer. I like that idea. Businesses should follow that idea.

[00:36:57] Paul Roetzer: It also says multiple panelists and lawmakers called for boosting AI literacy as a crucial first step toward new AI rules. A hundred percent; we've talked about it on the show. And it says Peters partnered with Senator Mike Braun, Republican from Indiana, on a bill that would create an AI training program for federal supervisors and management officials.

[00:37:17] Paul Roetzer: Love it. There was also significant emphasis on standing up a national AI research resource. The Biden administration envisions this as a sandbox for AI researchers who can't afford the massive computing infrastructure used by OpenAI. That's a great idea. Like, we've talked about how it's really hard to get access to compute power if you're a small player.

[00:37:37] Paul Roetzer: So let's democratize access to these capabilities. It says that through an initial $2.6 billion investment over six years, it would give AI researchers access to powerful computing capabilities in exchange for their agreement to follow a set of government-approved norms. But Congress still needs to sign off on this plan.

[00:37:55] Paul Roetzer: So again, this is like a clearer path to near-term impact, where the government uses its strength and dollars to basically get the industry to follow along with these norms and policies in exchange for either access to compute power as a startup or access to being a vendor to the government. So, you know, that seemed really positive.

[00:38:19] Paul Roetzer: The other one was on Wednesday: the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet. And this one was dealing with copyright issues. And again, this was actually another Politico article. Politico is like the place to go to learn what's actually happening.

[00:38:38] Paul Roetzer: So they aired out some key emerging concerns during the meeting. One of the biggest issues is how to compensate or credit artists, whether musicians, writers, or photographers, when their work is used to train a model or is the inspiration for an AI's creation, which, as we've talked about previously on the podcast, is really challenging right now given the current technology.

[00:38:59] Paul Roetzer: One of the key issues that they pressed on is who should be compensated for all the material and how it would work. Subcommittee Chair Darrell Issa, whose background is in the electronics industry, proposed one mechanism: a database to track the sources of training data. Quote: credit would seem to be one that Congress could mandate, that the database input be searchable, so you know that your work or your name or something was in the database.
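To make the searchability idea concrete, here is a purely hypothetical sketch of what such a training-data provenance registry could look like; the schema, names, and sample data are invented for illustration and are not part of any actual proposal:

```python
import sqlite3

# Hypothetical registry: each row records that a creator's work
# appeared in a named training corpus.
con = sqlite3.connect(":memory:")
con.execute(
    """CREATE TABLE training_sources (
           work_title TEXT,
           creator    TEXT,
           dataset    TEXT,
           source_url TEXT
       )"""
)
con.execute(
    "INSERT INTO training_sources VALUES (?, ?, ?, ?)",
    ("Dusk Over Lake Erie", "Jane Photographer",
     "example-web-corpus-2023", "https://example.com/dusk"),
)

# The searchability described in the hearing: an artist looks up
# their own name to learn whether and where their work was used.
rows = con.execute(
    "SELECT work_title, dataset FROM training_sources WHERE creator = ?",
    ("Jane Photographer",),
).fetchall()
print(rows)  # [('Dusk Over Lake Erie', 'example-web-corpus-2023')]
```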

[00:39:26] Paul Roetzer: So then they said a key question emerging now is: when does the use of an artist's work to train AI constitute fair use under the law, and when is it a copyright violation under current law? So this one certainly starts directly impacting businesses, marketers, artists, things like that. So again, most people have no idea that these conversations are even happening.

[00:39:48] Paul Roetzer: It's a positive development that they seem to be at least asking the right questions. And then the last one, and the most intriguing to me, is one I couldn't find much about, other than a couple of articles, and even then it was hard to get too much information. On Friday, the President's Council of Advisors on Science and Technology held a meeting that apparently included Dr.

[00:40:11] Paul Roetzer: Fei-Fei Li and Demis Hassabis from Google DeepMind, among others. And they were looking at opportunities and risks, to provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly, and safely as possible. So they were looking at how generative AI models can be used for malicious purposes, such as creating disinformation, driving misinformation campaigns, and impersonating individuals.

[00:40:37] Paul Roetzer: So they're looking at how to enable it and what its impact is on society. But interestingly enough, in the kind of summary I found, they actually outlined everything the government is doing. So I'll just take a moment and kind of read this paragraph, because again, it gives me hope

[00:40:53] Paul Roetzer: that there is way more going on than we know about or are hearing in the media every day. So it says: US government agencies are actively helping to achieve a balance. For instance, the White House Blueprint for an AI Bill of Rights lays out core aspirational principles to guide the responsible design and deployment of AI technologies.

[00:41:09] Paul Roetzer: That came out last year; we had an episode about that one. The National Institute of Standards and Technology released the AI Risk Management Framework to help organizations and individuals characterize and manage the potential risks of AI tech. Congress created the National Security Commission on AI, which studied opportunities and risks ahead, and the importance of guiding the development of AI in accordance with American values around democracy and civil liberties.

[00:41:35] Paul Roetzer: The National Artificial Intelligence Initiative was launched to ensure US leadership in the responsible development and deployment of trustworthy AI, and to support coordination of US research, development, and demonstration of AI technologies across the federal government. And in January of this year, the congressionally mandated National AI Research Resource task force, which we mentioned earlier, released an implementation plan for providing computational, data, test bed, and software resources to AI researchers affiliated with US organizations.

[00:42:05] Paul Roetzer: So this presidential council is kind of built to build upon what was already done. And then I'll wrap up here. I thought it was really interesting: they actually asked the public for ideas on generative AI, and they had five questions. I thought the things they were asking just random people to submit ideas for were really interesting.

[00:42:25] Paul Roetzer: So, the first, and again, step back: the reason I think this is interesting is that it gives a lens into the things they're thinking about, that they're obviously building plans for themselves. So this is kind of what the government is focused on here. In an era in which convincing images, audio, and text can be generated with ease on a massive scale, how can we ensure reliable access to verifiable, trustworthy information?

[00:42:51] Paul Roetzer: How can we be certain that a particular piece of media is genuinely from the claimed source? That one is critical. We've talked about the importance of that one, but they don't have an answer. Number two was: how can we best deal with the use of AI by malicious actors to manipulate the beliefs and understanding of citizens?

[00:43:07] Paul Roetzer: 100%. That's the election interference issue. Number three is: what technologies, policies, and infrastructure can be developed to detect and counter AI-generated disinformation? We've talked about that a bunch of times; it seems really hard right now. Google, a couple weeks ago, said they're working on it, and it seems like they're confident they may have ways to do it. To be determined.

[00:43:24] Paul Roetzer: The fourth: how can we ensure that the engagement of the public with elected representatives, a cornerstone of democracy, is not drowned out by AI-generated noise? And then the last was: how can we help everyone, including our scientific, political, industrial, and educational leaders, develop the skills needed to identify AI-generated misinformation, impersonation, and manipulation?

[00:43:49] Paul Roetzer: So I think, in totality, if nothing else, I hope this episode helps people realize there is a lot actually going on in Washington. This is being thought about deeply. They're doing what they should be doing, which is racing to understand the technology and the impacts it's having. And I want to be optimistic here and say that

[00:44:14] Paul Roetzer: these collective efforts will move the needle on safety for US citizens, and hopefully globally. And while I don't expect laws and regulations to emerge immediately, as we discussed, there are a lot of levers the government can pull that don't require the passing of new laws, in addition to what we covered in previous episodes, like the FTC just applying existing laws. So,

[00:44:40] Paul Roetzer: I think, if nothing else, this is a very high-priority topic for the US government, and they appear to be doing a lot of work behind the scenes to figure out what to do next. That's awesome.

[00:44:55] Mike Kaput: Thank you for that roundup. I mean, I think it's extremely important that our audience not only realize how much is going on, but just become aware of the need to stay on top of these kinds of issues because they will affect all of us like we just described.

[00:45:09] Mike Kaput: I want to wrap up here, as if we haven't covered enough ground, with a few rapid-fire topics, just to kind of give people a sense of what else is going on this week in artificial intelligence outside of congressional hearings. So first up is some Google Bard news. If you don't know, Google Bard is Google's response to ChatGPT, and it's rolling out or available in about 180 different countries.

[00:45:38] Mike Kaput: It was a huge focus for Google's recent I/O event, which we discussed in a previous podcast. What's really interesting, though, is that it's actually not available in the European Union, and none of the other generative AI technologies Google has created are available in the EU either. Google has not said why this is the case.

[00:46:03] Mike Kaput: However, some reporting from Wired magazine has a number of experts saying they suspect Google is withholding Bard to send a message that the EU's privacy and safety laws are not to its liking. Paul, what do you make of this?

[00:46:21] Paul Roetzer: A lot of VPNs being used in the EU. Yeah, I put this on LinkedIn, and that was a comment I got from people in Europe: like, yeah, we know how to use VPNs to get around it, basically.

[00:46:35] Paul Roetzer: Yeah, I don't know. It's a really interesting topic. I'll be curious to see if Google officially comments on it at any point. It's interesting for me because I'm heading to Europe in a few weeks for a series of talks, so contextually, you've got to keep in mind when you're having those conversations over there that it's a different world. Again, it follows that law of uneven AI distribution

[00:46:56] Paul Roetzer: that I wrote about, and that we had an episode about: just because the tech is available doesn't mean everyone's going to have access to it or be able to use it. And this is a perfect example, where if you're in the EU, you can't compare these technologies. So I guess follow along on all the Twitter threads comparing them.

[00:47:14] Paul Roetzer: Yeah.

[00:47:16] Mike Kaput: So next up, we saw the launch of the ChatGPT app for iOS. This is the official OpenAI app, replacing all those kind of scammy free ones that were out there trying to give you ChatGPT access. The app is free to use, and it syncs your history across devices. It's also notable that it integrates Whisper, which is OpenAI's open-source speech recognition system.

[00:47:42] Mike Kaput: So you can actually do voice input now. Really good. Yeah, and it's a really robust model too. ChatGPT Plus subscribers also get all of the ChatGPT Plus features, like GPT-4 access, on the app. Now, as of today, and I think this will change, I do not believe you can use the web browsing plugin or some of the other available plugins, but I believe that will change.

[00:48:09] Mike Kaput: It is also notable that the rollout is happening right now in the US but will reach other countries in the coming weeks, and it's only for iOS at the moment. ChatGPT will be coming to Android soon, according to OpenAI. Any thoughts on this app?

[00:48:26] Paul Roetzer: It's slick. I tried it. The haptic thing is crazy.

[00:48:29] Paul Roetzer: Yeah. It has this cool haptic feature as it's typing; it does this ticking in your hand. I don't know, it seems like it's really well done. I've found that I do jump into it. I always had a tab open in Chrome, and I would go in and use it there, so it's nice to just have the mobile app. It seems really well done.
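As a quick aside on the Whisper model Mike mentioned: Whisper is also available as an open-source Python library, separate from the app itself. Below is a minimal sketch of what speech-to-text with it looks like, assuming the openai-whisper package (and ffmpeg) is installed; the audio file name is just a placeholder.

```python
# Minimal sketch: transcribing audio with OpenAI's open-source Whisper
# library (pip install openai-whisper). This illustrates the model that
# powers the app's voice input; it is not the app's actual code.
import whisper

# Load a pretrained checkpoint ("tiny" through "large"); "base" is a
# reasonable speed/accuracy trade-off for experimenting.
model = whisper.load_model("base")

# "voice_note.m4a" is a hypothetical file name; any audio format that
# ffmpeg can decode should work.
result = model.transcribe("voice_note.m4a")

print(result["text"])  # the recognized speech as plain text
```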

[00:48:46] Mike Kaput: I noticed it seems very fast to me. It is fast, yes. So that's really cool. All right, next up. We actually found a really interesting commentary about generative AI unicorns. We're seeing an overall startup funding drought, and obviously tech has had some widespread layoffs, but generative AI is kind of bucking the trend.

[00:49:13] Mike Kaput: It's actually already produced 13 unicorn companies, meaning startups valued, based on funding rounds, at a billion dollars or more, and five of those have become AI unicorns this year alone. That includes two companies we've talked about quite a bit, Cohere and Runway. What's also fascinating, contextually, is that it's taking far less time to get to unicorn status.

[00:49:42] Mike Kaput: It looks like the average time to reach unicorn status for a generative AI company is about 3.6 years, but for other types of startups, the average is seven years. So they're roughly twice as fast at getting to unicorn status. I'll just quickly read off the generative AI unicorns, based on this chart.

[00:50:07] Mike Kaput: So we've got OpenAI, Anthropic, Cohere, Hugging Face, a company called Lightricks,

[00:50:13] Paul Roetzer: I wasn't familiar with them.

[00:50:14] Mike Kaput: I wasn't either until now. Then Runway, which we've mentioned quite a bit, Jasper, Replika, Inflection, Adept, Character.ai, Stability AI, and another company

[00:50:25] Paul Roetzer: called Glean. Yeah. So Glean and Lightricks are the only two on there that we haven't talked about

[00:50:30] Paul Roetzer: numerous times on the show. Yeah, that's interesting.

[00:50:32] Mike Kaput: Yeah. So Paul, are there any surprises here? It seems like these are mostly the usual suspects, but it was interesting to see how fast some of these companies are achieving unicorn status.

[00:50:41] Paul Roetzer: Yeah, I think the background's really interesting.

[00:50:44] Paul Roetzer: You and I track this stuff pretty closely. We get alerts on funding rounds, so it's not news to us that these were billion-dollar companies. But it's interesting to see it in context and how quickly some of them are happening, like five this year. But I will say, for us, we've always used funding and

[00:51:01] Paul Roetzer: valuations as an indicator of which companies to be paying attention to, especially as you're thinking about building your martech stack and deciding which companies to make bets on. It's very helpful to have the context of where they're at from a funding standpoint: when the last funding round occurred, who is investing, like who are the venture capital firms involved?

[00:51:22] Paul Roetzer: Who are the individual investors involved? We actually consider all of that, along with a bunch of other variables, when we're analyzing these companies. But it is a good indicator, as an initial entry point, of which companies are legit and have a lot of velocity behind them. Awesome.

[00:51:40] Mike Kaput: Well, Paul, as always, thank you for the time, the insight, and the analysis. I don't know how I would understand all of this stuff without it, and I think our audience agrees.

[00:51:51] Paul Roetzer: Dude, it's a team effort, man. Like, Thursday, Mike and I are going back and forth, and I was like, I think we just gotta make Tuesday's episode all about regulation.

[00:52:02] Paul Roetzer: And so basically Mike and I crammed for a final between Thursday and Sunday night, like high school a little bit. Yeah. Last night I was up till midnight just reading 50 articles and trying to organize and figure this all out. So yeah, hopefully this has been helpful for everyone.

[00:52:18] Paul Roetzer: It is a lot, we get that. But every week we're doing our best to try and make this stuff make sense and synthesize it, and I'm sure there's even other stuff going on that we're missing. But yeah, hopefully it's really helpful to you. And again, we're trying to be real about it all, but also find the hope in it.

[00:52:37] Paul Roetzer: And hopefully that came through in today's episode: there's a lot going on. I understand the need to be cynical about government, and even cynical about the tech companies themselves and some of the tech leaders. I get that people have personal perspectives and agendas with this stuff, but

[00:52:54] Paul Roetzer: at the end of the day, it's in all of our best interests that they get this right. So I'm going to cheer it on, and if there are positive things happening, we're going to share those with you. And if we think they're slipping up, we'll share that perspective too. But all of this is so you can form your own perspective.

[00:53:11] Paul Roetzer: We're just trying to give you a balanced, nonpartisan overview: here's where the information's at. Then hopefully you can go do your own thing, find the sources and the people you trust, and really develop your own point of view on all this stuff.

[00:53:26] Paul Roetzer: So yeah, thanks for listening another week. Next week is Memorial Day, so we're going to record early, and we will still have an episode at the usual time. And Mike, happy travels. You're off to another talk this week, I think? I am. Thank you. All right. Thanks everyone.

[00:53:43] Paul Roetzer: We'll talk to you next week. Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:54:06] Paul Roetzer: Until next time, stay curious and explore AI.
