<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">

52 Min Read

[The AI Show Episode 115]: OpenAI o1, Google’s Insane NotebookLM Update, MAICON 2024 & OpenAI’s $150B Valuation



Our annual gathering, MAICON (the Marketing AI Conference), has wrapped, and Paul and Mike dive into the key takeaways from the event, including an impromptu, modified closing keynote breaking down the Strawberry (OpenAI o1) launch. OpenAI's $150B valuation and Google's NotebookLM update also make for an exciting episode!

Listen or watch below, and scroll down for the show notes and transcript.

Listen Now

Watch the Video

Timestamps

00:04:56 — OpenAI o1/Strawberry

00:24:40 — NotebookLM Audio Overview

00:35:34 — Marketing AI Conference (MAICON) 2024 Recap

00:44:27 — OpenAI $150B Valuation

00:46:08 — Hume EVI 2

00:51:07 — HeyGen Avatar 3.0

00:54:31 — Glean’s Funding Round

00:56:52 — World Labs

01:03:31 — Salesforce AI Use Case Library

01:04:57 — DataGemma

01:07:28 — LLM Novel Research Ideas

01:09:39 — "Plex" It

Summary

OpenAI o1 (Strawberry) is released

OpenAI has released its long-anticipated project codenamed "Strawberry," officially called o1, an advanced reasoning model designed to improve AI's ability to tackle complex problems, especially in fields like science, coding, and mathematics. 

The o1 model is engineered to spend more time thinking before responding, simulating human cognitive processes to solve harder problems than previous models. A key innovation of o1 is its use of a "chain of thought" reasoning process, which helps the model refine its thinking, explore different strategies, and recognize mistakes. 

The performance improvements are substantial, with o1 ranking in the 89th percentile in competitive programming on Codeforces, a significant leap from earlier models. In mathematics, o1 excelled on challenging tests like the American Invitational Mathematics Examination (AIME), placing it among the top high school mathematicians in the U.S. 

Additionally, OpenAI introduced o1-mini, a more cost-efficient version optimized for STEM reasoning.
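If you want to see what working with the new models might look like, here's a minimal sketch using the OpenAI Python SDK. It assumes your API key has access to o1-preview and o1-mini (at launch, API access was limited to certain usage tiers, and options like system messages, streaming, and temperature weren't yet supported for these models), so treat it as illustrative rather than definitive:

```python
# Minimal sketch: trying o1-preview and o1-mini on a multi-step reasoning task.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# with access to the o1 models; availability and parameters may change.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A conference sold 1,100 tickets at $1,199 each and spent $650,000 on the venue, "
    "$240,000 on speakers, and $180,000 on marketing. Walk through the steps to "
    "determine whether the event was profitable and by how much."
)

for model in ("o1-preview", "o1-mini"):
    # The o1 models spend extra "thinking" time on internal chain-of-thought
    # reasoning before returning the visible answer, so responses are slower.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Swapping in gpt-4o for the same prompt would return faster but without the extended internal reasoning, which is the trade-off Paul and Mike dig into on the show.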

Note: The release came just a couple of hours before Paul and Mike's closing keynote at MAICON 2024, prompting an exciting pivot and discussion to end our annual conference!

A Peek at NotebookLM from Google

NotebookLM is a personalized AI research assistant from Google, powered by its Gemini 1.5 model. It allows users to create virtual notebooks where they can upload, organize, and reference documents, slides, PDFs, and more, with a capacity of up to 500,000 words per source and a total of 50 sources. 

Once the sources are uploaded, NotebookLM uses the Gemini 1.5 Pro model to analyze, summarize, and draw connections between the information. 

Recently, Google introduced a new feature called Audio Overviews, which is far more impressive than the name suggests. This feature enables users to generate an audio overview of any material in NotebookLM, simulating a deep-dive conversation between two AI hosts, much like two podcast hosts discussing the content. 

Before recording today, Mike uploaded the website for our Marketing AI Conference and generated a 5-minute conversation between two AI hosts that sounded nearly indistinguishable from real humans. You can try it out yourself by visiting notebooklm.google and creating a notebook with at least one source.

MAICON 2024 Recap

Paul and Mike discussed MAICON, the fifth annual Marketing AI Conference: the 1,100+ attendees from 45 states and 20+ countries, the energy, the conversations, the key takeaways, and much more. During the closing keynote, Paul and Mike used AI to sum up some of the content in order to create a conversation that brought together the main points of the event. A unique, can't-miss way to end the event.


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: It's kind of the GPT-1 moment for reasoning models. So now we're at this very beginning where these things can actually reason, but it seems like they can accelerate pretty quickly to get to the next models, and they're very confident in their ability to scale up the reasoning capability, which then accelerates us to reliable agents and innovators, and eventually, autonomous organizations.

[00:00:24] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:54] Join us as we accelerate AI literacy for all. [00:01:00] 

[00:01:01] Welcome to episode 115 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We are fresh off of the 2024 Marketing AI Conference in Cleveland last week. I, I feel like my brain is still mush, Mike.

[00:01:17] I don't know. We're like, not only from our own, so, so I, I, Mike and I both did, uh, separate three-hour workshops on day one, and then I had four other sessions: my opening keynote, a conversation with Andrew Mason, um, from Descript, and an opening talk on day three, um, with Adam Brotman and Andy Sack, talking about their book and all their interviews with Sam Altman and Bill Gates, and then, um, you and I did a closing keynote that took quite a turn, which we'll talk about in a minute, when OpenAI dropped Strawberry two hours before we were going on stage.

[00:01:51] So, it was just a lot. I mean, what an incredible week, but, um, just mentally a marathon. I was, I woke [00:02:00] up Saturday and I, I was like, I don't even know what to do with myself. Like, and I just started working again. Like, I wanted to just take a weekend off, but I couldn't, like, my mind was just racing after that.

[00:02:09] So yeah, 

[00:02:10] Mike Kaput: so 

[00:02:11] Paul Roetzer: everybody who was with us. Thank you. I know we had a lot of podcast listeners. It was wonderful to meet so many of you. Um, Mike and I had a book signing on the third day that had, I don't even know how many people were in that line. We, it was supposed to go for 30 minutes and it went for two hours, I think.

[00:02:28] So, uh, it was wonderful to meet so many of you and hear your stories. And I mean, we had people from all over the world. There was a lady who traveled from the Philippines, said 30 hours of travel. We had people from Australia. We met, we had, uh, just, it was amazing. So, uh, just thank you. And we'll talk a little bit more about MAICON later in the show, but we really appreciate everybody who was a part of it and the chance to get to meet everyone in person.

[00:02:51] So, and yes, we heard the, uh, you know, you're taller than you sound and shorter than you sound. That was maybe the [00:03:00] most common thing people said other than how amazing the event was and their experience.

[00:03:04] Mike Kaput: Yeah, no kidding. I feel like I'm, I got to really watch my words on this podcast because everyone has mentioned that.

[00:03:11] Paul Roetzer: I know, it's incredible. Uh, all right, so this episode is brought to us by MAICON On Demand, so if you weren't there and you didn't get to meet us in person and be a part of those three days, uh, you can still experience it. So, we have, what, 25 sessions, Mike, I think they recorded? So we had the 10 general sessions and then 15 other, um, featured breakout sessions that were part of MAICON 2024.

[00:03:36] So, uh, that is going to be available within the next, I think about 10 days or so. I don't, I don't know exactly. I know the team told everybody like two weeks, but I know they're racing to try and get everything out as soon as possible. So you can go to MAICON.AI, M-A-I-C-O-N dot A-I, and you can buy that now.

[00:03:52] And then you'll get alerted as soon as, uh, they're ready. But, um, that, I mean, honestly, like I said, some of those sessions [00:04:00] alone were worth the price of admission, like the copyright and IP panel was amazing. People, you know, I think so many attendees just aren't aware of the copyright and IP issues, intellectual property issues related to generative AI.

[00:04:16] So, you just had, you just look around the room, you see all the note-taking, a thousand-plus people in there are just feverishly taking notes about everything that's being said. Um, the keynotes were incredible, from Andrew Davis and Mike Walsh to, you know, Andrew Mason and, um, Adam and Andy talking about AI-First, and yours on 38 Tools in 30 Minutes, Mike.

[00:04:35] I did that with AGI, just all of that is included in the on-demand package. So if you missed it, uh, go to MAICON.AI and buy MAICON 2024 on demand. We also announced MAICON 2025 dates coming in October of 2025. So you can learn a little bit more about that if you missed it this year and want to be in person next year.

[00:04:55] Okay. So during MAICON, OpenAI dropped [00:05:00] Strawberry, or o1, as it's called, and that is going to be our first main topic of the day.

OpenAI o1/Strawberry

[00:05:06] Mike Kaput: And we're going to talk a little bit more about this interesting timing, too, in our third topic, when we talk a little bit about our closing keynote at MAICON, but yeah. Uh, yeah, Thursday afternoon, Paul.

[00:05:18] OpenAI released the long-anticipated codename Strawberry project, which is formally called o1, which is their advanced reasoning model. It is designed to enhance AI's ability to think through complex problems, particularly in fields like science, coding, and mathematics. o1 is engineered to spend more time thinking before responding, mimicking human cognitive processes.

[00:05:44] So this approach allows the model to reason through complex tasks and solve harder problems than previous models. One of the key innovations here is the use of what's called chain of thought reasoning. And it's a [00:06:00] process that enables the model to refine its thinking, try different strategies, and recognize its mistakes.

[00:06:06] Now, according to some of the benchmarks released by OpenAI, performance improvements appear to be quite substantial. In competitive programming on Codeforces, o1 ranked in the 89th percentile of human competitors, which was a big jump from previous models. In math, o1 showed some pretty significant performance on challenging tests, like the American Invitational Mathematics Examination, AIME, placing it among the top high school mathematicians in the U.S.

[00:06:37] And alongside o1, OpenAI introduced o1-mini, a cost-efficient version of it optimized for STEM reasoning. So this one achieved the 86th percentile on Codeforces and showed some strong performance on high-school-level cybersecurity challenges. And it is designed to be faster and more cost-effective [00:07:00] for applications that might require really good reasoning capabilities, but not necessarily all the kind of broad world knowledge and context a bigger model

[00:07:08] might have. So, Paul, like we talked about, like, this dropped like two hours before our closing keynote at MAICON, where you and I kind of planned on closing out the conference, talking about the main takeaways of the conference, which we still did, but we did have to pivot a bit in real time, and you spent quite a bit of time up front, rightly so, kind of riffing about your initial impressions of o1 on stage, and like, what does this mean right now?

[00:07:37] Could you maybe share a little bit of that with us? 

[00:07:41] Paul Roetzer: Yeah, so, I mean, obviously, Strawberry didn't come out of the blue. For anyone who's been listening to the podcast, you can go back to episode 106. We talked about, you know, what was going on in this kind of secret code named Strawberry Project, and maybe the origin of the name Strawberry.

[00:07:56] Um, episode 110, we went a little deeper on it, and then episode [00:08:00] 113, just two weeks ago, as we started to get indications that it may be coming sooner than later, you know, we talked about it again. So you can go back and get some additional context there, but then just to set the stage of sort of what happened at MAICON and the initial feedback on o1.

[00:08:16] So my opening talk on day two. So day one was workshops, optional workshops. Day two kicks off the formal conference setting. So my opening talk is the road to AGI. And in this road to AGI, I actually, um, kind of went into the stages of artificial intelligence that OpenAI had previously, um, shared internally and that Bloomberg reported on in July of this year.

[00:08:40] And in their world, level one is chatbots, AI with conversational language. That's what we had. Up until this moment, basically, that's what we've had with all the different models. Level 2 in OpenAI's, um, modeling of the future is Reasoners, human level problem solving. Level 3 is Agents, systems that [00:09:00] can take actions.

[00:09:00] Level 4, Innovators, AI that can aid in invention, which we're actually going to touch on a little bit later in a rapid-fire item. And then Level 5, Organizations, AI that can do the work of an organization. I spent quite a bit of time talking about that in my opening keynote. So I, I had sort of set the stage that we knew reasoning capabilities were coming.

[00:09:20] That this Level 2 that OpenAI was envisioning was already here, at least in their world, 'cause they've had this for a while. Um, and then the morning of my talk, The Information had an article that said that Strawberry could be released at any time in the next two weeks. So then, um, Tuesday, I think it was Tuesday night, if I remember correctly, someone came up to me at the conference, and I don't remember who it was or what company they were with.

[00:09:47] It was a very quick conversation. And he said, it's coming on Thursday. He said, I have a source at OpenAI, like Strawberry's dropping Thursday.

[00:09:55] Mike Kaput: Oh boy.

[00:09:55] Paul Roetzer: And I was like, oh. Oh, okay. Like I, you know, he seemed fairly confident in it, but I [00:10:00] didn't, you know, I didn't know if he truly had any inside information, but it wasn't the first thing I'd heard that week that it was coming, uh, this week.

[00:10:08] And so then on, uh, on Thursday, the final day of the conference, I actually wandered over to the luncheon area. And as I was walking back, I'm checking my Twitter feed, making sure nothing crazy is dropping. And I see a Bloomberg article that said that Strawberry could come, uh, that afternoon, Thursday afternoon or Friday, but that it was coming like this week.

[00:10:28] So then I get back to the exhibit hall. I'm kind of hanging out by our booth and a couple of people come up to me and they're chitchatting and my phone buzzes. And I looked down at my watch, my Apple Watch, and I see the word Strawberry. I was like, oh man, it's happening. So I had to like politely exit the conversation I was in because I realized, like, oh my God, if it's out right now, we have like two hours to figure out what to do about this.

[00:10:49] So I go back to like the, uh, the team office and, and Mike, I think, was sitting there getting ready for his talk. So he was doing a talk at 4:05 Eastern time that day. And then Mike and I together had a [00:11:00] 4:35 closing keynote that he had mentioned, that was more of like a podcast-style conversation. And, uh, so I go in and I was like, dude, what do we do?

[00:11:08] Like we, we have to obviously adapt to this. And, um, and so what we decided to do was, uh, I was gonna, when I was introducing Mike at 4:05 for his 30 tools in 30 minutes talk, and so I decided I was going to say, hey, we're, we're aware that, uh, we, we know some things are happening and we were going to address this in our final talk.

[00:11:30] And then I basically spent like an hour and a half kind of researching o1 and learning more about it. And, um, and then I opened the closing keynote with Mike, and I spent like 10 minutes or so kind of talking about what it means. So, uh, yeah. So if you were there, you've kind of got the gist of this already, and you have likely now, uh, all of you have probably experimented with it because it should be fully rolled out to everyone.

[00:11:56] So if you have a paid version, I, I don't know if it's available for the free [00:12:00] version or not, but I think it requires the paid version. I think it does too. Yeah. So you, you have it. And so we'll just take a few minutes here and talk a little bit about it. Now, again, everything we talked about in episodes 106, 110, and 113 is a hundred percent accurate.

[00:12:16] Like there's, there was nothing that was said in those where this now came out and now we don't have the things we were talking about there. So if you go back and you re-listen to those, one, you'll understand the origin of Strawberry a little bit more, but, um, you'll also understand why reasoning is so critical.

[00:12:34] And so we'll touch on that a little bit here, but again, I would go back and especially, I think 113 is probably where we went pretty deep on like why this matters. Yeah. So we'll put all the links in the show notes, but OpenAI released, as they generally do with this stuff, they just drop some tweets. So it's like, hey, here you go.

[00:12:50] Here's this. It's a whole new series of models, um, but they released three things. There was an o1-preview, like a tech, a technical preview of it. Um, [00:13:00] there was a, kind of a more just general article about it. And then they released a system card, um, about the models, the series of models. And so first thing is, they said in our tests, the next model update performs similar to PhD students.

[00:13:13] So they, they released these kind of mini and preview versions, but they didn't release the full o1 model. And they talked about this being like PhD-level in terms of what it was coming, uh, coming up to. And in my opening keynote, I'd actually shared some insights from Dario Amodei from Anthropic, where he was talking about how PhD-level was kind of where these models were going.

[00:13:35] And then interestingly, I was, uh, listening this morning, actually. So it's, today is Monday, September 16th. Um, last week was the All-In Summit. Now, I used to listen to the All-In podcast. I honestly stopped because it just got so political, um, and sort of shifted away from what I found value in. I know a lot of people still listen to it, obviously, but I generally don't listen to it too much anymore.

[00:13:59] Um, that [00:14:00] being said, they had in, at their summit last week, Sergey Brin, uh, founder, co-founder of Google. So I listened to that interview this morning, and then I listened to one where David Sacks was interviewing Marc Benioff, well, supposed to be interviewing Marc Benioff, but Benioff talked for 39 of the 40 minutes. But the reason I bring this up is,

[00:14:18] Sacks actually said, um, that they recently were given an inside look at where OpenAI is going, because he's one of the investors, or his investor group's one of the key investors. Um, and so he said they just did a day, they being OpenAI, where they brought in a relatively small number of investors and kind of gave us all an update on their product roadmap.

[00:14:40] And it sounds kind of similar because everyone's moving in the same direction. He was referring to like agents, and Salesforce is going to be launching their Agentforce stuff this week. Um, so there were three big takeaways. And again, this is Sacks telling the story to Benioff. Number one was that they said LLMs would soon be at PhD-level reasoning.

[00:14:58] Right now, it's more like a smart high school or [00:15:00] college student in terms of the answers we're going to get. Um, and then it said, but the next level will then be agents. So this timeline, this level one, level two, level three, level four, level five that I started off explaining, that jives with the investor deck that Sam is showing people to raise the 6 billion or whatever number they're, they're at, Mike, and I think we're going to touch on that in a rapid fire today.

[00:15:23] Um, so reasoning is the foundation of all of this. So a few key elements here. It doesn't have many of the features ChatGPT has. So if you haven't tested this yet, when you go into your paid account in ChatGPT, you're still going to see your 4o models, and you're going to see these new o1-mini and o1-preview models.

[00:15:41] They're distinctly different models at this point. So, I don't know, I kind of assume the reasoning capabilities will be baked into ChatGPT at a different price point, but it seems the way they're going is two very different models. So you're only going to use the o1 model for like advanced reasoning, [00:16:00] decision making, things that require System 2 thinking, as we would call it, where you want it to take its time and, and really think about its answer.

[00:16:07] And so, because they view these as distinctly different, that's why they said they're resetting, quote unquote, the, resetting the counter back to one and naming this series OpenAI o1. I think I joked on stage, um, I, I, I just can't comprehend the naming conventions for these things. Like it's, it's shocking to me that they have the most powerful intelligence, non-human intelligence, in the world.

[00:16:29] And these are the naming conventions. 

[00:16:31] Mike Kaput: Right. If only you could find something to give you tons of great ideas. Um, 

[00:16:37] Paul Roetzer: there is a rate limit, which they've already lifted, I think. Like when they first put this out, there was 30. Weekly rate limits of 30 messages for O1 Preview and 50 for O1 Mini. I don't know if they've lifted them completely or they just had to reset them because people were experimenting so much.

[00:16:52] And then they said they plan to continue developing and releasing models in this series. Now, the insight, so as we'll [00:17:00] often do in the podcast, we'll like go look and see what are the key people in the industry saying about this. And one person that I follow very closely is Noam Brown. Um, and Noam, you'll remember, I'll have to go back and see what episode we talked about Noam, but, um, he led a team at Meta that co-developed Cicero, which achieved human-level performance in the strategy game of Diplomacy.

[00:17:21] This was back, I think, in 2023, maybe early 2023. And he also, uh, developed, uh, AI, um, that was able to win at multiplayer no-limit poker, which was thought to be sort of an insurmountable thing. So Noam is, is a really smart dude, and he moved on to OpenAI in July of last year. And at the time, he tweeted, for years, I've researched AI self-play and reasoning in games like poker and Diplomacy.

[00:17:49] I'll now investigate how to make these methods truly general. If successful, we may one day see LLMs that are 1,000 times better than GPT-4. [00:18:00] So that was in July of 2023. He tweeted that, and then when o1 came out on Thursday, he said, um, I've seen a few folks implying that I was the lead on Strawberry.

[00:18:11] I was not. o1 is the result of many years of research that really started taking off in October of last year, being 2023. Now, interestingly, um, as I alluded to on stage, that would sync up with when Sam got in trouble with the board. So the implications here from this tweet, um, you could certainly make the connection that Sam's temporary ouster as CEO, which was always rumored to have something to do with Q-Star, which was the codename prior to Strawberry.

[00:18:43] Um, it, it seems as though it, that Sam could have been greenlighting the, the building of this capability last October. Uh, Ilya Sutskever, who was leading the reasoning capabilities, um, may not have agreed with that and, and, and may [00:19:00] have called to the board's attention that they were moving too quickly on something like this.

[00:19:03] So that's kind of like one of the implications of Noam's tweet, but he said it's a new scaling paradigm. We're just getting started. Um, he does say many tasks don't need reasoning, so GPT-4o is probably still going to be the preferred model for a lot of use cases. Uh, but people are going to find, you know, ways to apply this.

[00:19:23] He does say it's a preview, it's early, it's going to get stuff wrong. You're going to have people tweeting all these examples of it not working and making these claims, like, you know, it wasn't a breakthrough, but in reality, they could see this where over time you let these things think for hours, days, even weeks, and this idea of inference cost, like the more time you spend on inference,

[00:19:45] The better the answers become. And I do laugh because, um, Hitchhiker's Guide to the Galaxy, it's very much like the computer where it's like, Oh, it's the key to the universe. And it says, well, I need seven and a half million years to figure it out. Like it's, it's kind of that idea coming to life where like the [00:20:00] computer just takes time to think and then it'll, it'll figure it out.

[00:20:03] Um, yeah, you know, how they did it, there's all kinds of great research they shared about the reinforcement learning approach. You mentioned some of the benchmarks in mathematics, chemistry, physics, biology. But the key here, as we've talked about many times on the podcast, is it's just taking time to think.

[00:20:21] And it can go through dozens or hundreds of steps in a process. And this is just the beginning. And I think that's maybe the most important thing, is whatever you see now, this is, like, I don't know if this is an overstatement or not, but it's kind of the GPT-1 moment for reasoning models. So now we're at this very beginning where these things can actually reason, but it seems like they can accelerate pretty quickly to get to the next models, and that they're very confident in their ability to scale up the reasoning capability, which then accelerates us to reliable agents and innovators, and eventually, autonomous organizations.

[00:20:58] So, [00:21:00] you know, if you go in and test it today, you may struggle to come up with, uh, uh, an example where GPT-4o isn't fine, like it, that it isn't that much better than GPT-4o, but, um, I, I don't think that, um, I think it's important that we understand this is just the beginning and that these things are only going to get smarter, and it's probably going to happen fast.

[00:21:25] Mike Kaput: Before we wrap this discussion, I just like kind of wanted to touch on a little more what you just said. Like, so Ethan Mollick, for instance, who also had access to o1 for a month, kind of wrote about his impressions of it recently. And he mentioned exactly what you just said, which is it can be hard to evaluate the outputs of this model.

[00:21:45] Uh, especially if you, like, don't know what it's supposed to be used for, what you're looking for. Like, if I'm an average business marketing leader, do you have any, like, tips or things you've found that are useful for me to be used, like putting [00:22:00] into this, just to check it out versus typical queries or prompts I might put into GPT-4o?

[00:22:05] Paul Roetzer: Yeah. I mean, I think it's, again, it's things that require multiple steps. So on, I think it was episode 113, we talked about this, that on the internet, you mainly have outputs. You have the end products. You don't have the 10, 15, 20 steps that a human mind goes through to create that end product, kind of a list of tasks that you go through.

[00:22:24] And so that's how I think about this. Like the way I'm going to test it, like that I'm excited about trying, I haven't had time to do yet, is I'm thinking about, like, our business for 2025 and beyond. And there's a bunch of like challenges I see and problems to solve and opportunities to evaluate. And so I actually intend to use this model.

[00:22:45] And what I'm going to do is have 4o simultaneous with it. Maybe Claude, maybe Gemini, like I might test all of them at the same time, but I want to use it to help me go through a chain of thought to evaluate the future of our business. And so I don't know exactly what those [00:23:00] prompts are going to look like, but when I do that, I usually have either a notepad or a Google Doc open, and I sit there and I create a list of things I'm going to think through to arrive at decisions.

[00:23:12] So if I want to decide, hey, should we launch, uh, a conference in the UK next year, the output, like it could, I could ask GPT-4o that, and it could do something really fast and say, well, here's what you need to consider and dah, dah, dah. That's System 1 thinking. But if I use o1, my assumption is it's going to go through a whole bunch of steps to try and arrive at that decision.

[00:23:35] It's going to take its time and think. And so that's what I need. It's what you would use a consultant for traditionally. It's what you and I, Mike, would sit around and just like, have a, you know, have a drink and like, brainstorm over. When we do that, our mind is thinking about all these different variables.

[00:23:50] And so I'm excited to try it in that way as a true like strategy assistant or peer, where I can just talk about like complex ideas that require a lot of [00:24:00] thinking, and I'm hoping it can accelerate my, like, how I work through those ideas basically, and maybe even illuminate some steps I wouldn't have thought to take.

[00:24:09] Because if you haven't tried it yet, um, to our listeners, it'll tell you the steps it's doing. So if you say, like, here's what I'm trying to evaluate for next year, I want to like think through different options for, you know, growth of the company, it'll say, okay, I'm considering this, now I need to, and it may ask you questions.

[00:24:24] It may help you like drill into these things. So that's how I'm thinking about it. Yeah. And, like, Ethan Mollick gave a great example of, like, a crossword puzzle or, you know, Sudoku puzzle. Like, things that require steps to go through that you normally just see the output. 

NotebookLM Audio Overview

[00:24:40] Mike Kaput: So in our next big topic this week, there's kind of an interesting AI product update that we think is kind of flying a bit below the radar, I would say.

[00:24:49] So this relates to NotebookLM, which is a personalized AI research assistant from Google that's powered by its Gemini [00:25:00] 1.5 model. So if you use NotebookLM, you basically create virtual notebooks where you upload, organize, and reference anything like documents, slides, PDFs, research, websites, and more, up to like 500,000 words per source according to Google.

[00:25:17] And then once you upload those sources, NotebookLM uses Gemini 1.5 Pro to analyze, summarize, and connect the dots between all the information in all these sources you've pulled together for some type of research task. But now Google has added this pretty incredible new feature, which is called Audio Overviews, and that's a mild-mannered name, but the results are a little crazier than it suggests, because what you do is you now can create an audio overview of any material in NotebookLM that simulates a deep-dive audio conversation between two AI-generated hosts.

[00:25:56] Like essentially radio or podcast hosts. [00:26:00] It's literally like listening to two very realistic hosts talking about the material that you've specified. So for instance, before we recorded today, I dropped in the website for the Marketing AI Conference, which is now advertising our 2025 dates, and it generated in a few minutes, like, five minutes of conversation between two AI hosts that sound honestly indistinguishable to me from real humans and, like, have all this crazy cadence and conversationality.

[00:26:31] And it's just, I found it pretty stunning, Paul. I know you are super impressed by it. You can try this out, we'll drop the link in, by going to notebooklm.google. There's no .com, just .google. And create a notebook, drop in a source, a doc, whatever. And I'm going to talk about how to generate an audio overview.

[00:26:47] But what I wanted to kind of talk about today, Paul, is like, first, maybe your impression of the Audio Overviews feature specifically. And then, like, it seems like NotebookLM is like kind of a sleeper tool in terms [00:27:00] of its implications.

[00:27:02] Paul Roetzer: Yeah, so this came out on September 11th. We're in the middle of the conference, but like, even as we're going, like, I, I just, my habit is I'm always scanning Twitter for updates.

[00:27:14] And so I saw this and I immediately put it into our sandbox for the podcast this week and then made a task for myself in Asana, which is how I do my task management, like, you got to test this. Now, NotebookLM came out last July as an experiment in Google Labs. And I remember testing it at the time and thinking, this seems really cool, but like, I gotta

[00:27:32] dig in more. And I've had a task in Asana for the last like six months to experiment with NotebookLM more because it seemed like it had a ton of potential, but it wasn't until this surfaced that I like went back into it. And so, you know, it was on our list to talk about today, and then this morning I actually went in and the first thing I did with it was I gave it the, uh, o1 safety card, like the system card from OpenAI, and I was like, you know, let me just see how this [00:28:00] works, because it's a 48-page, um, PDF.

[00:28:04] And so I drop it in and 30 seconds later, like all of a sudden it just pops up and I can do all of these things with the tool. So it enables you to go through and build FAQs. If there's timelines within the document, it'll do that. You can create a study guide. You can write a briefing doc. Um, it gives you suggested questions.

[00:28:24] You can have a conversation with it, like ChatGPT-style. Um, it's like retrieval-augmented generation. It's like a, a RAG model where you can just talk to the document, but everything gets cited, and then, like, it'll actually do a split screen where it'll pull up the source and highlight for you, like, where the citation is coming from.

[00:28:40] So you can verify facts. But then as you alluded to, like, the real magic and kind of that wow moment was when I created the first audio overview and it starts playing, and it's a man and a woman's voice and it is insanely conversational, and it's like, it's obviously taught to do it, like tuned to do [00:29:00] it at a very, um, I don't know, like an eighth-grade level, like maybe high school level, but they took a complex topic and they use analogies, they use filler words, like it truly sounds like a real conversation and it's incredibly valuable.

[00:29:15] And so as soon as I heard the first one, I was like, oh my gosh. And so then I took our State of Marketing AI Report, which is something I'm intimately familiar with. And I was like, well, let me just put this one in here and see what happens. So I created a new notebook, um, around that one, started having a conversation with it in chat, created the audio, and it's honestly one of the most impressive AI, it's not even a demo.

[00:29:38] Like this is a real legit product. It's still tagged as experimental by Google, but it's one of the most impressive AI products I've ever tested. Um, and so like, I, I was half joking to myself. I didn't put this on, I shared something on LinkedIn about it this morning, but like, if two years ago someone created this product, you, [00:30:00] they would have raised a hundred million dollars like in their seed round, like it is potentially... And I try really hard not to over-exaggerate.

[00:30:07] It is a potentially transformational tool, because I immediately started thinking, like, you and I look at earnings calls all the time. As a CEO of a company, I have financial reports. I have analytics reports. We, we look at three to five research papers a week, podcast transcripts. There's like all of these things we do to do our podcasts, to run our business, to do our jobs, you as chief content officer, me as CEO.

[00:30:33] I honestly can't imagine moving forward where I won't use this tool in every one of those use cases. Right. And so like immediately I thought, Oh my gosh, one, I can't believe I didn't do this earlier. Like the audio thing's awesome, but it's just like cherry on the top kind of awesome. The whole thing is awesome.

[00:30:51] And so I even said in there, like. I've been working a lot with my daughter lately, um, on teaching her like ChatGPT and like how to [00:31:00] use it in a very functional, um, augmentative way that enhances what she's capable of doing at 12. And like, this is immediately a personalized learning tool for me. So like now when my kids are trying to learn something and they're struggling, or if I'm trying to explain a concept to them, I'll just be like, let's go get a source and let's put it in here and let's talk to it.

[00:31:20] Um, let's listen to a 10-minute podcast together about this complex topic. Like, it is wild. So if you haven't tried it, you really have to go in and see it for yourself. Um, I, you know, I, I, again, I don't want to like over-exaggerate, but like, I would pay 20 bucks a month for this, like it's part of Gemini right now.

[00:31:41] But once the use cases are clear, which I think is where a lot of these tech companies, these AI companies, struggle, is to like get those three to five use cases that are so insanely valuable that they're, they alone are worth the 20 or 30 bucks a month you're paying for the whole language model. This is one of those where I [00:32:00] could see this within a month being so integrated into my workflows, like Perplexity and ChatGPT and Google Gemini already are.

[00:32:07] Um, it's, and it's insanely exciting to me because you and I analyze dense technical information so often to try and make sense of it. I just see this being a critical piece of, of doing that. So I, I mean, again, you, you experimented with it as well this morning, but it's a mind-blowing capability.

[00:32:28] Mike Kaput: Yeah, my jaw doesn't drop that often these days, but like, hearing the voices cover the content, and especially just like, I think it was the little stuff that got me the most.

[00:32:38] Just the two hosts, like, talking about our event and mentioning the background of some of the speakers, and being like, look at the diversity of companies at this event. Like, this was not stuff that was, like, on the website. And then the other person's like, right, yeah, I see it.

[00:32:51] Paul Roetzer: Yeah. Whoa, and you just, and then you realize how much [00:33:00] further voice capabilities are than what we have access to right now.

[00:33:03] That was the other thing that kept running through my mind, is we know we have advanced voice mode from OpenAI coming sometime soon. Yeah. Um, Google has advanced voice capabilities, obviously. And so it just, like, you just realize how much further ahead these research labs are than what we likely have access to, and we even heard that like in the, you know, the interviews at MAICON this week where, um, Andy Sack and, uh, Adam Brotman were talking about when they met with Reid Hoffman, like, months before anyone knew GPT-4 was a thing, and he already had access to it.

[00:33:40] And Bill Gates, same story. Um, Andrew Mason at Descript told the same story, how he and Sam, who are friends, Sam, like, basically showed him, gave him access to GPT-4 long before any of us even knew ChatGPT was going to be a thing. Um, and you just, it reinforces the fact that there are advanced capabilities sitting in these [00:34:00] labs that we may not see for 6 to 12 months, but when you hear this audio overview, like, it's like looking into the future and then imagining that being applied.

[00:34:08] Like the thing I kept coming back to was this idea of personalized learning for my kids, and really for, for any student, and even in a professional environment, to like, just instantly have a 10-minute summarization of some complex topic in a really approachable way. That's, that's just wild, the implications of that.

Marketing AI Conference (MAICON) 2024 Recap

[00:34:26] Mike Kaput: Absolutely. It's, it's wild. I just, if you have one takeaway from this week's podcast, go try it out. All right. So our last big topic is one we've touched on a few times already. Our Marketing AI Conference, MAICON 2024, is a wrap. It happened last week. A big thank you to all the people who made the trip to Cleveland, Ohio

[00:34:48] for our biggest and best conference yet. So Paul, you know, I just wanted us to maybe talk through a little bit, maybe your perspective on the significance of the event, the turnout, audience response, maybe [00:35:00] kind of what you took away from it. And then I thought we could maybe just really quickly sum up a couple of the big-picture takeaways from our closing keynote, where you and I actually used AI to summarize a bunch of the sessions and the keynotes and actually kind of synthesize takeaways

[00:35:19] that made the most sense for the audience to turn into kind of their own AI action plan.

[00:35:25] Paul Roetzer: Yeah. So, I mean, anybody listening to the beginning of episode 113 sort of heard the background story of MAICON. If you, if you didn't listen to it, you can go kind of get the, the origin story of MAICON. And I, you know, I'm grateful for everyone that came up to me during the event last week and shared how much that episode meant to them.

[00:35:42] I mean, just a lot of people, a lot of entrepreneurs, who just, you know, appreciated sort of the inside look at the nonlinear path that we've taken to get to nearly 1,100 people in Cleveland at an event. Um, entrepreneurship is hard, as I, as I said in that episode, [00:36:00] and it doesn't just happen. Um, but for us, you know, it's an amazing collection of speakers,

[00:36:05] volunteers. Um, we have an incredible event partner in Kelly Wetzel that, you know, makes the operations come to life and makes it like a world-class experience for people. But I think the thing that kept ringing true to me, and Mike, I'm sure you saw the same thing, is just the, the community that has formed and how collaborative

[00:36:25] and non-competitive they are. Like there's just people helping people everywhere. And everyone was like incredibly welcoming and, um, supportive and took the time to listen. Um, I loved hearing so many stories about, you know, where people are in their AI journeys. And so just, I think the audience that came together, and I try real hard, like obviously when you're running the event and, you know, I got five sessions, it's hard to sort of step back, but I tried.

[00:36:54] You know, as many times as I could to just sort of step back and soak the whole thing in. You know, [00:37:00] appreciate the journey of, of creating and building something like this, but then seeing it take on a bit of a life of its own and, and just seeing so many incredible speakers and like the ratings from the sessions were through the roof and the, the comments people made about the impact different sessions had on them were incredible.

[00:37:16] And so, as you said, Mike, like our idea for the closing keynote was to do this sort of AI-in-action wrap-up, where we curated the insights from these 50-plus speakers and all the things we learned from, you know, different attendees, and then used some AI technology to help curate it. Um, so we had a plan and we executed it, and then Strawberry sort of threw a wrinkle in it, but we did still pull it off.

[00:37:38] So, yeah. Um, you, you led the charge on that though. Like once we established the vision for that event, like you really went through and did it, so why don't you share just like kind of how AI played a role in it? And then maybe some of those key insights you mentioned. 

[00:37:51] Mike Kaput: Yeah, for sure. So a couple of ways that AI was super helpful in kind of doing this almost in real time with, you know, a little bit of lead time to get all the [00:38:00] slides and things built, but we were really lucky to have a great partner in GoldCast.

[00:38:03] GoldCast has, um, a bunch of AI features, uh, for the events that you're running. Um, one of them is we got real-time transcriptions and summarization, like some nice summarized, like, three-page briefs on each of the main sessions that we did. So we took all those, uh, we took all of the transcripts and we put those into Gemini 1.5 Pro.

[00:38:26] And then importantly, we didn't just say, hey, Gemini, like, do the work for us. It was more like, here are 10 questions from Paul's opening keynote that he posed as being relevant to the future of marketing. Can you go through, with the context around these questions (with the context, we gave it a bunch of information about our event, what we're trying to achieve, from your keynote), and help us break this down and contextualize how we could start answering these questions using a bunch of takeaways.

[00:38:54] So obviously the closing keynote was like 45 minutes long, so we're not going to go through every single one of those [00:39:00] pieces here, but I did just want to throw out, like, three really quick, like, almost sub-bullets of some of these events, or some of these questions, rather, that we were asking. So one of the first questions, and the most important, that you had posed, Paul, is how will the next generation of AI models affect you and your team,

[00:39:17] your team, and your company? And you had a quote in your keynote that I thought was worth mentioning here that says, LLMs are just the foundation for what comes next.

[00:39:27] Paul Roetzer: And that, yeah, that obviously, that was like one of the overall themes of the conference, was that the, these text-in, text-out models that started with ChatGPT were just the basis, not only for multimodal, uh, reasoning, agentic capability, but really the pursuit of artificial general intelligence, which was like the whole focus of my talk.

[00:39:49] Uh, and that was, you know, I think something that just became a recurring theme throughout the event. And so much of the agenda enabled us to sort of tell the story of where these models are [00:40:00] going.

[00:40:01] Mike Kaput: And then another question, and kind of, we had a much more extensive conversation around all these, but one of the big questions as well was how will AI impact the future of

[00:40:10] marketing strategies and budgets? And I think a thing you and I talked about on stage, Paul, was just this idea of everyone is going to be able to, and be expected to, do much more with much less. And I think you had kind of mentioned briefly, either in your, in your keynote or in one of the panel conversations, kind of this idea that we're probably going to be seeing pretty soon, if not very soon, someone's going to probably start a billion-dollar business with one person.

[00:40:35] I had multiple people come up to me after that and say, or during the event and say, Yeah, I'd like to figure that out in my own history. I was like, yeah, that's great. 

[00:40:45] Paul Roetzer: Yeah. I think that, you know, again, we're heading into budgeting season. So anyone listening to this who controls a budget or P&L within their organization is already thinking about, you know, where the budget's going to go next year.

[00:40:57] What strategies are we going to employ? What is [00:41:00] our HR, you know, strategy, org chart related to that? And so there were a lot of conversations around that over those few days. And, you know, I think that we're entering a very dynamic phase where you are going to be able to do so much more with so much less.

[00:41:14] And even, you know, for us with SmarterX, when, you know, I announced the launch of that AI research and consulting firm, um, sort of the sister company to Marketing AI Institute, part of it was, I wanted to reimagine what a research firm could be, and, you know, tie it back to today, think about NotebookLM as like a component of an AI-native research firm.

[00:41:35] These are things you're paying people hundreds of thousands of dollars a year to do. Like, top researchers aren't, it isn't like cheap talent.

[00:41:42] Mike Kaput: Right. 

[00:41:43] Paul Roetzer: And like when you step back and you say, what are our strategies? What are our budgets? Not just from a marketing perspective, but from a business perspective.

[00:41:49] And you think about what technology is now available to you, you can reimagine. And that was a recurring theme as well, but like, reimagine what's possible. Um, and [00:42:00] I, I like the idea of sort of being able to build more efficient, more creative from the ground up versus trying to like tear down what's already there.

[00:42:08] Um, so yeah, I think doing much more with much less is certainly something that most organizations will be able to execute next year if they truly understand AI, what it's capable of, and apply it in the right ways.

[00:42:22] Mike Kaput: And last but not least here, I just wanted to mention that one of the questions we asked was how will marketing jobs change?

[00:42:29] And you laid out a timeline on the road to AGI in your keynote. And one of the milestones was around 2025 to 2027, you said, quote, disruption in knowledge work starts to become more tangible and measurable. And I thought that was just like a great way to kind of cap off this topic, because it really sums up why the event is so important, why the community is so important.

[00:42:50] Like this stuff is accelerating and we're getting very quickly, it sounds like, into where it's going to start affecting people. 

[00:42:57] Paul Roetzer: Yeah, my theory is that when [00:43:00] agents, true agents, um, I, I don't, it's funny, like what Benioff's calling this Agentforce thing, I'm not so sure that they're the kind of agents that we think of agents as. Like, I think I mentioned this on a recent episode, like these rule-based agents are being kind of considered agents, where I think the true concept of them is these agents that can go through reasoning and chain of thought and take action and things like that.

[00:43:24] So what we're going to see this year is a lot of experimentation, a lot of talk around agents. Um, but it's really going to be one to two years out before these things become at or beyond human reliability and accuracy. And when that occurs, when agents truly start to function as integral parts of businesses and workflows, then I think we start to see true disruption to the workforce.

[00:43:48] And that's why we built JobsGPT. So, you know, recall that from a few episodes ago, it's the idea of taking an exposure key and saying, well, okay, in one to two years, these things are going to have reasoning [00:44:00] capability, persuasion capability, they will be multimodal from the ground up, like now we start to have jobs that are exposed to true automation and disruption.

[00:44:09] Um, so I do think we still have some time, but I, I think that we're going to move forward quite quickly into agents, um, starting to disrupt work next year.

OpenAI $150B Valuation

[00:44:20] Mike Kaput: All right, we're going to dive into some very rapid fire in the time we have left here. Um, first up, OpenAI is now in talks, according to Bloomberg, to raise

[00:44:32] $6.5 billion in equity funding at a pre-money valuation of $150 billion. Now that valuation is way higher than the company's current valuation of $86 billion. And in addition to the equity funding, OpenAI is reportedly in discussion to secure $5 billion in debt financing. So we had talked a bit, Paul, um, in previous weeks about rumors around OpenAI's funding.

[00:44:56] Um, they were rumored to be at a $100 billion [00:45:00] valuation last time we discussed it. You had kind of mentioned, like, they need to make sure that what they're raising is enough money for the massive investments that they're going to have to make to create the next generation of frontier models. So, if these numbers are accurate, again, they're just early reports from Bloomberg, between

[00:45:19] $6.5 billion in equity and $5 billion in debt, is that enough?

[00:45:23] Paul Roetzer: I, I mean, I said an episode or two ago, like, $100 billion is what it was rumored at then, and $5 billion or whatever, it just still seemed low to me. I mean, this may be the number it comes out at. At least it's a little bit higher. Every time a report comes out, these numbers just change, but there's, if this is what it happens to be, if it is six and a half in equity and five billion in debt financing,

[00:45:46] so $11.5 billion total at a $150 billion pre-money valuation, there's more to the story somewhere. Like those are still too small of numbers, in my opinion, especially the money raised. It's just not going to do anything. $10 billion isn't going to do the training run for [00:46:00] GPT-5.5. Like there, there's just more to the story than this.

[00:46:04] Um, and we'll have to wait and see what it is. 

Hume EVI 2

[00:46:07] Mike Kaput: So next up, a company we've talked about before called Hume AI, H-U-M-E, has just introduced a new voice-to-voice AI model called EVI 2, and that's short for Empathic Voice Interface 2. So EVI 2 is focused on emotional intelligence. This model is

[00:46:30] very able to engage in rapid, fluent voice conversations with sub-second response times. And it's also designed to understand and generate various tones of voice, different personalities and accents, and even adapt to the speaking rate of the participant in the conversation. So the idea here is this isn't just designed to talk to you in natural language, like a human, but actually anticipate and adapt to your emotional

[00:47:00] preferences. So, EVI 2 is now available in beta, both as an app and as an API, and the current release is described as EVI 2 Small. Uh, and a new version, EVI 2 Large, is in development and is expected to be announced in the future. So, Paul, like, how significant is, like, quote unquote, emotionally intelligent AI? I mean, I mentioned Hume, actually, in some of my talks, just as an example of, like, what's becoming possible.

[00:47:26] Like, AI that understands your emotional state at a decently competent level, that can respond accordingly, like, tons of business applications for that, but also a bit easy to see how that could be misused.

[00:47:39] Paul Roetzer: Yeah, so in my opening keynote, I had a slide that said, what remains uniquely human, and that the key question becomes, what won't AI be able to achieve or simulate, and simulate is the real key here, at or above human levels.

[00:47:53] And so emotions, the AI won't have emotions. Like there's no known path to an [00:48:00] AI having emotions the way a human does, but it can simulate them and it can understand them. And that's the key unlock, is it doesn't have to actually have emotions, as long as it, you know... And so this is one of those that I fully expect not only this model, but other models from other, you know, frontier model companies, to build in this emotional intelligence into their models, and it's a very slippery slope. Like it's, it can be amazing, it can be very, very useful in business instances, but it can also cross, uh, ethical lines quite quickly.

[00:48:42] Um, but I don't think we're putting this one back in the box. Like, this is just going to be done by people and businesses. Um, so yeah, I'm, I wouldn't say, like, that I'm a huge proponent of this and that I'm super excited that people are pushing this, [00:49:00] but I also am a realist that it's going to be a part of, uh, AI models and a part of our society.

[00:49:06] Um, it is a fascinating product. 

[00:49:10] Mike Kaput: Relating back to what we talked about, NotebookLM, like, doesn't do this, but you can hear in those voices a level of emotional intelligence and richness and nuance that is way crazier than you hear interacting with OpenAI's voice. So you will get a kind of eerie sense of what this could sound like interacting with.

[00:49:32] Paul Roetzer: No doubt. Oh, and then real quick, just to jump back a topic or two. Um, so the next rumored model from OpenAI is Orion. That was named in one of the articles in the last week. And gotta love Sam Altman. So on September 13th, so this was the day after the conference, this is Friday, September 13th, I think, if I remember correctly.

[00:49:53] Yes. Um, Sam was at his high school in St. Louis [00:50:00] doing, you know, some talk, and he tweeted: I love being home in the Midwest. The night sky is so beautiful. Excited for the winter constellations to rise soon; they are so great. So in traditional Sam style, he is just implying that Orion is coming soon, which is probably the new name for GPT-5, I'm guessing. At least, Orion is the code name.

[00:50:24] I don't know if they'll actually call it that. But there was another great one. I don't know if I'll be able to find it real quick, but while we're on the topic of Sam, he had this hilarious tweet, uh, where somebody said something like, they released o1, and, uh, somebody was like, yeah, that's great.

[00:50:42] But like, when's advanced voice mode coming out? And he replied, uh, man, I'm not gonna be able to find it now. Um, something like, why don't you just be grateful for the insane intelligence from the sky for a minute [00:51:00] and, like, chill. Basically: we just gifted you reasoning and you're complaining about not having voice mode.

[00:51:09] Like, pump the brakes, basically. Yeah. That's hilarious. How about a couple of weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon.

[00:51:21] Mike Kaput: That's hilarious. And just in case you think we're grasping at straws here, earlier this year he tweeted or posted a photo of his garden, a garden growing strawberries, and said something to the effect of, looking forward to the crop this year, or whatever.

[00:51:36] So, 

[00:51:37] Paul Roetzer: So yeah, he has fun with it. Intentionally. He knows they've got something. Yeah. Uh, he likes to tease.

HeyGen Avatar 3.0

[00:51:44] Mike Kaput: All right. So next up, HeyGen, a company we've talked about a bunch of times and a leading AI video generation player, has released Avatar 3.0, the latest version of its AI avatars. [00:52:00] These are completely AI-generated, hyper-realistic-looking individuals that can narrate videos or live streams.

[00:52:08] The founder, Joshua Xu, in a post on X said, quote, Our avatars have evolved beyond lip syncing to feature full body dynamic motion. For the first time, our avatars' facial expressions and voice tones are dynamically generated to perfectly match the script. So basically these are much more advanced avatars that can narrate videos and can grasp the nuances of words, use facial expressions, et cetera.

[00:52:33] And HeyGen says a few of the use cases for these include things like AI SDRs, virtual corporate trainers, using AI avatars to scale customer support, or creating AI tutors. So this is not the first time we've talked about this concept or about HeyGen. Like, have you seen any AI avatars becoming really prevalent yet in business contexts?

[00:52:55] Paul Roetzer: I personally haven't been exposed to them. I don't know if you would have, like, I'm not getting sales [00:53:00] messages from them or anything like that. Yeah. They get talked about a lot, but I'm not sure how they're being used in a practical environment or what kind of adoption HeyGen is seeing.

[00:53:09] They just get a lot of buzz. And again, I feel like this is one of those things where they're going to raise a bunch of money, it's going to be a darling of the media, it's going to be all over the Twittersphere, and then they're going to get acqui-hired because they can't find a revenue model.

[00:53:25] And again, nothing against HeyGen or the company or where they're going to go. I just don't understand the market for this in terms of the enterprise software market. Like, maybe it's just online influencers and people figuring out how to build millions of followers on TikTok and YouTube with this kind of stuff.

[00:53:44] But in a business environment, this is cute for three months. And then you're like, stop sending me the stupid avatars. Just get on a call and talk to me. So I feel like the more human side is going to eventually win out, [00:54:00] not make this path obsolete exactly, but I just feel like it's tough for this to be a viable long-term company idea.

[00:54:07] I could be totally wrong on that, but me personally, I have zero interest in being pitched by an avatar, or nurtured by an avatar. It's just not going to fly for me personally.

[00:54:20] Mike Kaput: Kind of like we talked about with Zoom's AI meetings. If we can't bother to get on the call, I don't know if we're going to talk.

Glean’s Funding Round

[00:54:28] All right. So next up, an AI startup called Glean has just raised $260 million in Series E funding at a $4.6 billion valuation to build what it calls, quote, Google for work, using generative AI. So Glean is basically an AI-powered search platform for enterprises. It helps employees find and discover information across apps, docs, emails, and other corporate knowledge.

[00:54:55] So basically you can use chat-based search, leveraging small language models that are [00:55:00] trained for individual customers, to understand all the unique context and knowledge in your enterprise. And that enterprise focus seems to be their sweet spot, because they have enterprise-specific security and governance.

[00:55:14] Now, this new funding values the company at double what it was worth just six months ago, according to Fortune. So, Paul, it seems like Glean is a huge AI player in this kind of corner of the enterprise AI search market. They have a bunch of existing customers, plenty of traction. Like, how big is this problem?

[00:55:33] And why do we need AI to solve it?

[00:55:39] Mike Kaput: One of them used to work at Google, I know that. Like, pre-2014 or 2015, I believe.

[00:55:46] Paul Roetzer: Yeah. I mean, honestly, you know, it's a company that had surfaced for us and that I dropped in the sandbox last week. I haven't personally investigated this company. It seems like a lot of money.

[00:55:58] Yeah. Um, and I didn't know that it was a problem [00:56:00] that wasn't already being solved, to justify this kind of valuation. Um, but I don't know, like, "Google for work"? Doesn't Google already do that for work?

[00:56:13] Mike Kaput: That was my first question, right? Yeah. Isn't that what Gemini for Workspace is? I mean, if you're using those apps, isn't that what this is?

[00:56:21] Paul Roetzer: Chat-based search that leverages small language models. Yeah, I mean, I guess what they're doing is just taking the small model approach and tuning those models on people's data. And I could see there being a big market for that. Maybe it's a market Google doesn't want to play in. Um, and maybe it's a pretty good-sized market.

[00:56:37] So yeah, I mean, obviously something like a $260 million Series E is pretty far along in terms of their fundraising. So yeah, there must be some solid traction for it. Something we'll have to keep an eye on.

World Labs

[00:56:52] Mike Kaput: So next up, there's another new AI company called World Labs that is focused on spatial intelligence, and it is backed by [00:57:00] some pretty significant AI players, which makes it worth paying attention to.

[00:57:03] So, World Labs' core mission is to develop what they call Large World Models, as if you need another acronym, LWMs. These are AI systems that are designed to perceive, generate, and interact with the 3D world. The company believes spatial intelligence, the ability to understand and reason about objects, places, and interactions in 3D space and time, is a crucial next step in AI development.

[00:57:31] Now, why this is important is because at the helm of this company is Fei-Fei Li, who is a visionary AI pioneer. She has done a ton of groundbreaking work in computer vision. She's joined by a number of other renowned experts in computer vision and graphics. And they have over $230 million in funding, with backing from some AI voices we know and listen to often, like Geoff Hinton and Andrej Karpathy.

[00:57:59] Paul, can you [00:58:00] kind of walk through why this company, this approach is significant? 

[00:58:05] Paul Roetzer: If you don't know who Fei-Fei Li is, she's generally considered the godmother of AI. So, um, let's see, I'll just read a little bit from her bio. So, inaugural Sequoia Professor in the Computer Science Department at Stanford, co-director of Stanford's Human-Centered AI Institute, uh, served as director of Stanford's AI Lab from 2013 to 2018.

[00:58:24] During a sabbatical, she was at Google as vice president and chief scientist of AI/ML at Google Cloud. She's published over 300 scientific articles, but most importantly in terms of AI history, she invented something called ImageNet and the ImageNet Challenge. ImageNet is a critical large-scale dataset and benchmarking effort that contributed to the deep learning movement that we know today.

[00:58:49] So ImageNet, um, provides a dataset of over 14 million hand-annotated images for training AI algorithms. If you go back to 2009, [00:59:00] 2010, computer vision wasn't really anywhere near where it is today. It couldn't recognize objects clearly. And so this training set that she led the creation of,

[00:59:10] which hand-annotated all of these images, enabled computers to start to learn objects. Those 14 million images are categorized into 20,000 classes, like objects, animals, scenes, and other categories. So today we take for granted that you can go on your iPhone and search for, you know, a tree or a dog, and it just automatically recognizes those things within your photos.

[00:59:33] That was impossible 13 years ago. So, um, this ImageNet competition in 2012 was a turning point, and this is when deep learning took off. Um, there was something called AlexNet, a deep convolutional, uh, neural network that dramatically outperformed everything else in the field. Um, so AlexNet was created by Alex Krizhevsky and Ilya Sutskever.

[00:59:57] That's the same Ilya, and Geoff Hinton. And the [01:00:00] story of this, if you want to hear it, because it is fascinating and extremely important for understanding the history of AI in context: the book Genius Makers by Cade Metz starts with a prologue about this. So I'll just read a couple quick excerpts and then we'll move on, but I just find this stuff fascinating.

[01:00:17] So, Hinton created a new company. It included two other people, both young graduate students, in his lab at the University of Toronto. It made no products, it had no plans to make a product, and its website offered nothing but a name, DNN Research, which was even less appealing than the website. Two months earlier, Hinton and his students, this was in 2012, had changed the way machines saw the world.

[01:00:40] They had built what was called a neural network, a mathematical system modeled on the web of neurons in the brain, and it could identify common objects like flowers, dogs, and cars with an accuracy that had previously seemed impossible. As Hinton and his students showed, a neural network could learn this very human skill by analyzing vast amounts [01:01:00] of data.

[01:01:00] He called this deep learning, and its potential was enormous. It promised to transform not just computer vision, but everything from talking digital assistants, to driverless cars, to drug discovery. The idea of a neural net dated back to the 1950s, but the early pioneers had never gotten it working as well as they'd hoped.

[01:01:19] By the new millennium, most researchers had given up on the idea, convinced it was a technological dead end and bewildered by the 50-year-old conceit that these mathematical systems somehow mimic the human brain. As we've always talked about, Mike, AI goes back to the 1950s. This isn't new. But deep learning, the ability to give the machine human-like capabilities of understanding, of vision, of language, that's new.

[01:01:43] So then it goes on to say, Hinton remained one of the few who believed it would one day fulfill its promise, delivering machines that could not only recognize objects, but identify spoken words, understand natural language, carry on a conversation, and maybe even solve problems humans couldn't solve on their [01:02:00] own.

[01:02:00] Providing new and more incisive ways of exploring the mysteries of biology, medicine, geology, and other sciences. Sounds awfully relevant to where we are today with the models. And then it ends: in the spring and summer of 2012, Hinton and his two students made a breakthrough. They showed that a neural network could recognize common objects with accuracy beyond any other technology.

[01:02:22] With the nine-page paper they unveiled that fall, they announced to the world that this idea was as powerful as Hinton had long claimed it to be. He then sold that company to Google for $44 million. Ilya Sutskever and Geoff Hinton, uh, then went to work at Google. And that eventually led to the forming of OpenAI, when Ilya left and went to OpenAI with Sam and Elon Musk.

[01:02:45] And then Geoff Hinton stayed at Google until last year. He left because he thought the models were now a threat to society and he regretted his life's work. So it's fun sometimes to go back through this history. Why does this matter? Because Fei-Fei Li has been at the center of all of it. [01:03:00] And now she's trying to do it again.

[01:03:01] She's trying to do for spatial intelligence what she did for images and computer vision. And if she achieves it, and they've already made progress, then 10 years from now we may be looking back on this moment and saying, oh, when she started this, that started it all again. So when Fei-Fei Li does something at this level, you pay attention, because history would tell you you must.

Salesforce AI Use Case Library

[01:03:27] Mike Kaput: Lots and lots of history here. Alright, so next up, Salesforce has released a new resource called its AI Use Case Library. This is a collection of out-of-the-box use cases for specific industries that Salesforce customers can now quickly learn about, then access instructions on how to enable in their Salesforce instance.

[01:03:49] So if you click into any of the cards on this AI Use Case Library page, you can quickly see exactly how to enable that use case in your Salesforce [01:04:00] instance. For instance, you can click on something like generate a sales pitch, and Salesforce gives you all the details on how to use the actions and prompt templates within Salesforce to do that exact thing.

[01:04:11] So Paul, it might just be me, but I think this is a pretty valuable approach, because I'm not always sure if demo videos or onboarding education are enough. Sometimes handholding people through use cases seems to resonate with a lot of the people we speak with.

[01:04:26] Paul Roetzer: Yeah, that's what we always tell enterprises, like, don't just turn tools on, give people like three to five use cases that are highly personalized to them and let them nail those before you give them like the ocean of possibility of these language models.

[01:04:40] So yeah, I like that approach, and we'll probably have more on Salesforce next week, because Dreamforce is happening this week and Agentforce is in full force. That's a lot of forces. So, yeah, lots to talk about with Salesforce next week.

DataGemma

[01:04:55] Mike Kaput: So, in our next topic, Google has unveiled something called [01:05:00] DataGemma, which is a groundbreaking AI model designed to tackle one of the most pressing challenges in generative AI: hallucinations, or, you know, confidently presenting inaccurate information.

[01:05:13] So, DataGemma aims to reduce this problem by grounding LLMs in real-world statistical data. At the core of it is Google's Data Commons, which is a vast repository of public information with over 240 billion data points. This is a knowledge graph that sources data from organizations like the UN, the WHO, and various census bureaus, so reliable, extensive datasets.

[01:05:39] And basically what they do is use Retrieval Interleaved Generation, RIG, which proactively queries trusted sources and fact-checks information, and Retrieval Augmented Generation, RAG, which we've talked about before. So basically, you're going to be able to use this to [01:06:00] notably enhance the accuracy of your language models when handling numerical facts. (A rough sketch of the RIG idea follows this segment.)

[01:06:04] And Google has reported they're already seeing some encouraging preliminary results. This is all built on Google's Gemma family of lightweight open models, and DataGemma itself is an open model. So, Paul, I think this research can highlight a pretty important point. Hallucinations are a huge problem, and some people talk about them like they're unsolvable, or like a permanent barrier to adoption and trust.

[01:06:30] This seems to indicate that might not be true. 

[01:06:33] Paul Roetzer: Yeah, if I recall correctly, Sundar Pichai was asked about hallucinations on the 60 Minutes special in the spring of this year. And, I believe at the time he said, like, they were tracking it. Like, they saw it as a solvable problem, and what I have heard from pretty much all researchers is, they're likely going to tackle hallucinations, certainly beyond human level.

[01:06:57] Like, humans get stuff wrong all the time, make stuff up [01:07:00] all the time, create stories all the time, um, create fiction to convince people of things. My guess is within two years, these models are way more reliable than humans, um, on the average, on the distribution, basically. So, I'm not surprised. I think we're going to see a lot more research papers coming out in the next 12 months that, you know, really start to move the needle on eliminating or dramatically reducing hallucinations.
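To make the Retrieval Interleaved Generation (RIG) approach described above a bit more concrete, here is a minimal, hypothetical Python sketch. It is not DataGemma's actual code or API; the helper names (llm_generate, data_commons_lookup) and the [QUERY: ...] marker format are assumptions for illustration only. The point is the flow: the model drafts an answer that flags each statistic as a query, each query is resolved against a trusted statistical source, and the verified value is interleaved back into the response.

```python
import re

def llm_generate(prompt: str) -> str:
    # Placeholder for a call to any LLM. In a RIG-style setup, the model is trained
    # to emit [QUERY: ...] markers wherever it wants a grounded statistic instead of
    # guessing the number itself. This stub just returns a canned draft.
    return ("Canada has a population of about [QUERY: population of Canada], "
            "per the latest estimates.")

def data_commons_lookup(query: str) -> str:
    # Placeholder for a lookup against a trusted statistical source
    # (Data Commons plays this role for DataGemma). Here it's a tiny dict.
    fake_store = {"population of Canada": "40 million"}
    return fake_store.get(query, "[no trusted value found]")

def rig_answer(prompt: str) -> str:
    # 1) Draft an answer that flags every statistical claim as a query.
    draft = llm_generate(prompt)
    # 2) Resolve each query against the trusted source and interleave the
    #    verified value back into the text.
    return re.sub(
        r"\[QUERY:([^\]]+)\]",
        lambda m: data_commons_lookup(m.group(1).strip()),
        draft,
    )

if __name__ == "__main__":
    print(rig_answer("How many people live in Canada?"))
    # -> "Canada has a population of about 40 million, per the latest estimates."
```

Retrieval Augmented Generation, by contrast, retrieves relevant data before generation and places it in the model's context window, so the model writes its answer with the trusted figures already in front of it.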

LLM Novel Research Ideas

[01:07:26] Mike Kaput: Alright, so for our last topic today, we have a new paper out from researchers at Stanford that investigates whether large language models can generate novel, expert-level research ideas. The authors conducted a large-scale study that involved over a hundred researchers to compare research ideas generated by an LLM with ideas proposed by human experts.

[01:07:53] Now, this study focused on NLP research in particular, and they found that [01:08:00] in this specifically designed experiment, AI-generated ideas were actually judged as significantly more novel than human expert ideas. And that actually held robustly across multiple statistical tests and evaluation methods.

[01:08:16] The study also found that AI ideas were rated slightly lower on feasibility compared to human ideas, though this difference was not statistically significant. Now, Paul, this is just one study in one particular domain, but it kind of brings full circle, I think, the implications of systems like OpenAI's o1. It appears to be possible that sufficiently advanced AI systems could accelerate AI research.

[01:08:43] What do you think about that? 

[01:08:45] Paul Roetzer: Yeah, I don't find it surprising at all. Um, I think we're going to keep seeing more and more studies that validate the idea that AI can generate original ideas, or at least find connections between ideas that humans struggle to. Um, so yeah, like I said, I'm not surprised [01:09:00] by any of this research. Um, I will randomly say here: I hope Google solves getting randomly logged out of your account while you're in the middle of doing a podcast with a Google Doc. It just drops you all of a sudden, and you've got to scramble and find your password to keep things flowing.

[01:09:19] So maybe AI can solve that, the don't-log-me-out-in-the-middle-of-using-your-tools thing. That's why I never use Google Slides for anything. Like, when I'm presenting, I just get logged out in the middle. Um, yeah, so, not surprising. Uh, I think we'll see more. And I will add one more quick rapid-fire topic, Mike, uh, just because this is fascinating.

"Plex" It

So yesterday, uh, Harry Stebbings, who has The Twenty Minute VC podcast, tweeted: I am truly astonished by how fast a previously so ingrained behavior, Googling, has been usurped for me by Perplexity. So natural, so much better. And then he tagged Aravind, the CEO and founder. So I replied to it, because ironically, earlier [01:10:00] that day, a friend, Schaefer, who's in Venice,

[01:10:03] was, um, saying how much he loved Venice. And I had been there this summer and had learned why St. Mark was the patron saint of Venice, which is a wild story I'm not going to tell at the moment. But I wanted to say to Mark, like, Google it, or what? Perplex it? Like, what is the verb for Perplexity? So I just said, Google or Perplexity.

[01:10:22] And so when I saw Harry's tweet, I replied, and this was at 5:30 on September 15th, and the timestamp matters for a second. I said, we need a verb for it. Plex, slash, plexing. Like, what do we call this? Because I haven't seen anybody say anything. And Aravind was tagged on that tweet. And then six hours later, Aravind tweets: just plex it.

[01:10:45] So I'm not taking credit for, like, creating a verb for Perplexity, but it is quite ironic timing that, um, we might be calling it like that.

[01:10:58] Mike Kaput: All right, you've heard it here first. From now [01:11:00] on, we're going to call

[01:11:00] Paul Roetzer: it, like, "plex it," meaning Perplexity. And we'll let Aravind take credit for, um, you know, naming it.

[01:11:07] But if anybody listens to the podcast, it may just be a coincidence, but there you go. So we can go plex something now.

[01:11:17] Mike Kaput: Awesome. Good. Because I'm trying to use it more and more, so.

[01:11:20] Paul Roetzer: Now we have a way to describe it. 

[01:11:22] Mike Kaput: Awesome. Well, Paul, thank you so much as always for breaking down the complicated and interesting world of AI this week for us.

[01:11:31] Just a couple of quick housekeeping notes. If you haven't left us a review, we'd love that. It helps us make the podcast better. And also check out our newsletter at marketingaiinstitute.com/newsletter. It's This Week in AI; basically, it'll send you a single brief every week of all the news you need to know in AI, including what we talked about today and a bunch of stories we did not get to cover.

[01:11:55] Paul, thanks so much. 

[01:11:58] Paul Roetzer: All right, Mike, back to normal week, huh? 

[01:12:00] Get through MAICON, we get back to the grind. Thanks everyone. Uh, we will talk to you again next week.

[01:12:07] Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:12:30] Until next time, stay curious and explore AI.

 
