Marketing AI Institute | Blog

[The Marketing AI Show Episode 73]: OpenAI Fires Sam Altman: What Happened? And What Could Happen Next?

Written by Claire Prudhomme | Nov 21, 2023 1:55:33 PM

In light of the developments at OpenAI over the weekend, we are coming to you with a new episode a little ahead of schedule. Following OpenAI's announcement on Friday, there has been widespread speculation about the implications amid other significant developments. Join us this week as Paul provides a brief history of OpenAI's journey, a recap of recent events, and thoughts on what is yet to come.

Listen or watch below, and find the show notes and transcript further down.

This episode is brought to you by our sponsor:

Meet Akkio, the generative business intelligence platform that lets agencies add AI-powered analytics and predictive modeling to their service offering. Akkio lets your customers chat with their data, create real-time visualizations, and make predictions. Just connect your data, add your logo, and embed an AI analytics service to your site or Slack. Get your free trial at akkio.com/aipod.

Listen Now

Watch the Video

Timestamps

00:01:46 — An introduction of the weekend's events, Sam Altman fired, Greg Brockman quits

00:06:35 — Taking flight while OpenAI crashes

00:13:09 — Three factors that initially jumped out to Paul

00:15:10 — The history and structure of OpenAI explained

00:33:35 — Superintelligence and navigating the future of AGI

00:36:00 — Altman’s next steps towards GPT-5 and development of OpenAI

00:42:15 — The current state of affairs of Sam Altman, Microsoft and OpenAI

00:49:49 — Final thoughts on the beginning of a transformational time

Summary

Sam Altman is fired, Greg Brockman quits

On Friday afternoon, Nov 17, OpenAI's board abruptly fired CEO Sam Altman. CTO Mira Murati was briefly named interim CEO, and co-founder Greg Brockman quit in protest shortly after Altman's departure.

The board said Altman was not consistently candid with them, hindering their oversight. Despite widespread speculation, it remains unknown exactly why the board removed Altman so abruptly. The situation continues to evolve rapidly, and OpenAI's plans also remain uncertain.

The history and structure of OpenAI explained

OpenAI was founded in 2015 as a non-profit by Sam Altman, Elon Musk, Ilya Sutskever and Greg Brockman. Their goal was to advance AI to benefit humanity and build AGI safely.

In April 2018, OpenAI published its charter, which defined principles to guide its mission: broadly distribute the benefits of AI, prioritize humanity over shareholders, ensure safety is built into advanced AI, lead technical advances in line with the mission, and cooperate with others pursuing safe AGI.

The charter reflects strategies refined over two years to keep OpenAI focused on its mission as it progresses toward AGI.

In March 2019, OpenAI created OpenAI LP, a "capped-profit" hybrid entity. The goal was to rapidly increase investments in compute and talent while staying true to its mission. The new structure gave OpenAI a way to raise the billions of dollars it would require in the coming years.

OpenAI LP allows investors and employees to earn capped returns, and any value beyond the cap goes to the original OpenAI Nonprofit. Overall, OpenAI LP was created to scale resources dramatically while keeping OpenAI's mission central; a sketch of how such a capped-return split works follows below.
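To make the mechanics concrete, here is a minimal sketch of how a capped-return split like this could work. This is illustrative only: OpenAI's actual cap levels and terms are not fully public (a 100x cap was reported for first-round investors), and the function name and figures below are hypothetical.

```python
# Hypothetical sketch of a "capped-profit" return split.
# The 100x default is the multiple reported for OpenAI LP's first-round
# investors; actual terms vary and are not fully public.

def split_returns(investment: float, value_generated: float,
                  cap_multiple: float = 100.0) -> dict:
    """Split attributable value between a capped investor and the nonprofit."""
    cap = investment * cap_multiple              # most the investor can ever receive
    investor_return = min(value_generated, cap)  # capped payout
    nonprofit_residual = max(value_generated - cap, 0.0)  # everything above the cap
    return {"investor": investor_return, "nonprofit": nonprofit_residual}

# Example: a $10M investment tied to $5B of attributable value.
# The investor is capped at $1B (100x); the remaining $4B flows to
# the original OpenAI Nonprofit.
print(split_returns(10e6, 5e9))
# {'investor': 1000000000.0, 'nonprofit': 4000000000.0}
```

The design choice worth noticing is that the residual claimant is the non-profit, which is what lets OpenAI raise startup-style capital while arguing that the mission, not investor returns, remains the governing interest.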

In June 2023, OpenAI provided an update on its "capped-profit" structure, which was first announced in 2019. They believe developing AGI safely is crucial as AI capabilities advance. A crucial note here is that the board determines when AGI is attained.

What we know now

Despite the back-and-forth nature of the weekend, including an effort led by employees and investors to reinstate Sam as CEO, Sam Altman and Greg Brockman are done at OpenAI.

At this point, more than 700 employees, including Ilya Sutskever (co-founder, chief scientist, and board member) and Mira Murati (CTO, and CEO for a moment), have signed a letter threatening to leave the company.

The letter attacked the independent directors for their handling of Altman's removal, accusing them of jeopardizing OpenAI's work and mission and of lacking the competence to oversee the company.

The letter states that the signatories will only remain at OpenAI if the board appoints two new independent lead directors, all current board members resign, and Sam Altman and Greg Brockman are reinstated.

OpenAI has a new interim CEO, Emmett Shear, former CEO of Twitch, apparently chosen by the remaining independent board members. And while OpenAI must now install a new leadership team, Microsoft has hired Sam Altman and Greg Brockman "to lead a new advanced AI research team."

OpenAI, as we currently know it, is likely done.

For a more complete understanding, tune in to this episode as Paul Roetzer breaks down all the particulars and provides insights on what is yet to come.

Links Referenced in the Show


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: We are about to see an accelerated and more distributed race to build the next frontier models. This talent, hundreds of the top AI researchers in the world, is going to disperse.

[00:00:11] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:30] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host. This episode is brought to us by Akkio, the generative business intelligence platform that lets agencies add AI-powered analytics and predictive modeling to their service offering.

[00:00:50] Paul Roetzer: Akkio lets customers chat with their data, create real time visualizations, and make predictions. Just connect your data, add your logo, and embed an AI [00:01:00] analytics service to your site or Slack. Get your free trial at Akkio, that's Akkio.com/aipod.

[00:01:10] Paul Roetzer: Welcome to episode 73 of the Marketing AI Show. I am your host, Paul Roetzer. It is Thanksgiving week, so I am without my co-host, Mike Kaput, who is on vacation. I technically am on vacation, or at least I was until Friday afternoon. The plan for today was for Cathy McPhillips, my Chief Growth Officer, and me to do, like, a top 15 AI questions episode based on things people ask us all the time in our Intro to AI sessions.

[00:01:43] Paul Roetzer: And that sort of changed at 3:28 Eastern Time on Friday, November 17th.

00:01:46 An Introduction of the Weekend’s Events

So, I think by now everyone is aware that OpenAI, at that time, so this is again 3:28 p.m. [00:02:00] Eastern, Friday, November 17th, tweeted, OpenAI announces leadership transition. The tweet led to an OpenAI blog post, and I will call out a couple of excerpts from that post.

[00:02:13] Paul Roetzer: It says the Board of Directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the Board of Directors. Mira Murati, the company's Chief Technology Officer, will serve as interim CEO, effective immediately.

[00:02:36] Paul Roetzer: Mr. Altman's departure follows a deliberative review process by the Board, which concluded that he was not consistently candid in his communications with the Board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. In a statement, the board of directors said, OpenAI was deliberately [00:03:00] structured to advance our mission to ensure that artificial general intelligence benefits all humanity.

[00:03:07] Paul Roetzer: The board remains fully committed to serving this mission. So Sam was unceremoniously terminated by the board on Friday, and shortly thereafter, Greg Brockman, his co-founder and president, in a tweet at 7:09 p.m. Eastern on Friday, said, After learning today's news, I sent the following letter to the OpenAI team.

[00:03:32] Paul Roetzer: That letter said, Hi everyone, I'm super proud of what we've all built together since starting in my apartment eight years ago. We've been through tough and great times together, accomplishing so much despite all the reasons it should have been impossible. But based on today's news, I quit. Genuinely wishing you all nothing but the best.

[00:03:52] Paul Roetzer: I continue to believe in the mission of creating safe AGI that benefits all of humanity. [00:04:00] So, despite a wild weekend of speculation, including an effort led by employees and investors to reinstall Sam as CEO, as of, I'm recording this at 1:20 p.m. Eastern Time on Monday, November 20th, so as of this moment, Sam Altman and Greg Brockman are done at OpenAI, and more than 700 employees, including Ilya Sutskever, the co-founder, chief scientist, and, very importantly, board member,

[00:04:30] Paul Roetzer: and Mira Murati, who was the CTO and was installed as the interim CEO, have signed a letter threatening to leave the company. I'll explain more about that letter in a moment, and the significance of Ilya and Mira signing it. Then, we wake up Monday morning, so again, back to Monday, November 20th, at 2:53 a.m. Eastern Time.

[00:04:53] Paul Roetzer: Microsoft CEO Satya Nadella, keep in mind, Microsoft is the largest [00:05:00] investor in OpenAI, believed to hold about 49 percent of the company, tweets, We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, which just happened last week, and in continuing to support our customers and partners.

[00:05:23] Paul Roetzer: We look forward to getting to know Emmett Shear, kind of burying the lede here, Emmett was announced as the CEO also this morning, we'll get to that in a minute, and OpenAI's new leadership team, and working with them. And then here's the kicker: And we're extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.

[00:05:48] Paul Roetzer: We look forward to moving quickly to provide them with the resources needed for their success. And that's where we find ourselves at the moment.

So how did we get here [00:06:00] and what could possibly happen next? In this episode, I'm going to do my best to explain what we know and offer what I believe to be some highly relevant historical context about the key moments and players.

[00:06:15] Paul Roetzer: It's important to note at the start that we still don't know why the board took the drastic measures that they did, and despite speculation from people on social media, we actually don't know yet. So this is an extremely fluid situation. Again, I am recording this at 1:23 p.m. Eastern Time on Monday, November 20th.

00:06:35 Taking Flight While OpenAI Crashes

[00:06:36] Paul Roetzer: I'm going to do my best not to look at my Twitter feed while I'm explaining this all. So by the time I'm done here, who knows what's going to happen. I will take a moment to say, I am eternally grateful on days like this that I graduated from journalism school. I was trained to vet and verify and find [00:07:00] factual sources to craft stories.

[00:07:04] Paul Roetzer: I think that we all have to be very careful of how we collect information in moments like this and who we trust with the facts of what has happened. So for me, luckily, I have a highly curated list of journalists and AI insiders that I monitor in real time. And so when stuff like this happens, I already have all of those vetted sources.

[00:07:32] Paul Roetzer: I don't have to wait for mainstream media to write the articles. We can just go right to the people on the inside and the journalists who are deeply sourced with sources on the inside. And so, just a note for everyone: if there are topics you care deeply about, I don't care if it's politics or AI or investing, whatever it is, build a trusted list of sources.

[00:07:55] Paul Roetzer: And whether you like Elon Musk or not, whether you like Twitter slash X or not, there [00:08:00] is nothing that compares to Twitter for moments like this. This is what Twitter was built for. It's why it has so much value, potentially, if they don't screw it up. So lists and alerts are the two key things, and alerts more than anything.

[00:08:15] Paul Roetzer: So if you don't use alerts, use them, find the people you trust and set up alerts and pay attention in real time as it's happening. So, with that stage being set, I will now demonstrate to you how that exact process led me to kind of where we are today and how it started for me on Friday. So, Friday afternoon, I was in Chicago, so I live in Cleveland, I was in Chicago, and I had just finished running a strategic AI leader workshop.

[00:08:45] Paul Roetzer: And I was ready for a very long week away. So I was done. That was it. I'd made it through a crazy week, and I was going to now take the next, like, nine, ten days off and just relax and spend time with my family. [00:09:00] So I'm relaxing at the Chicago airport, get on the plane. Everything's leaving on time. It's looking great.

[00:09:04] Paul Roetzer: I'm going to be home by dinnertime Friday and just relax. Sitting at the gate, I'd been thinking about something for a while. So there was this tweet from Sam Altman, October 24th. And I brought it up in the workshop I was running to explain how sales and marketing and customer service, and just how I thought the world, was about to change in a way that most people, most leaders, weren't really understanding.

[00:09:32] Paul Roetzer: And so I posted on LinkedIn as I'm sitting on the plane. So before any of this has happened, I'm just kind of thinking about this. And so on October 24th, Sam tweeted, and I shared the screenshot: I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes.

[00:09:57] Paul Roetzer: So that was Sam's October 24th [00:10:00] tweet of this year. And then I added my thoughts. So I said, I find myself thinking about this tweet from Sam a lot. He tends to tweet things he already knows to be true, because he sees versions of AI tech the general public doesn't have access to yet. It happened in March 2021 when he wrote Moore's Law for Everything, and numerous times since related to OpenAI product releases.

[00:10:22] Paul Roetzer: I went on to say, imagine a near future in which we have access to AI agents that are superhuman at negotiation and persuasion. How does that change the buyer side in businesses and our personal lives? How does it change the seller side? How about HR, law, etc? I think these are questions and realities we may need to face in 2024.

[00:10:45] Paul Roetzer: So I just put it up, and I was just kind of pondering it as we were getting ready to take off. Then the plane started to taxi for takeoff, and I got a Twitter alert from OpenAI, which I get alerts from. I clicked through and read about the [00:11:00] termination of Sam. And I think I swore out loud. I don't know, I had my headphones on, but I am sure it was that moment where I was kind of...

[00:11:07] Paul Roetzer: completely caught off guard by something. So I quickly, before I think the plane took off, went into my persuasion post I had put up and added a note that said, I posted this about 10 minutes before the OpenAI announcement that Sam had been unceremoniously terminated. I added that link below because I realized people were going to all of a sudden read my persuasion post and think it had something to do with him getting fired, which it had nothing to do with.

[00:11:33] Paul Roetzer: It was just completely ironic that I happened to put it up 10 minutes earlier. And then I added a new LinkedIn post that said, wow, talk about a Friday news dump, Sam's out, basically. So then, of course, the flight takes off. Great. I now have no internet. Ironically, when they announced GPT-4, I was on a flight from Denver to San Diego.

[00:11:54] Paul Roetzer: So it's like I can't get on planes. OpenAI does crazy stuff when I get on planes. So the flight takes [00:12:00] off. I obviously lose the internet for about the first 10 minutes. Shockingly to me, it came on pretty quickly. It's a 50-minute flight. The internet works for about 15 minutes of that flight.

[00:12:10] Paul Roetzer: So I have nothing the last, like, 35 minutes, and I'm just losing my mind trying to figure out what is going on. So I land in Cleveland, family picks me up, we go get dinner. I explained to my wife what was happening, and then I just put my phone down and enjoyed a dinner with my family.

[00:12:27] Paul Roetzer: I was like, it'll be there when I get done. So we get home, and then the evening just goes insane. Like, anyone who was on Twitter Friday night knows how nuts it got. And the thing that drives me crazy in situations like this is people who have no idea what they're talking about spreading completely uninformed speculation.

[00:12:46] Paul Roetzer: And this stuff just spreads like crazy. It's like anything on the internet that's not true. Like, people just spread that stuff because they love it. So I had zero interest in getting involved in the speculation that was going on, because I knew nothing more than anybody else. But there [00:13:00] were a few things that I was very confident in at that moment, a couple of which I shared a little bit in public.

[00:13:06] Paul Roetzer: There were a few of my friends I was texting some things to.

00:13:09 Three Factors that Jumped Out Initially

But the three things that jumped out to me were: one, OpenAI's structure and charter would be very relevant to whatever the story ended up being. Whatever happened, their charter was going to be a part of that story. I don't think most people realize they're governed by a non-profit; that is not common knowledge.

[00:13:31] Paul Roetzer: So that was going to be key. The second thing that I thought was going to be critical to this was artificial general intelligence. I had a very strong feeling that was going to be at the center of the discussion. And if AGI is a new topic for you, if this is the first time you're listening to this podcast, go back to episode 72, last week's episode.

[00:13:52] Paul Roetzer: We had a whole topic, I think like 15 minutes on AGI, the state of it, how it's classified, so you could go listen to that and then I'll get a little bit [00:14:00] more into AGI today. And the third thing I was confident in is there was no way Microsoft was going to let their $10 billion investment in OpenAI just crash.

[00:14:12] Paul Roetzer: Satya is too savvy of a CEO. There was no way that was happening. So OpenAI's structure was going to be key, AGI was going to be at the center of this conversation, and Microsoft was not going to let their money just burn. So what I want to do today is walk you through a few things that I think become extremely relevant to understanding whatever happened and wherever it goes from here.

[00:14:37] Paul Roetzer: So the first section I want to go through here is the history and structure of OpenAI. For OpenAI's origins, I always recommend reading Genius Makers by Cade Metz. It's a great book that tells sort of the inside story of how the company was founded. Also, Chamath Palihapitiya, I can never say Chamath's last name properly, [00:15:00] Palihapitiya, had a great tweet, and I'll put the link to that in the show notes, that summarized the timeline in detail.

00:15:10 A Brief History of OpenAI

I'll pull out a couple of excerpts from Chamath's tweet. Chamath is on the All-In podcast, and if you don't listen to the All-In podcast, he's a brilliant marketer, business person, and investor.

[00:15:19] Paul Roetzer: So, he said, OpenAI was initially founded in 2015 by Sam Altman, which we obviously talked about, Elon Musk, Ilya Sutskever, who is key to this, and Greg Brockman, who is also key to this. It was founded as a non-profit organization with the stated goal to advance digital intelligence in the way that is most likely to benefit humanity as a whole.

[00:15:42] Paul Roetzer: The company assembled a team of the top researchers in AI to pursue the goal of building AGI in a safe way. A crucial development occurred in June 2018, as Chamath's tweet goes on to say: the company released a paper titled Improving Language Understanding by Generative [00:16:00] Pre-Training, which introduced the foundational architecture for the generative pre-trained transformer model.

[00:16:06] Paul Roetzer: This later evolved into ChatGPT, the company's flagship product. Now, I don't think Chamath mentions this in his tweet, it was a pretty long tweet, but what OpenAI did in June 2018 built on what the Google Brain team did in June 2017, when they released the paper called Attention Is All You Need that invented the transformer,

[00:16:30] Paul Roetzer: which is the T in GPT. So the generative pre-trained transformer actually originated from the Google Brain team. So then, on April 9th, 2018, to be exact, OpenAI published the OpenAI Charter, defining the principles we use to execute on OpenAI's mission. I'll put the link to the charter in the show notes as well.

[00:16:55] Paul Roetzer: From that charter, I will now read a few excerpts. This document reflects the [00:17:00] strategy we've refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our charter will guide us in acting in the best interest of humanity throughout its development.

[00:17:17] Paul Roetzer: It goes on to say, OpenAI's mission is to ensure that AGI, artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. Again, I'm going to read that, because it is extremely important to understand this concept for everything else that's happening.

[00:17:42] Paul Roetzer: The mission is to ensure that AGI, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build [00:18:00] safe and beneficial AGI, but we'll also consider our mission fulfilled if our work aids others to achieve this outcome.

[00:18:08] Paul Roetzer: So this is not about them making money. This is about the achievement of safe AGI, whoever does it. To that end, we commit to the following principles. They then outline four principles, and I'll breeze through a couple of these things quickly. The first: broadly distributed benefits. We commit to use any influence we obtain over AGI's development to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

[00:18:43] Paul Roetzer: It goes on to say, and this is boldfaced in my text, so I'll tell it to you in boldface terms: Our primary fiduciary duty is to humanity. [00:19:00] Fiduciary duty is the obligation of a party to act in another party's best interest. So, above all else, the mission of the organization, the charter that the board governs, is the fiduciary duty to humanity.

[00:19:17] Paul Roetzer: It has to be in the interest of humanity. They then have a section on long-term safety. It says we will work out specifics in case-by-case agreements, but a typical triggering condition for the actions they would take would be a better-than-even chance of success of achieving AGI in the next two years.

[00:19:37] Paul Roetzer: So they have a timeline to this. Then they get into technical leadership and cooperative orientation. So again, you can go read the charter; it's only probably like 400 words. But it's really important to understand this. So then, a year later, in March of 2019, four years before the launch of GPT-4, OpenAI announced the formation of something called [00:20:00] OpenAI LP.

[00:20:01] Paul Roetzer: The blog post announcing this was authored by Ilya Sutskever and Greg Brockman. In that blog post, they say, We've created OpenAI LP, a new capped-profit company that allows us to rapidly increase our investments in compute and talent, while including checks and balances to actualize our mission. Our mission is to ensure AGI benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.

[00:20:33] Paul Roetzer: It goes on to say, we have experienced firsthand that the most dramatic AI systems use the most computational power, in addition to algorithmic innovations, and decided to scale much faster than we planned when starting OpenAI. We'll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.

[00:20:56] Paul Roetzer: Okay, so again, it's March 2019, four years after the founding of [00:21:00] OpenAI as a non-profit. They are now launching a for-profit entity underneath that non-profit, because they believe they're going to need billions of dollars, which they cannot raise just through donations, to train the most advanced models.

[00:21:16] Paul Roetzer: That post went on to say, we want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and non-profit, which we are calling a capped-profit company.

[00:21:35] Paul Roetzer: The fundamental idea is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount, and if we are successful, we expect to generate orders of magnitude more value than we owe [00:22:00] to people who invest in or work at OpenAI, are owned by the original OpenAI Nonprofit entity.

[00:22:03] Paul Roetzer: So beyond some profit level, any other money they make goes back to the non-profit and gets distributed to humans, basically. Going forward, in this post and elsewhere, OpenAI refers to the for-profit company, and the original entity is referred to as OpenAI Nonprofit. So the OpenAI that you all know, that we have all known for the last three-plus years, is the for-profit entity,

[00:22:30] Paul Roetzer: which is why most people don't even know the non-profit is still involved. Then it goes on to say, and this is really, really critical to understanding what's happening: The mission comes first. We've designed OpenAI LP to put our overall mission, ensuring the creation and adoption of safe and beneficial AGI, ahead of generating returns for investors.

[00:22:53] Paul Roetzer: OpenAI LP's primary fiduciary obligation is to advance the aims of [00:23:00] the OpenAI Charter, and the company is controlled by OpenAI Nonprofit's board. Again, the for-profit OpenAI that we all know is controlled by OpenAI's non-profit board. And all investors and employees sign agreements that OpenAI LP's obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

[00:23:30] Paul Roetzer: So that means employees who have stock options and investors who have invested in the company all sign an agreement that, if there comes a point where the benefit of humanity is more important than the work they're doing, they accept that what they hold may go to zero. Then it says only a minority of board members are allowed to hold financial stakes in the partnership at one time.

[00:23:55] Paul Roetzer: Furthermore, only board members without such stakes can vote on [00:24:00] decisions where the interests of limited partners and OpenAI Nonprofit's mission may conflict, including any decisions about making payouts to investors or employees. Finally, it says under safety, we are excited by the potential for AGI to help solve planetary-scale problems in areas where humanity is failing and there is no obvious solution today.

[00:24:24] Paul Roetzer: However, we are also concerned about AGI's potential to cause rapid change, whether through machines pursuing goals misspecified by their operator, malicious humans subverting deployed systems, or an out-of-control economy that grows without resulting in improvements to human lives. As described in our charter, we are willing to merge with a value-aligned organization, even if it means reduced or zero payouts to investors, to avoid a competitive race which would make it hard to prioritize safety.

[00:24:59] Paul Roetzer: Now, if you [00:25:00] think about that last section, and you think about the rapid growth of OpenAI over the last 12-plus months, you can start to see where there's friction between what is happening and what the charter states. So then, also from OpenAI, on June 28th of this year, June 28th, 2023, they published a blog post on their structure, an update to their structure, and I will again include this in the episode notes.

[00:25:32] Paul Roetzer: So in this post, it says, we announced our capped-profit structure in 2019, about three years after founding the original OpenAI non-profit. Since the beginning, we have believed that powerful AI, culminating in AGI, meaning a highly autonomous system that outperforms humans at most economically valuable work,

[00:25:53] Paul Roetzer: has the potential to reshape humanity, or, I'm sorry, reshape society, and bring tremendous benefits [00:26:00] along with risks that must be safely addressed. The increasing capabilities of present-day systems mean it's more important than ever for OpenAI and other AI companies to share the principles, economic mechanisms, and governance models that are core to our respective missions and operations.

[00:26:19] Paul Roetzer: Now keep in mind, in June 2023, what was happening? The government was coming after these companies and looking at laws and regulations related to them. So they had to be proactive in reasserting that they're here for the benefit of humanity. This June 28th, 2023 post goes on to talk about the structure.

[00:26:41] Paul Roetzer: So it says, we initially believed the 501(c)(3) would be the most effective vehicle to direct the development of safe and broadly beneficial AGI while remaining unencumbered by profit incentives. We committed to publishing our research and data in cases where we felt it was safe to do so and would benefit the [00:27:00] public.

[00:27:01] Paul Roetzer: Now we have the shift. It became increasingly clear that donations alone would not scale with the cost of computational power and talent required to push core research forward, jeopardizing our mission. So we devised a structure to preserve our non-profit's core mission, governance, and oversight, while enabling us to raise the capital for our mission.

[00:27:22] Paul Roetzer: So then they go into the structure in more detail. I'm going to call out a few key pieces of this. It says, because the board is still the board of a non-profit, each director must perform their fiduciary responsibilities in furtherance of its mission, safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to the mission.

[00:27:43] Paul Roetzer: The nonprofit's principal beneficiary is humanity, not OpenAI investors. So again, you're hearing this over and over and over again, and this is why it's so critical to understand what's happening. They also said, the board remains majority independent. Independent directors do not hold equity [00:28:00] in OpenAI.

[00:28:01] Paul Roetzer: Even OpenAI's CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full time. So again, most people don't realize that Sam, as the CEO, doesn't hold direct equity in OpenAI. Fourth, and this is in their bullets: profit allocated to investors and employees, including Microsoft, is capped.

[00:28:29] Paul Roetzer: All residual value created above and beyond the cap is returned to the non-profit for the benefit of humanity. And finally, and this one I had boldfaced again: The board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work.

[00:28:49] Paul Roetzer: Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI [00:29:00] technology. So I'm going to linger on this one for a minute. The board, which before Friday consisted of six people, was Greg and Sam and Ilya, so the three co-founders, and then three independents.

[00:29:16] Paul Roetzer: If that board determines that they have attained AGI, then the IP licenses to Microsoft and any other commercial terms are voided. They have no rights to AGI. They only have rights to pre-AGI technology. So, in theory, if GPT-5 has been determined internally to be AGI, then Microsoft's licenses stop at GPT-4.

[00:29:47] Paul Roetzer: That's what this means. We'll come back to that. Again, very important to understand it. Okay, so now let's talk about Microsoft. In this June 28th, 2023 post, so again, I'm still on the [00:30:00] same post, OpenAI explains it in this way: Shortly after announcing the OpenAI capped-profit structure and our initial round of funding in 2019, we entered into a strategic partnership with Microsoft.

[00:30:14] Paul Roetzer: We subsequently extended our partnership, expanding both Microsoft's total investment as well as the scale and breadth of our commercial and supercomputing collaborations. Now, I will stop here for a second and say, it is believed that Microsoft's investment is somewhere in the range of $13 billion, but a large portion of that investment is in computing time and resources.

[00:30:37] Paul Roetzer: So it is not a cash infusion. To train these models and to run these models costs a lot of money and requires a lot of computing resources. That is what Microsoft provides through Azure. So a lot of that $13 billion commitment has not actually been awarded yet, and that becomes very important in understanding Microsoft's play in all this.

[00:30:59] Paul Roetzer: So [00:31:00] OpenAI remains, this is going back again to the June 28th article, OpenAI remains an entirely independent company governed by the OpenAI non profit. Microsoft has no board seat and no control. And as explained above, AGI is explicitly carved out of all commercial and IP licensing agreements. Microsoft These arrangements exemplify why we chose Microsoft as our compute and commercial partner.

[00:31:27] Paul Roetzer: From the beginning, they have accepted our capped equity offer and our request to leave AGI technologies and governance for the non-profit and the rest of humanity. It then ends by laying out what constitutes the board. So, as I mentioned, you have Greg Brockman, who was the chairman of the board and was left out of being alerted that Sam was being fired.

[00:31:48] Paul Roetzer: He was also the president and co-founder. Then Ilya Sutskever, who we've talked about as the chief scientist, co-founder, and board member, and Sam Altman. And then the non-employees were Adam D'Angelo, [00:32:00] Tasha McCauley, and Helen Toner, whose names you may hear more about in the near future. Okay. And then the final milestone from OpenAI that I'm going to talk about is about a week later, on July 5th of this year, July 5th, 2023, when OpenAI published a blog post titled Introducing Superalignment.

[00:32:26] Paul Roetzer: Now in this post, I'm going to go through a few key excerpts here. It says, we need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, that's their goal, we're starting a new team co-led by Ilya Sutskever and Jan Leike, and dedicating 20 percent of the compute we've secured to date to this effort.

[00:32:54] Paul Roetzer: We're looking for excellent ML researchers and engineers to join us. So Ilya Sutskever, on July 5th of this [00:33:00] year, was made co-lead of a team within the organization that was solving for not just AGI, but beyond AGI: superintelligence. And they thought they could get there in this decade, that superintelligence would be achieved within this decade.

[00:33:17] Paul Roetzer: So superintelligence, it goes on to say, will be the most impactful technology humanity has ever invented and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous. and could lead to the disempowerment of humanity, or even human extinction.

00:33:35 Superintelligence and Navigating the Future of AGI

[00:33:37] Paul Roetzer: So if you remember, a few episodes back, we talked about this idea, the doomer side of this, that it could be the end of all of us. So there is this belief within the AI research community that this stuff can get out of control. While superintelligence seems far off now, we believe it could arrive this decade.

[00:33:55] Paul Roetzer: Currently, we don't have a solution for steering or controlling a potentially [00:34:00] superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI. But humans won't be able to reliably supervise AI systems much smarter than us,

[00:34:18] Paul Roetzer: and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs. They went on to describe their approach: the goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts and iteratively align superintelligence.

[00:34:39] Paul Roetzer: So basically, the superalignment team is going to build a human-level intelligence to align the superintelligence, if you're following along at home. The goal is to solve the core technical challenges of superintelligence alignment in four years. Ilya has made this his core research focus and will be co-leading the team with Jan Leike, [00:35:00] blah, blah, blah, blah.

[00:35:01] Paul Roetzer: Okay, so that is the structure of the organization and some of the key players and some of the milestones at OpenAI, going back to the founding in 2015, to the charter, to the formation of the capped-profit company, to the introduction of superalignment, which leads us to GPT-5, AGI, and the pursuit of superintelligence.

[00:35:27] Paul Roetzer: So since we understand the charter and the board's fiduciary responsibility to humanity, their words, not mine, we have to consider what OpenAI has been working on and the progress that they have potentially been making. After months of rumors, Sam recently confirmed that OpenAI is working on GPT-5 in an interview with the Financial Times that was published November 13th, 2023, just eight days ago.

[00:35:59] Paul Roetzer: [00:36:00] In this article, I'm going to read a couple of key excerpts. It says Altman, meanwhile, splits his time between two areas. Research into how to build superintelligence, quote unquote, and ways to build up computing power to do so. Quote, the vision is to make AGI, figure out how to make it safe, and figure out the benefits, unquote.

[00:36:23] Paul Roetzer: Pointing to the launch of GPTs, he said OpenAI was working to build more autonomous agents that can perform tasks and actions, such as executing code, making payments, sending emails, and filing claims. If you listen to this podcast often, you have heard us talk endlessly about AI agents and how all of these major research labs are working on these autonomous agents that can take actions online.

[00:36:48] Paul Roetzer: That is what he's referring to here. Then he goes on to say, quote, We will make these agents more and more powerful, and the actions will get more and more complex from here. The [00:37:00] amount of business value that will come from being able to do that in every category, I think, is pretty good. The article went on to say the company is also working on GPT-5.

[00:37:11] Paul Roetzer: Again, this is the first time I have seen public acknowledgement of this; it has been rumored for months. So again, the company is also working on GPT-5, the next generation of its AI model, Altman said, although he did not commit to a timeline for its release. It will require more data to train on, which Altman said would come from a combination of publicly available data sets on the internet, as well as proprietary data from companies.

[00:37:37] Paul Roetzer: Maybe the data you're uploading to GPTs, possibly. Okay. While GPT-5 is likely to be more sophisticated than its predecessors, Altman said it was technically hard to predict exactly what new capabilities and skills the model might have. Quote, until we train that model, it's like a fun guessing game for us, he said.

[00:37:57] Paul Roetzer: We're trying to get better at it, because I think it's [00:38:00] important from a safety perspective to predict the capabilities. But I can't tell you, here's exactly what it's going to do that GPT-4 didn't. Ultimately, Altman said, the biggest missing piece in the race to develop AGI is what is required for such systems to make fundamental leaps in understanding.

[00:38:21] Paul Roetzer: So keep that one in mind for a moment; we're going to come back to this. He said the biggest missing piece in the race to develop AGI is what is required for such systems to make fundamental leaps in understanding. He went on to say, quote, There was a long period of time where the right thing for Isaac Newton to do was to read more math textbooks and talk to professors and practice problems.

[00:38:46] Paul Roetzer: That's what our current models do, said Altman, using the example a colleague had previously used. But he added that Newton was never going to invent calculus by simply reading about geometry or algebra, [00:39:00] and neither are our models, Altman said. He ended with a quote: And so the question is, what is the missing idea to go generate net new knowledge for humanity?

[00:39:13] Paul Roetzer: I think that's the biggest thing to go work on. So then, the night before he's fired, and 10 days after OpenAI's Dev Day, where he introduced GPT-4 Turbo and GPTs, he appeared on stage at the APEC CEO Summit, and he had this to say. This is a quote directly from Sam, the next two paragraphs I'm going to read you:

[00:39:43] Paul Roetzer: I think more generally, the 2020s will be the decade where humanity as a whole begins the transition from scarcity to abundance. We'll have abundant intelligence that far surpasses expectations. Same thing for energy. Same thing for health. A few other categories [00:40:00] too. But the technological change happening now is going to change the constraints of the way we live and the sort of economy and the social structures and what's possible.

[00:40:12] Paul Roetzer: He went on to say, I think this is going to be the greatest step forward that we've had yet so far. And the greatest leap forward of any of the big technological revolutions we've had so far. So I'm super excited for, like, super excited. I can't imagine anything more exciting to work on. And on a personal note, just in the last couple weeks, I have gotten to be in the room when we sort of, like, push the sort of veil of ignorance back and the frontier of discovery forward.

[00:40:43] Paul Roetzer: And getting to do that is like a professional honor of a lifetime. So it's just fun to get to work on that. I'm going to reread a part of that. Just in the last couple of weeks, I have gotten to be in the room [00:41:00] when we push the veil of ignorance back and the frontier of discovery forward. We also know from multiple reports that Sam has been looking to raise billions of dollars from SoftBank and Middle Eastern investors to build a chip company to compete with NVIDIA.

[00:41:18] Paul Roetzer: and other semiconductor manufacturers, as well as lower costs for OpenAI. In addition, there have been rampant rumors that he is in talks to build a device, potentially to compete with the iPhone. It is not clear, despite whatever speculation you might hear, how informed the board was of these efforts, or how they fit with OpenAI's mission and growth.

[00:41:48] Paul Roetzer: Nor is it clear what the board knew about what Sam saw in that room when they pushed back the veil of ignorance. So [00:42:00] where do we go from here? That brings us back to today, now 1:59 p.m. Eastern Time on Monday, November 20th.

00:42:15 The Current State of Affairs of Sam Altman, Microsoft and OpenAI

For the moment, Sam, Greg, and potentially hundreds of OpenAI employees are leaving the company to join Microsoft, obviously, but also likely looking at destinations such as Google, NVIDIA, xAI, Meta, Anthropic, and Inflection.

[00:42:24] Paul Roetzer: Some of them will go launch their own AI companies. We also know that OpenAI appears to have a new interim CEO in Emmett Shear, the former CEO of Twitch, which he sold to Amazon, apparently chosen by the remaining three independent board directors. I'm not actually 100 percent sure how that worked, but based on my interpretation of their board structure, I'm assuming the three independent members picked Emmett.

[00:42:49] Paul Roetzer: Emmett, the at-the-moment interim CEO, tweeted early in the morning Eastern Time on Monday, November 20th: Today, I got a call inviting me to [00:43:00] consider a once-in-a-lifetime opportunity to become the interim CEO of OpenAI. He goes on to talk about his family and why he left Twitch. And then he kind of continues on and says, Our partnership with Microsoft remains strong, and my priority in the coming weeks will be to make sure we continue to serve all of our customers well.

[00:43:18] Paul Roetzer: OpenAI employees are extremely impressive, as you might have guessed, and mission-driven in the extreme. And it's clear that the process and communications around Sam's removal has been handled very badly, which has seriously damaged our trust. He then goes on to lay out his three-point plan for the next 30 days. Now, I am not a betting person.

[00:43:37] Paul Roetzer: If I was, I would not be putting money on him getting to see that plan through. But anyway. He goes on and talks about a bunch of stuff, and then he has a PPS at the end: Before I took the job, I checked out the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety.

[00:43:58] Paul Roetzer: Their reasoning was [00:44:00] completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models. The other thing we know at the moment to be true: everyone left at OpenAI seems to have turned on the remaining three independent board members. Even Ilya signed, along with others, a letter asking the board to resign.

[00:44:28] Paul Roetzer: So, Mira Murati, who I mentioned earlier, the CTO, was temporarily the CEO for, like, 36 hours or so. She has since tweeted, OpenAI is nothing without its people, and hearted Sam's tweet that said, I love the OpenAI team so much. So the one funny thing over the weekend was the OpenAI team communicating intentions to each other with heart emojis.

[00:44:49] Paul Roetzer: So, anybody who hearted that was basically telling Sam, like, hey, we're coming with you. And anyone who said OpenAI is nothing without its people was sort of following along with this letter to the board for all of them to resign and to [00:45:00] bring Sam back. So it was kind of funny to watch.

[00:45:03] Paul Roetzer: Ilya Sutskever, who either heavily participated in or led the coup to get Sam out, tweeted this morning, I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together, and I will do everything I can to reunite the company. Sam Altman hearted that tweet.

[00:45:28] Paul Roetzer: I don't think they could write this stuff in Hollywood. Like, it's wild. Okay, and then also this morning, still Monday, November 20th, we have a letter that was signed by more than 700 employees. I think they have like a thousand, so the vast majority of the remaining employees have signed this.

[00:45:44] Paul Roetzer: In this letter from employees, that again was signed by Ilya and Mira, the temporary CEO, it says: The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our [00:46:00] mission and company. Your conduct has made it clear you do not have the competence to oversee OpenAI.

[00:46:05] Paul Roetzer: Again, this is a letter from employees to the three remaining independent board members. When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds.

[00:46:23] Paul Roetzer: Despite many requests for specific facts for your allegations, you have never provided any written evidence. This is why we don't know yet why the board did what they did; they're not telling anyone why they did it. Although I assume Ilya knows. That's like the one thing here, because Ilya voted to get Sam out.

[00:46:42] Paul Roetzer: So Ilya has to know why, and I'm assuming Ilya is the one that raised the red flag, but we just don't know all the details yet. And again, despite many requests for specific facts, we don't have them. The letter goes on: We also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

[00:46:57] Paul Roetzer: Our leadership team worked with you around the clock to find a mutually agreeable outcome. Yet [00:47:00] within two days of your initial decision, you again replaced interim CEO Mira against the best interests of the company. I had this one boldfaced: You also informed the leadership team that allowing the company to be destroyed would be consistent with the mission.

[00:47:15] Paul Roetzer: Let that one sink in for a minute. We know the mission is basically to protect humanity from AGI; if we get to AGI, the mission of the organization is to do whatever it takes to protect humanity. And the employees, in a letter to the board that made this decision, said to them: You informed the leadership team that allowing the company to be destroyed would be consistent with the mission.

[00:47:42] Paul Roetzer: Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgment, and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. [00:48:00] Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary, should we choose to join.

[00:48:06] Paul Roetzer: We will take this step imminently, unless all current board members resign, the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman. The last thing we know to be true is that Microsoft CEO Satya Nadella may have pulled off one of the greatest coups in business history.

[00:48:33] Paul Roetzer: So Friday night, after I arrived home from Chicago, again, avoiding speculation about why, my mind was going to what happens next. And knowing all this history, and knowing the key players, and having been thinking about this stuff for a very long time, this is what I tweeted: Random thoughts.

[00:48:54] Paul Roetzer: Is one possible path forward that Microsoft just acquires OpenAI? [00:49:00] Not even sure how that sort of thing would work given OpenAI's structure. But this is very bad for both companies, and an acquisition might be a way out. Not only does it appear Satya had that thought, but he one-upped it and did it without having to acquire the non-profit.

[00:49:21] Paul Roetzer: Sam, Greg, hundreds of OpenAI employees leaving. without the burdens and limitations of the non profit structure and charter to create a new for profit subsidiary within Microsoft. It's insane. So that is where we are. I'm going to wrap now. I again have not looked at Twitter. I don't know if anything has changed in the 40 minutes I've been talking.

00:49:49 Final Thoughts on the Beginning of a Transformational Time

[00:49:49] Paul Roetzer: But I'm going to give you kind of my final thoughts, and I have eight quick final thoughts here. First: OpenAI as we know it is likely [00:50:00] done. There's a great thread, which I will put in the notes, from Reed Albergotti, the tech editor for Semafor, and he outlines in detail why. In essence, Microsoft controls the IP, all the training is done in Azure, so anything the OpenAI team does, the Microsoft team can learn from that data, and they have, in theory, all the talent. Like, it's really hard to imagine a scenario where OpenAI comes out of this as anything more than a shadow of its former self and a non-profit research lab that has no computing power and no talent.

[00:50:33] Paul Roetzer: Like, unless some dramatic twist happens, OpenAI as we've known it is just gone. The second thing, and this was one I thought of right before I kind of came on to record this, OpenAI customers and partners have to be just scrambling like crazy right now. So if you've built a wrapper on top of their API and you have some service built on this, you have to be wondering, is it even going to be on in three days?

[00:50:57] Paul Roetzer: If you're a customer that maybe signed up for [00:51:00] ChatGPT Enterprise, or if you're like the probably millions of people who just built some GPTs for free, you have to really be wondering, what is the future of OpenAI? And so I would think that the phones are ringing off the hook, or the emails are burning up, for Anthropic, and Cohere, and Google, and Amazon.

[00:51:17] Paul Roetzer: All the other companies who stand to benefit, whose customers may want a more stable company. So this totally just throws massive disruption into that whole ecosystem. The third thing, and I haven't really thought too much about this yet, but it is sort of top of mind: the open source AI crowd. So again, if you listen to the podcast, you know you have the closed source and the open source crowds, closed being OpenAI and Google, who are building their own proprietary models; you don't know the weights, you can't really manipulate anything.

[00:51:46] Paul Roetzer: And then the open source crowd, that's like Meta putting out Llama and tools like that. The open source crowd just got the greatest argument they could have ever wanted not to centralize AI [00:52:00] into a few closed tech companies. So if we can't trust the people building what is pretty commonly agreed to be the most powerful current AI in the world, if we can't trust them to govern themselves, how are we supposed to trust them to govern general intelligence and superintelligence?

[00:52:20] Paul Roetzer: The fourth thing is, I feel like Elon Musk has been extremely controlled in his tweeting, which is highly unusual. And I've got to think there's some role here for him other than just recruiting a bunch of people to xAI to build Grok. So if you remember, Elon Musk put up the initial money for OpenAI. He came up with the name for OpenAI.

[00:52:46] Paul Roetzer: He created OpenAI because he feared Google didn't understand the impact it could have on humanity, so he tried to build a counterbalance. He was then exited from the company and board because Sam won a power struggle to build the for-[00:53:00]profit engine. If OpenAI is now the non-profit that Elon envisioned to start, how does Elon not get back involved in some capacity?

[00:53:10] Paul Roetzer: So, I have no idea what it is, but it just seems like we're going to hear from Elon before this is all said and done. Number five: I think Anthropic plays some role here. So again, Dario Amodei, the co-founder of Anthropic, left OpenAI in late 2020 and took about 10 percent of the staff with him, because they were concerned about the for-profit engine being built

[00:53:38] Paul Roetzer: and the race to build AGI in a non-safe way. And so he built Anthropic focused on AI safety and constitutional AI. So what is the play there? Like, do all the OpenAI employees who still believe in the mission just go work for Anthropic? They don't want to build the profit engine; they want to save humanity.

[00:53:56] Paul Roetzer: I don't know. But again, there's something there. [00:54:00] Six is, I think we are about to see an accelerated and more distributed race to build the next frontier models. This talent, hundreds of the top AI researchers in the world, is going to disperse. Whether it's to Google or Amazon or NVIDIA or Cohere or Anthropic or Inflection, wherever they go, or they go start their own things, they're going to accelerate the development of this technology.

[00:54:28] Paul Roetzer: Now, the one factor to consider here is chips. The NVIDIA chips, the computing power that builds these things, are still limited and centralized with a few big companies. So you can't leave OpenAI and say, I'm going to go build my own language model, and think you're going to compete with the computing power these other companies have.

[00:54:45] Paul Roetzer: So I think it accelerates the development of the frontier models, but they probably stay centralized in like four to five companies. Number seven is, I assume the government is watching all of this with very high interest, and it is [00:55:00] absolutely going to impact their next move and the speed with which they make their next move.

[00:55:05] Paul Roetzer: Because they now see how dangerous this is, how quickly these things can get out of control when you're relying on these few select companies. So I would expect we will hear something from the government in the not-too-distant future. And then the final thought I have, and the final thing I'll leave you with: to me, the key question in all of this is, what did Sam witness in that room?

[00:55:31] Paul Roetzer: And has OpenAI made a breakthrough in the pursuit of AGI or superintelligence? So if we go back and we think about the charter to protect humanity, we think about the mission from day one to build AGI safely, and we go back to that quote from Sam at the APEC summit, when he talked about pushing the veil of ignorance back and the frontier of discovery forward,[00:56:00]

[00:56:00] Paul Roetzer: when we think about those things, it really makes me wonder, what have they done? Has there been some milestone in the development of AI that we're not privy to yet, and did that play a role in the board's thinking and action? I think we'll soon learn why the board made the move it did. They obviously can't keep this secret forever.

[00:56:23] Paul Roetzer: And I believe OpenAI's mission to ensure that AGI benefits all humanity will play a critical part in this conversation at some point soon. So, I hope this was all helpful to add context to the story. I appreciate you listening, I appreciate you being a part of our community, and I appreciate you regularly supporting this podcast.

[00:56:47] Paul Roetzer: I'm sure crazy stuff has happened since I've been recording this. We'll try and do our best to keep up with it. I post regularly on Twitter and LinkedIn, so I would say, between now and next week, [00:57:00] follow me on Twitter and LinkedIn, and I'll try and share the latest news and information.

[00:57:04] Paul Roetzer: Mike and I will be back next week with more OpenAI news, and hopefully back to our regularly scheduled format of keeping you up to date on all the AI news for the week. On a personal note, I hope you have a wonderful Thanksgiving and are able to enjoy time with your family and friends. I know I can't wait to be with mine.

[00:57:23] Paul Roetzer: Hopefully OpenAI takes a little break for a while and we can all just relax. So thank you again for listening. I hope it was helpful and I will talk with you again soon.

[00:57:33] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:57:54] Paul Roetzer: Until next time, stay curious and explore AI.[00:58:00]