OpenAI is facing a new wave of controversy after its CFO, Sarah Friar, hinted the company might want government help funding its massive, trillion-dollar data center build-out.
The comment came at a recent Wall Street Journal event, where Friar suggested the government might “backstop the guarantee that allows the financing to happen.”
This set off immediate alarms across the industry and in Washington, with critics arguing it sounded like OpenAI wanted the government to de-risk its massive bet on artificial intelligence. White House AI advisor David Sacks quickly posted on X: "There will be no federal bailout for AI."
Hours later, OpenAI CEO Sam Altman issued his own post on X, stating unequivocally that OpenAI "does not have or want government guarantees" for its data centers, and emphasizing that taxpayers shouldn't bail out bad business decisions.
Was this just a misunderstanding, or did OpenAI accidentally say the quiet part out loud? To understand the deepening anxiety behind the AI infrastructure race, I talked to SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 179 of The Artificial Intelligence Show.
The controversy taps into a growing uneasiness among investors, economists, and business leaders about how critical a handful of AI companies have become.
“The U.S. economy is actually becoming increasingly reliant on AI,” Roetzer says. “And the companies that are building and empowering it.”
The spending is staggering. Major labs like Microsoft, Google, OpenAI, Meta, and xAI are on track to spend close to half a trillion dollars on energy and data centers in 2026, with trillions more planned soon after. OpenAI alone has signaled plans to spend well over a trillion dollars in the next six to seven years.
They are doing this, Roetzer notes, because the market opportunity is also measured in the trillions. They are building the infrastructure for what he calls the "age of omni intelligence," where AI is omnipresent and the demand for compute power to run agents, reasoning models, and video generation becomes massive.
This build-out is also becoming a key source of job creation and GDP growth. But it’s a massive risk, and it’s one the U.S. government is actively encouraging.
“From a government perspective, they are very much on the record as saying they plan on 'winning' this race against China at all costs,” says Roetzer. “So the government needs these private companies to have these bold visions and to take on enormous risks in order to get to superintelligence first.”
The danger, he explains, is that we become so reliant on these companies that they become "too big to fail."
That phrase, "too big to fail," evokes the 2008 banking crisis, and Roetzer sees alarming parallels.
Back then, the crisis was fueled by banks bundling risky subprime loans into complex financial products called collateralized debt obligations (CDOs) that few understood, all based on the assumption that housing prices would rise forever.
Today, a similar dynamic may be emerging to fund the AI boom. A New York Times article, citing McKinsey, noted that $7 trillion in data center investment will be required by 2030. To fund this, tech giants are turning to a growing list of complex debt financing options, including corporate debt, securitization markets, private financing, and off-balance-sheet vehicles.
These companies are increasingly repackaging their debt as asset-backed securities, using the data centers themselves as collateral. This year alone, $13.3 billion in such securities has been issued, a 55% increase.
If the projected demand for AI doesn't materialize and the value of those data centers collapses, the collateral disappears, leaving someone holding the bag for hundreds of billions of dollars.
This high-stakes gamble is the context for CFO Sarah Friar's "backstop" comment. The AI labs know the government needs them to take this risk to compete with China, but they don't want to be left high and dry if their bet fails.
Friar later clarified her comments on LinkedIn, saying she "muddied the point" and was speaking more broadly about the private sector and government playing their respective parts.
But the underlying tension remains.
“AI is becoming increasingly political,” Roetzer says. “No matter how they try and clarify this, the reality is you have private companies taking on enormous risks that the government is encouraging them to do and needs them to do.”
The entire trillion-dollar AI infrastructure bet rests on one crucial assumption: that the demand for AI will be insatiable.
The assumption is that scaling laws will continue, AI models will keep getting smarter, and humanity will demand an endless supply of intelligence in every piece of software and hardware. If that holds true, all this new compute power will be used.
"If at some point supply and demand gets out of whack, we're screwed," says Roetzer.
This possibility is exactly what some contrarian investors are betting on. Roetzer notes that Michael Burry, the investor made famous in The Big Short for betting against the 2008 mortgage market, reportedly "took a billion dollar position against this build out."
Ultimately, the controversy over a single word has exposed the massive, interconnected risk at the heart of the AI revolution. Private companies are making nation-sized bets, encouraged by a government that needs them to win a geopolitical race, blurring the line between private enterprise and national strategy.