The debate around artificial intelligence is increasingly splitting into two extreme camps: the “AI boomers” who see only upside, and the “AI doomers” who see only catastrophe.
This sharp division, however, is built on a dangerous foundation, one where high-conviction beliefs are being mistaken for fundamental truths.
To unpack this growing divide and what it means for the future of AI, I talked it through with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 180 of The Artificial Intelligence Show.
Roetzer says he's growing frustrated with the lack of middle ground in AI discussions, where extreme positions are increasingly crowding out logical, nuanced conversation.
He points to recent social media posts, like one from the BG2 Pod amplifying White House AI czar David Sacks, which framed those concerned about AI as "doomers" who have "scared people" and declared it's "time to push back."
In fact, it prompted Roetzer to post a question of his own.
It’s an increasingly relevant question.
Polarization around AI is only intensifying as more politicians and influencers jump into the debate. Roetzer cites a recent tweet from Senator Chris Murphy, who, referencing an Anthropic report on AI espionage, warned, "This is going to destroy us sooner than we think if we don't make AI regulation a national priority tomorrow."
This led Roetzer to a thought experiment: What if we mapped all human ideas on a spectrum, from things 100% of people agree on to things where beliefs completely diverge?
A fundamental truth, he explains, is true whether anyone believes it or not (e.g., time moves forward, gravity exists). We treat these as "non-negotiable constraints."
A belief, on the other hand, is something we think is true. It can be wrong, no matter how strongly we feel about it. Beliefs, Roetzer says, should be treated as "testable hypotheses."
“The problem comes in when people have so much conviction about their beliefs that they mistake them for fundamental truths,” says Roetzer.
This is precisely what's happening in the AI discourse. Influencers and politicians are voicing beliefs with great conviction, as if they were facts, often to advance their own agendas and without having done the research.
Roetzer laid out a series of statements, intentionally moving from high consensus to low consensus, to illustrate how quickly "truth" becomes subjective belief in AI.
Consider where you land on each of them.
The point, Roetzer stresses, is that the future of AI will be shaped by these beliefs, regardless of whether they are true.
As the general public forms "snapshot beliefs" based on high-profile soundbites, those beliefs will directly impact regulation, education, and business adoption.
This will create accelerating friction points around jobs, the economy, security, and society.
“My point is to stress that we all have to do our part to be open-minded, to listen to opinions and beliefs of people we trust, and to do our best to push for balanced and logic-based conversations in our companies and our communities,” says Roetzer.
In the end, we must approach AI with a "scientific method" mindset: be open to new data and willing to evolve our own thinking.
“When new data presents itself, part of science, what makes it so great is we evolve our belief,” says Roetzer.
It’s a lesson we all need to take even more seriously as the rhetoric around AI, good and bad, intensifies.