Microsoft’s AI CEO, Mustafa Suleyman, just published a reflective essay with a chilling new warning: “seemingly conscious AI” is on the horizon, and it’s a huge problem we’re not prepared to handle.
This isn’t AI that is actually conscious. Instead, it’s AI that is so convincing—so good at simulating personality, memory, and emotion—that it doesn’t just talk like a person, it feels like one.
Suleyman argues this development is creating a dangerous “AI psychosis risk,” where people fall in love with AI, assign it emotions, and begin to lose their grip on reality. He’s so concerned that he’s calling on the industry to avoid designs that suggest personhood, warning that if enough people mistakenly believe these systems can suffer, we’ll see calls for AI rights, AI protection, and even AI citizenship.
To understand the gravity of this warning and what it means for society, I talked it through with Marketing AI Institute founder and CEO Paul Roetzer on Episode 164 of The Artificial Intelligence Show.
Suleyman’s core argument is that we don’t need a massive technological leap to get to seemingly conscious AI (SCAI). All the ingredients are already here.
According to his essay, the recipe for SCAI relies only on capabilities that are either possible today or will be possible within the next few years.
Roetzer agrees, noting that the coming debate around AI consciousness is poised to become a “hot button issue for sure.”
“I share Mustafa's concern that this is a path we're on,” he says. He points to what he believes is a likely future scenario where labs can no longer simply shut down old models.
“The people who are starting to believe that maybe these things will have consciousness…they would say, well, you can't shut off GPT-4o, it's aware of itself,” Roetzer explains.
“You can't delete the weights. It's deleting something that has rights. That’s basically where we’re heading: you could never delete a model, because you’d actually be killing it. That’s what they’re saying.”
While Roetzer sides with Suleyman’s position, he is also pessimistic about our ability to stop this trend.
“I appreciate what Mustafa is doing. I do think it will be a fruitless effort,” he says.
“I don't think the labs will cooperate. It only takes one lab [to act] or Elon Musk getting bored over a weekend and making xAI just talk to you like it's conscious. This is uncontainable in my opinion.”
He predicts that this societal divide is not decades away, but just a few years out.
He compares the situation to the current information crisis on social media, where millions of people already can’t distinguish between real and fake images. The same will happen with consciousness. It won’t be about facts, but feelings.
“You're going to have a conversation with the chatbot [and] be like, ‘It feels real. It tells me it's real. It talks to me better than humans talk to me. Like, it's conscious to me,’” he says.
And once you believe that?
“Changing people's opinions and behaviors is really, really hard.”
Suleyman’s call to action is clear:
“We must build AI for people; not to be a person.” But societal forces may already be moving in the opposite direction.
As Roetzer notes, we’ve already seen early warning signs. The powerful emotional response from users when OpenAI temporarily retired a popular model should serve as a massive “alarm bell.”
That incident, multiplied by the millions, is exactly what Suleyman is worried about. We are building machines that tap directly into our human need for connection, and as they become more capable, more people will start to believe the illusion.
One AI leader is sounding the warning, but the rest of society is just beginning to grapple with a future where a significant portion of the population believes machines deserve rights. And we are far less prepared for the implications than we think.