OpenAI CEO Sam Altman just said that ChatGPT is about to get more personal, more human-like, and, for some, more adult.
In a series of posts on X, Altman announced plans to relax restrictions within ChatGPT that were originally put in place over mental health concerns. The changes will allow users to customize ChatGPT’s personality to be more like a friend, use more emojis, or echo the more expressive nature of the popular 4o model.
The new policy is built on the principle of “treat adult users like adults.” This includes rolling out age-gating and, as one example Altman offered, allowing “erotica for verified adults.” That specific example, Altman later clarified, “blew up” more than he expected and was just one illustration of a broader move toward user freedom.
This shift opens up a complex debate about AI relationships, safety, and personal choice. To unpack what it all means, I turned to SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 174 of The Artificial Intelligence Show.
Altman’s rationale for the move is that OpenAI now has better tools to mitigate the serious mental health issues some users experience with ChatGPT, making it safe to relax restrictions that previously applied to most users.
But the example he chose to use (“we will allow even more, like erotica for verified adults”) set off a firestorm, and Altman published a follow-up post to clarify.
He stressed that safety for minors remains a top priority, but for adults, the company does not want to be the "elected moral police of the world," comparing the new boundaries to R-rated movies.
For Roetzer, this direction isn't surprising.
"This is definitely the direction they've indicated they were going," he says. "Sam has continuously said that the future of their AI assistance would be personal. And so we’re now heading more aggressively in this direction.”
The challenge, however, lies in the nature of the technology itself. Roetzer points out that AI labs face a fundamental problem: chatbots are not deterministic systems.
"They are not software that just follows rules every time," he says. "They will at times just do what they want and they can be led to do things that they're not supposed to do quite easily."
This means that even with new safety tools, labs are essentially just telling the system how to behave “out of the box” when a certain condition is met, such as a user appearing to be in mental distress or appearing to be a minor.
But, as Roetzer notes, “it doesn’t mean it’ll always follow those rules.”
As a result, each AI lab must now decide how far to push the boundaries of personality and acceptable content.
"xAI and Meta, for example, will likely push the boundaries of what is acceptable in society further than OpenAI, Anthropic, and Google," Roetzer says.
(As an example, he points to Elon Musk’s promotion of Grok’s AI avatars, which can unapologetically be used for romantic relationships.)
Meanwhile, more conservative players are also quietly moving toward personalization. Roetzer notes that his own Google Gemini app recently prompted him to personalize his experience, greeting him with “Hey there, great to see you” and suggesting topics based on past chats.
The reality is that these AI models are already fully capable of having these more "adult" or unrestricted conversations.
"The only reason they don't do them out of the box is because the labs have told them not to," says Roetzer.
But that’s a choice. And it’s a choice not every lab feels like it needs to make. Roetzer predicts that other companies, like Character.ai, will "absolutely exploit what is likely a hundred billion dollars plus market" for AI companions and more "R-rated" assistants.
This trend goes far beyond adult content. The underlying shift is toward AI that can become whatever you want it to be, whether that’s a simple assistant, a best friend, or even a romantic companion.
While users may get more freedom, the societal implications are massive and largely unaddressed.
"We are nowhere near ready as a society for people becoming attached to these things," Roetzer warns.
He notes that this is a conversation families need to start preparing for, as people are already forming deep bonds with AI. No matter how uncomfortable, it’s a conversation we all need to be having with children, parents, and relatives.
The bottom line? As these tools become more personal, human-like, and embedded in our lives, we are entering uncharted territory.
"It's going to get weird," says Roetzer. "And we just have to be ready in some way."