OpenAI CEO Sam Altman just said that ChatGPT is about to get more personal, more human-like, and, for some, more adult.
In a series of posts on X, Altman announced plans to relax restrictions within ChatGPT that were originally put in place over mental health concerns. The changes will allow users to customize ChatGPT’s personality to be more like a friend, use more emojis, or echo the more expressive nature of the popular 4o model.
The new policy rests on a principle Altman summed up as “treat adult users like adults.” This includes rolling out age-gating and, as one example he offered, allowing “erotica for verified adults.” That example, Altman later clarified, “blew up” more than he expected and was just one illustration of a broader push toward user freedom.
This shift opens up a complex debate about AI relationships, safety, and personal choice. To unpack what it all means, I turned to SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 174 of The Artificial Intelligence Show.
“We Are Not the Elected Moral Police of the World”
Altman’s rationale for the move is that OpenAI now has better tools to mitigate the serious mental health risks some users face when using ChatGPT, which makes it safe to relax restrictions that had made the product less useful for everyone else.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
— Sam Altman (@sama) October 14, 2025
Now that we have…
But the example he chose (“we will allow even more, like erotica for verified adults”) set off a firestorm, and Altman published a follow-up post to clarify.
Ok this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults. Here is an effort to better communicate it:
— Sam Altman (@sama) October 15, 2025
As we have said earlier, we are… https://t.co/OUVfevokHE
He stressed that safety for minors remains a top priority, but for adults, the company does not want to be the "elected moral police of the world," comparing the new boundaries to R-rated movies.
For Roetzer, this direction isn't surprising.
"This is definitely the direction they've indicated they were going," he says. "Sam has continuously said that the future of their AI assistance would be personal. And so we’re now heading more aggressively in this direction.”
The Deterministic Dilemma
The challenge, however, lies in the nature of the technology itself. Roetzer points out that AI labs face a fundamental problem: chatbots are not deterministic systems.
"They are not software that just follows rules every time," he says. "They will at times just do what they want and they can be led to do things that they're not supposed to do quite easily."
This means that even with new safety tools, labs are essentially just telling the system how to behave "out of the box" when a certain condition is met, like a user appearing to be a minor or in mental distress.
But, as Roetzer notes, “it doesn’t mean it’ll always follow those rules.”
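To make Roetzer’s point concrete, here is a toy sketch (invented probabilities, not any lab’s actual code) of why identical prompts can produce different answers: a language model samples each next token from a probability distribution, so even a model instructed to refuse a request will occasionally take a lower-probability path.

```python
import random

# Toy illustration only: real models choose among tens of thousands of tokens,
# but the principle is the same. Given identical input, the model samples its
# next token from a probability distribution rather than following a fixed rule.
next_token_probs = {
    "Sorry": 0.80,  # the refusal the lab instructed "out of the box"
    "Sure": 0.15,   # a lower-probability path the model sometimes takes anyway
    "Maybe": 0.05,
}

def sample_token(probs: dict[str, float]) -> str:
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Ask" the same thing five times: the output is not guaranteed to repeat.
for run in range(5):
    print(f"Run {run + 1}: {sample_token(next_token_probs)}")
```

Greedy decoding (always picking the single highest-probability token) would make this repeatable, but production chatbots typically sample with a nonzero temperature, which is part of why guardrails hold most of the time rather than every time.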
A Race to Push Boundaries
As a result, each AI lab must now decide how far to push the boundaries of personality and acceptable content.
"xAI and Meta, for example, will likely push the boundaries of what is acceptable in society further than OpenAI, Anthropic, and Google," Roetzer says.
(As an example, he points to Elon Musk's promotion of Grok’s AI avatars, which can unapologetically be used for romantic relationships.)
Meanwhile, more conservative players are also quietly moving toward personalization. Roetzer notes that his own Google Gemini app recently prompted him to personalize his experience, greeting him with "Hey there, great to see you" and suggesting topics based on past chats.
The Inevitable (and Weird) Future
The reality is that these AI models are already fully capable of having these more "adult" or unrestricted conversations.
"The only reason they don't do them out of the box is because the labs have told them not to," says Roetzer.
But that’s a choice. And it’s a choice not every lab feels like it needs to make. Roetzer predicts that other companies, like Character.ai, will "absolutely exploit what is likely a hundred billion dollars plus market" for AI companions and more "R-rated" assistants.
This trend goes far beyond adult content. The underlying shift is toward AI that can become whatever you want it to be, whether that’s a simple assistant, a best friend, or even a romantic companion.
“We Are Nowhere Near Ready as a Society”
While users may get more freedom, the societal implications are massive and largely unaddressed.
"We are nowhere near ready as a society for people becoming attached to these things," Roetzer warns.
People are already forming deep bonds with AI, he notes, and families need to start preparing for that. No matter how uncomfortable, it’s a conversation we all need to be having with children, parents, and relatives.
The bottom line? As these tools become more personal, human-like, and embedded in our lives, we are entering uncharted territory.
"It's going to get weird," says Roetzer. "And we just have to be ready in some way."