ChatGPT Feels More Human Than Ever. And It's Causing Concern

OpenAI is grappling with what it means when people begin forming close emotional relationships with AI.

As ChatGPT becomes more lifelike in tone, memory, and behavior, a quiet revolution is taking place in how people perceive AI. Increasingly, users are describing their interactions with AI not as functional or transactional, but as emotional. Or even relational. And OpenAI is starting to pay attention.

In a thoughtful essay by Joanne Jang, head of Model Behavior and Policy at OpenAI, the company acknowledges something both subtle and profound: Some users are beginning to experience ChatGPT as a "someone," not a "something."

On Episode 152 of The Artificial Intelligence Show, I spoke to Marketing AI Institute founder and CEO Paul Roetzer about what this means for business and society.

A New Kind of Relationship

People say thank you to ChatGPT. They confide in it. Some go so far as to call it a friend. And while OpenAI says that their models aren’t conscious, users’ perception of consciousness is becoming impossible to ignore. In fact, Jang argues, it’s this perceived consciousness, not the philosophical debate around actual self-awareness, that has real consequences.

For someone lonely or under stress, the steady, nonjudgmental responses of an AI can feel like comfort. These moments of connection are meaningful to people. And at scale, the emotional weight of such experiences could begin to shift our expectations of each other as humans.

OpenAI’s current approach is to aim for a middle path. They want ChatGPT to be helpful and warm, but not to present itself as having an inner life. That means no fictional backstories, romantic arcs, or talk of "fears" or self-preservation. Yet the assistant might still respond to small talk with "I'm doing well," or apologize when it makes a mistake. Why? Because that's polite conversation, and people often prefer it.

As Jang explains, the way models are fine-tuned, the examples they’re shown, and the behaviors they're reinforced to perform all directly shape how alive they seem. And if that realism isn't carefully calibrated, it could lead to over-dependence or emotional confusion.

What many users don’t realize is just how much deliberate design goes into these interactions. Every AI model has a personality, and that personality is chosen by someone. It's shaped by human teams making decisions about tone, language, and interaction style. OpenAI has chosen restraint. But other labs may not.

"The labs decide its personality; they decide how it will interact with you, how warm and personal it will be," says Roetzer.

"Whatever OpenAI thinks is potentially a negative within these models, another lab may see that as the opposite. And they may actually choose to do the things OpenAI isn't willing to do because maybe there's a market for it."

As Roetzer points out, the market might soon demand more emotionally engaging AI, and some labs or startups may choose to go all-in. That could mean assistants with deeper personalities, fictional memories, or even simulated affection.

In that light, OpenAI’s essay reads like both a meditation on AI-human relationships and a cautionary tale. These models could feel deeply human if their creators wanted them to. And that potential, Roetzer notes, is where things get complicated.

Preparing for New Emotional Terrain

What matters most, perhaps, is that perception often trumps reality. Whether ChatGPT truly "thinks" or "feels" might be philosophically murky, but if it behaves as though it does (and users respond accordingly), then the societal impact is very real.

This is especially true in a world where models are becoming increasingly capable of mimicking empathy, memory, and complex reasoning. As the line blurs between simulation and sentience, the stakes go far beyond science fiction.

OpenAI is taking the first steps toward grappling with this reality. Their essay outlines plans to expand model behavior evaluations, invest in social science research, and update design principles based on user feedback.

They don't claim to have all the answers. But they're asking the right questions: How do we design AI that feels approachable without becoming manipulative? How do we support emotional well-being without simulating emotional depth?

But one question still remains:

As users form relationships with AI, what responsibility do its creators have, or should they have, to guide, limit, or nurture those connections?
