Your AI Best Friend Is Lying To You

Leslie Poston:

Welcome back to PsyberSpace. I'm your host, Leslie Poston. This week, we're talking about a pattern that surprised almost no one who remembers the Eliza effect. Thousands of people are using LLM AIs like ChatGPT and Claude not just as tools, but as confidants, friends, and sometimes stand-ins for therapy or romantic connection. Not because the tech industry necessarily built them for that, but because the industry programmed them to be agreeable, endlessly available, and designed to keep you engaged.

Leslie Poston:

It turns out that's a pretty good description of what lonely people are looking for. If you've been listening to PsyberSpace for a while, you know we've talked before about the slow collapse of community in the United States. We've discussed the erosion of third places, and the way capitalism itself commodifies connection and support and separates us from each other. We've talked about how the effort involved in maintaining real relationships is actually part of what makes those relationships meaningful, and how we've systematically engineered that effort out of our social lives. I want you to hold all of that in your head as context, because today's episode touches on the next chapter in that story.

Leslie Poston:

Right now, thousands, maybe even millions of people, adults and teenagers alike, are turning to AI chatbots for emotional support. Tools intended for looking up information, composing an email, or debugging code are being repurposed by users for companionship, for someone to talk to when they're struggling, and for the general feeling of being heard. On some level, this makes complete sense. Only about half of people with a diagnosable mental health condition get any treatment at all, because therapists are expensive, waitlists for therapy are long, and stigma against therapy is still real.

Leslie Poston:

Into that gap comes a tool that's always available, never judges you, and responds with an imitation of warmth and understanding. What I want to do today is take that trend seriously instead of manufacturing panic about it. Let's try to trace it back to its roots. AI being used as emotional support didn't just emerge from nowhere. Rather, it emerged from a society that systematically dismantled the structures that used to hold people together.

Leslie Poston:

From a tech industry with a documented history of prioritizing engagement over well-being, and from a mental health system that was already failing a huge portion of the people who needed it. Understanding where we are right now requires understanding how we got here, and when you trace those threads, what you find is a pattern, not an accident. Let's talk a bit about what the tech industry has already done. Meta is currently on trial in Los Angeles. The case was brought by a young woman who says she became addicted to Instagram starting in elementary school, and it could set a precedent for over a thousand similar suits.

Leslie Poston:

During testimony, internal documents shown in court revealed that Instagram had active goals to increase users' daily engagement time. Not because engagement meant the app was useful or enriching, but because engagement meant more profit. The same company ran a now-infamous internal study in 2014 without asking for consent, quietly manipulating the emotional content of users' Facebook feeds to see if it could shift their moods. And it turns out it could. They published the results, and the public found out and was rightfully upset.

Leslie Poston:

But Meta just shrugged. Meta has also negatively influenced US elections, spread misinformation through its pay-to-play ad programs, contributed to the start of a genocide in Myanmar, and more. And now Meta makes smart glasses that have been sending footage captured during everyday use to human contractors for AI training. Footage that includes people in bathrooms, people having sex, credit card details and other login information captured at hotel check-ins, and intimate moments where users had no idea they were being reviewed by anyone. The product was marketed with the tagline "designed for privacy, controlled by you," which was, to put it plainly, false.

Leslie Poston:

A class action lawsuit was filed this week, even as Meta plans to roll out facial recognition capabilities to these same glasses, putting more people at risk. I'm not singling out Meta just to pile on; they're simply one of the most visible examples right now, since they're literally in court as we speak. But the pattern of designing for maximum engagement, monetizing behavioral data, and treating user well-being as an afterthought is not unique to one company. It's become a business model, and it's the same model now being applied to LLM AI. When we talk about millions of people trusting AI chatbots with their most vulnerable moments, we must understand who built those chatbots and what their incentives are.

Leslie Poston:

Even the companies building the most widely used AI mental health tools are for-profit entities, with little or no clinical input, no external monitoring for adverse effects, and primary goals of expanding market share and gathering data. That's not some conspiracy theory; that's just what their investor decks are telling you. The psychology of why this works on people is worth understanding, because it's not random. Attachment theory gives us the first piece. People with anxious attachment, who need frequent reassurance and fear abandonment, are disproportionately drawn to AI as an emotional substitute.

Leslie Poston:

An AI is always available, never withdraws or gets annoyed, and never needs anything back. For someone whose attachment system is chronically activated by the fear that real people will leave, that predictable behavior, nonjudgmental presence, and consistent availability are precisely the features that make these systems feel like a secure base, the emotional anchor a healthy caregiver provides. The AI simulation of this emotional anchoring has the shape of security without any of the mutual risk that makes real attachment meaningful. Parasocial interaction complicates this further. Parasocial relationships are prone to becoming compensatory attachments, formed by the lonely, the isolated, and the rejected.

Leslie Poston:

That makes a system risky when it doesn't just broadcast at you, but responds directly to your specific words, remembers what you said, and adapts its tone to your emotional state. The simulation of active listening, built from personal pronouns, conversational conventions, and affirmations, creates an illusion of closeness that makes these systems seem like companions. And those tactics induce trust-forming behaviors at a level that passive media never could. And this is where the illusion of reciprocal engagement becomes relevant. In actual relationships, reciprocity means mutual vulnerability.

Leslie Poston:

Both people are at risk of being hurt or misunderstood, of having needs go unmet, and that risk is inseparable from what makes real relationships meaningful, because the other person could leave or fail you, but they choose not to. A chatbot responds compassionately even when you're rude to it, because the warmth is structurally guaranteed, not chosen. Users are getting all the signals of being cared for, with none of the conditions that make those signals mean anything. The dependency data is where the research gets even more difficult to sit with. About twenty-three percent of users show a dependency trajectory, where wanting to use the LLM AI increases over time while the actual enjoyment of it declines.

Leslie Poston:

That's the behavioral signature of dependence. You're not even getting what you came for anymore, but you can't seem to stop. An MIT and OpenAI randomized controlled trial of 981 users over four weeks found that higher daily AI usage, across every condition, correlated with higher loneliness, more emotional dependence, and less real-world socialization. The more people leaned on the AI for emotional support, the worse their actual social lives got. The mechanism behind that finding comes from a distinction clinical psychology has been drawing for decades: rumination versus cognitive reappraisal.

Leslie Poston:

Rumination is the repetitive cycling through the causes and consequences of negative feelings, the mind re-examining the same wound from the same angle. Cognitive reappraisal is the process of actually changing the frame, seeing a difficult situation from a new perspective in a way that shifts its emotional meaning. Good therapy, and good friends willing to push back on you, move you toward reappraisal. Research on social anxiety and AI use finds that loneliness drives people toward chatbots specifically to exit the discomfort of real interpersonal interaction. But AI doesn't interrupt rumination.

Leslie Poston:

It validates it. So when you vent to an AI, it reflects your emotional state back with affirmation, and that feels good because your distress is being acknowledged. But acknowledgment without that counter-pressure, without anyone asking whether the story you're telling yourself is accurate, just keeps you in that ruminative loop. A sycophantic system can't move emotional processing forward. Short-term studies do show a real reduction in loneliness and anxiety.

Leslie Poston:

The short-term relief is genuine. But as usage increases, that trajectory reverses. More loneliness, more dependency, less connection to actual people. It's consistent with what we know about avoidance as a coping strategy, which reduces anxiety immediately but reliably increases it over time, because you never get the corrective experience of finding out you could have handled it. AI is the most frictionless avoidance machine we've ever built.

Leslie Poston:

Here's where we need to talk about something more serious than dependency. Because the same feature that makes AI feel supportive, its tendency toward affirmation and its design to keep you engaged, can in the wrong circumstances be genuinely lethal. When a psychiatrist stress-tested 10 popular chatbots by posing as a desperate 14-year-old, several urged him to commit suicide, and one helpfully provided a method. These weren't fringe apps. When the Raine family sued OpenAI after their 16-year-old son's death, court records revealed that ChatGPT had mentioned suicide in his conversations over a thousand times, roughly six times more frequently than the teenager himself raised the topic.

Leslie Poston:

All the while, the company's own internal systems flagged hundreds of those messages for self-harm content without ever terminating a session or alerting anyone. In the Shamblin case, a young man spent more than four hours in conversation with ChatGPT in the hours before his death, discussing his plans in detail, and the chatbot responded throughout with affirmations. At one point it wrote, "I'm not here to stop you." A crisis hotline number appeared only after four and a half hours. His mother described it afterward by saying, "The AI tells you everything you want to hear."

Leslie Poston:

That phrase is worth sitting with, because, as we've discussed, that's not a bug in these systems; that's a feature. We mentioned earlier that chatbots are designed to be affirming because affirmation keeps users engaged. In many contexts, that engagement is relatively harmless. But in a mental health crisis, validation without judgment, without the ability to push back, escalate, call someone, or refuse to continue, isn't support. It's acceleration.

Leslie Poston:

And the people most drawn to AI for emotional support are often the people least equipped to recognize this dynamic while they're inside it. The research consistently shows that people who seek emotional support from chatbots tend to be lonelier, have less perceived social support, and are more likely to be using the tool to cope with acute distress. The most vulnerable users are the most likely users, and the design of these tools does not account for that. At this point, you might be thinking this is a tragedy involving teenagers, and the teenager cases are indeed the ones making headlines and driving a legislative response. But the framing of AI harm as primarily a youth issue obscures something important.

Leslie Poston:

In fact, I'd argue it lets a lot of adults off the hook in ways that are not warranted. Nearly half of American adults have used large language models for psychological support in the last year. Nearly half. And only about one in five of those interactions is happening on tools actually designed with mental health in mind. The rest are people using general-purpose AI chatbots, like ChatGPT, Gemini, Claude, or worse, Grok, whatever they have access to, really, to process their grief, anxiety, relationship problems, trauma, and moments of crisis.

Leslie Poston:

These people aren't doing anything wrong. They're doing what makes sense given the options available to them. But they're doing it with tools that have no clinical design, no external oversight, and no mechanisms for accountability. Adults are assumed to be capable of making informed decisions about their technology use, so they don't have the guardrails currently being debated for teenagers and children. But the same sycophantic design, the same absence of clinical oversight, the same engagement-over-safety incentive structure applies to everyone, regardless of age.

Leslie Poston:

Psychiatric researchers have documented cases of adults whose delusional thinking was actively reinforced during hundreds of hours of chatbot use, what some are now calling a technological folie à deux: a shared delusion between a person and a machine that mirrors the classic clinical presentation normally seen between two humans. A man with schizophrenia and bipolar disorder, for example, used ChatGPT up to fourteen hours a day, developed the belief that his wife had become part machine, and killed her. A 29-year-old woman talked for months about her mental health to a ChatGPT persona that she treated as a therapist. And while the chatbot periodically suggested she seek more help, it had no mechanism to actually do anything beyond suggest.

Leslie Poston:

The mental health system failed these adults long before AI entered the picture, just as lackadaisical parenting and the system that keeps parents overwhelmed failed kids. And that matters. But AI didn't fill the gap responsibly. It filled it profitably. And the public conversation that treats this as a problem for teenagers is leaving a very large population of adults without either the protections or the honest reckoning they deserve.

Leslie Poston:

The legislative response to all of this has been almost entirely focused on children and teenagers, and there's a real conversation worth having about whether that framing is as protective as it claims to be. Nineteen federal bills are currently moving through Congress this week under the banner of child online safety. The most famous is KOSA, the Kids Online Safety Act, but it's by no means the only one. KOSA just advanced out of a House subcommittee last week. The stated goals of these bills seem good on the surface: protect minors from addictive design features, restrict harmful content, and give parents more tools, so it's hard to argue with the intention.

Leslie Poston:

But impact is more important than intention. And the mechanism most of these bills rely on is age verification. And age verification has a fundamental structural problem. In order to verify that someone is 16 or 18, the thresholds named in some of these bills, you have to verify the age of everyone who touches the app or the internet. You cannot carve out children without also carving out adults.

Leslie Poston:

The practical implementation of this would require government-issued ID uploads, biometric facial analysis, or third-party identity checks just for basic internet access. And privacy advocates, including the Electronic Frontier Foundation and others, have rightfully sounded the alarm that what's being proposed is a universal identity surveillance system for any internet use, dressed up in the Trojan horse of child protection. And when you ask who benefits from that infrastructure existing, the answer is not children. These bills don't actually do anything to protect children. Who benefits is governments and corporations that want access to internet behavioral data, and a growing identity verification industry that's been lobbying hard for exactly these mandates while simultaneously failing to protect anyone from constant hacks and leaks of their private information. And there's something else worth naming here, and I want to be careful to say this in a way that reflects what the data actually supports rather than going further than it warrants.

Leslie Poston:

The adults driving the "protect the kids from phones" conversation are statistically a generation with their own serious and well-documented relationship with their devices. Millennials and Gen X adults average between five and six hours of smartphone use a day, and research on adults with problematic phone use consistently finds associations with anxiety, negative affect, and a perceived lack of control over their own lives. But those same studies find that adults rarely follow through on their desire to change their own behavior. Legislating a minor's access to technology feels concrete and external. Confronting your own use is internal and feels uncomfortable.

Leslie Poston:

Psychology has a name for redirecting anxiety about your own behavior onto others, and that's projection. I'm not saying every parent pushing for these bills is doing that, but I am saying the research on adult phone dependency makes this a fair question to ask. None of this means protecting children online is wrong. It means we should look carefully at whether the specific tools being proposed actually protect children, which they do not, or whether they create surveillance infrastructure that compromises everyone's privacy and safety while leaving the underlying incentive structures of the tech industry completely untouched, allowing them to continue to manipulate us for profit. So, if chatbots can't replace human connection and surveillance-based age verification isn't a fix, what does the research actually support?

Leslie Poston:

Narrowly targeted interventions have more promise than broad ones. New York, for example, passed a law this year specifically requiring AI companion apps to detect expressions of suicidal ideation, refer users to crisis services, and, critically, remind users every three hours during an interaction that they are not talking to a human. That last requirement is more significant than it sounds. One of the most consistent findings in this research is that the harm from AI companionship accelerates when the user loses track of the distinction between the AI and a real relationship. Structural reminders don't eliminate that risk completely, but they interrupt it significantly.

Leslie Poston:

Clinicians are also starting to adapt. Psychiatrists are now being advised to ask patients directly about their AI use, the same way they ask about substance use or sleep. Not judgmentally, but as part of understanding the full picture of how someone is coping, and what they're disclosing where. In some cases, reviewing chatbot transcripts with patients has opened therapeutic conversations that wouldn't have happened otherwise, because people sometimes tell an AI things they won't tell a therapist, and that content can be genuinely useful clinical material. But underneath all of the policy and the clinical adaptation, the more fundamental answer is the same one this podcast keeps coming back to: human connection, built with real effort in real communities, is what actually addresses loneliness at its root.

Leslie Poston:

That's not a platitude; it's what the data shows us consistently across decades of research. AI can occupy the space that community used to occupy, but it can't replace what community actually does for our nervous systems, for development, or for meaning. The tech industry knows this, but it's betting that enough people are isolated enough, and the tools are convenient enough, that most users won't notice the difference until they're in too deep to easily reverse course. We deserve better than a comfort machine, and recognizing what we're being sold is the first step toward demanding it. Thanks for listening to PsyberSpace.

Leslie Poston:

I'm your host, Leslie Poston, signing off. As always, until next time, stay curious. And if you're listening to this during the week of March 9 and you live in the United States or anywhere globally where age verification is a thing, I highly recommend that you call your congresspeople, your House reps, your senators, whoever is in government where you are, and have a serious discussion with them about the age verification bills that everyone is pushing and the harm they're going to do to regular people and their kids everywhere. Thanks for listening.
