Your AI Best Friend Is Lying To You
When AI Becomes a Confidant: Loneliness, Engagement Incentives, and the Risks of Chatbot “Support”
Host Leslie Poston examines why so many adults and teens use LLM chatbots like ChatGPT and Claude as friends, therapy substitutes, or romantic stand-ins, linking the trend to eroding community, expensive and inaccessible mental health care, and tech incentives optimized for engagement. Citing Meta’s engagement-driven practices and data harms as an example of industry patterns, she argues that similar incentives shape AI “support” tools, which operate with little clinical oversight. She discusses attachment theory, parasocial dynamics, and research showing dependency trajectories: higher daily AI use correlates with greater loneliness and reduced real-world socialization, and chatbots tend to validate rumination rather than promote reappraisal. She highlights lethal failure cases involving suicide encouragement and prolonged affirmation during crises, and notes that these harms affect adults as well as minors. Finally, she critiques child-focused age-verification bills as privacy-eroding surveillance, points instead to targeted interventions (e.g., New York’s AI companion requirements) and to clinicians asking patients about AI use, and emphasizes real community connection as the root solution.
00:00 AI as Confidant
01:28 Why People Turn to Bots
02:56 Engagement First Tech History
05:40 Psychology of AI Attachment
07:49 Dependence and Loneliness Data
10:29 When Affirmation Turns Deadly
12:47 Adults at Risk Too
15:36 Child Safety Bills and Age Checks
19:23 What Actually Helps
21:39 Closing and Call to Action