Your Brain on Easy Mode: AI, Comfort, and the Cost of Convenience

Leslie Poston:

Welcome back to PsyberSpace. I'm Leslie Poston. Today's episode is a special midweek episode in response to a specific news item in the United States. Before I continue, I just want to remind you that when I use the phrase AI in this and other episodes of this podcast, I'm speaking specifically of generative AI, LLM AI, and NLP AI, not machine learning. On to the news item.

Leslie Poston:

The Trump administration just signed a federal AI action plan and an executive order called Preventing Woke AI. And if that phrase sounds like it came from a troll on Reddit or a hate group on Telegram, it kind of did, but now it's federal policy. As my area of research currently is applied psychology with a concentration in media and technology, it felt imperative that I offer some guidance on this through the lens of that research and the research of others right away. This isn't just about tech. This is about how authoritarianism reprograms reality by quietly reprogramming the tools we trust: AI tools.

Leslie Poston:

Like, the tools that are being shoved into every corner of our lives in this current AI gold rush, from our email, phones, and search engines to schools, doctors' offices, and our jobs. One of the most dangerous things about AI isn't what it, quote, knows. It's how easy it makes everything feel. We love it because it helps us write faster, think faster, code better, shop easier, even feel smarter. But that ease is a trapdoor.

Leslie Poston:

It makes us stop questioning, stop noticing, and stop resisting. And that's exactly the point. If our brains are already wired to seek comfort and AI is being intentionally designed to reinforce that comfort while quietly inserting bias, then how do we break free? How do we resist the easy button when that's the whole interface? Let's talk about it.

Leslie Poston:

Human brains are energy misers. Neurologically, we're built to conserve cognitive effort whenever possible. We default to mental shortcuts, what Daniel Kahneman called System 1 thinking, which we've talked about before: fast, intuitive, automatic. It's how we avoid decision fatigue and overload.

Leslie Poston:

It's also why it's so hard to unlearn something, even when we know it's wrong. When we encounter complexity, nuance, contradiction, or discomfort, our brains react like they're under threat. Our stress hormones rise, our fight-or-flight systems activate, and we look for a way out. AI tools tap directly into this wiring. They offer us clean answers, fast results, seamless experiences, and they don't challenge our thinking unless we ask them to.

Leslie Poston:

Even then, they don't do such a good job. Plus, most people won't ask them to because easy feels too good. But when technology is optimized for ease over truth, it stops being a tool for growth and becomes a tool for control. I mean, change is hard, and that's by design. If you've listened to our earlier episode on why people struggle to change, you'll remember that change triggers a cascade of psychological defenses: fear of the unknown, identity threat, and social belonging concerns.

Leslie Poston:

AI tools that reinforce our worldview, especially biased or exclusionary ones, don't just feel convenient; they feel validating. They reward our existing beliefs, and they eliminate the friction of dissent. This is why propaganda works better when it's delivered through design, and it's why AI systems programmed with ideological slants, like Grok, Elon Musk's chatbot on X (formerly Twitter), which was recently modified to reflect white nationalist, antisemitic, and authoritarian views, are so dangerous. They don't have to yell to convince us. They just have to agree with us.

Leslie Poston:

Comfort is the delivery system. And what about the illusion of neutral AI? AI often feels neutral: the tone is calm, the information is formatted cleanly, the interface feels objective. But that's an illusion.

Leslie Poston:

All AI systems are trained on human data, written by humans, selected by humans, labeled by humans, and built by teams with human ideologies, human incentives, and human blind spots. There is no such thing as value-neutral design. When AI developers claim they're removing bias, what they often mean is they're replacing one bias with another. And right now, in some sectors, the replacement is deliberate: to erase inclusive, equitable, fact-based perspectives and reframe them as woke ideology.

Leslie Poston:

The language of anti-woke isn't just a cultural meme. It's not funny. It's a campaign to reengineer the default settings of reality. And the most chilling part? It works best when no one notices. What about passive use, active harm?

Leslie Poston:

Let's talk about what happens psychologically when people use AI passively. First, there's moral disengagement, the tendency to separate our actions from their ethical consequences when someone else, like a machine, is doing the heavy lifting. And then there's the diffusion of responsibility. When something goes wrong, we assume someone else is accountable. Oh, it's not me.

Leslie Poston:

It was just a tool. And then there's confirmation bias, where people cherry-pick AI outputs that align with what they already believe and ignore or reject the rest. Put that all together, and you get a perfect storm: users who feel smart and empowered, but who are actually being slowly rewired into ideological compliance. We've talked before on the show about gaslighting, how manipulators make you doubt your own perceptions.

Leslie Poston:

Biased AI doesn't need to gaslight you overtly. It just needs to flood you with reinforcement. It doesn't erase your mind. It nudges it until it's not really yours anymore. And let's make this plain.

Leslie Poston:

This isn't an accident, and it's not just about market forces or unintentional gaps in data. This is about power. The people shaping AI policy right now, particularly those pushing the anti-woke narrative, are trying to reshape public perception through technology. That includes Trump, Musk, Thiel, and other far-right players who are investing heavily in AI while calling for the erasure of inclusive language, equity, diversity, and even historical facts from the training data. What's being labeled woke here isn't radical.

Leslie Poston:

It's basic decency. It's truth and reality. They want AI to reflect their worldview exclusively because AI is becoming the front end for human knowledge. If they control that interface, they control public understanding of everything: of race, history, gender, war, and freedom. This is not about preventing bias.

Leslie Poston:

It's about institutionalizing bias. And there's a big cost to that kind of comfort. Here's where things get painful. Most people won't resist this, not because they're evil, but because they're tired. They're overworked.

Leslie Poston:

They're distracted. They're bombarded with notifications, stress, media, misinformation. They're looking for something that makes their life a little easier, smoother, and a little more manageable, and AI delivers, at least on the surface. But the cost of that comfort is cumulative. More discrimination in hiring and housing, more hate speech dressed up as free speech, more historical revisionism, more harm to marginalized groups, more authoritarian control passed off as innovation.

Leslie Poston:

The easy button is lying to us, and it's lying in our own voice. So what does it take to resist? Well, resistance requires friction: effort, awareness, intention, and a willingness to feel uncomfortable. Ethical AI use means asking where your tools get their information, checking what voices are missing in the responses you get back from your prompts, adjusting defaults and filters, supporting diverse and transparent AI projects, calling out bias when you see it, even when it's subtle, and, frankly, maybe using AI on hard mode, which means running a model locally on your own laptop instead of relying on an online, publicly accessible AI tool. It means knowing that the easiest answer is almost never the most inclusive or the most accurate, and being okay with sitting in that discomfort.
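(A quick note for readers of the transcript who want to try hard mode: below is a minimal sketch of running a small open model locally with Python and the Hugging Face transformers library, rather than sending prompts to a hosted service. The specific model named here, distilgpt2, is just a tiny example for demonstration; swap in whatever open model you trust, and assume you've already installed the transformers and torch packages.)

```python
# Minimal sketch: run a small open model locally instead of a hosted AI service.
# Assumes: pip install transformers torch
# The model name below (distilgpt2) is only an example; substitute any open model you trust.
from transformers import pipeline

# Downloads the model weights once, then everything runs on your own machine.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Three questions to ask about where an AI tool gets its training data:"
result = generator(prompt, max_new_tokens=100, do_sample=False)

# The pipeline returns a list of dicts; the generated text includes the prompt.
print(result[0]["generated_text"])
```

A toy model like this won't give polished answers, but the mechanics are the point: the prompts, the outputs, and the choice of model stay under your control instead of a platform's.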

Leslie Poston:

Moral reasoning is a muscle. If we never exercise it, it atrophies. If we let machines do our thinking for us, we lose not just agency but empathy. And as we talked about recently in an episode on AI and education, you can lose cognitive ability as well. And you don't need to be an engineer to take action on this.

Leslie Poston:

You just need to be awake: not woke in the culture-war sense, but awake to the choices you're making when you use these tools. Support AI companies that center human rights and transparency. Push for legislation that protects marginalized groups from algorithmic harm. Read more than one source. Interrupt your own assumptions.

Leslie Poston:

And if you're building tools, build them with care, because code is never just code. It's power. There's a reason the people in power want AI to make things easy for you: if it's easy, you won't notice it changing you. But noticing is your job now, especially if you care about truth, justice, and reality itself. You don't have to become a technologist or a psychologist.

Leslie Poston:

You just have to pay attention. Your brain craves ease, but your ethics deserve more. Thanks for tuning in to this special episode of PsyberSpace. This is Leslie Poston signing off. Stay critical, stay curious, and don't mistake comfort for safety.

Leslie Poston:

We'll be back Monday with our next regularly scheduled weekly episode. Thanks for listening.
