The Psychology of AI Slop: How Synthetic Junk Erodes Attention, Trust, and Meaning

Leslie Poston:

Welcome back to PsyberSpace. I'm your host, Leslie Poston. And this week, we're talking about what AI slop is doing to your brain. So much of the AI slop out there looks like Internet nonsense: weirdly rendered hands on humans, plastic looking images of real people, fake historical photos, deepfake political videos made with higher quality, more convincing models, inspirational quotes attributed to people who never said them, and text and videos that feel almost human but don't quite land, or that give you that weird uncanny valley feeling. Some people will simply scroll past and laugh, but too many people either fall for the slop or accidentally amplify it.

Leslie Poston:

The real problem isn't that this stuff looks dumb, however. It's what's happening to your brain when you're surrounded by an endless industrial supply of deception and misinformation. AI slop isn't just AI generated content that turned out terrible or boring. It's content produced at industrial scale, optimized for volume and reaction speed instead of accuracy, meaning, or usefulness. The business model is simple.

Leslie Poston:

Generate enough slop fast enough, and some percentage will get clicks, shares, and ad impressions before anyone stops to evaluate it. A lot of it isn't even trying to fool you in a sophisticated way. It just needs to exist in sufficient quantity to flood the spaces where you'd otherwise encounter something real. The range of AI slop is wider than you may realize. At the cheaper end, you have AI written articles stuffed with keywords, fake product reviews, and synthetic social media personas posting at scale to manipulate engagement metrics.

Leslie Poston:

Further along, you have fabricated quotes attributed to real experts, AI generated images presented as documentary evidence of things that never happened, and even fake scientific summaries that sound authoritative enough that people share them without checking. At the more sophisticated end, and this is where it gets genuinely dangerous, you have deepfake video and audio of real people saying things they never said, targeted well enough to circulate in communities that will find them credible. Heck, even Grammarly was in the news this past week for faking help from experts who never agreed to give that coaching in the first place. What ties all of this together isn't the technology. It's the intent to produce something that performs the function of real information without actually doing any of the work.

Leslie Poston:

The simulation of meaning without any substance. Your brain wasn't built to reliably detect that kind of mimicry at the scale it's now operating. You might assume that if you can spot AI slop, you're immune to it, but that's not how cognition works. We've spoken about system one and system two thinking on past episodes, so you won't be surprised that it comes into play with AI slop as well. Recall that your brain doesn't evaluate incoming information from scratch every time it's presented with something new.

Leslie Poston:

It relies on mental shortcuts, heuristics, because it has to. The brain is managing far more input than it can consciously process, so it uses efficiency strategies, responding to novelty, familiarity, faces, emotional charge, urgency, and repetition. These are adaptive mechanisms that generally serve you well, but AI slop is engineered, deliberately or by market pressure, to hit those exact triggers. It doesn't need to hold up under scrutiny. It only needs to interrupt your attention before you've made a conscious decision to engage with it.

Leslie Poston:

That's called attentional capture, a stimulus pulling your focus before deliberate evaluation can kick in. Layered on top of that is processing fluency, which means the easier something is to process visually or linguistically, the more positively we evaluate it, independently of whether the content is accurate or meaningful. AI slop is often optimized for exactly that kind of surface level smoothness. Clean layouts, familiar emotional beats, or a face looking directly at the camera. That ease of processing gets mistaken at a preconscious level for credibility.

Leslie Poston:

It doesn't make you gullible. It reflects a mismatch between a brain shaped by one kind of information environment and the very different one you're actually living in now. If the Internet has started to feel draining in a way it didn't use to, AI slop is a significant part of why. Every piece of low quality content you encounter adds a small cognitive cost. You spend a moment, sometimes a fraction of a second, sometimes several, deciding whether something is real, worth your time, manipulative, or safe to engage with.

Leslie Poston:

Those moments are trivial individually, but across an entire session online, they compound into decision fatigue, the documented degradation in decision quality that follows extended cognitive effort. The more AI slop you've had to evaluate, the more your system two thinking depletes, and the more you default to fast, automatic, heuristic responses, which is exactly when platforms want you to keep scrolling. What makes AI slop particularly costly is that it adds all of that friction without returning anything. You're spending mental energy to correctly conclude that something was a waste of your mental energy. What's also worth knowing is that intentional forgetting, the brain's ability to suppress irrelevant information so it doesn't interfere with new learning, is itself a cognitively demanding process that depends on available working memory resources.

Leslie Poston:

Research on the directed forgetting paradigm shows that when working memory is already under load, that suppression capacity breaks down, which means the junk you've been wading through doesn't just tire you out. It becomes harder to mentally set aside, lingering in ways that interfere with what comes after it. The problems caused by exposure to AI slop can get genuinely unsettling. Think of the illusory truth effect, for example, where repeated exposure to a statement makes people rate it as more true, regardless of whether it actually is. One study using actual fake news headlines, the kind that circulated on Facebook during the 2016 election, found that even a single prior exposure to a false story was enough to meaningfully increase its perceived accuracy, not just across a session, but as much as a full week later.

Leslie Poston:

This effect held even when the stories were labeled as disputed by fact checkers. That last part matters because it punctures the comfortable assumption that the problem is other people, the ones we'd label gullible, politically motivated, or just not paying enough attention. The mechanism for the illusory truth effect isn't credulity. It's familiarity. When you've encountered something before, it processes more easily, and your brain interprets that ease as a signal that the information is on solid ground.

Leslie Poston:

You don't have to spend the effort to decide to believe it. The decision happens upstream of conscious evaluation, which is precisely why fact checking labels applied after the fact only have a modest effect. By the time the warning gets there, the fluency has already done its work. To recap, mere repeated exposure to something increases how favorably we evaluate it. Even when that exposure was subliminal, even when we have no information about the thing's actual merits, and even if that thing is wrong.

Leslie Poston:

Over 200 studies have replicated this effect. You don't have to think something is good to start preferring it. You just have to see it enough. This is one reason why you've heard me counsel you not to share untrue statements from people like politicians, even when your goal is to correct them. At the scale AI slop now operates, there are thousands of variations of the same fake image, the same fabricated quote, the same synthetic news story, seeded across thousands of platforms.

Leslie Poston:

These effects aren't edge cases. They're now the business model. And that's also why the scale matters so much politically. The illusory truth effect doesn't sort by party affiliation or education level. It works on everyone, which means that a slop flooded information environment doesn't just mislead individuals.

Leslie Poston:

It eats away at the shared factual baseline that productive society and productive disagreement require. The reason platforms keep serving you AI slop isn't a mystery. And it's not primarily about bad actors, although there are plenty of those. It's a digital behavioral architecture that was deliberately designed to exploit the same psychological mechanisms we've been talking about. Heck, we did an episode last week on the Meta lawsuit that talked about some of these very things.

Leslie Poston:

The underlying structure involves a variable ratio reinforcement schedule, which is a powerful schedule for producing persistent, compulsive behavior. Unlike fixed rewards, where you know exactly when a payoff is coming, variable ratio schedules deliver rewards unpredictably, after an unknown number of responses. That unpredictability is the key. It's why slot machines are built the way they are. The behavior becomes highly resistant to extinction because you can never be certain that your next pull won't be the one that pays off.
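To make that unpredictability concrete, here is a toy simulation. This is my own illustration, not anything from the episode, and the one-in-twenty payoff rate is an arbitrary assumption. Both schedules below pay off at the same average rate, but only one of them ever tells you when to stop.

```python
import random

def pulls_until_reward(p, rng):
    """Count responses until a payoff under a variable ratio schedule,
    where each response pays off independently with probability p."""
    pulls = 1
    while rng.random() >= p:
        pulls += 1
    return pulls

rng = random.Random(42)  # seeded so the sketch is reproducible

# Fixed ratio schedule: a payoff arrives after exactly 20 responses, every time.
fixed_gaps = [20] * 10

# Variable ratio schedule: same average rate (1 in 20), but each gap is
# geometrically distributed, so a long dry spell is indistinguishable
# from the rewards having stopped entirely.
variable_gaps = [pulls_until_reward(0.05, rng) for _ in range(10)]

print("fixed ratio gaps:   ", fixed_gaps)
print("variable ratio gaps:", variable_gaps)
```

Run it and the fixed gaps are identical while the variable gaps swing widely around the same average. Under the fixed schedule, a missing reward at pull 21 is a clear signal the game has changed; under the variable schedule there is no such signal, which is exactly why the behavior resists extinction.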

Leslie Poston:

Algorithmically curated social media feeds operate on exactly this principle. And in fact, that's one reason why newer protocols like the AT Protocol behind Bluesky have a harder time taking off. They're bringing people back to a more chronological feed after they've gotten used to the slot machine effect. Now layer AI on top of that architecture. The platforms aren't rewarding you with quality content on a variable schedule.

Leslie Poston:

They're rewarding you with reaction triggering content, and AI slop is extremely good at triggering reactions cheaply. It doesn't need to be accurate, well made, or meaningful. It just needs to produce a click, a share, an outraged comment, or even just a pause long enough to count as an impression. The incentive system doesn't distinguish between engagement with something real and engagement with something synthetic. It just measures engagement.

Leslie Poston:

So AI slop floods the feed, not because anyone decided it should, but because the architecture selects for it automatically. And the humans who make careful work are competing against a machine optimized to bury them. What often gets missed in this conversation is that blaming users for consuming what they're served misunderstands the psychology entirely. Variable ratio reinforcement schedules work on everyone. Knowing the mechanism doesn't make you immune.

Leslie Poston:

The emotional cost of AI slop is the part I find most worth talking about. Many of you are experiencing it without having language for it. When a significant portion of your digital environment starts to feel hollow, uncanny, or untrustworthy, the emotional response isn't always dramatic or identifiable. Sometimes it just produces a kind of low grade wrongness, a numbness or a background irritation that you can't fully locate. Maybe you even get a sense that caring about what you're looking at would be naive because it's probably manufactured anyway.

Leslie Poston:

That's a reasonable psychological response to an environment that repeatedly rewards skepticism and punishes investment. But there's something a little deeper happening than just irritation or fatigue. The search for meaning is a fundamental human drive. It's not a luxury or a philosophical indulgence. It's an actual psychological necessity.

Leslie Poston:

When people can't find meaning in their environment, something called the existential vacuum sets in, filled with boredom, apathy, or a persistent sense of futility. Human beings have always used cultural artifacts, stories, art, shared symbols, creative work, as primary vehicles for making this meaning together. And when those artifacts are made by people with something genuine to communicate, they carry that intent, creating conditions for connection, recognition, and the feeling that someone else has articulated something true about being alive. AI slop mimics this form of human expression while removing the content and intent entirely. It looks like a story.

Leslie Poston:

It sounds like advice. It might resemble art. But there's no one behind it trying to reach you. At some level, even when people can't name it, they feel that absence, and that feeling, accumulated across thousands of interactions, contributes to a kind of ambient meaninglessness that I think is worth taking seriously. The economic damage to human creators is starting to be documented in ways that are hard to dismiss. One global study commissioned by CISAC, which represents over five million creators worldwide, found that music and audiovisual creators stand to lose roughly a quarter of their current income to AI substitution by 2028, with that revenue flowing back to the tech companies whose models were trained on those very creators' work without permission or compensation. That's a specific transfer of economic value away from people who make things toward systems that fake things.

Leslie Poston:

What makes this psychologically interesting, beyond the obvious economic harm, is what research is finding about how audiences respond to human versus AI made work. People perceive AI generated creative work as less effortful, less authentic, and less creative than human made work, even when the outputs are objectively similar in quality. The identity of the creator matters independently of the work itself. People value the fact that a human being spent time, made choices, and tried to communicate something of value. And when that's absent, something registers as missing, even if you can't articulate what.

Leslie Poston:

Then there's something I think of as the verification tax. When something catches your attention now, you have to decide whether to fact check it, reverse image search it, or track down whether the quote is real or the photo is synthetic. A meaningful percentage of people just won't do that work, which is part of how slop spreads. The ones who do that work are paying with the same cognitive budget they would otherwise spend on something worthwhile. And that cognitive cost compounds across every session, every day. The cumulative effect of all of this extends past individual frustration.

Leslie Poston:

A functioning public life depends on people sharing enough of a common reality to have actual arguments and discussions about it. And AI slop degrades that directly, flooding the information environment with contradictory claims, eroding trust in sources and science, and making the cognitive work of careful evaluation harder for everyone. I want to end this practically, because neither panic nor full time fact checking is the answer. That approach just isn't sustainable, and it generates its own form of exhaustion. A more useful frame, I think, is selective protection of your own cognitive resources.

Leslie Poston:

Spend less time in algorithmic feeds, not zero, but less because algorithms are designed to surface what gets a reaction rather than what earns your time. And curating your own sources even imperfectly gives you more control over that. Pay close attention to how you feel during and after different kinds of consumption. If something consistently leaves you irritated, numb, or depleted without giving you anything back, that in itself is information worth acting on. Seek out slower, more careful work deliberately, long form journalism, real books, podcasts that go deep into one topic.

Leslie Poston:

Not because speed is always bad, but because depth is a practice for your brain, and a brain that never exercises deliberate evaluation will get progressively worse at it. We talked about that in previous episodes on AI. Connecting back to what we talked about with meaning: pay attention to what leaves you feeling like you've actually been somewhere after you put it down. What makes you think or feel something real, or wanna share it because it said something true?

Leslie Poston:

That capacity to be reached by human work is worth protecting. Because the alternative isn't just worse content. It's a gradual numbing to the difference between something made to grab you and something made to reach you. Those aren't the same thing. And staying able to tell them apart is still one of the most important things you can do with your attention.

Leslie Poston:

Also, check out a recent study by Wharton on something called cognitive surrender. This is going to be especially relevant to people making nonfiction content, business content, or B2B content, and to people who are beginning to outsource their day to day lives to AI. Just read that paper from Wharton. I'll put it in the show notes. It's very interesting. It's a new phenomenon, and I'll probably do an episode on that paper a little later this season.

Leslie Poston:

Thanks for listening to PsyberSpace. I'm your host, Leslie Poston, signing off. And before I tell you to stay curious until next time, I do wanna mention that I am a nominee for a Women in Podcasting award for 2026. It's the kind of award where your vote would help me make some headway toward winning it, and I would appreciate it. I'm so honored to be nominated.

Leslie Poston:

I'll put a link for that in the show notes as well. And as always, share this podcast with someone you think it would help. Subscribe so you don't miss an episode. And until next time, stay curious.
