Why Does Debating Bad Ideas Make Them Stronger?

Leslie Poston:

Welcome back to PsyberSpace. I'm your host, Leslie Poston. This week we're talking about why debate content so often functions as a distribution and legitimization tool for bad ideas, and not as a factual takedown of them. I want to call your attention to something that, once you see it, you can't unsee. Debate content, whether that's on TV panels, podcasts, YouTube reactions, TikTok stitches, quote-tweet dunks, or even full-on livestream showdowns, doesn't do what people think it does.

Leslie Poston:

Usually, the intention for those who engage in debate, from the standpoint of presenting facts or reducing harm, is good. People want to expose a terrible idea, pressure test it, embarrass it, and then defeat it with facts and science in public. The idea is that if viewers or listeners just knew the facts, the terrible idea would be relegated to the trash bin, like it deserves. But here's what actually happens: that terrible idea gets upgraded. Why?

Leslie Poston:

Because we gave it airtime, oxygen, and thus, legitimacy. Worse, now it's been distributed to people who never would have encountered it otherwise. Debate became marketing for the exact terrible idea or opinion you were trying to debunk. This episode is about the psychology behind that pattern. Not because I want anyone to stop caring about the truth, or stop pushing back against harmful ideas, but because I want us to get smarter about what actually works. If your goal is to reduce harm, you need tools that reduce harm, not formats that quietly multiply it.

Leslie Poston:

So let's talk about why debate fails in this specific way, and what we can do instead. One of the most powerful psychological shortcuts humans have is social proof. If something is presented as part of a serious conversation, our brains treat that as information about status. In other words, the format carries meaning in addition to the content held within it. It's one reason why I picked this format for this podcast instead of a guest-based, discussion-based format.

Leslie Poston:

I'm interested in sharing with you truth and research that you can use. When an idea shows up inside a debate structure, that structure is already implying a few things. That there are two legitimate sides, that it's reasonable to weigh them, and that the audience should make decisions on which to treat as valid based on preference or persuasion, rather than on evidence. Even if you, as the participant or host, strongly disagree with the terrible idea and come fully armed with facts, the very act of participating in the debate suggests that the idea belongs on the same shelf as your reality based position. We've seen Gavin Newsom make this mistake on his podcast multiple times now, giving legitimacy and audience to harmful ideas by hosting far right talking heads on his show, amplifying their harmful views instead of reducing their harmful impact.

Leslie Poston:

This is one reason why people can walk away from watching or listening to a debate saying, "I don't know. Both sides made some points." Even when one side was grounded in fact and reasonable, and the other side was advocating for terrible acts against your fellow humans, debate formats train the audience to treat claims as contenders, and not as things that have already been evaluated by evidence. Debate turns "Is this true?" into "Which performance did I like more?"

Leslie Poston:

And that shift is the first step in giving legitimacy to something that does not deserve it. There's a specific version of this that has a long history in journalism and media: the idea of false balance. False balance is what happens when fairness is defined as giving equal time to unequal claims. It's the idea that every topic has two sides that should be represented evenly, regardless of whether the evidence is evenly distributed. A classic example in research on media coverage is climate change.

Leslie Poston:

In a landmark analysis of US prestige press coverage, the authors of the study argued that the norm of balanced reporting contributed to a public conversation that diverged from scientific consensus, because the coverage presented the issue as more uncertain and more contested than the research actually supported. What matters for our purposes today is not really the topic, but the mechanism. When you stage a debate where the evidence is lopsided, the audience is not receiving a balanced view. They're receiving a distorted view. And that distortion happens precisely because the format treats credibility as something that can be negotiated socially rather than as something that can be supported empirically.

Leslie Poston:

And that's the trap. Debate often feels like a moral high ground because it signals openness, tolerance, and intellectual confidence. Meanwhile, it quietly gives a bad idea what it needs most to take root, which is a seat at the table. Now let's get into a part that really stings, especially for people who care about educating the public. Human beings tend to interpret familiarity as a cue for truth.

Leslie Poston:

And that's not because they're stupid. It's because human brains are efficiency machines. Familiar information feels easier to process, and "easy to process" can be misread as "more credible." We talk a little bit about our brain's comfort-seeking mechanism in our episode from last year on comfort as the enemy of progress. It's worth checking out.

Leslie Poston:

This problem of familiarity shows up in classic research, and it's often referred to as the illusory truth effect. In one early paper, researchers found that when plausible statements were repeated, people were more likely to judge them as true, regardless of whether they actually were true. Repetition increased perceived validity. More recent work takes this into the misinformation era that we're in now. One widely cited study on fake news headlines found that even a single prior exposure increased later perceptions of accuracy, including after a delay.

Leslie Poston:

So just seeing a misleading headline once made that misinformation feel more believable later. So when you make debate content that repeats a harmful claim, even if your tone is disgusted, sarcastic, or triumphant, you're still doing a form of distribution, increasing exposure and building familiarity with the bad idea you're trying to defeat. That means you can accidentally make that lie easier to remember than the correction, especially because corrections are usually longer, more nuanced, and less emotionally simple than the original claim. And there's a reason for this. Emotional arousal enhances memory consolidation, and cognitive fluency acts as a heuristic for truth.

Leslie Poston:

Lies are often designed to be emotionally potent and simple, while corrections require nuance. There's also a reality about audiences that debate culture tends to ignore. Most people are not watching the debate as neutral judges. Many people watch a debate the same way they watch sports. They're there to see their team win, to feel confirmed, and to collect lines and memes.

Leslie Poston:

They also love proximity to the feeling of dominance, the feeling of moral superiority, or the pleasure of humiliation that they get from watching or listening to their side win. Even people who avoid conflict can get pulled into debate content because it creates narrative tension. It's entertainment, wrapped in civic clothing. Meanwhile, like we've talked about before, changing beliefs is slow and psychologically costly. When people have already incorporated misinformation into their mental model of how something works, you can correct the facts and still have the misinformation continue to influence their reasoning.

Leslie Poston:

This is called the continued influence effect, and it's one reason misinformation is so persistent. We talked more about this in our episode on misinformation last year if you want to explore the idea more deeply. This is also where debate creates an asymmetry. The person spreading a bad idea can say something punchy and memorable in about five seconds, whereas the person responding often needs context, definitions, evidence, and careful phrasing, because reality and truth can be complicated and nuanced. So the audience experiences that as one person being confident and simple, while the other person seems like they're rambling.

Leslie Poston:

That means that even when someone wins on substance, the format still rewards the person who was more performative, confident, aggressive, and emotionally satisfying to watch. Let's talk about incentives for a moment, because incentives matter more than intentions. Debate formats typically reward whoever can generate more engagement in less time. That means speed, certainty, strong emotion, spectacle, and a clean narrative matter. Research on cognitive processing shows us that when people are evaluating arguments under time pressure or cognitive load, they rely more heavily on peripheral cues like confidence, emotional intensity, and rhetorical polish rather than the actual substance of the argument.

Leslie Poston:

But accuracy doesn't behave that way. Accuracy often requires acknowledging limits. This includes saying things like "it depends" and then referencing evidence, or distinguishing between what we know and what we suspect. It often requires clarifying definitions, and it even means correcting your own side sometimes. These aren't behaviors that go viral.

Leslie Poston:

It's also why bad faith tactics thrive in debate spaces. If someone can stack claims rapidly, change the subject when challenged, frame themselves as persecuted if they're confronted, and keep the emotional intensity level in the audience high, they can dominate attention even while being completely wrong. In older eras, this was mostly an interpersonal tactic. In our current media ecosystem, it becomes a strategy that fits the machinery of distribution. Debate turns tactics into content, and content into reach.

Leslie Poston:

Even if you're an excellent communicator, you're not operating in a vacuum. Most debate content lives on platforms that reward engagement, and engagement is often driven by outrage, conflict, and identity threat. There's research showing that moral and emotional language spreads more effectively through social networks. In a large scale analysis of social media communications about polarizing issues, the presence of moral emotional words increased diffusion, meaning the content traveled farther. There's also research on how social reinforcement shapes behavior online.

Leslie Poston:

One study on moral outrage expression found that when outrage gets rewarded through engagement, it can increase future outrage expression over time. The design of the platform becomes a teacher, nudging people toward what gets rewarded. And critically, the algorithmic layer can amplify these patterns. One recent study examined engagement based ranking, and found that the engagement based algorithm amplified more partisan and emotionally charged content, including anger and outgroup animosity, compared to a reverse chronological baseline. So debate is not simply a conversation.

Leslie Poston:

It's a kind of raw material that these systems know how to process. Debate produces conflict, and conflict produces engagement, which in turn produces reach. And reach is the prize that all bad ideas want. At this point, you can probably feel why some people actively seek debate as a strategy. They're in your comments telling you to "debate me, bro."

Leslie Poston:

They're watching shows like Jubilee for tips. If you're trying to spread a fringe idea, you face several problems. You need exposure, legitimacy, and the appearance of being a serious person who is taken seriously by other serious people. You need to reach beyond your existing small audience, and you need your idea to feel like a real option that thoughtful people might genuinely consider. Debate solves all of that at once.

Leslie Poston:

It's giving you access to someone else's audience, framing you as their peer. It's signaling your idea is worth wrestling with, and giving you clips you can use for promotion. It's giving you a chance to perform calmness while the other person gets upset, which makes you look reasonable even when you're wrong. And if you don't get invited back, you've gained grievance points. Now you can claim you were silenced, canceled, or victimized.

Leslie Poston:

And this is part of why propaganda models emphasize volume, repetition, and distribution over coherence. RAND's "firehose of falsehood" framing is helpful here because it highlights how high-volume, rapid, repetitive messaging can gain traction even when it's inconsistent or objectively false. Debate culture is an ecosystem where these tactics can thrive. Because debate isn't designed to filter truth, debate is designed to generate contest and manufacture consent. So what do you do if you still care about the truth, and you care about harm reduction, and you don't want to accidentally become the ad campaign for a horrible idea?

Leslie Poston:

The first move is to shift from two sides framing to weight of evidence framing. Instead of presenting claims as equals, present them in proportion to what credible evidence supports. It sounds basic, but it's a profound change in practice, because it stops treating attention as the currency of legitimacy. The second move is to focus less on repeating specific claims and more on teaching patterns. This is where prebunking and inoculation approaches can help.

Leslie Poston:

There's research on interventions that aim to build psychological resistance by exposing people to weakened doses of misinformation tactics. One well-known example is the Bad News game approach, which puts the player in the role of someone trying to spread misinformation, so that the tactics used to spread misinformation to you become recognizable. That approach matters, because it doesn't require you to keep restating the lie. It teaches people how to spot the technique, which is more durable, and often more generalizable. The third move is to change your target.

Leslie Poston:

A lot of debate content is framed as this duel between two individuals. That automatically centers the person with the bad idea. A harm reduction approach often centers the audience instead. It asks, what does this audience need in order to be less vulnerable to this tactic, less emotionally hijacked by this lie, and less likely to spread it? And often the answer is not a takedown.

Leslie Poston:

Sometimes the answer is context, media literacy, a clear explanation of how a claim fails, and resources people can use to talk to their own families, coworkers, or communities without escalating into performative conflict. I wanna close this episode with something you can use in real time, because this is where people get stuck. You see a bad take, and immediately you feel moral urgency, this impulse to respond. And that impulse is human, and sometimes it's appropriate, but not always. The question is how to respond without turning that response into distribution.

Leslie Poston:

Before you engage publicly, ask yourself a few questions. First, ask, is this real, or is this an AI deepfake or AI slop? With synthetic media becoming increasingly convincing, verify the source before amplifying content that might be manufactured to provoke outrage or spread misinformation. Second, is this issue genuinely unsettled, or has it been settled by credible evidence? If it's settled, debate framing is more likely to confuse than to clarify.

Leslie Poston:

Third, is this person acting in good faith? Good faith disagreement has a tell. People define terms, respond to evidence, and update their claims when they're corrected. Bad faith engagement has its own tell. It's slippery, theatrical, and optimized for attention rather than learning.

Leslie Poston:

Fourth, and this might actually be the second most important, who benefits from amplifying this idea? If the person or idea relies on outrage and clippability, your response may be part of the growth strategy for misinformation. And fifth, what is the lowest-amplification way to protect the most people? That might mean responding without repeating the claim. It might mean not linking to the person making the original claim so they don't get algorithmic juice.

Leslie Poston:

It might mean speaking generally about the tactic or pointing people to a strong explainer. Maybe it means focusing your energy on supporting targets rather than arguing with the instigator. This is not about being more passive. It's about choosing formats that don't reward harm. If there's one idea I'd love for you to carry out of this episode, it's this.

Leslie Poston:

Debate is not automatically an instrument of truth. Debate is a social ritual that signals legitimacy. And in a media economy driven by attention, legitimacy and distribution are powerful assets. If your goal is to reduce harm, you can still respond to bad ideas, but you want your responses to not increase familiarity with the lie, not give false balance to the evidence, and not center the person who benefits most from this lie being centered. Next time you see debate content framed as platforming versus free speech, I want you to notice the missing question, which is, what does this format do to an audience's psychology?

Leslie Poston:

Because once you see how legitimacy and familiarity work, you start realizing that some fights are not lost because the truth is weak. They're lost because the venue is designed to use our brains against us by leveraging spectacle. Thanks for listening to PsyberSpace. I'm your host, Leslie Poston, signing off. As always, until next time, stay curious and stay skeptical.

Leslie Poston:

And subscribe if you don't wanna miss a week, and share it with a friend that you think might like this content.
