Moral Licensing: How Doing Good Gives People Permission to Be Worse

Leslie Poston:

Welcome to PsyberSpace. I'm your host, Leslie Poston. This week we're doing a shorter episode and we're talking about the tendency of humans to behave worse right after they've done something good. This shows up in many places at once: environmental behavior, racial bias, workplace ethics, and even activism. The mental mechanism is the same in all of them, so once you know what's happening and what to look for, you're going to start seeing it everywhere, including in yourself.

Leslie Poston:

This phenomenon has a name. It's called moral licensing. And it works like this: When you do something that confirms your identity and how you see yourself as a good person, your brain registers it as progress toward a goal. Once you've made progress toward a goal, motivational pressure decreases. The neural systems that drive self-regulation are sensitive to the gap between your current state and your goal state, so when you close the gap, your drive decreases.

Leslie Poston:

The result, which we can see across many studies and in many contexts, is that people who have recently acted virtuously in one area become measurably more willing to behave in ways they otherwise wouldn't, sometimes in the same domain and sometimes in an entirely different one. This is why you can sometimes see this paradox in people who are ostensibly religious, for example. The model most of us operate with assumes virtue compounds. If you do good things, you become a better person, and that identity makes future good behavior more natural. And that's not entirely wrong over the long arc of habit formation, but it misses a shorter-term dynamic that runs in the opposite direction.

Leslie Poston:

Moral identity doesn't only develop slowly over time. It also functions as a state the brain tracks in real time, and when recent behavior has moved that state in a positive direction, regulatory effort eases off. A useful analogy might be the difference between thinking about fitness as a cumulative long term goal versus treating calories as a daily budget. Most people don't consciously apply budget logic to their ethics, but when research creates conditions to test it, behavior follows the budget model more reliably than the accumulation model. You run five miles, and later you eat something you otherwise wouldn't have, because the run satisfied the goal state enough to reduce restraint.

Leslie Poston:

Moral licensing is a similar regulatory dynamic, operating in the ethical domain. Some of the earliest and cleanest evidence for this came from environmental psychology, where researchers were trying to make sense of a pattern that kept appearing in consumer behavior. People who made a green purchase weren't necessarily making more of them over time, and in some conditions they made fewer. One study, published around 2010, found that participants primed to think of themselves as having made environmentally conscious choices were subsequently more likely to lie and cheat at unrelated tasks compared to people in control conditions. And this wasn't a finding about people who were faking environmental concern.

Leslie Poston:

The psychological effect of having made the responsible choice, of having registered oneself as someone who does the right thing, was shaping behavior in areas that had nothing to do with the environment. The green purchase became, cognitively, the good thing done that day, and that was sufficient to reduce regulatory effort elsewhere. What makes this finding useful is that the licensing crossed domains completely. If this were just substitution within a single category, like "I donated money so I don't need to volunteer time," it would be interesting but fairly contained. But the dishonesty that appeared in the follow-up tasks bore no relation to the environmental behavior at all.

Leslie Poston:

The moral license was general, and that generality is important because it means the virtuous act you performed in one part of your life this morning may later be quietly reducing motivational pressure in a part of your life you haven't consciously connected to it. It also matters that the effect didn't require a significant act of virtue. Small, ordinary choices were sufficient to trigger it. The brain doesn't appear to be carefully evaluating the magnitude of the good deed before reducing regulatory drive. It's registering that a self-relevant moral goal has been approached, and it adjusts accordingly.

Leslie Poston:

And that makes this a phenomenon of everyday decision making, not just exceptional circumstances. The research that gave moral licensing its name came from work on racial bias, and it's the version of the phenomenon that tends to generate the most discomfort, precisely because it implicates people who are genuinely trying to be fair. The original studies found that when participants had recently been given an opportunity to demonstrate their egalitarian values, such as expressing support for a Black political candidate or explicitly endorsing nondiscrimination, they were subsequently more willing to make choices that favored white candidates or to align with majority-group preferences in other contexts. Having established their credentials as unbiased appeared to give them more behavioral latitude to act in ways that were biased. This has been replicated across different study designs, different populations, and different methods of establishing the initial credential.

Leslie Poston:

The discomfort this finding generates tends to produce a misreading, so it's important to be precise. The research doesn't suggest that people who express anti-bias values are secretly harboring the opposite. What it does suggest is that the act of publicly establishing a virtuous credential reduces subsequent regulatory effort in the brain in ways the person isn't consciously tracking. You've demonstrated that you're not that kind of person, so the motivational system that would otherwise constrain bias-consistent behavior eases off.

Leslie Poston:

This plays out in concrete organizational contexts as well, such as hiring decisions, performance reviews, or even who gets the benefit of the doubt in a meeting. Someone who has recently and visibly championed equity in one setting doesn't automatically carry that orientation forward into the next decision. In fact, the research suggests that they may in some conditions show more bias than someone who made no public statement at all. That's not an argument against speaking up. We should continue to speak up.

Leslie Poston:

It's an argument for being clear about the difference between speaking up and following through, because the brain can treat them as equivalent and they are not. This pattern shows up in organized political behavior and in professional settings, and in each context it takes a slightly different shape. In the activism research, people who had engaged in visible, low-cost actions were less likely to subsequently make the kinds of behavioral changes the cause they'd supported would actually require. The participation had apparently satisfied the relevant goal state enough to reduce motivation for harder follow-through. This doesn't mean organizing is pointless or that people who sign petitions are acting in bad faith.

Leslie Poston:

It means more that public participation can register internally as more conclusive than it is. The harder, less visible, less socially rewarded work that tends to follow gets less motivational fuel in the brain because the goal state has already shifted. In organizational psychology research, the effect appears among leaders who score high on ethical self-identification. People who have built professional identities around being the ethical one can be more likely than average to engage in minor violations. Not because the self-identification is fraudulent, but because a strong moral identity creates a wider margin before behavior threatens the self-concept. A small deviation is easier to absorb without triggering the self-regulatory response that would otherwise correct it.

Leslie Poston:

The identity functions as a buffer rather than a constraint. Self-regulation works around goal states. When you're pursuing a goal and the gap between your current state and the goal is large, your motivational drive is high. As the gap closes, drive decreases, so the system that monitors the discrepancy between where you are and where you're trying to be registers less urgency, and competing motivations get more room. This is a feature of how goal pursuit works generally, not something unique to ethics.

Leslie Poston:

Moral identity operates the same way. It has a reference point, an internal representation of what being a good person looks like, and behavior is regulated partly by the perceived distance from that reference point. When recent behavior has moved you closer to it, the regulatory pressure that would otherwise drive continued virtuous behavior decreases. There's also an interaction with self-regulatory resource depletion to consider. Acting virtuously requires cognitive and motivational effort, and those resources are finite.

Leslie Poston:

Some of the subsequent behavioral latitude may reflect the system conserving resources in areas where it has already expended them. Something that complicates this a little more is that the reference point isn't fixed across people. It's anchored to the self-concept, which means people with very strong ethical self-identities may show stronger licensing effects in some conditions, because the gap between their behavior and their moral reference point closes faster and more completely. The internal certainty of being a good person can do some of the motivational work that sustained ethical behavior requires, at least in the short term. And that's not a reason to lower ethical standards.

Leslie Poston:

It's a reason to be more skeptical of the feeling of moral solidity than we're usually inclined to be. None of this is deliberate. People experiencing moral licensing aren't calculating that past virtue entitles them to a present shortcut. The regulatory adjustments happen below conscious awareness, which is a large part of why the effect is so consistent and why catching it requires deliberate attention. Moral licensing doesn't require bad intention.

Leslie Poston:

It happens to people everywhere, even people who genuinely hold the values they think they do, in areas where they actively care about their behavior. The question isn't whether you're a hypocrite; it's whether you're paying enough attention to notice when reduced regulatory effort is doing the work that conscious motivation needs to do. From the inside, licensing tends to feel like a sense of having done your part, or a quiet certainty that your account is settled, so to speak. You donated, so you don't need to follow up. You voted, so you don't need to stay engaged between elections.

Leslie Poston:

You said the right thing in a meeting, so the structural problem that never quite gets addressed can wait another quarter. That feeling of completion is worth examining when it shows up, because the regulatory system that produces it isn't tracking whether the underlying problem has actually been solved. What the research doesn't say is that good acts are self-defeating or that caring is counterproductive. What it does say is that the relationship between good intentions and good outcomes requires more ongoing motivational effort than the goal-completion model implies, so the moral license expires faster than the feeling suggests it does. It's important to keep an eye on that.

Leslie Poston:

Thanks for listening to PsyberSpace. I'm your host, Leslie Poston, signing off. Until next time, stay curious, and don't forget to subscribe so you never miss an episode.
