The AI Transparency Trap: Why Honesty About AI Use Destroys Trust

Leslie Poston:

Welcome back to PsyberSpace. I'm your host, Leslie Poston. Usually, we review several studies relating to a theme or topic, but today we're doing things a little differently. Today, we're diving into one new research finding that reveals something deeply uncomfortable about human psychology. Imagine you're being honest.

Leslie Poston:

You disclose that you used generative AI to help with a work task. Your colleague doesn't disclose, but secretly, they used AI too. Who gets trusted more? If you guessed the honest person, you'd be wrong. A massive new study with over 3,000 participants across 13 different experiments found that people who admit to using AI are trusted significantly less than those who stay silent, even when the silent ones are secretly using AI themselves.

Leslie Poston:

And here's the kicker. The people who use AI in secret are often the harshest judges of those who admit it. This isn't a story about AI. It's a story about identity, legitimacy, and why our psychology creates perverse incentives that punish honesty and reward deception. Today, we're going to unpack why this happens, what it reveals about how we construct professional identity, and why this transparency trap is almost psychologically inevitable.

Leslie Poston:

Let's get into it. Let me tell you about the study that kicked off this whole exploratory episode. Oliver Schilke and Martin Reimann published research this past month, The Transparency Dilemma: How AI Disclosure Erodes Trust, in the journal Organizational Behavior and Human Decision Processes. They conducted 13 preregistered experiments with over 3,000 participants in multiple contexts: classrooms, hiring, investment decisions, and creative work. The findings were consistent and striking.

Leslie Poston:

In one study with 195 students, professors who disclosed using AI for grading were rated significantly less trustworthy, scoring 2.48 on a seven-point scale, compared with 2.87 for professors who disclosed using a human teaching assistant and 2.96 for professors who made no disclosure at all. That's a large effect size, not a small difference in perception. In another study, with 426 participants, startup founders who disclosed AI use received trust scores averaging 4.55 out of seven, while founders who made no disclosure averaged 5.63. That's more than a full point drop just for being honest about AI use. And here's what makes this even more interesting.

Leslie Poston:

The researchers tested everything. They tried different ways of framing the disclosure. They tested whether it mattered if people already knew AI might be involved. They looked at voluntary versus mandatory disclosure policies. None of it prevented the trust erosion.

Leslie Poston:

Being honest about AI use consistently resulted in lower trust regardless of how it was framed. The researchers called this the legitimacy discount. When you disclose AI use, people perceive your work as less socially appropriate, less legitimate, even if the quality of the work itself seemed identical. And this isn't about algorithm aversion, which is when people distrust AI systems themselves. This is different.

Leslie Poston:

This is about people distrusting you when you admit to using AI even more than they would distrust the AI operating alone. So why does this happen? That's what we're unpacking today. In a previous episode of PsyberSpace called Mind Locked, I talked about identity protective cognition, how our brains respond to challenges to our beliefs, not with rational evaluation, but with threat responses. When core beliefs become part of our identity, contradictory information activates brain regions associated with self preservation and social identity, not the regions involved in logical reasoning.

Leslie Poston:

The AI transparency trap is a textbook case of this mechanism. When someone discloses AI use, observers aren't simply asking, is this work good? They're asking, what does this mean about me? If AI assisted work is considered legitimate, professional quality work, then what does that imply about the purely human work that I do? Am I suddenly less valuable?

Leslie Poston:

Less skilled? Less necessary? This is an identity threat. Professional identity for knowledge workers is built on a foundation of human cognitive labor: our thinking, our creativity, our expertise. AI threatens that foundation.

Leslie Poston:

But here's the thing. As long as everyone pretends AI isn't being used or at least doesn't talk about it, everyone's identity remains intact. The internal and societal story that real work is purely human can be maintained. When someone breaks that implied social contract by being honest about AI use, they're not just making a disclosure about their own work. They're forcing everyone else to confront an uncomfortable question about their own professional value.

Leslie Poston:

And people really, really don't like being forced to confront identity threats. This explains why the trust penalty happens regardless of work quality. The work could be objectively excellent. It doesn't matter. The disclosure itself is the problem because it triggers identity protective responses in observers.

Leslie Poston:

Their brains shift from "evaluate the work" mode to "protect my sense of self" mode. And in that mode, punishing the person who triggered the threat by judging them as less trustworthy is a way of protecting the boundaries of what counts as legitimate professional work and, by extension, protecting your own professional identity. Let's talk about the concept of legitimacy more directly because it's central to understanding what's really happening here. In sociology and organizational psychology, legitimacy refers to the perception that an action, decision, or entity is appropriate, proper, or desirable within a given social system. It's not about whether something actually works well or produces good outcomes.

Leslie Poston:

It's about whether it conforms to social expectations and norms. The researchers found that AI disclosure reduces perceived legitimacy. But what does that actually mean? It means that using AI violates an unspoken norm about how professional work should be done. This norm says real professionals do their own thinking.

Leslie Poston:

Real expertise comes from human cognition. Only purely human work is authentic. These aren't written rules. They're implicit expectations that define group boundaries. They tell us who belongs in the professional in group and who doesn't.

Leslie Poston:

And like all group boundaries, they're maintained through social enforcement. Praise for those who conform, punishment for those who don't. Here's where it gets psychologically interesting. Legitimacy judgments are asymmetric. Conforming to norms gains you only a small amount of trust, but violating norms loses you a lot of trust.

Leslie Poston:

This asymmetry creates a strong incentive to either genuinely conform or at least appear to genuinely conform. When you disclose AI use, you're essentially admitting to norm violation. Even if the norm is outdated, even if it doesn't make practical sense, even if secretly everyone else is violating it too, the public admission marks you as someone who doesn't belong to the in group of legitimate professionals. This is boundary policing in action. It's not about evaluating whether AI makes work better or worse.

Leslie Poston:

It's about maintaining the symbolic boundaries that define professional identity. And those boundaries are defended precisely because they're under threat. If AI really can do significant parts of knowledge work, then the boundary between professional and not professional becomes unclear. So people double down on enforcing the rules about what counts as legitimate professional behavior. The irony, of course, is that this enforcement mechanism, punishing transparency, incentivizes the exact opposite of what we claim to value.

Leslie Poston:

We say we value honesty and transparency, but our actual behavior reveals that we value maintaining comfortable identity boundaries much more. Now let's talk about what might be the most psychologically interesting finding from this whole line of research: the people who use AI secretly are often the harshest judges of people who disclose their AI use. This isn't unique to AI. This is a well-established pattern in psychology called moral compensation or moral licensing.

Leslie Poston:

When people privately violate a moral standard or a social norm, they often become stricter enforcers of that standard publicly. We sometimes see this play out in politics, for example. It's a way of resolving cognitive dissonance. Let me take you back to another previous episode, the one where I talked about the psychology of caring behavior. I talked about cognitive dissonance, which is the uncomfortable psychological tension you feel when your beliefs and your behaviors don't align.

Leslie Poston:

Festinger's research shows that people tend to resolve this dissonance not by changing their behavior to match their beliefs, but by adjusting their beliefs or by doubling down on public adherence to the norm that they're privately violating. Secret AI users experience cognitive dissonance. They believe that real professionals don't use AI, or at least that good work is purely human, but they're using AI. This creates psychological tension. One way to resolve that tension is to become even more vocally critical of people who admit to using AI.

Leslie Poston:

See, I'm still one of the good ones. I'm upholding professional standards. I'm judging these people who admit it. It's projection. It's self protection through enforcement.

Leslie Poston:

And it's entirely predictable from a psychological standpoint. It also creates a vicious cycle. The more people secretly use AI, the more cognitive dissonance exists in the system. The more dissonance exists, the harsher the public judgment becomes toward anyone who's honest. The harsher the judgment, the stronger the incentive to hide your AI use.

Leslie Poston:

And round and round it goes. What we end up with is a social system where everyone knows that AI use is widespread, but nobody can talk about it honestly without facing social penalties. It's a collective fiction that everyone maintains because the cost of breaking it (the identity threat, the legitimacy discount, the social judgment) simply feels too high. In my episode on moral psychology, I distinguished between intrinsic and extrinsic morality. Intrinsic morality is when your behavior is driven by internal values and principles.

Leslie Poston:

Extrinsic morality is when your behavior is motivated by external factors: rewards, punishments, or social approval. The AI transparency trap reveals a massive gap between stated values and revealed preferences. We claim to value honesty. We claim to value transparency. We claim that disclosure is the ethical thing to do.

Leslie Poston:

These are our stated values, the principles we say we hold. But our revealed preferences, the actual behaviors we reward and punish, are telling a different story. We punish transparency. We reward secrecy. Or at least we don't punish it.

Leslie Poston:

Our actual behavior reveals that we value identity protection and social conformity more than we value honesty. And this isn't about individual hypocrisy. This is about a collective coordination failure. In theory, everyone would benefit if transparency about AI use became the default. We could have honest conversations about best practices.

Leslie Poston:

We could develop better policies. We could learn from each other's experiences and factor consent into AI practice. But in practice, the first person to be transparent pays a massive social cost while everyone else maintains plausible deniability. It's a classic collective action problem. The individually rational choice, stay silent, produces a collectively suboptimal outcome: a culture of secrecy and deception.

Leslie Poston:

And because the penalty for transparency is rooted in identity threat rather than rational evaluation, you can't simply reason your way out of it. You can't convince people to stop penalizing disclosure by explaining that it's rational. Identity protection is not rational. It's emotional, automatic, and deeply ingrained. This is why the researchers found that even when they tried to create collective validity, priming participants to believe that AI use is common and accepted, the trust penalty was reduced but not eliminated.

Leslie Poston:

You can't completely override the identity protective response through framing alone. Let's zoom out and think about what this tells us about how professional identities are constructed and maintained. Professional identity isn't primarily about what you actually do. It's about the stories we tell about what makes someone a real professional. For knowledge workers, these stories have traditionally centered on human cognitive labor, thinking, analyzing, creating, and problem solving.

Leslie Poston:

The value of a professional was directly tied to their human intellectual capacity. AI disrupts that story in a fundamental way. If AI can perform tasks that were previously markers of professional expertise, then what does professional identity rest on? What makes someone valuable? Rather than grapple with that question directly, which would require reconstructing professional identity from the ground up, it's psychologically easier to simply declare all AI use illegitimate.

Leslie Poston:

It's easier to maintain the fiction that real work is purely human and enforce that boundary through social punishment. This is what sociologist Michèle Lamont calls symbolic boundary work, the way groups define themselves through moral and cultural distinctions. Professional groups maintain their status and identity by drawing boundaries around what counts as legitimate professional behavior. Those boundaries shift over time, but they're always defended vigorously when they're under threat. What we're seeing with AI is a moment of acute boundary threat.

Leslie Poston:

The traditional markers of professional identity are rapidly becoming less clear, so the boundary enforcement becomes more aggressive. The penalties for violation become steeper, and the social policing becomes more intense. But here's the thing. This isn't necessarily conscious or deliberate. People aren't sitting around thinking, I need to protect my professional identity by punishing people who disclose AI use.

Leslie Poston:

It's automatic. It's emotional. It feels like a genuine evaluation of trustworthiness, even though it's actually an identity protective response. And this is what makes it so hard to change. You're not fighting against conscious bias or explicit prejudice.

Leslie Poston:

You're fighting against automatic psychological processes that evolved to protect group identity and maintain social cohesion. Let's talk about incentives, because understanding the psychology of the incentive structure is vital to understanding why this pattern persists. From an individual perspective, hiding AI use is rational. The social cost of disclosure is real and significant: lower trust, perceived illegitimacy, potential professional consequences. We've been discussing that.

Leslie Poston:

The benefit of disclosure is what exactly? Feeling ethically pure? That's a pretty weak benefit compared to the concrete social costs. So individuals rationally choose secrecy. But when everyone makes that individually rational choice, we end up with a collectively irrational outcome, creating a culture where deception is normalized, where no one can talk honestly about their actual practices, where learning and improvement are hampered because everyone's pretending they're doing things one way when they're actually doing them another way.

Leslie Poston:

This is a coordination failure. It's a situation where individual incentives and collective welfare are misaligned. Game theorists would recognize this as a version of the prisoner's dilemma or a coordination game with multiple equilibria. Here's what makes it even trickier. The penalty for transparency is front-loaded and certain, while the potential benefits of widespread transparency are diffuse and uncertain.
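To make that incentive structure concrete, here is a minimal toy sketch in Python. The payoff numbers are entirely hypothetical, chosen only to illustrate the prisoner's-dilemma shape described above: hiding is the better individual move no matter what most other people do, yet everyone would be better off if everyone disclosed.

```python
# Toy model of the transparency dilemma. All payoff values are hypothetical
# illustrations, not estimates from the Schilke & Reimann study.

# payoff[my_choice][what_most_others_do] = my individual payoff
payoff = {
    "disclose": {"most_hide": 1, "most_disclose": 4},  # lone discloser eats the trust penalty
    "hide":     {"most_hide": 3, "most_disclose": 5},  # hiding scores higher either way
}

# Whatever most others do, "hide" is the dominant individual choice.
for others in ("most_hide", "most_disclose"):
    best = max(payoff, key=lambda me: payoff[me][others])
    print(f"If {others.replace('_', ' ')}: my best individual move is to {best}")

# Yet the all-disclose world beats the all-hide world for each person.
print("Per-person payoff if everyone hides:    ", payoff["hide"]["most_hide"])
print("Per-person payoff if everyone discloses:", payoff["disclose"]["most_disclose"])
```

Because "hide" dominates in this toy setup, secrecy is the stable outcome even though universal disclosure would leave each person better off, which is exactly the first-mover disadvantage described next.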

Leslie Poston:

If you disclose AI use, you face immediate trust penalties. Maybe, if enough other people also disclose, the norms will eventually shift and transparency will become accepted, but that's a maybe. It's in the future, and psychologically, you're bearing the cost right now. This creates a severe first-mover disadvantage. The people who are most ethical, who actually follow their stated values about transparency, get punished the most.

Leslie Poston:

The people who are least ethical, hiding AI use while publicly judging others, face no consequences, and may even be rewarded with higher trust ratings. When the incentive structure punishes virtue and rewards vice, you can't fix the problem by appealing to individual ethics. You need to change the incentive structure itself. But how do you do that when the penalties are rooted in automatic identity protective responses, rather than conscious choice? That's the trap.

Leslie Poston:

That's why this is so psychologically sticky. So where does this leave us? I want to be very clear. I am not advocating for dishonesty. I'm not saying it's okay to hide AI use because the incentives favor it.

Leslie Poston:

What I am saying is that understanding the psychological mechanisms at play is crucial for figuring out how to create better systems. But before I talk about individual strategies, I need to address something systemic that's making this whole problem so much worse. And that is the way generative AI and large language models are being forced on people. A significant part of this crisis stems from how AI is being deployed, driven by tech company and venture capital greed, pushed by hype cycles rather than genuine need or consent. AI is being embedded everywhere, often without people having any real choice about whether they want to use it or not.

Leslie Poston:

Companies are racing to slap "AI-powered" on everything they make, not because it improves the experience, but because investors and markets are demanding it. This forced adoption exacerbates the identity threat we've been talking about. It's not just "AI might replace me." It's "I don't even get to choose whether or how I engage with this technology." That loss of agency intensifies the psychological resistance and makes the legitimacy crisis so much worse.

Leslie Poston:

People aren't just resisting AI itself. They're resisting having their autonomy stripped away, being treated as passive recipients of whatever technical change companies decide to impose. That's a legitimate form of resistance, and it deserves respect. A better approach would be to make AI opt in by design. Let people choose whether and how they use AI tools.

Leslie Poston:

Provide informed consent. Position AI as a specific tool to augment human effort in contexts where people find it genuinely useful, not as a mandatory overlay on everything or as a replacement for human capability. When people have agency over their AI use, when they can decide, yes, I want to use this tool for this specific task because it helps me accomplish my goals, disclosure becomes less threatening. It's no longer admitting to something that was forced on you or that undermines your value. It's making an intentional choice about tools, which is something professionals have always done.

Leslie Poston:

This addresses both the autonomy problem and helps normalize transparent, intentional AI use. It turns AI from an identity threat into a legitimate professional tool chosen and deployed thoughtfully. Now beyond that systemic issue, if we want to move toward a culture where transparency about AI use is possible without penalty, we need to address the identity threat that drives the legitimacy discount. That's a much harder problem than simply implementing disclosure policies. Here are some things that might actually help at the cultural and institutional level.

Leslie Poston:

First, we need to actively reconstruct professional identity in ways that allow professionals to incorporate rather than exclude AI use. And this means telling new stories about what makes someone a valuable professional. Maybe it's not about raw cognitive horsepower anymore. Maybe it's about judgment, discernment, knowing when to use which tools, how to evaluate outputs, how to fact-check, how to integrate AI assistance with human insight. That's still a skilled, valuable role.

Leslie Poston:

It's just a different role than we're used to. Second, we need institutional leaders to model transparency about their own AI use. When high status individuals disclose AI use without apology, it helps shift the norms, but this requires people with secure positions to take social risks on behalf of the broader community. That's hard. It requires courage and a willingness to absorb short term legitimacy costs for long term cultural change.

Leslie Poston:

Third, we need to make AI disclosure mandatory across the board. If everyone is required to disclose, then disclosure loses its signaling value. It's no longer marking you as different from the in group if everyone has to do it. The researchers found that mandatory disclosure reduced but didn't eliminate the trust penalty. But it's better than the current situation where voluntary disclosure is essentially self punishment.

Leslie Poston:

Fourth, we need to have honest conversations about what AI actually can and can't do well. A lot of the anxiety around AI comes from uncertainty and worst case thinking, as well as a little bit of worst practices from some of the people who have trained AI models. If we can create more realistic understanding of AI's actual capabilities, what it genuinely helps with and where human judgment remains essential, it might reduce the identity threat. People are less threatened when they understand that AI is a tool that augments rather than replaces human capabilities. But I want to be realistic.

Leslie Poston:

These are hard, slow changes. In the meantime, individuals face real dilemmas. Should you disclose AI use and face the trust penalty? Should you stay silent and maintain legitimacy? There's no easy answer, and it depends on your specific situation, your risk tolerance, and your institutional context.

Leslie Poston:

What I can say is this: Understanding the psychological dynamics doesn't resolve the ethical dilemma, but it does help you make more informed choices. You can go into decisions with your eyes open about the trade-offs rather than being blindsided by unexpected social penalties. And when possible, you can push back against forced AI adoption and advocate for systems that give people real agency and choice. Let me close by zooming out one more time because this isn't just about AI. The dynamics we've explored today show up in countless other contexts.

Leslie Poston:

Anytime a new tool, practice, or technology threatens established professional identities, you see similar patterns. Photographers fought against digital photography. Graphic designers resisted computer aided design. Musicians pushed back against electronic instruments. The pattern repeats.

Leslie Poston:

Identity threat leads to legitimacy policing, which leads to social penalties for early adopters who are honest about using new tools. Eventually, the norms shift. Things like digital photography are now standard. Computer aided design is universal, and electronic music is an entire respected genre. But that shift takes time, and the people who are most honest during the transition period often pay social costs for their honesty.

Leslie Poston:

What this really reveals is how powerfully identity shapes our cognition. We like to think of ourselves as rational evaluators who assess tools and practices based on their outcomes. But when identity is at stake, rationality goes out the window. We assess things based on whether they confirm or threaten our sense of who we are and where we belong. We also see how social norms can create perverse incentives that persist even when everyone knows they're counterproductive.

Leslie Poston:

The AI transparency trap isn't maintained because anyone thinks it's a good system. It's maintained because changing it would require coordinated collective action, and the individual costs of being an early defector are too high. And finally, we see the gap between stated values and revealed preferences. We claim to value transparency, but we punish it. We claim to value honesty, but we reward strategic silence.

Leslie Poston:

Understanding that gap, really understanding it, not just intellectually acknowledging it, is critical for making sense of human social behavior. These patterns aren't bugs in human psychology. They're features. They evolved to help us maintain group cohesion, protect valuable identities, and coordinate social behavior. But in the face of rapidly changing technology, these same features can create traps that are hard to escape.

Leslie Poston:

Recognizing the trap is the first step. Figuring out how to escape it individually and collectively is the harder work that lies ahead. If this episode made you think a little differently about AI transparency, professional identity, or the gap between what we claim to value and what we actually reward, I'd love to hear about it. As always, you can find show notes, references, and the transcript at PsyberSpace. If you enjoyed this episode, please share it with someone who might benefit from understanding these psychological dynamics.

Leslie Poston:

And if you want to dive deeper into related topics, check out our previous episodes on identity protective cognition, moral psychology, and cognitive dissonance. They all connect to the themes we explored today. Thanks for joining me on PsyberSpace. I'm your host, Leslie Poston, signing off and reminding you: stay curious. And don't forget to subscribe so you never miss an episode.
