
The Transparency Trap: What Happens When You Tell Customers About Your AI

New research reveals disclosure backfires. Regulations require it anyway. Here's how businesses are navigating the growing gap between customer psychology and legal reality.

Robert Soares

Oliver Schilke ran 13 experiments. Over 5,000 participants. The results were consistent every single time.

“In each experiment, we found that, when someone disclosed using AI, trust declined significantly,” said Schilke, a professor at the University of Arizona’s Eller College of Management. His research team measured trust drops of 16% among students evaluating professors, 18% among investors viewing advertisements, and 20% among clients assessing graphic designers.

The implications are uncomfortable. Honesty costs you.

The Numbers Tell a Story Nobody Wanted to Hear

Here’s what the research actually found. Students who knew their professor used AI to assist with grading trusted that professor 16% less than students who didn’t know. The grades were identical. The feedback quality was the same. The only difference was disclosure.

Investors behaved similarly. When told an advertisement involved AI in its creation, their trust in the advertiser dropped 18%. Again, the ad itself was unchanged. Only the label changed.

Graphic design clients showed the steepest decline. When designers admitted to AI assistance, client trust fell by 20%. Twenty percent. For the same work.

Martin Reimann, another researcher on the Arizona team, found something even more striking: “Even with people who were very familiar with AI and used it frequently, the erosion of trust was there.” Tech-savvy audiences were not immune. They still penalized disclosure.

The researchers call this “the transparency dilemma.” You want to be honest. Honesty makes people trust you less.

But Getting Caught Is Worse

Schilke’s team also tested what happens when AI use is revealed by someone else. A third party uses a detector. A colleague mentions it. The client finds out through some other channel.

“Trust drops even further if somebody else exposes you after using an AI detector or finding out about it some other way,” Schilke explained.

So concealment carries its own risk. The worst outcome isn’t voluntary disclosure. The worst outcome is getting caught.

This creates an impossible position. Tell customers upfront and lose some trust. Hide it and risk losing much more if you're exposed. There's no path that preserves full credibility.

The honest path costs less. But it still costs.

Meanwhile, Governments Are Making the Decision for You

The EU AI Act takes effect in stages through 2027. Transparency obligations under Article 50 require businesses to inform users when they interact with AI systems. Deepfakes must be labeled as artificially generated. Emotion recognition systems require disclosure. Text generated by AI for public information purposes must be marked.

California’s AI Transparency Act became effective January 2026. Colorado follows with its own requirements. Illinois mandates disclosure when AI is used in hiring decisions. New York requires advertisers to label synthetic performers in commercials.

The FTC has been clear that deceptive practices involving AI violate existing consumer protection law. They launched Operation AI Comply to enforce this position. Violations can reach $51,744 per incident.

So while disclosure hurts trust, not disclosing may soon be illegal, depending on your jurisdiction, your industry, and how you're using AI. The psychological research says one thing. The regulatory environment says another.

The Professional Reputation Problem

A separate line of research from Duke University adds another layer. Jessica Reif, Richard Larrick, and Jack Soll surveyed 4,400 participants across four experiments. They wanted to know how colleagues perceive people who use AI at work.

The findings: people who use AI are viewed as “less competent at their jobs, lazy, less independent, less self-assured and less diligent.”

This perception persisted regardless of age, gender, or occupation. It held even when participants were told the AI improved the quality of the work. Managers were less likely to hire candidates who admitted using AI.

One exception emerged. Managers who used AI themselves rated AI-using candidates more favorably. Personal experience changed the calculation.

But here’s the practical implication: professionals believe that disclosing AI use will damage their reputation. And they’re not wrong. The Duke research confirms that colleagues do judge AI users negatively. This creates organizational pressure against transparency.

How Disclosure Timing Changes Everything

Not all disclosure is equal. When you disclose matters almost as much as whether you disclose.

Research on AI disclosure timing in customer service found that moving from immediate upfront announcement to disclosure after initial rapport-building doubled the number of calls that became meaningful conversations. The content of the disclosure was the same. Only the timing changed.

This makes sense psychologically. Upfront disclosure triggers defensive processing. The customer immediately categorizes the interaction as “dealing with a bot” rather than “solving my problem.” Their expectations shift. Their patience shortens. Their willingness to engage drops.

Disclosure after establishing rapport feels different. The conversation has already started. Some value has already been exchanged. The disclosure becomes an explanation rather than a warning.

This raises ethical questions. Is delayed disclosure manipulative? Or is it simply good communication? The answer probably depends on how long the delay is and what happens during it.

State-level AI regulations in the United States remain inconsistent. California requires disclosure when consumers interact with generative AI systems. Colorado mandates disclosure when AI makes decisions affecting consumers. Utah requires disclosure only upon request. Some states require upfront disclosure “unless obvious to a reasonable person.”

What counts as obvious? The regulations don’t define it. A chatbot with a robot avatar might qualify. A chatbot named “Alex” probably doesn’t.

The FCC proposed rules in 2024 suggesting that outbound AI calls should require disclosure “at the outset.” But the exact language remains undefined. What counts as “the outset”? The first second? The first sentence? After the greeting?

Businesses operating across jurisdictions face compliance puzzles. What’s required in California may not be required in Texas. What’s required in the EU may be stricter than anywhere in the US. And all of it is subject to change as regulators continue developing their frameworks.
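
One way teams handle this patchwork is to keep each jurisdiction's disclosure rule in a single lookup rather than scattering it through chat flows and email templates. The sketch below is a minimal illustration of that structure in Python; the jurisdictions, rule values, and the `disclosure_rule` helper are simplified assumptions for the example, not legal guidance.

```python
# Illustrative only: keep per-jurisdiction AI disclosure rules in one place.
# The rule values below are simplified assumptions for this sketch,
# not a statement of what any specific law requires.

DISCLOSURE_RULES = {
    # jurisdiction: (when to disclose, note)
    "EU":         ("upfront", "notify users they are interacting with an AI system"),
    "California": ("upfront", "disclose interactions with generative AI"),
    "Colorado":   ("upfront", "disclose when AI affects consequential decisions"),
    "Utah":       ("on_request", "disclose if the consumer asks"),
    "default":    ("upfront", "safest baseline when the rule is unclear"),
}

def disclosure_rule(jurisdiction: str) -> tuple[str, str]:
    """Return (timing, note) for a jurisdiction, falling back to the default."""
    return DISCLOSURE_RULES.get(jurisdiction, DISCLOSURE_RULES["default"])

if __name__ == "__main__":
    for region in ("California", "Utah", "Texas"):
        timing, note = disclosure_rule(region)
        print(f"{region}: disclose {timing} ({note})")
```

The point is the structure, not the specific entries: when a regulator changes a rule, the update lives in one table instead of in every customer touchpoint.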

What Actually Helps

The Arizona research identified one factor that reduced the trust penalty. When the AI’s helpfulness was explicitly acknowledged, negative perceptions diminished. Framing matters.

Compare these two disclosures:

“This content was generated by AI.”

“Our AI analyzed 200 data points to identify the patterns most relevant to your situation. A team member reviewed the final output.”

The first sounds like a warning label. The second sounds like a capability statement. Same underlying truth. Different psychological impact.

Christopher Penn, writing about AI disclosure for copyright protection, notes a different reason to disclose: legal strategy. US courts have ruled that purely AI-generated content cannot receive copyright protection. Human-created work can. By clearly labeling which elements are AI-made versus human-made, creators strengthen their copyright claims over the human portions.

“You shouldn’t claim work you didn’t actually do,” Penn argues. But beyond ethics, there’s a practical benefit: disclosure clarifies what you own.

The Customer Service Exception

Some contexts flip the transparency penalty entirely.

Grant, an ecommerce coordinator at Arcade Belts, shared his team’s experience with AI-powered customer service: “A lot of times, I’ll receive the response, ‘Wow, I didn’t know that was AI.’”

His customers weren’t upset. They were impressed. The AI was fast, accurate, and available. Disclosure didn’t hurt. If anything, it demonstrated capability.

This makes sense in context. Customer service is about problem resolution. Speed matters. Twenty-four-hour availability matters. If AI delivers those benefits, the disclosure becomes a feature rather than a bug.

The trust penalty research focused on contexts where AI might be seen as cutting corners. Content creation. Grading. Design work. Areas where human effort is part of the perceived value.

Customer support is different. Nobody values slow responses or limited availability. AI addresses real pain points. Disclosure highlights that.

Brand Positioning Changes the Calculation

A technology company disclosing AI use reinforces its identity. A craft brewery disclosing AI use contradicts its identity. Context shapes everything.

Companies that position themselves as innovative and cutting-edge face lower trust penalties for AI disclosure. Their customers expect technological sophistication. AI fits the brand promise.

Companies that position themselves around authenticity, craftsmanship, or human connection face higher penalties. AI doesn’t fit the story they’ve told.

This suggests disclosure strategy should align with brand strategy. For some businesses, full transparency makes sense. For others, the minimum legally required disclosure makes more sense. Neither approach is inherently more ethical. They’re different fits for different situations.

The Uncomfortable Middle Ground

Most businesses exist in a middle ground. They’re not technology companies where AI is expected. They’re not artisan producers where AI feels jarring. They’re regular businesses trying to serve customers efficiently.

For them, the research suggests a few principles:

Lead with value. Disclose AI use in the context of what it does for the customer, not as a standalone fact.

Be matter-of-fact. Don’t apologize. Don’t over-explain. State what happened and move on.

Highlight human involvement. “AI-assisted, human-reviewed” performs better than “AI-generated.” If humans are involved, say so.

Match intensity to stakes. High-stakes decisions warrant clear, prominent disclosure. Low-stakes interactions can get lighter treatment.

Don’t delay more than necessary. Early disclosure allows customers to calibrate expectations. Late disclosure feels like a reveal.

What the Future Probably Looks Like

The regulatory trajectory is clear. More disclosure requirements, not fewer. More jurisdictions implementing rules. More specificity about what must be said and when.

The psychological research is also clear. Disclosure carries costs. Those costs may decrease over time as AI becomes normalized. They haven’t yet.

What this means practically: businesses will need to disclose more than they currently do, and they’ll need to get good at disclosing in ways that minimize damage. The companies that figure this out first gain an advantage. The companies that don’t will face both regulatory penalties and ongoing trust erosion.

The transparency trap won’t resolve itself. Customers will continue to trust disclosers less while demanding more disclosure. Regulations will continue tightening while psychological research continues confirming the penalty. The gap will persist.

Learning to operate within that gap is the actual skill.


Related reading: AI Data Privacy Compliance covers GDPR and CCPA requirements. AI Copyright Ownership Issues explores the intellectual property implications of AI-generated content.

