Anthropic released 1,250 interviews about AI at work.
Their headline?
"Predominantly positive sentiments about AI’s impact on their professional activities"
We ran the same interviews through structured LLM analysis.
1,250 conversations. 47 dimensions per interview. 58,750 data points.
85.7% of people are living with unresolved AI tensions.
Not negative. Not positive. Stuck. They haven’t resolved how they feel about AI. They’re just using it anyway. And one group is struggling more than everyone else.
72%
of creatives face identity threat
85.7%
tensions remain unresolved
6.4%
of scientists feel meaning disruption
01 — THE FINDING NOBODY EXPECTED
There are three tribes. One is in crisis.
The data revealed three distinct psychological profiles. Scientists are thriving. The workforce is managing. But creatives? They’re experiencing something closer to an existential reckoning.
Creatives
The Existential Crisis
5.38
struggle score
THE PARADOX
71.7% face identity threat, yet 74.6% are increasing AI use. They’re struggling the most and adopting the fastest.
71.7%
identity threat
high + moderate
44.8%
meaning disruption
22.4%
guilt/shame
74.6%
increasing use
TOP ARCHETYPES
Reluctant Adopter (23.9%), Cautious Experimenter (17.2%), Conflicted User (16.4%)
FROM THE INTERVIEWS
"One thing I will never use AI for is getting it to write or re-write a section of text for me as that’s no better than employing a ghost writer and I’d feel a fraud."
Creative Professional, Interview #847 (authenticity)
Creatives: 5.38
Workforce: 4.01
Scientists: 3.63
Struggle score combines identity threat, skill anxiety, meaning disruption, guilt/shame, unresolved tensions, and ethical concerns (0-10 scale).
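As a rough illustration, a composite like this can be computed as an equal-weight average of the component rates, rescaled to 0-10. The article does not publish its exact weights, so the weighting and some of the input values below are hypothetical:

```python
# Hypothetical reconstruction of the composite "struggle score".
# The report does not publish its weights, so this sketch assumes
# an equal-weight mean of six component rates (each 0-1), scaled
# to a 0-10 range.

COMPONENTS = [
    "identity_threat",
    "skill_anxiety",
    "meaning_disruption",
    "guilt_shame",
    "unresolved_tensions",
    "ethical_concerns",
]

def struggle_score(rates: dict) -> float:
    """Equal-weight mean of component rates (0-1), scaled to 0-10."""
    vals = [rates[name] for name in COMPONENTS]
    return round(10 * sum(vals) / len(vals), 2)

# Illustrative inputs loosely based on the creatives' reported rates.
creatives = {
    "identity_threat": 0.717,
    "skill_anxiety": 0.50,       # assumed -- not reported above
    "meaning_disruption": 0.448,
    "guilt_shame": 0.224,
    "unresolved_tensions": 0.857,
    "ethical_concerns": 0.52,    # assumed -- not reported above
}
print(struggle_score(creatives))  # → 5.44, in the ballpark of the reported 5.38
```

A real reconstruction would need the per-interview component scores; this only shows the shape of the index.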
Here’s what makes this bizarre:
Creatives have the highest struggle scores and the highest adoption rates.
74.6% are increasing their AI use. Meanwhile, 44.8% experience "meaning disruption." That’s a fancy way of saying they’re questioning whether their work matters anymore.
They’re not avoiding AI because it hurts. They’re running toward it despite the pain.
02 — THE MOST IMPORTANT NUMBER
85.7% haven’t figured it out yet.
The interviews revealed deep tensions. Internal conflicts about AI. "Efficiency vs. Quality." "Convenience vs. Skill." "Speed vs. Depth." Almost everyone has them. Almost no one has resolved them.
This is the most important finding. People aren’t resolving their AI tensions. They’re living with them. Cognitive dissonance is the norm, not the exception. They don’t need resolution to adopt. They adopt despite the unresolved conflict.
"AI saves me hours, but sometimes I wonder if I’m losing something in the process."
Marketing Manager, Interview #234 (efficiency vs. quality)
WHAT THEY’RE TORN ABOUT
01 Efficiency vs. Quality (238 mentions): 19%
02 Efficiency vs. Authenticity (196 mentions): 15.7%
03 Convenience vs. Skill (127 mentions): 10.2%
04 Automation vs. Control (98 mentions): 7.8%
05 Productivity vs. Creativity (86 mentions): 6.9%
06 Speed vs. Depth (72 mentions): 5.8%
07 Assistance vs. Dependence (68 mentions): 5.4%
08 Innovation vs. Tradition (54 mentions): 4.3%
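The percentages above appear to be mention counts over the 1,250 interviews (238/1,250 ≈ 19%, and so on). A quick sketch confirming the arithmetic:

```python
# The tension percentages appear to be mention counts divided by
# the 1,250 interviews; this reproduces the figures in the list.

N_INTERVIEWS = 1250

tensions = {
    "Efficiency vs. Quality": 238,
    "Efficiency vs. Authenticity": 196,
    "Convenience vs. Skill": 127,
    "Automation vs. Control": 98,
    "Productivity vs. Creativity": 86,
    "Speed vs. Depth": 72,
    "Assistance vs. Dependence": 68,
    "Innovation vs. Tradition": 54,
}

for name, count in tensions.items():
    print(f"{name}: {100 * count / N_INTERVIEWS:.1f}%")
# First line printed: "Efficiency vs. Quality: 19.0%"
```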
THE PATTERN
Every major tension follows the same structure: short-term benefit vs. long-term concern. AI delivers immediate value (speed, efficiency, convenience) while creating unresolved anxiety about the future (quality, authenticity, skill, control).
Think about what this means.
We’ve been told AI adoption is about capability. Can people use it? Will they learn? Do they have access?
But 85.7% of people aren’t stuck on capability. They’re stuck on meaning. They’re using AI every day while simultaneously feeling conflicted about using it.
Cognitive dissonance isn’t a barrier to adoption. It’s the default state.
03 — WHAT DESTROYS TRUST
The #1 trust killer isn’t what you’d expect.
Our analysis cataloged every trust driver and distrust driver across 1,250 interviews. The top trust destroyer? Not "errors." Not "inaccuracy." It’s hallucinations. Specifically, the confidence with which AI gets things wrong.
#1 TRUST DESTROYER
Hallucinations: 121 mentions
Not "inaccuracy." Not "errors." Hallucinations. The confidence with which AI makes mistakes is more damaging than the mistakes themselves. It’s the confident wrongness that destroys trust.
"I always assume the AI is lying to me."
Analyst, Interview #342 (verification mindset)
WHAT BUILDS TRUST
Accuracy: 312
Efficiency: 287
Consistency: 234
Transparency: 198
Reliability: 176
Time savings: 142
WHAT DESTROYS TRUST
Hallucinations: 121
Inaccuracy: 108
Lack of transparency: 96
Bias: 87
Inconsistency: 76
Privacy concerns: 68
Over-reliance: 62
TRUST LEVEL BY GROUP
Group        Low/Cautious   Moderate   High
Workforce    36.9%          35.0%      28.1%
Creatives    43.2%          33.6%      23.1%
Scientists   73.6%          17.6%      8.8%
04 — WHY CREATIVES FEEL GUILTY
They think they’re cheating at being themselves.
52.2% of creatives frame AI use through "authenticity." Not harm, not fairness, but whether using AI makes them less real. The moral language they use tells the whole story: "cheating," "lazy," "shortcut."
CREATIVES’ ETHICAL FRAME
52.2% cite authenticity
vs. 24.6% workforce, 13.7% scientists
GUILT/SHAME CORRELATION
83% of guilt-expressers
cite "authenticity" as their ethical frame
This explains everything. Creatives don’t frame AI use through "harm" or "fairness." They frame it through authenticity. Using AI feels like cheating at being themselves. The moral vocabulary centers on what AI use says about them, not its impact on others.
"One thing I will never use AI for is getting it to write or re-write a section of text for me as that’s no better than employing a ghost writer and I’d feel a fraud."
Creative Professional, Interview #847 (fraud feeling)
THE MORAL VOCABULARY OF AI USE
"Authenticity"
77 mentions
"Cheating"
74 mentions
"Lazy"
52 mentions
"Shortcut"
48 mentions
"Integrity"
45 mentions
"Honest"
41 mentions
"Fair"
38 mentions
"Responsible"
35 mentions
"Ethical"
32 mentions
"Genuine"
28 mentions
The vocabulary centers on effort and authenticity, not on AI’s impact on others.
WHAT PREDICTS GUILT/SHAME?
Predictor              Guilt group   Non-guilt
Authenticity frame     83%           22%
High identity threat   42%           8%
Meaning disruption     67%           12%
Hide AI use            58%           24%
DISCLOSURE PATTERNS
Selective (tell some, not others): 57.4%
Open (tell everyone): 33%
Hidden (tell no one): 9.6%
HIDING → GUILT CONNECTION
Those who hide AI use: 18.3% guilt
Transparent users: 6.2% guilt
Guilt is roughly three times as common among those who hide their AI use.
Look at that moral vocabulary again.
"Cheating." "Lazy." "Shortcut."
These aren’t words about AI’s impact on others. They’re words about what AI use says about you.
This isn’t ethics in the philosophical sense. It’s moral identity. Using AI feels like violating who they’re supposed to be.
05 — THE HEALTHY BASELINE
Scientists figured something out.
63.2%
have LOW identity threat
vs. 36.4% workforce, 22.4% creatives
6.4%
experience meaning disruption
vs. 13.9% workforce, 44.8% creatives
Scientists have the lowest "high trust" rate (8.8%) but also the lowest anxiety. How?
They treat AI as a tool, not a collaborator. They verify its output (52.9% say they always verify). They keep psychological distance. Their identity rests on their method, not their output.
Scientists trust through verification, not faith. That makes all the difference.
The lesson: Healthy AI adoption means building verification into your process. Keep your identity separate from AI’s output. Scientists do this naturally. Others can learn.
FROM SCIENTISTS
"I only use the AI when it’s not going to be publishable work, i.e. I’m writing a method, make a list of steps and then ask the AI to clean it up."
Research Scientist, Interview #1198 (task boundaries)
06 — WHAT EMERGED
After 1,250 conversations, these rules emerged.
Nobody wrote them down. Nobody taught them. But almost everyone follows them: the unspoken constitution of AI at work, emerging organically from thousands of individual experiences.
07 — IN THEIR WORDS
The quotes that stuck with us.
Some voices you don’t forget.
"I am not quite trusting enough in AI yet to allow any communications to leave my desk in my name that are not first reviewed and approved by me directly."
08 — SO WHAT?
What this means for you.
Anthropic’s headline was "predominantly positive." They weren’t wrong. People do see benefits. But benefits don’t equal resolution.
85.7% of people are using AI while simultaneously feeling unresolved about it. That’s cognitive debt. And like all debt, it compounds.
If you’re a creative feeling like AI is eroding your sense of self, you’re not alone. You’re in the majority. The path forward is conscious adoption: understanding what you’re trading, what you’re protecting, and why it matters to you.
The scientists figured it out: verify everything, keep your identity separate, treat AI as a tool rather than a collaborator. That’s resilience.
Analysis Method
GPT-4o-mini with structured outputs (Pydantic schema). Each interview analyzed across 47 dimensions including identity threat, trust drivers, emotional triggers, ethical frames, and tension patterns.
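To give a sense of the approach: the study used a Pydantic schema with GPT-4o-mini's structured outputs. The sketch below uses stdlib dataclasses and a handful of illustrative fields standing in for the real 47-dimension schema; every field name and value here is hypothetical.

```python
from dataclasses import dataclass, field, asdict

# Illustrative stand-in for the structured-output schema. The actual
# analysis used a Pydantic model with 47 dimensions; these fields and
# values are hypothetical examples of that shape.

@dataclass
class InterviewAnalysis:
    interview_id: int
    identity_threat: str                 # e.g. "low" / "moderate" / "high"
    meaning_disruption: bool
    trust_level: str                     # e.g. "low" / "moderate" / "high"
    ethical_frame: str                   # e.g. "authenticity", "harm", "fairness"
    tensions: list[str] = field(default_factory=list)
    key_quote: str = ""

# A record like the LLM would return for one interview (values invented).
record = InterviewAnalysis(
    interview_id=847,
    identity_threat="high",
    meaning_disruption=True,
    trust_level="low",
    ethical_frame="authenticity",
    tensions=["efficiency vs. authenticity"],
    key_quote="I'd feel a fraud.",
)
print(asdict(record)["ethical_frame"])  # → authenticity
```

Constraining the model to a fixed schema like this is what makes 1,250 free-form interviews aggregable into the percentages reported above.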
Cost & Efficiency
1,250 analyses completed. 72.7% prompt cache hit rate. Total cost: $0.58. 100% success rate.
Schema Design
Comprehensive qualitative analysis schema including: AI conceptualization, emotional state, task boundaries, workplace context, trust profile, identity dimension, adaptation journey, ethical dimension, tensions, key quotes, and emergent themes.
Limitations
LLM interpretation introduces potential bias. Sample sizes vary (scientists n=51 is small). "Struggle score" is a composite index we created, not a validated psychological measure. Results should be viewed as exploratory.
Reproducibility
Dataset is public. Analysis code and full results available on request. We encourage independent verification.
Analysis by Playbook Atlas
Data: Anthropic Interviewer Dataset (MIT License)