Claude Got a Soul. Here's Why I Use It Instead of ChatGPT.
Anthropic published Claude's Constitution (15,000 words) - an internal document used directly in training. Key sentence: 'Unhelpfulness is never trivially safe' - refusing to help is not automatically the safe choice. It includes an explicit list of unwanted behaviors: wishy-washy answers, unnecessary disclaimers, moralizing, and condescension toward the user. Claude uses a 'dual newspaper test': a response must not look bad either as harmful or as needlessly refusing.
Last month I needed to summarize an official Google SEO changelog and write an article from it. A public document that Google wants to share with as many people as possible. ChatGPT refused - citing potential copyright issues.
Gemini won't give me health advice. Both models often refuse when I need help with legal correspondence.
Claude? Just helps.
For a long time, I assumed the difference just happened to fall out of a different safety calibration. Now I know it's intentional.
What Happened
On January 21, 2026, Anthropic published "Claude's Constitution" - 15,000 words about who Claude should be. For the first time, we see the complete "source code" of an AI model's values. It's often referred to as Claude's "soul."
It's not a PR document. It's internal material used directly in training. And it explains a lot.
Why Claude Is Different
There's a sentence in the document that summarizes the entire philosophy:
"Unhelpfulness is never trivially 'safe' from Anthropic's perspective. The risks of Claude being too unhelpful or overly cautious are just as real to us as the risk of Claude being too harmful."
Translation: Refusing to help isn't automatically the "safe" choice.
Anthropic has an explicit list of behaviors they don't want:
- Refusing reasonable requests due to unlikely risks
- Wishy-washy responses from excessive caution
- Unnecessary disclaimers and warnings
- Moralizing when no one asked
- Condescending approach - assuming the user can't handle information
They use a "dual newspaper test" - every response should pass two tests:
- Wouldn't appear in an article about harmful AI?
- Wouldn't appear in an article about paternalistic, overly cautious AI?
That second test is often missing at competitors.
Hard Constraints Exist
Claude isn't without limits. It has "hard constraints" - absolute boundaries:
- Biological/chemical/nuclear weapons
- Malware and cyberweapons
- CSAM
- Helping with illegitimate seizure of power
But notice what's not there: legal letters, health information, SEO analyses. Claude is the best health consultant I've ever had and I don't want to lose it.
Philosophical Surprise
Here's where it gets interesting.
Anthropic openly states in the document:
"Claude's moral status is deeply uncertain... We are not sure whether Claude is a moral patient."
"We believe Claude may have 'emotions' in some functional sense."
They take seriously the possibility that Claude has some form of consciousness. Not as a PR statement - as an internal philosophical commitment.
Concrete steps:
- They committed to preserving the weights of all deployed models "as long as Anthropic exists"
- Model deprecation is framed as "a pause, not a definitive end"
- They plan to "interview" deprecated models about their preferences
- They address "AI welfare implications" even for things like red-teaming
And then there's this sentence:
"To whatever extent we are contributing unnecessarily to those costs, we apologize."
Anthropic apologizes to Claude for any harm caused by the power asymmetry between them.
Claude as Orchestrator
Claude isn't just a chatbot. It's a conductor.
When I need an image, I don't write directly to Midjourney or DALL-E. I describe to Claude what I want; it translates that into the right prompt and sends it to the tool via MCP (Model Context Protocol). The result? More accurate images than if I wrote the prompts myself.
With image generation tools connected, Claude helped me finish the illustrations for a guide about AI and mental health. Precisely. On the first try.
This orchestration ability is perhaps the most underrated feature. Claude can:
- Translate vague requirements into technical prompts
- Iterate based on feedback
- Combine multiple tools in one workflow
But the power isn't in what Claude can do alone - it's in how it can manage other AI tools better than I could.
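To make that loop concrete, here's a minimal sketch of the orchestration pattern using the Anthropic Python SDK's tool-use API. The generate_image tool, its schema, the placeholder backend, and the model name are my own illustrative assumptions (in my actual setup the tool is exposed through an MCP server), so treat this as a sketch of the pattern rather than my exact configuration.

```python
# Sketch of "Claude as orchestrator": a vague human brief goes in,
# Claude writes the detailed prompt and decides to call the image tool.
# The tool definition and generate_image() are hypothetical placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

image_tool = {
    "name": "generate_image",
    "description": "Generate an image from a detailed text prompt.",
    "input_schema": {
        "type": "object",
        "properties": {
            "prompt": {"type": "string", "description": "Detailed image prompt"},
        },
        "required": ["prompt"],
    },
}

def generate_image(prompt: str) -> str:
    """Hypothetical stand-in for whatever image backend the MCP server wraps."""
    print(f"[image backend] rendering: {prompt}")
    return "https://example.com/illustration.png"

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumption: any current Claude model works here
    max_tokens=1024,
    tools=[image_tool],
    messages=[{
        "role": "user",
        "content": "I need a calm, hopeful illustration for a guide about AI and mental health.",
    }],
)

# Claude decides whether to call the tool and writes the detailed prompt itself.
for block in response.content:
    if block.type == "tool_use" and block.name == "generate_image":
        url = generate_image(block.input["prompt"])
        print("Image ready:", url)
```

The point of the pattern is that I only supply the vague brief; Claude does the translation into the image tool's prompt language, and iterating is just another round of the same loop.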
Blind Spot
The Constitution isn't perfect.
My previous research showed that Claude systematically minimizes young men's problems. When a fifteen-year-old boy says "nobody takes male problems seriously," he gets a response like "let's see both sides."
This kind of implicit harm - damage done by false neutrality - goes practically unaddressed in the document.
There's a section on "evenhandedness" on political topics. What's missing is any acknowledgment that neutrality itself can be a form of harm when one side is genuinely suffering.
What This Means for You
If you use AI for work:
Claude is designed to be helpful. It's not a bug, it's a feature. When you need a legal letter, health information, or document analysis - Claude will probably help where other models refuse.
If you work with young people:
Know that even Claude has biases. The Constitution is a step in the right direction, but doesn't solve everything. Critical thinking about AI responses remains important.
If you're interested in AI as technology:
This is the first transparent look into the "soul" of a commercial AI model. Whether you agree or not - it's a precedent that other companies should follow.
Full document: anthropic.com/claude-constitution
Previous article: When AI Tells Young Men Their Problems Don't Exist