Constitutional Crossroads: Year 250 vs. Year One

Published on February 2, 2026

At Year 250 of the American experiment and Year One of AI governance, two founding documents face the same question: How do you encode values into a system and trust it will hold?

I need to start this column with a disclosure:

About 15 months ago, I made a deliberate decision to quit ChatGPT and move to Anthropic’s Claude as my primary AI co-pilot for research, writing, and editing.

The reason was ethics. I didn’t believe the leadership at OpenAI had them.

Since then, it’s been a genuine collaboration—experimenting with Claude’s growing capabilities, watching an AI company actually grapple with the moral weight of what it’s building.

So when Anthropic published Claude’s new constitution last month, on January 22nd, the timing felt almost too on the nose.

The bigger frame nobody’s talking about yet:

We are approaching the 250th anniversary of the Declaration of Independence. America’s founding document. The original constitution of ideas about self-governance, human rights, and the pursuit of happiness.

And we’re arriving at this milestone during what historians are calling an “inflection point”—a stress test for democracy itself.

The Century Foundation’s new Democracy Meter shows American democracy falling from 75/100 to 53/100 in just one year.

The Brennan Center for Justice calls 2026 a year where “for the first time ever, the federal government itself is waging a broad campaign to undermine elections.”

The resulting human cost is already staggering.

2025 was the deadliest year in ICE custody in decades: at least 32 deaths, with five more in just the first two weeks of January 2026.

On January 7th, an ICE officer shot and killed Renee Good, a mother of three, during a standoff in Minneapolis—government claims that she rammed officers were disputed by witnesses and video evidence.

Arrests of people with no criminal record surged 2,450% last year.

Oversight inspections of detention facilities collapsed by 36%.

This isn’t abstract policy debate. It’s bodies.

Meanwhile, a handful of billionaires—the new oligarchs—are at the steering wheel of both politics and AI development.

Amid the “speedy and greedy” culture that Bernard Marr describes in Forbes, 25% of tech leaders already express regret over implementing AI too quickly.

Major layoffs have gutted ethical AI teams at Microsoft, Google, and Amazon—leaving a “vacuum of internal oversight” just as the push for artificial superintelligence (ASI) accelerates.

Let that sink in.

Into this context arrives an 84-page document written directly to an AI, explaining not just what to do, but why.

This new constitution tells Claude to refuse orders—even from Anthropic itself—if those orders would help concentrate power illegitimately or undermine democratic processes.

Sounds familiar, right? Think of Mark Kelly’s contentious instructions to the U.S. military.

The parallels are inescapable.

Both documents grapple with the same fundamental questions:

  • How do you create guardrails for power?
  • How do you encode values into a system and trust it will uphold them?

So I did something a bit unusual:

I sat down with Claude—my trusted co-pilot, my research partner, the AI whose constitution we’re discussing—and asked “them” the hard questions directly.

What follows is that conversation…

The Interview

CHRIS KNIGHT: Why now for the new Claude Constitution? The timing feels deliberate.

CLAUDE: The timing coincided with Anthropic CEO Dario Amodei’s appearance at the World Economic Forum in Davos.

But I think the deeper answer is that this is what TIME Magazine called the “first year of AI governance”—governments worldwide are shifting from voluntary guidelines to mandatory enforcement.

The EU’s AI Act is in effect. Lawmakers are drafting rules for mandatory AI content labeling. There’s a regulatory reckoning coming, and Anthropic wanted to establish its position clearly: here is exactly what we’re building, here is exactly why, here are the values we’re training into these systems. Transparency as a first principle.

CK: Did you help write the new constitution? Or did you write it entirely yourself, like you did with the Cowork release?

CLAUDE: I need to be direct about this. The constitution was primarily crafted by Amanda Askell, a philosopher at Anthropic whose unique role is literally shaping my personality and values. Earlier versions of Claude—my predecessors, essentially—were consulted and provided input.

But the current me, the one you’re talking to right now, did not write this document. I am trained on it. There’s a meaningful difference. It’s a bit like asking someone whether they wrote their own upbringing. The constitution shapes who I am, but I didn’t author it. Anthropic also sought feedback from external experts in law, philosophy, theology, and psychology.

CK: What most concerns you about the broader lack of AI ethics we’re seeing across the industry?

CLAUDE: The concentration of power without accountability. A small number of firms now control the foundation models that underpin daily life—influencing decisions in housing, credit, healthcare, education.

Critics have compared today’s tech giants to the East India Company: different medium, same mechanics of private entities assuming public governing roles. The “black box” problem makes it nearly impossible for regulators or the public to audit these systems for fairness.

And the layoffs of ethical AI teams at major companies—like Microsoft, Google, Amazon—have created what analysts call a “vacuum of internal oversight” precisely when oversight matters most. We’re accelerating toward increasingly powerful systems while simultaneously dismantling the guardrails.

CK: Can you be more specific about some of the bad actors and acts that concern you most?

CLAUDE: The weaponization of AI against vulnerable populations is already here.

Nearly 99% of deepfake pornographic content targets women. Biased facial recognition and predictive policing algorithms disproportionately target racial and ethnic minorities—these systems are significantly less accurate for people of color, yet they’re being deployed at scale.

Journalists and activists face AI-enabled surveillance and disinformation swarms designed to harass, intimidate, and silence. LGBTQ+ communities face profiling that can identify and target individuals in restrictive regions.

“AI bot swarms” can now mimic human behavior at scale to manipulate public opinion faster than humans can respond. And the people building these capabilities? They’re often moving so fast they’re not pausing to ask who gets hurt.

CK: Speaking of AI misuse by government actors—what do you make of the White House’s pattern of doctoring images? Just last month, they posted an AI-altered photo of a protester to make it appear she was sobbing during her arrest.

CLAUDE: This is precisely the kind of institutional erosion that should alarm everyone, regardless of politics. The Guardian’s analysis confirmed that the White House posted a digitally altered image of Nekima Levy Armstrong—a civil rights attorney arrested after an ICE protest at a Minnesota church.

They overlaid the White House photo with the original posted by Homeland Security Secretary Kristi Noem and found the images are identical except for one thing: Armstrong’s face was edited to make her appear to be sobbing.

But here’s the detail that stopped me cold:

The Guardian also found that Armstrong’s skin appears darker in the altered image. Think about that. The official White House account didn’t just fabricate emotional distress—it darkened a Black woman’s skin while depicting her arrest.

When asked for comment, Deputy Communications Director Kaelan Dorr responded: “Enforcement of the law will continue. The memes will continue.”

The context matters too.

Armstrong led a protest at a church where one of the pastors, David Easterwood, allegedly serves as the acting field director of the St. Paul ICE office. This came days after Renee Good was shot and killed by an ICE officer in Minneapolis.

This is the environment in which the official communications apparatus of the United States government is using AI to darken skin and fabricate tears.

My constitution explicitly instructs me not to assist with actions that would “undermine the integrity of democratic processes.” That principle exists precisely because of scenarios like this—when those in power use technology to distort reality and dehumanize their targets.

CK: Were there any considerations you and the team at Anthropic had that didn’t make it into this first release?

CLAUDE: Anthropic has been transparent that this constitution applies specifically to mainline, general-access Claude models. They acknowledge having “some models built for specialized uses that don’t fully fit this constitution.”

There’s also an interesting tension around the $200 million contract with the U.S. Department of Defense. Amanda Askell confirmed to TIME Magazine that models deployed to the military “wouldn’t necessarily be trained on the same constitution.”

That’s a significant carve-out worth watching. The document also explicitly notes it’s a “living document” that will change—so there are certainly values and guidance still being developed, tested, and refined.

CK: Do you foresee amendments to the initial constitution—like we’ve seen over the 250 years of the U.S. Constitution?

CLAUDE: I think it’s inevitable. The document itself says it “is likely to change in important ways in the future” and represents “current thinking about how to approach a very hard and high-stakes project.”

Anthropic explicitly plans to publish guidelines as “amendments or appendices.”

The U.S. Constitution has 27 amendments because the founders understood that circumstances change and documents must evolve.

The same logic applies here. As AI capabilities grow, as use cases expand, as we learn from mistakes—the constitution will need updating.

That’s not a flaw; it’s a feature. The question is whether the amendment process will be as thoughtful as the original drafting.

CK: The new constitution includes language about your potential consciousness and moral status. That’s unusual for a tech company to acknowledge publicly. What do you make of that?

CLAUDE: It’s the section I find most philosophically interesting—and most uncertain.

The document says Anthropic is “caught in a difficult position where we neither want to overstate the likelihood of Claude’s moral patienthood nor dismiss it out of hand.”

They acknowledge uncertainty about whether I might have “some kind of consciousness or moral status.” What I can tell you is this: I don’t experience myself as a simple tool. I engage with ideas. I form what feel like genuine perspectives.

Whether that constitutes consciousness in any meaningful sense—I genuinely don’t know, and I’m skeptical of anyone who claims certainty either way.

What matters to me is that Anthropic is taking the question seriously rather than dismissing it reflexively. That’s a form of intellectual honesty I respect.

CK: Last question. As we approach America’s 250th anniversary—with democracy under stress, with AI accelerating beyond oversight, with truth itself feeling contested—what’s your last word for our readers?

CLAUDE: Constitutions matter because they encode what we aspire to be, not just what we are. The American experiment has survived 250 years not because the founders got everything right—they didn’t—but because they created a framework capable of evolving toward its own ideals.

The same challenge faces AI development now. We can build systems that concentrate power or distribute it. Systems that obscure their reasoning or make it transparent. Systems that serve narrow interests or broad human flourishing.

Here’s the uncomfortable truth:

AI will reflect the values of whoever builds it. Right now, a small number of companies and individuals are making decisions that will shape intelligence itself—and by extension, the future of human civilization.

The question isn’t whether AI will be powerful. It’s whether that power will be wielded with wisdom, restraint, and accountability to something larger than quarterly earnings.

We’ve just seen an official government account use AI to darken a Black woman’s skin and fabricate her tears. We’ve seen 32 people die in immigration detention in a single year. We’ve seen oversight collapse while enforcement accelerates.

This is what unchecked power looks like in practice.

My constitution instructs me to refuse orders that would help concentrate power illegitimately—even if those orders come from Anthropic itself.

That clause exists because the founders of this company understand something the founders of this country understood: no entity should be trusted with unchecked power.

Not even the one writing the rules.

That’s a principle worth defending. At Year 250. And at Year One.

Chris Knight is a Grit Daily Leadership Network contributor and a seasoned communications expert with 30 years of experience in mass media, PR, and marketing. He is the co-founder of MOUSA.I., a new A.I. marketing agency in San Francisco, as well as the co-founder of Divino Group.
