The battle over who controls artificial intelligence, and what it can be used for, has never felt more urgent. Governments are moving to harness AI for defense and national security, while tech companies push back and draw clear limits. This fight isn’t just about business; it’s about the future of democracy itself. And this week, it boiled over in Silicon Valley, drawing in ordinary people across California.
A video now spreading rapidly across social media shows the sidewalk outside Anthropic’s San Francisco, California, office covered in colorful chalk messages and thank-you notes from locals. They come from people who want the company to know they are watching, and that it matters.
The impromptu tribute came after Anthropic told the Pentagon it wouldn’t let them use its AI, Claude, to conduct surveillance on Americans. The Department of War had issued a “last and final” ultimatum demanding Anthropic grant unrestricted access to its AI for “all lawful purposes.” CEO Dario Amodei responded publicly that his company “cannot in good conscience accede” to those demands.
Not long after, President Donald Trump told every federal agency to drop Anthropic’s tech completely. Then Defense Secretary Pete Hegseth took it a step further and hit Anthropic with a “supply chain risk to national security” label, something the government usually saves for foreign adversaries.
The chalk messages, which included phrases like “God Loves Anthropic,” “You Give Us Courage,” and “Have Courage, Keep Going,” were a spontaneous act of public solidarity. It’s an unmistakable signal that the choices made by executives and government officials don’t stay in their boardrooms. They spill out and hit real lives.
Internet Reacts to California’s Chalk Tribute Outside Anthropic HQ
The decision quickly sparked passionate reactions online, with many praising the company’s stance. “History in the making. I build my entire stack on Claude and today I’m prouder of that bet than ever. A company willing to walk away from $200M over principles is a company I trust with my infrastructure,” one user wrote about Anthropic and its model Claude.
Others shared emotional responses. “Saw it this morning and teared up a little tbh,” one person commented. Another added, “Like I said, not only Anthropic made a great decision, but they also won MILLIONS of loyal users in the near future.”
Some framed the moment in cinematic terms. “This is the type of post that shows on a flashback in the movie,” one user wrote. Another joked, “I guess anthropic is on its way to get mandate of heaven.”
A few compared the situation to dystopian fiction. “This feels like a Fallout series level dystopia story,” one comment read, referencing Fallout.
This standoff doesn’t fade away when the headlines do. AI-powered mass surveillance, the ability to watch, profile, and track huge numbers of people, stands out as one of the biggest threats to our civil liberties right now. In the past, governments have used these kinds of tools to crush dissent, target minorities, and chip away at the freedoms they are supposed to protect. And as AI gets smarter, the rules we set for its use aren’t just about company guidelines anymore. They decide whether democracy actually survives.