
TestyTim.com

Illustration Credit: Timothy R. Butler/Nano-Banana-Pro

At Arms Over Anthropic

By Timothy R. Butler | Posted at 2:12 PM

Remember when everyone on the Right was rightly upset at the government censoring opinions it found distasteful? Somehow that seems forgotten in this weekend’s other war.

I am, of course, speaking of the one the Department of Defense (d.b.a. Department of War) is waging against one of America’s leading-edge technology companies, Anthropic.

I get why the Pentagon wants not just access to Anthropic’s Claude — which it already had to a huge degree — but unrestricted access. I’ve played with the latest trendy models enough to know right now Anthropic’s Claude is on top. It is not hard to imagine the ways it could empower America’s fighters to protect U.S. citizens against ever more powerful adversaries such as China.

Agentic tools like Claude Code (or even Microsoft Copilot, which leans heavily on Claude in its eclectic mix) show on a civilian level what AI can do. Meanwhile, semi-autonomous drone systems in Ukraine demonstrate how this technology can improve — to use a favorite word of Secretary Pete Hegseth — “lethality” in the defense of freedom.

But little extra imagination is needed to see why Anthropic might hesitate at a no-holds-barred approach to the so-far-controlled monster it has been building. No political bias is necessary on the company’s part: its people understand the dangers latent in that system better than the rest of us do.

I referenced Jurassic Park the other week, and Anthropic is clearly the Ian Malcolm to Secretary of War Hegseth’s John Hammond. It’s in the AI company’s very founding DNA, started by people who feared another Dr. Hammond, one too “preoccupied with whether or not they could [to] stop to think if they should”: OpenAI’s frequently creepy Sam Altman.

Clever marketing ploy or deep-seated principle as it may be, Anthropic’s entire brand is tied to the idea of being the “safe AI” counterpoint. Under the leadership of Dario Amodei, Claude’s maker releases stories of AI-gone-wrong in their labs. Sincere transparency or not, they’ve fostered an image focused on safety.

Of course, most of us care about Claude not because it is safer, but because it has become the best. It took a back seat to OpenAI’s ChatGPT until suddenly it didn’t. Elon Musk’s xAI Grok is frequently a worthy foil in my estimation, but little else is.

Again, little wonder the Pentagon wants as much of Claude as it can get. But does it want too much? The DoD wants to be unshackled both for spying here at home and for deploying “full self-driving” AI weapons, so to speak, abroad. We’ve all seen and read enough sci-fi to know the potholes in those roads ahead.

Some of those potholes exist because the technology is already so good. AI can combine information and spot patterns at stunning speeds, and that is what makes the spying portion worrisome. The massive overreach of government in the decades since 9/11 will look like child’s play if the government’s current quest for AI-powered surveillance of all of us becomes reality. No law would even need to be broken, because our laws aren’t ready for what’s coming.

At the same time, some of the potholes exist because of the known weaknesses of AI today. It is perfectly conceivable “Killer Claude” could target something, “terminate,” and be wrong. If you use AI tools, you know it “hallucinates” things that don’t exist at times. It’s a proverbial black eye when Health and Human Services releases reports citing non-existent studies. But what happens when the Department of War “releases” a killing machine that eliminates someone and the threat turns out to be just as illusory?

(Poetically, as I edited this column, I had Claude Code running as an “agent” in a carefully boxed-in container, working on a project for me. Claude miscalculated and nuked the data it was working on and itself. Now, imagine if the “data” were a person and “nuked” weren’t metaphorical.)

Both of these points make me glad for Anthropic’s stand. I wish all AI companies were hesitant about where they were treading.

But these are symptoms, not the problem in this showdown. Hegseth and his undersecretary, Emil Michael, paint Anthropic’s hesitation as essentially a coup. Ben Thompson of Stratechery follows a similar line of thinking. Riffing on Amodei’s analogy of AI tech as being the new nuclear arms race, Thompson argues we’d never allow an American nuclear weapons maker to tell the government where it could aim those warheads.

If the military were tricked into overdependency on Claude only to have Anthropic yank the rug out from underneath it, I could sympathize. But Anthropic isn’t asking for a seat at the war planning table. The firm is merely asking not to be compelled to speak into existence things that strike at its ethical core.

Far from being a wannabe dictator, Anthropic is the latest victim of the government’s disregard of free speech when it becomes inconvenient.

Though I disagree with many of those deplatformed in the pandemic era, I still wrote about the worrying trend of censoring them back in our first article of the OFB relaunch in 2020. At first, the silencing came from the few companies that control the vast majority of the modern internet suppressing speech they disagreed with. Disturbing, but not illegal.

But the government couldn’t resist getting involved. We later learned that, during the Biden Administration, the government sought to coerce social media companies into silencing certain speech. That crossed a serious red line. How dare the government force private companies to silence private individuals’ speech?

Conservatives were rightly outraged.

As I alluded to in the 2020 piece, this has been a long time coming. After all, we spent much of the 2010s litigating whether the government could force a baker to decorate a cake celebrating a wedding he believed to be sin, or a web designer to design a web site to do the same, or nuns serving the poor to pay for medical procedures violating their pro-life convictions. Those defending bakers, web site designers, nuns and, yes, even social media platforms have had the right idea: to be free, we need to allow people to speak and be beholden to their consciences.

And yet. In that 2020 column, I mentioned that many of those on the Right aligned with the deplatformed folks wanted the government to force the social media companies to replatform them. I warned at the time this was hardly different from forcing the baker to bake the gay wedding cake. Yes, these are the same people who were later outraged when the government did apply force, just in the other direction.

An ugly truth: we love when government intervenes if it intervenes our way. We see the tyranny of it when it intervenes for others we disagree with. If we really care about freedom, we can’t only care about it when we agree with it in the moment.

And therein lies the biggest problem with the war on Anthropic. Not that AI can be incredibly dangerous (it can). Not that the technology may or may not be ready for such use (it isn’t). Let’s say it somehow wasn’t dangerous and it really were ready: we’d still be staring down the barrel of a far more dangerous weapon.

Anthropic isn’t the metaphorical American nuclear warhead manufacturer unwilling to protect its own country. To continue the metaphor, it is far more like some innovative nuclear medicine firm that has, heretofore, served the needs of the Medical Corps in treating diseases. Could a company versed in therapeutic atomic research convert its efforts to making warheads? Sure. But would it be reasonable to call it a national security threat or a pseudo-dictatorship if it had a conscientious objection to a side hustle developing iterations of the Bomb?

Surely not. But that’s exactly what the Department of War and its supporters are saying by branding Anthropic a “national security threat.”

The company isn’t telling Secretary Hegseth whether he can wage war or not. Anthropic is saying it doesn’t want to create and sell tools that accomplish certain tasks it feels are unethical: spying on American citizens and creating something as close to a “Terminator” as we’ve seen. Even if there were no other AI choices available (which is clearly not the case), do we want a government to compel companies to make things they never wanted to make? Do we want to conflate conscientious objectors with “traitors to the Motherland”?

Nothing is stopping the U.S. government from using Anthropic’s best where it has proven more than willing to provide it, while utilizing other providers for tools Anthropic doesn’t want to produce.

A cake is a form of speech, but so too is computer code.

Bernstein v. United States established code as speech, logically enough since computer programming is done via a form of human language. Bernstein helped to break down the U.S. government’s efforts to kneecap the encryption technology most of us depend on for private, secure communication today.

Now, as then, the government sees forcing its control over technologies as good for national security. Perhaps it is, but at what cost? Compelled speech is antithetical to the very core of liberty; can anything so contrary ever truly secure a nation founded to protect liberty?

If the DoD gets its way, it would force a company whose speech is “safe AI” to “speak” — through programming — clearly unsafe AI into existence. That’s one awfully big compelled cake to bake. Whether Anthropic is correct in its fears or not, such compulsion strikes at the very heart of liberty.

With dubious legal means to force its will, the Pentagon is instead doing its best mafioso impersonation — “nice company you have there” — at the AI innovator. Secretary Hegseth announced he will punish Anthropic’s unwillingness to bend on speech not just by choosing a different vendor, or even by barring Claude from other mutually agreeable government uses, but by seeking to ban any company that sells anything to the government from using Anthropic’s products.

This is clearly well beyond what is legally allowed, but the chilling effect doesn’t depend on whether the government prevails years from now in court. Present uncertainty does enough damage, as those bakers of last decade could attest. Our government might very well sink the leading-edge AI company — opening further opportunity for authoritarian China — and do so for the “sin” of insisting on its free speech.

That’s the kind of thing China does, incidentally. The irony is bitter: we’re adopting the ways of the adversary, undercutting our own values and possibly giving that very opponent an opening to beat us in so doing.

I’m squeamish enough on whether our government should be given tools to spy on us even more or should make machines that kill free of human control. But I’m far more squeamish that our government is again trying to compel speech.

As a pastor, those previously mentioned cases in which the government tried to steamroll believers’ consciences leave a mark on me. A few Supreme Court decisions are the only levee between sincerely held beliefs and a flood of regulations aimed at overruling those beliefs. Seeing the fragility of our liberties compels me to speak up for others’ liberties, too.

Sadly, many of those with whom I’ve shared common cause on other matters of conscience and free speech don’t care to see the parallel between baking cakes and crafting code. Favoring the present administration, they suddenly think companies should bow to the government.

Government coercion sure looks better when the shoe is on the other foot, doesn’t it?

The shoe moves back and forth from foot to foot, though. Those who forget principle now will find themselves flat-footed later. We need no sci-fi imagination to see how that disaster turns out.

Timothy R. Butler is Editor-in-Chief of Open for Business. He also serves as a pastor at Little Hills Church and FaithTree Christian Fellowship.
