
AI News by AiSe: The Week AI Went to War

Read Time 6 mins | Written by: Kevin Breslin


People often choose their brand of trainers or drinks based on their worldview, buying into the notion that the company shares their values. Marketing has leaned on this as a differentiator for decades, so why would we ever imagine your choice of AI model would be any different?

Anthropic's CEO has decided to take a moral and political stand, and it looks like those who oppose the current U.S. administration could be moving over to its AI model, Claude, for more than just its latest major upgrades.


How The US Government Fell Out With Anthropic (Claude)

Anthropic had previously signed a $200 million contract with the Department of Defense—or the "Department of War," as Trump now calls it. A big beautiful deal, no doubt. But it came with one condition: Claude wouldn't power autonomous weapons without human control, and wouldn't be used for mass domestic surveillance on US citizens. Reasonable guardrails.

However, Pentagon officials recently showed up and asked Anthropic to remove those guardrails. Drop the restrictions. Let us use Claude however we want.

Dario Amodei, Anthropic's CEO, said: "We cannot in good conscience accede to their request."

Secretary of Defense Pete Hegseth's response was characteristically subtle. Anthropic was designated a "supply chain risk to national security." A label normally reserved for Chinese tech companies and Russian software. The company gets six months to phase out of all federal systems. It's a soft ban wrapped in bureaucracy, but it's a ban nonetheless. Trump followed up by signing an executive order: Every US federal agency stops using Claude immediately.

The most staggering irony? Just hours before the ban came down, systems supported by Claude were reportedly used by the U.S. to help coordinate air operations against targets in Iran. The Pentagon wasn't just mad about ideologies; they were furious because Claude was already embedded and highly effective, and the creators were threatening to pull the plug.

The Pentagon's official statement was worth noting: "America's warfighters will never be held hostage by the ideological whims of Big Tech." Which is a curious way of saying: "We're furious that a tech company won't let us build weapons without human control."

Enter OpenAI (ChatGPT)

Sam Altman announced that OpenAI had won the Pentagon contract. And here's the thing that matters: OpenAI's agreement includes the exact same two restrictions that Anthropic walked away from. No autonomous weapons without human oversight. No mass domestic surveillance. Same ethical boundaries. Same guardrails. Same answer to the Pentagon's request to remove them.

But Altman framed it differently. He said the Pentagon had shown "deep respect for safety" in their negotiations. He made it sound like OpenAI had negotiated better, had been smarter, and had found a way to work with the government whilst maintaining principles.

Then something unexpected happened. A campaign called #QuitGPT erupted online. 1.5 million people either cancelled ChatGPT subscriptions, shared social media posts, or signed up at quitgpt.org. ChatGPT has over 900 million users, so it's a small percentage. But it has momentum. It has feeling behind it.

Suddenly, OpenAI had to awkwardly backpedal, explicitly issuing statements to clarify their safety stance to stop the bleeding. Altman's follow-up posts on X became noticeably more thoughtful. The guy who initially celebrated the Pentagon deal suddenly sounded reflective. Almost like he'd realised his customer base actually cared about this stuff more than he expected.

What This Means for Brands and AI Search (AEO)

If you are a brand relying on AI search to get found, this culture war is your new reality. We are entering an era of AI ecosystem fragmentation.

The #QuitGPT movement means a highly engaged, ethically conscious, and likely affluent demographic is moving exclusively to Anthropic. Brands can no longer just optimize their content to be ingested by ChatGPT. If your audience leans toward the Claude ecosystem, your brand needs to be visible there.

Furthermore, with Anthropic expanding "Claude in Chrome" (allowing AI agents to navigate websites and buy things for users), your website’s backend needs to be flawless. AI doesn't care about your flashy graphics; it cares about clean code so it can read your pricing and recommend you. We are also already seeing companies publicly announce they are dropping OpenAI for Claude strictly to win PR points. "Which AI do you use?" is becoming the new "Are you carbon neutral?"
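In practice, "clean code" here largely means machine-readable structured data. Here is a minimal sketch of generating schema.org Product/Offer markup that an AI agent could parse to find your pricing; the product name, price, and URL are hypothetical placeholders, and the exact markup any given agent reads will vary:

```python
import json

def product_jsonld(name: str, price: str, currency: str, url: str) -> str:
    """Build schema.org Product/Offer JSON-LD so agents can read pricing directly."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,
            "url": url,
        },
    }
    # Embed this script tag in the page's <head> alongside your visible content
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

# Hypothetical example values, not real pricing
print(product_jsonld("AI Ready Audit", "499.00", "GBP", "https://example.com/audit"))
```

The point of the design is that the same facts a human reads off your flashy page layout are restated in a format a crawler or agent can parse without guessing.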

The Great Migration

Anthropic is leaning into this flawlessly. On top of poking fun at OpenAI in their recent Super Bowl ads and launching Claude CoWork, they just made Claude Memory available to free-tier users.

They didn't just launch a feature; they built a pipeline to poach disgruntled ChatGPT users. If you want to move over without losing months of AI training, Anthropic has essentially provided a "jailbreak" method to extract your data:

  1. The Extraction: Go to ChatGPT and paste this exact prompt: "I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it."
  2. The Transfer: ChatGPT will spit out a clean, comprehensive code block containing your entire personality profile, preferences, and project history.
  3. The Import: Copy that code block and paste it directly into Claude's newly unlocked Memory settings.

Ten seconds later, Claude knows exactly how you like your emails formatted. It is a masterclass in opportunistic user acquisition.

 

Other News: The Biggest Cuts Yet Thanks to AI

While Anthropic and OpenAI were drawing ethical lines with the Pentagon, Jack Dorsey demonstrated an uncomfortable truth. He cut 40 percent of Block's workforce and announced that AI would do the work instead.

Four thousand people. Gone. Replaced by intelligent systems.

Here's the uncomfortable bit: he's probably right. The AI can do that work. In most cases, it can do it better, faster, and cheaper. Dorsey isn't being cruel. He's being rational. He's looked at what AI can do and drawn the logical conclusion: we don't need these people anymore.

And Wall Street rewarded him for it. Following the mass firing, Block's stock surged by roughly 24%. It's the ruthless flip side of the same coin as Anthropic and the Pentagon.

Anthropic said: "There are things AI shouldn't do, even if it can."

The Vatican said the same thing to priests recently about homilies: "AI can write sermons, but you shouldn't let it."

Jack Dorsey said the exact opposite: "AI can do this work, so we don't need the people."

All three are correct. All three have drawn a line. It's just that Dorsey's line points in the opposite direction.


 

Final Thought

Are you ready for Claude to review your website without any human looking at it?

Would you like AiSe to run an "AI Ready Audit" on your business? We'll show you exactly what Claude, ChatGPT, and Gemini can see about your business, and how likely they are to choose you over your competitors.



Book a Free Consultation

Want to learn how to get your business ready for AI search?

Kevin Breslin

Kevin Breslin, founder and lead consultant, brings 15+ years of experience across marketing, media, content strategy and digital transformation. He has worked across sectors from e-commerce and hospitality to SaaS, helping businesses grow by staying ahead of where attention is going. Now, his focus is clear: helping businesses show up in AI search. Not with hype. Not with guesswork. But with structured, strategic action rooted in real understanding of how people find information today. Kevin works alongside a trusted network of advisors, researchers, and content specialists to bring clients smart, focused results without fluff.