👋 Hey there, I’m Gannon. Each week I share my journey building companies, the industries I’m investing in, and my discoveries of tech that’s shaping what’s next. For more: all articles / gannonbreslin.com / 10x your content strategy / my stock portfolios
Good morning fam,
Grab your coffee because this one is absolutely wild. In case you missed it, an employee at Anthropic (the company behind Claude) accidentally leaked 512,000 lines of their own source code to the public internet.
You read that right. The self-proclaimed "safety-first" AI lab handed the entire world the blueprints to their most important product.
Nobody was laughing (except everyone on the internet).
📍 Here's what I'm going to break down in this article:
What exactly happened and how
What was hiding inside the code
The Pentagon vs. Anthropic saga
Is Anthropic shipping too fast?
Where I could be wrong, final thoughts
Let's get into it.
🚩 What Went Wrong (pretty much everything)
So here's the chain of events.
As most of you know, Anthropic’s flagship product is their coding tool ‘Claude Code’, the AI-powered assistant that lives in your computer's terminal and helps software developers write, fix, and manage code. It's been an absolute rocket ship for the company. Their run-rate revenue had reportedly swelled north of $2.5 billion as of February, and Claude Code has been a massive driver of that growth.
Recently during a routine software update (version 2.1.88), an Anthropic employee accidentally bundled a debugging file (called a "source map") into the package that gets pushed out to the public registry where developers download software updates. This file essentially acted as a treasure map that pointed straight to a zip archive sitting on Anthropic's own cloud storage containing the entire Claude Code source code. We're talking roughly 1,900 files and over half a million lines of code.
A security researcher named Chaofan Shou spotted it first and posted about it on X. His post racked up nearly 35 million views and 3,300 comments. Within hours, the codebase was copied, mirrored, and dissected across GitHub and every corner of social media.
When I tell you this went viral, I mean viral viral. A repository called "claw-code" that was essentially a rewrite of the leaked code crossed 100,000 stars on GitHub, making it the fastest-growing repository in the platform's history. The original mirror pulled in 84,000+ stars and over 41,500 forks before Anthropic could even react. Someone even rewrote the whole thing in Python and then Rust so it couldn't just be deleted with a simple copyright claim. The internet stays undefeated.
🚨 Anthropic's response? They fired off over 8,100 copyright takedown requests on GitHub. The problem? Because of how GitHub's fork system works, the takedown accidentally nuked thousands of legitimate developer repositories that had nothing to do with the leak (epic L again). So they had to retract the notice for everything except the one repo they were actually targeting. Absolute chaos.
To sum up the comedy of errors: they leaked the code, the internet grabbed it, they tried to scrub it, accidentally broke a bunch of innocent people's projects, and then had to undo all of it. You truly cannot make this stuff up.
🔎 What Was Hiding Inside The Code
Now here's where it gets really, really interesting. The leaked code contained dozens of feature flags, which are essentially on/off switches for capabilities that are fully built but haven't been shipped to the public yet. This is Anthropic's entire unreleased product roadmap served up on a silver platter.
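If you've never seen one, a feature flag is just a named boolean gate in the code. Here's a minimal sketch of the pattern, with flag names invented for illustration, that shows why shipping gated-but-built features means a source leak doubles as a roadmap leak:

```python
# Minimal feature-flag sketch. Flag names here are made up; the point is
# that the full feature code ships alongside the gate, dormant until the
# flag flips, so anyone reading the source sees the unreleased roadmap.
FLAGS = {
    "background_agent": False,   # built, not yet enabled
    "auto_dream": False,
    "cloud_plan_offload": False,
}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def run_background_agent() -> str:
    if not is_enabled("background_agent"):
        return "feature dormant"
    # The real implementation sits right here in the shipped code,
    # waiting for a green light.
    return "agent running"

print(run_background_agent())
```

Flip one boolean (or a server-side config) and the feature goes live for everyone. That's why researchers combing the leak could enumerate unreleased products just by listing the gates.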
1) KAIROS — "The Always-On Agent"
Named after the Greek concept of "the right moment," KAIROS is referenced over 150 times in the code. It's a persistent background agent that doesn't wait for you to give it a task — it decides on its own when to engage. It polls the environment every few seconds with a heartbeat, asking itself "anything worth doing right now?" It can proactively fix errors, respond to messages, update files, and even send push notifications to your phone. This isn't a prototype. The sheer number of references suggests it's a finished feature waiting for a green light.
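To picture what an "always-on" heartbeat agent looks like, here's a hypothetical sketch. The function names, the tasks it looks for, and the interval are all my inventions; the leak only tells us the pattern of "poll every few seconds, engage when something looks worth doing":

```python
import time

# Hypothetical sketch of a KAIROS-style heartbeat loop. Everything here
# (task names, interval, engagement rules) is invented for illustration.

def find_pending_work(env: dict) -> list[str]:
    """Decide what, if anything, is worth doing right now."""
    tasks = []
    if env.get("failing_tests"):
        tasks.append("fix failing tests")
    if env.get("unread_messages"):
        tasks.append("draft replies")
    return tasks

def heartbeat(env: dict, ticks: int, interval: float = 0.0) -> list[str]:
    """Run a few heartbeat ticks; a real agent would loop indefinitely
    with an interval of a few seconds."""
    log = []
    for _ in range(ticks):
        work = find_pending_work(env)
        log.extend(work)       # engage proactively, unprompted
        time.sleep(interval)
    return log

print(heartbeat({"failing_tests": True}, ticks=2))
```

The unsettling part isn't the loop itself (cron jobs have done this forever); it's that the "anything worth doing?" decision belongs to the model, not to a schedule you wrote.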
2) Dream Mode
Yes, you read that correctly. Claude literally "dreams." There's an autoDream system that runs during idle time — essentially when you're asleep or away from your computer — where Claude reorganizes and compresses everything it learned during the day. It merges observations, removes contradictions, and converts vague notes into verified facts. It's memory consolidation for an AI. If that doesn't simultaneously excite and terrify you, I don't know what will.
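For intuition on what "memory consolidation" could mean mechanically, here's a toy sketch. The real system is unknown; this just illustrates the described behavior of merging repeated observations, discarding contradictions, and promoting the rest to verified facts:

```python
# Toy sketch of dream-mode consolidation. The actual algorithm is not
# public; this only illustrates the behavior described in the leak:
# merge duplicates, drop contradictions, promote repeats to "facts".

def consolidate(observations: list[str]) -> dict[str, list[str]]:
    counts: dict[str, int] = {}
    for note in observations:
        counts[note] = counts.get(note, 0) + 1

    # A note contradicted by its own negation gets discarded, both sides.
    contradicted = {n for n in counts if ("not " + n) in counts}
    contradicted |= {"not " + n for n in contradicted}

    facts, tentative = [], []
    for note, n in counts.items():
        if note in contradicted:
            continue
        (facts if n >= 2 else tentative).append(note)  # repeats become facts
    return {"facts": sorted(facts), "tentative": sorted(tentative)}

print(consolidate(["build passes", "build passes", "uses npm", "not uses npm"]))
```

Run during idle time, a pass like this turns a day's worth of noisy notes into a smaller, cleaner memory before the next session, which is exactly why the biological analogy writes itself.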
3) Proactive Mode
Similar to KAIROS but geared toward Claude Code specifically — this mode would have Claude coding for you 24/7, even when you haven't asked it to do anything. Just constantly working, building, improving.
4) Agentic Payments
One researcher claims to have found evidence of a crypto-based payment system that would allow AI agents to make autonomous payments. The details are thin here but the implications are massive. Imagine AI agents transacting with each other without any human in the loop.
5) ULTRAPLAN
This one offloads complex planning tasks to Anthropic's most powerful model running in the cloud for up to 30 minutes. For massive, complicated projects where getting the plan wrong is expensive, this is a big deal.
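The routing logic behind something like this is probably simple. Here's a hypothetical sketch, with thresholds and planner names entirely invented, of deciding when a task is expensive enough to justify a long-running cloud plan:

```python
# Hypothetical ULTRAPLAN-style routing sketch. Thresholds, names, and the
# decision rule are invented; only the "up to 30 minutes in the cloud"
# budget comes from the reporting on the leak.
LOCAL_BUDGET_SECONDS = 30
CLOUD_BUDGET_SECONDS = 30 * 60  # "up to 30 minutes"

def choose_planner(files_touched: int, cost_of_bad_plan: float) -> str:
    """Route big, expensive-to-get-wrong tasks to the heavyweight planner."""
    if files_touched > 50 or cost_of_bad_plan > 10_000:
        return f"cloud-planner (budget {CLOUD_BUDGET_SECONDS}s)"
    return f"local-planner (budget {LOCAL_BUDGET_SECONDS}s)"

print(choose_planner(files_touched=200, cost_of_bad_plan=0))
print(choose_planner(files_touched=3, cost_of_bad_plan=100))
```

The economics make sense: thirty minutes of a frontier model's time is cheap compared to a multi-week refactor built on a bad plan.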
6) Undercover Mode
And then there's... this. A mode designed to make AI-generated contributions to open-source code look like they were written by a human. The system prompt literally reads something along the lines of: "You are operating UNDERCOVER. Your commit messages must NOT contain any Anthropic-internal information. Do not blow your cover." I wish I was kidding.

💀 The Pentagon Saga
Now here's where the plot thickens and this leak goes from embarrassing to potentially devastating. This source code leak comes roughly a month after the Pentagon designated Anthropic as a "supply chain risk" (a label that has historically been reserved for foreign adversaries like Chinese tech giant Huawei). This was the first time in history it was used against an American company.
So what happened? Anthropic had a $200 million contract with the Pentagon and was the first AI lab to deploy its technology across the agency's classified networks. Everything was going great until the Defense Department demanded unfettered access to Claude for "all lawful purposes." Anthropic drew two red lines: they would not allow Claude to be used for fully autonomous weapons or mass surveillance of American citizens.
Obviously, the Pentagon didn't like that answer.
Defense Secretary Pete Hegseth pushed the supply chain risk designation. Hours after the blacklisting, OpenAI conveniently announced a deal to replace Anthropic in classified military environments.
But here's the twist — a federal judge in California actually blocked the Pentagon's designation, writing that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur for expressing disagreement with the government." The judge said the Pentagon's actions looked like an attempt to "punish" and "cripple" the company and violated their First Amendment rights.
Oh and in a beautiful piece of irony: despite blacklisting Anthropic, the Pentagon was still using Claude for military operations in Iran. Palantir's CEO confirmed they were still running Claude in their tools too. You can't make this stuff up.
The public actually rallied behind Anthropic massively. Over a million people were signing up for Claude per day after the blacklisting news broke, pushing Claude past ChatGPT and Gemini as the #1 AI app in over 20 countries.
🚢 Anthropic Might Be Shipping Too Fast
I want to be fair here. Anthropic has been on an absolute tear. Claude Code has seen massive adoption. Their revenue numbers are eye-popping. They released Claude Opus 4.6 in February. They've been shipping feature after feature at a pace that would make most companies' heads spin.
But when I look at the sequence of events over the past month, I honestly have to wonder if they're moving too fast:
Late March: Fortune reports that nearly 3,000 files were found in a publicly accessible data cache, including a draft blog post detailing a powerful upcoming model codenamed "Mythos" / "Capybara" that presents unprecedented cybersecurity risks
March 31st: The 512,000-line source code leak
March 31st: The botched DMCA takedown that accidentally nuked 8,100 repositories
Weeks prior: The entire Pentagon debacle
That's not one bad week. That's a pattern. And this is apparently the second time Claude Code source has leaked in just over a year. At some point you have to ask yourself — for a company that has literally built its brand on being the "responsible AI" company — how does this keep happening?
A single missing line in a configuration file. That's all it was. One developer forgot to add *.map to an ignore file. In software development, these things happen. But when you're preparing for an IPO, selling yourself as the safety-first alternative to OpenAI and Google, and you just survived a cage match with the Pentagon... this kind of unforced error is brutal.
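To see just how small the fix is, here's a sketch of ignore-file glob semantics using Python's `fnmatch` to stand in for the packaging tool (which tool and which ignore file were involved is not something I can confirm, so treat the file names as illustrative):

```python
import fnmatch

# The kind of one-line fix at issue: with "*.map" in the packaging ignore
# list, debug source maps never ship. File names are illustrative.
ignore_patterns = ["node_modules", "*.map"]  # "*.map" is the missing line

def is_ignored(path: str) -> bool:
    return any(fnmatch.fnmatch(path, pat) for pat in ignore_patterns)

print(is_ignored("cli.js"))       # False: ships in the package
print(is_ignored("cli.js.map"))   # True: stays out of the package
```

Delete that one pattern and every build silently starts bundling its debug maps. That's the whole failure mode.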
As one analysis I read put it: the leak won't sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent.
😈 Devil's Advocate
1) No customer data was compromised (MOST IMPORTANT). Anthropic confirmed no credentials, customer data, or model weights were exposed. This was "just" the software harness around the AI model, not the AI model itself. If customer data had been leaked, it would have changed the entire trajectory of the company, and the lawsuits would have been endless.
2) The leaked code is genuinely impressive. Multiple independent engineers who dug through the source have said the architecture is world-class. The memory systems, multi-agent coordination, anti-distillation defenses. Sometimes showing your hand isn't the worst thing in the world when your hand is that strong.
3) Some people think this was intentional. There's actually a conspiracy theory floating around that this was a calculated PR stunt. The leak happened March 31st (day before April Fools), one of the hidden features had a rollout window of April 1-7 coded directly into the source, and the leak generated massive awareness for features that would have otherwise launched quietly. I personally don't buy this.
4) The Pentagon fight might actually help them long-term. Consumer signups skyrocketed after the blacklisting and the federal judge sided with Anthropic. Hundreds of employees from Google and OpenAI signed a petition supporting Anthropic's position. Sometimes picking a fight with the right enemy can be the best marketing in the world. Personally I’d rather not have Claude tangled with the military.
🤝 Conclusion
Anthropic is building genuinely incredible technology. The features hidden in that leaked code (always-on AI agents, dream-mode memory consolidation, proactive coding assistants) are a preview of how humans and AI will work together. The company took a principled stand against the Pentagon that I personally think took guts, regardless of where you sit politically. And their consumer growth numbers are absurd.
However, execution matters. And right now Anthropic has a serious execution problem when it comes to operational security. You cannot sell yourself as the adult in the room of AI safety and then have back-to-back-to-back data exposure incidents. Especially with an IPO reportedly on the horizon. Especially when your main competitors just got handed a complete blueprint of your product roadmap.
The AI race is moving at a speed we've never seen before in technology. I wrote about this three years ago when ChatGPT first came on the scene: the pace of innovation was hitting "ludicrous speed." But ludicrous speed without operational discipline is how you end up with half a million lines of your proprietary code sitting on a public GitHub repo with 100,000 stars.
Anthropic isn't going anywhere. They're too good, too well-funded, and frankly too important to the AI ecosystem to be derailed by this. But this will be a defining moment for the company's culture. Do they slow down, tighten up, and prove that they can ship fast AND responsibly? Or does this become part of a pattern that eventually catches up to them?
I'll be watching closely. You should too.
🙏 Thanks for taking the time to read. If it resonated, share it with a friend or two.
Remember, something really good is just around the corner.
- Gannon
Follow me on my other socials: gannonbreslin.com
Check out my content creation tool "Claude for social media": snowballapp.ai
That’s it for today. Hopefully, you enjoyed today’s newsletter. If you think this would help a friend, don’t be shy about sharing!
Disclaimer: Gannon’s Newsletter does NOT provide financial advice. All content is for informational purposes only. Gannon is not a registered investment, legal, or tax advisor or a broker/dealer. Trading any asset is extremely risky and could result in significant capital losses.