This week, Anthropic announced the most ambitious AI security initiative in the industry's history. Here's the problem it won't solve.
It's a Tuesday afternoon. A developer on your team is working through a backlog of tasks. She opens ChatGPT, pastes in a section of proprietary source code to help debug a function. A junior analyst in finance uploads a revenue forecast to Gemini to clean up the formatting. Your head of sales asks Claude to summarize a customer contract he doesn't have time to read in full. By 3pm, sensitive data has moved into multiple AI tools. None were sanctioned by IT. None of the inputs were flagged. Your DLP didn't see any of it.
Meanwhile, in an Anthropic lab, Claude Mythos has just identified a vulnerability that's been sitting undetected in production code for sixteen years.
Both of those things are true at the same time. And only one of them is on your radar.
Project Glasswing is serious — and you should take it seriously
Anthropic announced Project Glasswing: a coordinated initiative to deploy its most capable unreleased model, Claude Mythos Preview, exclusively for defensive security work. The partner list reads like a who's who of enterprise infrastructure: Amazon, Apple, Cisco, CrowdStrike, Microsoft, Palo Alto Networks, and the Linux Foundation. And Anthropic is backing it with $100 million in usage credits.
The results from early testing are impressive. Mythos identified a 16-year-old vulnerability in FFmpeg — a bug in code that automated tools had scanned more than five million times without detecting it. It also surfaced a 27-year-old flaw in OpenBSD, an operating system whose reputation is built on security. Both have since been patched.
This is exactly the kind of initiative the industry needs to help enterprises shrink their attack surface. AI-augmented vulnerability discovery — along with patch and remediation guidance — focused on the open-source software that underpins most of the world's critical infrastructure, with findings shared back to the community, could finally move security teams past the never-ending spreadsheet of vulnerability findings.
This is good news.
But Project Glasswing isn't working on your other AI problem
Project Glasswing has gotten enormous attention this week. But while the industry focuses on the vulnerabilities Mythos can find and fix, there's a different category of AI risk that no model safety initiative will ever touch. Not because the industry isn't trying hard enough. Because it's not a bug. It's a feature.
Glasswing operates at the software supply chain layer: finding vulnerabilities baked into code. These structural flaws have been sitting in libraries and operating systems for years, waiting for an attacker with the right capability to find them. That's a real threat, and Mythos appears to be addressing it more effectively than anything that's come before.
Workforce AI risk operates at a completely different layer. It happens in the browser, in real time, every time your employees interact with AI tools to do their jobs. It doesn't involve a hidden code vulnerability — it involves your data flowing into an AI tool your security team didn't approve, that your DLP can't see, and that your AI policy document has no mechanism to stop.
A patched FFmpeg vulnerability and a sales engineer pasting customer architecture diagrams into an ungoverned AI tool are different threat categories. They have different owners. And crucially, they have different mitigations.
Why AI providers aren't going to fix the workforce layer
It's worth being precise about why this gap exists, because it's not a criticism of AI providers. It's structural.
AI tools are designed to ingest. That's part of the product. A model that limits what it can take in limits what it can deliver: document summaries, code debugging, drafted communications, data analysis. The ingestion capability isn't a flaw to be patched. It's the feature.
Leading AI applications have strong commercial incentives to maximize what their models can do with the inputs they receive. That's not irresponsible — it's the business. Anthropic building a safer, more capable Mythos doesn't change the fundamental dynamic: your workforce is going to keep feeding those models whatever they need to do their jobs, and the models are going to keep accepting it.
This isn't a problem that lives on Anthropic's roadmap. It lives on yours.
A written AI policy is not a control
Most organizations have responded to workforce AI risk the way they responded to BYOD a decade ago: they wrote a policy.
The policy says something like: don't share confidential information with AI tools, use only approved applications, obtain permission before uploading client data. The policy has probably been emailed to every employee, with a compliance checkbox to show it's been read (even if it wasn't).
In parallel, shadow AI is already rampant across your organization. Employees are using Claude, ChatGPT, Gemini, Perplexity, and dozens of other tools to do their jobs faster and better — because those tools work, because they’re cheap (or free), and because no one has given them a sanctioned alternative that works as well. The average enterprise has far more AI tools in active use than IT has approved. In fact, in a 2025 study by Varonis, 98% of employees were using unsanctioned apps, including AI tools. The kicker: all that usage happens in the browser, across sessions your security stack can't see.
Policy without enforcement is just documentation. It may satisfy an auditor. But it doesn’t stop a prompt containing source code, a customer list, or a board presentation draft from leaving your environment.
The gap between policy existence and policy enforcement has a specific location: the browser. That's where work actually happens. That's where company data is being entered into AI tools — typed, pasted, uploaded. It's also where your security stack has the least visibility into what's actually happening, in session, at the point of intent.
The problem is solvable — without blocking AI adoption
None of this requires choosing between security and productivity. The instinct to block AI tools entirely is both understandable and counterproductive — you'll succeed mainly at driving usage further into the shadows while signaling to your workforce that security exists to slow them down.
What you need is full visibility into workforce AI use: every prompt, every input, every upload, across every AI tool your employees are using, including the ones you don't know about yet. And real-time guardrails that enforce context-aware policies at the point of click, before sensitive data leaves the organization, without requiring new infrastructure or asking users to change their behavior. That way, the friction lives where it belongs: at the enforcement layer, not the productivity layer.
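To make the idea concrete, here is a minimal, hypothetical sketch of what point-of-click enforcement can look like in the browser: a content script, written in TypeScript using only standard web APIs, that intercepts paste events on a small illustrative list of AI tool domains and blocks the paste when the text matches rough stand-in patterns for sensitive data. The domain list, patterns, and alert behavior are assumptions made for illustration, not a description of any particular product; a production control would use context-aware classification and centralized logging rather than static regexes.

```typescript
// Illustrative only: a simplified browser-extension content script showing the
// shape of a point-of-click guardrail. Domains, patterns, and the alert-based
// response are hypothetical placeholders, not a reference implementation.

// Domains treated as unsanctioned AI tools for this sketch.
const UNSANCTIONED_AI_DOMAINS = ["chat.openai.com", "gemini.google.com", "claude.ai"];

// Very rough patterns standing in for real DLP classification.
const SENSITIVE_PATTERNS: RegExp[] = [
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,               // private key material
  /\b(?:\d[ -]*?){13,16}\b/,                               // card-number-like digit runs
  /\b(confidential|internal only|do not distribute)\b/i,   // document markings
];

function isUnsanctionedAiTool(hostname: string): boolean {
  return UNSANCTIONED_AI_DOMAINS.some(
    (d) => hostname === d || hostname.endsWith("." + d)
  );
}

function looksSensitive(text: string): boolean {
  return SENSITIVE_PATTERNS.some((p) => p.test(text));
}

// Intercept paste events in the page before the data reaches the AI tool's input.
document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    if (!isUnsanctionedAiTool(window.location.hostname)) return;

    const pasted = event.clipboardData?.getData("text") ?? "";
    if (looksSensitive(pasted)) {
      // Enforce at the point of intent: stop the paste and tell the user why.
      event.preventDefault();
      event.stopPropagation();
      alert("This paste appears to contain sensitive data and was blocked by policy.");
      // A real control would log the event for security review and apply
      // context-aware policy rather than static regexes.
    }
  },
  true // capture phase, so this handler runs before the page's own listeners
);
```

The point of the sketch is the placement, not the patterns: the decision happens in the session, at the moment data is about to leave, which is exactly the visibility a written policy cannot provide.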
Project Glasswing is going to make the world's software safer at the infrastructure level. That's genuinely valuable work, and it deserves the attention it's getting this week.
But your workforce's AI use is a different problem, at a different layer, with a different owner. Project Glasswing isn't coming for it. That’s on you.
Want to see which AI tools are being used in your environment? Request a demo.