We've been making the same argument for months. Your EDR sees the device. Your SSO sees the login. None of it sees what happens in the browser — and that's exactly where users are using AI.
A recent article in CSO Online just caught up with that reality. According to an industry survey, nearly half of security leaders — 48% — admitted they had limited visibility into AI usage; in fact, it ranked as their second biggest challenge in securing AI systems. One CISO described managing around this issue for months, repositioning existing tools, investing in new ones, and still ending each day with the lingering dread that AI use is slipping past them.
The gap is real, and it isn't going to close by tuning your existing security stack.
Why can't your current stack see workforce AI use?
This is the architectural challenge facing CISOs. The problem isn't that CISOs have deployed the wrong tools. It's that the tools they've deployed were designed for a different version of the threat landscape — one where dangerous activity left a trace at the perimeter, and the browser was incidental to all of it. Each layer of the stack has a real and important job. None of them were designed to look inside a live browser session.
This is worth stating specifically because "your tools have gaps" is not the same argument as "your tools were built for a different problem."
Why doesn't your EDR see what happens inside your users’ browsers?
Endpoint detection and response is built to monitor processes and file system activity at the OS layer. It sees what executes on the device, which is exactly what it should do.
When an employee pastes a customer contract into a ChatGPT prompt, or uploads a financial model to an unsanctioned AI tool, that activity doesn't execute at the OS layer. It happens inside a browser session.
No process spawns. No file moves in a way EDR is watching. This AI risk lives in a text field, in an input prompt, in a browser tab — invisible to your EDR.
Why doesn't DLP catch data moving through the browser?
Legacy DLP was architected around email and known file-transfer egress paths. DLP has since evolved beyond email and file-transfer monitoring, but structural limitations remain. The Synechron CISO highlighted how these solutions fall short for the shadow AI problem.
The first limitation is coverage scope. DLP integration requires knowing which applications to monitor. For sanctioned tools inside your SSO, that's manageable. For the unsanctioned AI tools your workforce is accessing outside any identity provider — the ones you haven't reviewed, haven't integrated, and in some cases haven't heard of — there's nothing to integrate with. Discovery has to come before monitoring, and DLP doesn't solve discovery.
The second limitation with DLP is its detection model. DLP identifies sensitive data by pattern: credit card numbers, social security numbers, document classifications. It was built to catch known sensitive content at egress. It wasn't built to evaluate the intent behind an AI prompt or understand why a block of code that triggers no pattern match is actually your most sensitive IP. The risk in AI interactions is often contextual, not syntactic — and that's a detection problem that pattern-matching architecture can't reliably solve.
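To make the detection-model limitation concrete, here is a minimal sketch of pattern-based DLP matching. The pattern names and rules are invented for illustration and don't reflect any vendor's actual rule set:

```javascript
// Illustrative pattern-based DLP rules: known sensitive formats only.
const dlpPatterns = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,          // US social security number
  creditCard: /\b(?:\d[ -]?){13,16}\b/,  // loose card-number match
};

// Returns the list of pattern names that fire on a given prompt.
function matchDlpPatterns(text) {
  return Object.entries(dlpPatterns)
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
}

// A prompt containing a known identifier format is caught...
matchDlpPatterns("Customer SSN is 123-45-6789"); // → ["ssn"]

// ...but proprietary source code triggers nothing: the risk is
// contextual (what the code is), not syntactic (what it looks like).
matchDlpPatterns("function scoreRisk(u) { return u.ltv * WEIGHTS.churn; }"); // → []
```

The second call is the whole problem in miniature: the most sensitive content in an AI prompt often matches no pattern at all.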
As one CISO told Neon Cyber in a recent call: "We have a well-known web filtering solution — that gives us DNS timestamps. But we don't have visibility into any of that content. With remote working, DLP has always been the blind spot."
Why doesn't SSO or your CASB identify shadow AI usage?
SSO governs the login — it confirms who you are and whether you're authorized to access an application IT has sanctioned. It doesn't govern what you do once you're inside. And its coverage is structurally incomplete: only 10–15% of SaaS applications in active use across a typical enterprise are managed through an IDP. The rest are accessed directly, outside identity controls, in the browser.
A CASB extends AI visibility to some degree — proxying traffic to enforce DLP policies and URL filtering — but proxy architectures see network traffic and domain metadata, not what a user types into a form. A CASB can tell you an employee visited ChatGPT. It cannot tell you what they put into it.
Key takeaway: Endpoint security sees the device. Identity security sees the login. Neither one is watching what happens inside the session in between. That's not a flaw in any individual tool — it's a structural consequence of building security controls that were never designed to look there.
How can you close the AI visibility gap?
The CSO article highlights a tension that any real answer has to account for: CISOs are not just trying to see more, they're trying to govern AI use without blocking it.
The CISO of Cast & Crew called this out directly: "it's important for me as a CISO and as a business leader to not put up barriers and block AI but to build up guardrails."
Heavy-handed blocking of AI tools will only increase the shadow AI it's trying to prevent. Employees who can't access a useful AI tool through sanctioned channels will find it through unsanctioned ones, outside any visibility or policy at all.
That means the framework for closing the gap has three requirements:
Discover. You can't govern what you can't see. The starting point is a complete, real-time inventory of every AI and SaaS application being accessed across your workforce — not just what's in your IDP, but everything being accessed through the browser, including tools operating entirely outside SSO. Organizations that have deployed browser-native visibility have typically found hundreds of unsanctioned applications within the first 48 hours. That number is not a failure of your team. It's an accurate picture of how modern work happens now.
Monitor. Discovery tells you what exists. Monitoring tells you what's being done, and by whom, within those applications. This means visibility into the activity inside browser sessions — every prompt, every form submission, every file upload — not just that an application was accessed.
This distinction matters because the risk isn't in the tool, it's in the interaction. A developer using an approved AI coding assistant to refactor internal logic looks different from a developer pasting proprietary source code into a personal ChatGPT account. Both sessions touch the same domain. Only one is a governance problem.
Enforce. Point-of-intent enforcement refers to the ability to act inside an active browser session, in real time, at the moment a user is about to submit a form, enter credentials on a suspicious page, or paste sensitive data into an unsanctioned tool — before that action completes. This is the line between observation and control. Dashboards that record what happened don't stop it. Detection after your employee hits the “submit” button is too late.
This isn't about catching people in “gotcha” moments. It's about being a scout and a guide at the same time. You need to be able to see when a user is about to upload a file to an AI tool that isn’t approved, but you also need to educate them with a warning that the tool they're about to use hasn't been reviewed against corporate data privacy requirements.
Enforcement for AI use means blocking only when a policy is actually being violated and staying invisible the rest of the time. People make mistakes. They also make good decisions when they have the right information at the right moment. Context-aware enforcement is what makes that possible — intervening when it matters, invisible when it doesn't.
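The Discover/Monitor/Enforce logic above can be sketched as a single policy decision. This is a hypothetical illustration, not Neon Cyber's actual API: the policy shape, tool list, and verdict names are invented, and a real deployment would evaluate this inside a browser extension, intercepting the submit or paste event before it completes:

```javascript
// Hypothetical policy: a sanctioned-tool list plus always-block content rules.
const policy = {
  sanctionedAiTools: ["chat.openai.com"],   // approved through review
  blockPatterns: [/BEGIN RSA PRIVATE KEY/], // content that is never allowed out
};

// Decide what to do at the point of intent, given destination and content.
// "block"  = an actual policy violation, stopped before it completes.
// "warn"   = unreviewed tool: educate the user, don't block.
// "allow"  = sanctioned tool, no violating content: stay invisible.
function evaluateAction(domain, text) {
  if (policy.blockPatterns.some((re) => re.test(text))) return "block";
  if (!policy.sanctionedAiTools.includes(domain)) return "warn";
  return "allow";
}

evaluateAction("chat.openai.com", "Refactor this helper function"); // "allow"
evaluateAction("some-new-ai-tool.example", "Summarize this doc");   // "warn"
evaluateAction("some-new-ai-tool.example",
               "-----BEGIN RSA PRIVATE KEY-----");                  // "block"
```

Note the ordering: content rules are checked before the tool list, so pasting a private key is blocked even into a sanctioned tool, while an unreviewed tool with benign content draws a warning rather than a wall.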
You can't close an architectural gap with operational fixes
The CISOs in the CSO Online piece are describing what it feels like to run a security program against a gap that sits one layer beyond where all those tools operate.
Every major shift in how work happens has eventually required a new control layer. Firewalls for the network perimeter. EDR for the endpoint. CASB for cloud application traffic. Each time, the new layer didn't replace what came before — it filled the blind spot the existing stack couldn't reach.
The browser has been the notable exception. It's where work happens. It's where AI tools are accessed, where sensitive data moves through prompts and forms, where shadow SaaS proliferates outside any identity provider. And it's the one layer of the enterprise security stack that has never had a purpose-built control.
Browser security fills that gap — not by replacing your EDR, DLP, or SSO, but by operating where none of them can: inside the session itself, at the point where a user is interacting with an application, submitting a form, or pasting data into an AI prompt. A lightweight browser extension, deployed through your existing MDM or GPO in minutes, gives your security team visibility into every AI and SaaS interaction across your workforce — including everything operating outside SSO — and the ability to enforce policy at the exact moment it matters, before data leaves the browser.
See what's actually running across your workforce.
Neon Cyber surfaces a complete inventory of every AI and SaaS application in active use — including everything outside your SSO — and pinpoints who’s putting data into unapproved AI tools, all within 48 hours of deployment. Book a discovery session.