Your Next Supply Chain Breach May Start in the Browser

Software supply chain security conversations have focused almost entirely on code: malicious packages, compromised build pipelines, poisoned dependencies. The Vercel incident runs through none of that. It runs through a browser session and a consent screen — an identity connection made in three seconds, on a deprecated consumer product, that persisted silently until an attacker found it. The new supply chain attack surface isn't just in your code. It's in your browser.

Mark St. John, Mary Yang · Published on: Apr 20, 2026

One developer. One browser session. One "Allow all" click on a consumer AI tool their IT team had never heard of. That was enough to open a backdoor into one of the most trusted infrastructure platforms running under the world's enterprise software teams.

This past weekend, Vercel — a cloud platform trusted by leading global brands and relied upon by hundreds of thousands of developers — disclosed a security incident that originated not with an exploit of its hardened production infrastructure, but with a consumer AI app called Context.ai.

A Vercel employee signed up for Context.ai's AI Office Suite using their Vercel enterprise Google account, outside of any IT or procurement review. They granted broad OAuth permissions to Context.ai. Context.ai was subsequently compromised. The attacker inherited that dormant OAuth token, pivoted into the employee's Google Workspace, and from there into Vercel's internal environments — exposing environment variables and a limited subset of Vercel customer credentials.

This is not a story about a startup that cut corners. Vercel is a Gartner Magic Quadrant Visionary. They’ve engaged Mandiant. They had a CISO and a security program. None of that mattered, because the path into their enterprise started with an unsanctioned AI tool, accessed via the browser through a simple consent screen, and none of it was visible to Vercel’s security stack.

This is the breach pattern your current controls aren't designed to stop.

How did the Vercel breach actually happen?

The attack chain is worth walking through precisely, because it's the pattern, not just the incident, that matters for every security leader reading this.

Step 1: The shadow AI signup

A Vercel employee signed up for Context.ai's consumer AI Office Suite using their enterprise Google account. This appears to have been a self-service signup, not a vetted enterprise procurement. Context.ai itself has confirmed: "Vercel is not a Context customer."

This is the classic shadow AI/IT problem: a consumer app, bound to an enterprise identity, and invisible to the security team. No IT approval. No procurement review. No security assessment.

Furthermore, Context.ai noted that the affected product is a deprecated, legacy consumer product, which, based on its corporate website, it no longer appears to sell. Unfortunately, that doesn’t make the breach any less critical. Deprecated software doesn't deprecate the OAuth grants it issued.

Step 2: The credential harvest

Hudson Rock's research found that a Context.ai employee was compromised by Lumma Stealer malware in February 2026 — weeks before this incident surfaced. The stolen credentials included Google Workspace access, keys for Supabase, Datadog, and Authkit, and the support@context.ai account. Based on Hudson Rock's assessment, the attacker now had easy access to Context's environment.

Step 3: The OAuth pivot

With access to Context.ai's AWS environment, the attacker likely gained access to the OAuth tokens Context issued to its consumer users — including the one belonging to the Vercel employee. That token still had "Allow all" permissions on the employee's Google Workspace. The attacker used it to take over the account, then pivoted into Vercel's internal environments. Vercel's own assessment: the threat actor demonstrated "sophisticated" operational velocity and a "detailed understanding" of Vercel's systems. They also highlight that this attack is likely to affect "hundreds of users across many organizations."

The damage included access to non-sensitive environment variables and a limited subset of Vercel customer credentials; secrets marked “sensitive,” which Vercel stores in a manner that prevents them from being read back, were not exposed. Still, ShinyHunters claims to have the data and is now selling it for $2 million.

Key takeaway: Supply chain security conversations have focused almost entirely on code. The Vercel incident ran through none of that: a browser session and a three-second consent screen, on a deprecated consumer product, created an identity connection that persisted silently until an attacker found it. The new supply chain attack surface isn't just in your code. It's in your browser.

Why is this a textbook shadow AI incident, and why does that matter?

Shadow AI refers to AI tools and applications that employees access outside official procurement, SSO enrollment, or security review. Simply put, it’s the expansion of shadow IT to AI, and the core risk is the same: the security team doesn't know employees are using these tools until something goes wrong.

The Vercel/Context.ai incident maps directly to this pattern. From Vercel's perspective, Context's consumer suite was a shadow AI app — accessed in the browser, authenticated with enterprise credentials, but never in the app catalog, never reviewed, never governed. The fact that it was a consumer-tier product rather than an enterprise integration made it more dangerous, not less: consumer AI UX is optimized for frictionless adoption, not for risk reduction or security review.

The research backs this up: 98% of enterprise employees are using unsanctioned apps, including shadow AI (Varonis, 2025). These aren’t rogue actors; they're employees trying to be productive. But each of those signups represents a dormant OAuth grant, an unmonitored identity connection, and a blast radius nobody has mapped.

The question is: why didn't existing enterprise controls catch the Context.ai signup before it became a liability?

This gap isn't a misconfiguration — it's a structural blind spot. The browser is the operating system for modern business, and it's the one place no traditional enterprise control has ever sat.

What makes OAuth consent the sharpest edge of this attack surface?

The specific mechanism worth examining here — and the one least covered in the initial breach coverage — is the OAuth consent click as an enterprise attack surface.

When the Vercel employee clicked "Allow all" on Context.ai's OAuth consent screen, they:

  • Granted a consumer application access to their enterprise Google Workspace account — calendar, email, drive, contacts.
  • Established a persistent credential connection that would remain active regardless of whether they ever used Context.ai again.
  • Created an attack path that would remain open as long as that OAuth grant existed — whether or not Context.ai ever became a sanctioned tool.

The OAuth consent screen is a critical enterprise security moment that occurs entirely in the browser, lasts approximately three seconds, and is currently ungoverned in most security programs. The enterprise's IDPs, CASBs, DLP tools, and EDRs are all downstream of that click — they inherit whatever access was granted, but none of them had any say in it.
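To make that three-second moment concrete, here is a minimal sketch of what governing the consent click could look like in code. The scope URLs below are real Google OAuth scopes; the notion of a "broad scope" watchlist is our own illustrative policy, not a Google standard or Neon's actual implementation.

```python
# Sketch: flag OAuth grants that include full-access Google Workspace
# scopes -- the kind an "Allow all" consent screen typically bundles.
# The BROAD_SCOPES set is an illustrative policy choice.

BROAD_SCOPES = {
    "https://mail.google.com/",                   # full Gmail access
    "https://www.googleapis.com/auth/drive",      # full Drive access
    "https://www.googleapis.com/auth/calendar",   # full Calendar access
    "https://www.googleapis.com/auth/contacts",   # full Contacts access
}


def is_high_risk_grant(scopes: list[str]) -> bool:
    """Return True if a grant includes any full-access scope."""
    return any(s in BROAD_SCOPES for s in scopes)


# An "Allow all" style consent typically bundles several of these:
allow_all_grant = [
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar",
]
print(is_high_risk_grant(allow_all_grant))  # True
```

A narrow grant such as `["openid", "email"]` would pass this check; the point is that the risk decision can be made at consent time, before the token exists.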

Key takeaway: The "Allow all" OAuth consent click is a structural enterprise security failure point that occurs entirely in the browser. Identity providers govern what happens after a sanctioned login. They do not govern the moment an employee grants a consumer application access to their enterprise account — because that decision was never in their line of sight.

Why can't your security stack see browser-based supply chain risk?

Security teams have spent a decade hardening the perimeter, locking down endpoints, and building comprehensive identity programs. Most of that investment assumes the threat is inbound — something external trying to get in.

The Vercel incident is a different architecture. The risk originated inside the organization — with an employee doing something completely normal by 2026 standards (signing up for an AI tool) — and the attack path flowed outward through a vendor, then back in through an inherited credential. The initial decision point was a browser session. The blast radius was defined by what happened in that browser session.

This is how modern supply chain risk works: knowledge workers wire up their AI and SaaS supply chain directly in the browser — discovering tools, accepting OAuth prompts, authenticating with corporate accounts, and uploading data — outside of every governance process their security team has built. The browser is where that wiring happens. It's also the place where no enterprise security tool currently has operational visibility.

Your IDP only sees 15–20% of actual SaaS and AI app usage: the sanctioned apps wired into SSO. Your CASB operates at the DNS and network layer; it doesn't see encrypted browser interactions or in-session authentication events. Your EDR governs the endpoint, not what an employee submits into a web form. The result is a comprehensive blind spot at exactly the point where supply chain exposure is being created.

How does browser-native security stop the next shadow AI supply chain breach?

Browser-native security operates inside the browser itself — not at the network layer, not on the endpoint, but at the exact point where employees discover new tools, authenticate with corporate or personal identities, and grant OAuth permissions. In the Vercel/Context.ai scenario, that means two specific interventions: flagging a Vercel enterprise identity authenticating to an unsanctioned consumer application before that relationship became a liability, and enforcing policy at the OAuth consent screen before the "Allow all" click created a persistent credential bridge to an unvetted vendor.

When a vendor breach does occur — and it will — browser-native forensic logs provide the scoping data that matters most: which users interacted with the compromised app, what data was entered or uploaded, and which identities were in play. That's what compresses incident response from weeks to hours. Browser security won't stop a third-party vendor from being compromised, but it can help determine whether that compromise becomes your breach.

What should CISOs and security leaders do right now?

Four immediate actions, in order of urgency:

1. Check your Workspace for the compromised OAuth application now.

If you’re using Google Workspace, go to your admin console and search for the client ID Vercel disclosed:

110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

If it appears, treat your Workspace as potentially accessed and begin scoping immediately. More broadly, use this incident as the trigger to audit all third-party OAuth applications with broad Workspace scopes. This audit, while point-in-time, will give you a view of unexpected applications and what’s potentially been exposed, helping your team prioritize investigations.
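As a rough sketch of that first check, the snippet below scans an exported list of third-party OAuth grants for the disclosed client ID. The record shape (`user`, `clientId`, `scopes`) is hypothetical; adapt it to however you export grants (for example, the Admin SDK Directory API `tokens.list` endpoint or an admin console CSV).

```python
# Sketch: scan an exported list of third-party OAuth grants for the
# client ID Vercel disclosed. Record field names are illustrative.

COMPROMISED_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)


def affected_users(grants: list[dict]) -> list[str]:
    """Return users holding a grant for the compromised client ID."""
    return sorted(
        g["user"] for g in grants
        if g["clientId"] == COMPROMISED_CLIENT_ID
    )


# Hypothetical export data:
grants = [
    {"user": "dev@example.com",
     "clientId": COMPROMISED_CLIENT_ID,
     "scopes": ["https://mail.google.com/"]},
    {"user": "pm@example.com",
     "clientId": "1234-sanctioned-app.apps.googleusercontent.com",
     "scopes": ["openid"]},
]
print(affected_users(grants))  # ['dev@example.com']
```

Any user returned here is a scoping lead, not a conclusion: their Workspace should be treated as potentially accessed until proven otherwise.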

2. Map your shadow AI & IT exposure.

Starting from your OAuth audits, expand your shadow AI and IT inventory collection. If you don’t have a browser-native solution to support this, manually asking each department leader to run a self-reported survey is a start, even if it is tedious.
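Mechanically, the first pass is just a set difference: apps observed in your OAuth audits and surveys, minus the apps in your sanctioned catalog. A minimal sketch, with hypothetical app names:

```python
# Sketch: first-pass shadow app inventory as a set difference between
# observed app usage (from OAuth audits, surveys) and the sanctioned
# catalog. All app names here are hypothetical.

def shadow_apps(observed: set[str], sanctioned: set[str]) -> set[str]:
    """Apps in use that never went through procurement or review."""
    return observed - sanctioned


observed = {"context.ai", "slack", "notion", "randomai.app"}
sanctioned = {"slack", "notion"}
print(sorted(shadow_apps(observed, sanctioned)))  # ['context.ai', 'randomai.app']
```

The output is your investigation queue: each entry is a potential dormant OAuth grant and an unmapped blast radius.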

3. Set explicit policy for corporate credential use on consumer and personal SaaS/AI tools — and enforce it where the decision happens.

A policy document that no one reads at the moment of the OAuth consent click is not a control. Enforcement needs to be in-browser, at the point of action, or it isn't enforcement.

4. Rebuild your IR playbook with the assumption that initial compromise occurs upstream of your stack.

The Vercel playbook starts with a vendor breach and a consumer OAuth grant — not a CVE, not a misconfigured S3 bucket. Your IR runbooks need to account for the scenario where the blast radius is defined by browser behavior inside your organization, not a vulnerability in your own infrastructure.

Why we built Neon

When my co-founder, Cody, and I started Neon Cyber, we kept coming back to the same observation: the browser is where work happens, and it's also where enterprise security is blind. EDR knows the endpoint. Identity knows the sanctioned apps. The network knows the traffic, but only if you’re on-site or running traffic through a VPN.

Nobody knows what's happening inside the browser session — which AI tool an employee just authenticated to, which OAuth scopes they just granted, which corporate data just moved into an unsanctioned application.

The Vercel incident is a precise illustration of that gap: a public incident at a sophisticated company with an enterprise security program, taken down by a consumer AI signup and a three-second consent screen.

This is the breach pattern that's going to define the next wave of supply chain incidents. The question is whether enterprise security programs move with it.

If you're a CISO, VP, or Director of Security reassessing your AI governance and SaaS visibility posture in the wake of Vercel, we're happy to show you what Neon sees in your environment — typically within 48 hours of deployment, with no new infrastructure and no changes to how your team works. Schedule a time to chat today.
