Vercel hosts millions of web applications. If you’ve deployed anything on the modern web — a Next.js app, an e-commerce storefront, a SaaS product — there’s a reasonable chance Vercel’s infrastructure was involved. The company is foundational to the modern developer ecosystem.

On April 19, 2026, Vercel announced it had been breached.

The attackers didn’t hack Vercel directly. They hacked a company called Context AI — a small AI developer tooling startup whose product a Vercel employee used. They used that connection, through OAuth, to take over the employee’s Google account. And from there, they got inside Vercel’s systems.

The threat actor using the ShinyHunters persona is now selling the stolen data for $2 million.


The Attack Chain: Step by Step

This breach is worth understanding in detail because it illustrates a risk that affects every developer who uses third-party AI tools connected to their work accounts.

Step 1: Infostealer infection at Context AI

In February 2026, an employee of Context AI — a startup building AI-powered developer workflow tools — was infected with Lumma Stealer, a commodity infostealer malware available on criminal marketplaces for a few hundred dollars per month. Lumma Stealer harvests credentials stored in browsers and applications.

The Context AI employee’s credentials — including OAuth tokens — were harvested by the malware.

Step 2: OAuth pivot to a Vercel employee’s Google account

Context AI’s product connects to users’ Google accounts via OAuth — a standard authentication mechanism that allows third-party apps to act on your behalf within services you’ve authorized.

One of Context AI’s users was a Vercel employee who had connected their work Google account to Context AI. Using the stolen OAuth tokens from Context AI’s systems, the attacker was able to authenticate to Google as this employee — without ever needing the employee’s actual password.

OAuth tokens can have long expiration periods. If a stolen token is still valid, it grants access as if the legitimate user were logged in.
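The arithmetic behind that risk is simple. A minimal sketch (the dates match the reported timeline; the 90-day lifetime is a hypothetical policy value, not anything Vercel or Context AI has confirmed):

```python
from datetime import datetime, timedelta

def token_still_valid(issued_at: datetime, ttl: timedelta, now: datetime) -> bool:
    """Return True if a token issued at `issued_at` with lifetime `ttl`
    would still be accepted at time `now`."""
    return now < issued_at + ttl

# A token stolen during the February infection window...
stolen = datetime(2026, 2, 10)       # hypothetical infection date
discovered = datetime(2026, 4, 19)   # breach announced

# ...is still usable at discovery if the token lives for 90 days.
print(token_still_valid(stolen, timedelta(days=90), discovered))  # True
# With a one-hour access-token lifetime, it would have died in February.
print(token_still_valid(stolen, timedelta(hours=1), discovered))  # False
```

Short-lived access tokens paired with revocable refresh tokens shrink this window from months to minutes.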

Step 3: Internal Vercel systems access

From the hijacked Google account, the attacker moved laterally into Vercel’s internal systems. Google Workspace is deeply integrated into most companies’ infrastructure — access to someone’s work Google account often means access to internal documents, email history, shared drives, and in many cases, other connected services.

Step 4: Credential harvest and data exfiltration

Inside Vercel’s systems, the attacker found customer credentials that were stored without encryption. The harvested data included:

  • Google Workspace credentials
  • API keys and login tokens for Supabase, Datadog, and Authkit
  • The [email protected] account credentials

These are infrastructure credentials — the keys to other services. An attacker with a Supabase API key can potentially access a database. An attacker with Datadog credentials can read application logs. The blast radius extends beyond Vercel itself.


ShinyHunters: The Threat Actor

The breach was claimed by a threat actor using the ShinyHunters persona — one of the more prolific and technically capable cybercriminal groups active in recent years. ShinyHunters has previously claimed responsibility for breaches at Ticketmaster, Snowflake-connected companies, and numerous other high-profile targets.

The stolen Vercel data is reportedly being marketed at $2 million on breach forums. At that price point, the buyer pool is limited to well-resourced criminal organizations or nation-state-adjacent actors — which suggests the data either contains high-value targets or is being used as leverage for extortion.

Vercel has notified affected customers and says it believes the number of affected accounts is “quite limited.” The company confirmed that no npm packages it publishes had been compromised and that the software supply chain remained safe — meaning the breach was a credential compromise, not a code injection that would affect downstream users of Vercel-hosted applications.


The AI Tool Problem

This breach illustrates a risk that has emerged specifically from the AI developer tooling boom of the past three years.

AI coding assistants, AI-powered code review tools, AI documentation generators, AI workflow automation — all of these products connect to developers’ work accounts via OAuth. They have legitimate access to Google accounts, GitHub repositories, Slack workspaces, and Jira projects.

When those AI companies have security incidents — and small startups often have far weaker security postures than large enterprises — the OAuth connections they hold become pivot points into their customers’ companies.

The Vercel breach followed a classic supply chain escalation pattern:

  1. Compromise a small, less-secure company
  2. That company has OAuth access to employees at larger, more valuable companies
  3. Use that OAuth access to pivot into the target
  4. Move laterally to high-value data or systems

This is the same pattern as the 2020 SolarWinds attack and the 2023 3CX attack — except instead of a sophisticated state actor, it required a $300 infostealer and a stolen OAuth token.


The OAuth Security Problem

OAuth is not inherently insecure. When implemented correctly — with short-lived access tokens, narrow scopes, and refresh-token rotation — OAuth is a reasonable authorization mechanism.

In practice, many OAuth implementations have significant weaknesses:

Tokens that don’t expire. Some applications issue OAuth tokens with very long (or no) expiration periods. A token stolen in February might still be valid in April. Vercel’s breach timeline suggests this was the case — the Lumma infection was in February, the breach was discovered in April.

Overly broad scopes. OAuth allows apps to request specific permissions. An app that needs to read your calendar doesn’t need access to your email. But many apps request maximum permissions by default, and users click through consent screens without scrutinizing scope.
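A scope review doesn’t have to be elaborate; it’s a set difference. A sketch using real Google scope URIs, with a hypothetical calendar-scheduling tool as the example app:

```python
def excess_scopes(requested: set[str], needed: set[str]) -> set[str]:
    """Scopes an app asks for beyond what its feature set requires."""
    return requested - needed

# Hypothetical consent screen for a calendar-scheduling tool:
requested = {
    "https://www.googleapis.com/auth/calendar.readonly",
    "https://www.googleapis.com/auth/gmail.readonly",  # why does it need mail?
    "https://www.googleapis.com/auth/drive",           # why full Drive access?
}
needed = {"https://www.googleapis.com/auth/calendar.readonly"}

for scope in sorted(excess_scopes(requested, needed)):
    print("red flag:", scope)
```

Every scope in that excess set is something an attacker inherits for free if the app’s tokens are stolen.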

No notification on use. Unlike password logins, OAuth token usage typically doesn’t trigger the kind of alerts that flag unauthorized access to users. The Vercel employee whose Google account was hijacked may have had no indication anything was wrong.
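The kind of server-side check that could have caught this is not exotic. A deliberately simplified sketch — real systems weigh many more signals, and the IP prefixes below are documentation-only TEST-NET ranges, not real data:

```python
def should_alert(use_ip_prefix: str, known_prefixes: set[str]) -> bool:
    """Flag OAuth token use originating from a network the account
    holder has never been seen on before."""
    return use_ip_prefix not in known_prefixes

known = {"203.0.113", "198.51.100"}      # employee's usual networks
print(should_alert("192.0.2", known))    # unfamiliar network -> True
print(should_alert("203.0.113", known))  # familiar network -> False
```

Identity providers increasingly offer this kind of anomaly detection for token use, but it is rarely on by default — and the downstream app never sees it at all.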


What Developers Should Do Now

The Vercel breach should prompt a security audit of every AI tool in your development workflow that has OAuth access to your work accounts.

Audit your OAuth connections immediately. For Google: go to myaccount.google.com/security → “Third-party apps with account access.” For GitHub: Settings → Applications → “Authorized OAuth Apps.” Revoke anything you don’t actively use.

Apply the principle of least privilege. When connecting new tools, review the OAuth scope they’re requesting. If a note-taking app is asking for access to your email, that’s a red flag. Revoke and find an alternative that requires only the permissions it actually needs.

Use separate Google accounts for sensitive work. Many developers now maintain a secondary Google account for third-party tool connections, keeping it separate from the account that has access to production systems.

Ask your AI tools about their security posture. If a startup’s product has OAuth access to your work accounts, they’re a security dependency. Ask them about their MFA policy, their token expiration practices, and whether they’ve been through a security audit. If they can’t answer, that’s an answer.

Assume credentials are compromised until proven otherwise. If you used Context AI and connected it to your work Google account, treat those credentials as potentially compromised and rotate them — regardless of whether Vercel has told you that you’re specifically affected.
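Order matters when you rotate: revoke the OAuth grant first, so the attacker loses their foothold before you start rotating everything else in view of it. A sketch of that ordering — the service names come from the breach report, but the helper itself is purely illustrative:

```python
# Kill the pivot point (the OAuth grant) before rotating downstream secrets.
ROTATION_ORDER = ["oauth_grant", "api_key", "password"]

def rotation_plan(credentials: list[dict]) -> list[str]:
    """Sort compromised credentials into the order they should be rotated."""
    ranked = sorted(credentials, key=lambda c: ROTATION_ORDER.index(c["kind"]))
    return [f"rotate {c['kind']}: {c['name']}" for c in ranked]

plan = rotation_plan([
    {"kind": "api_key", "name": "Supabase service key"},
    {"kind": "password", "name": "Google Workspace account"},
    {"kind": "oauth_grant", "name": "Context AI -> Google"},
    {"kind": "api_key", "name": "Datadog API key"},
])
for step in plan:
    print(step)
```

The point of writing the plan down, even this crudely, is that rotation under pressure is exactly when steps get skipped.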

The AI tooling ecosystem is young, fast-moving, and largely unvetted from a security standpoint. The Vercel breach is the first high-profile casualty of that risk. It will not be the last.