I read the Trend Micro report on my phone at 1am last night and haven't been able to stop thinking about it since. The timeline is genuinely absurd.

February 2026. An employee at Context.ai downloads a Roblox cheat. A Roblox cheat. Lumma Stealer comes bundled with it, grabs session cookies, credentials, everything. That employee had access to internal systems at a company that handles OAuth integrations for enterprise customers.

March 2026. The attacker uses Context.ai's compromised infrastructure to pivot into a Vercel employee's Google Workspace account. This Vercel employee had signed up for Context.ai's "AI Office Suite" with their enterprise credentials and granted broad OAuth permissions. A Vercel engineer gave a third-party AI tool access to their corporate Google account because the onboarding flow asked for it and they clicked through.

April 19. Guillermo Rauch confirms everything. Non-sensitive environment variables were accessed and exfiltrated. A threat actor using the ShinyHunters name is asking $2 million for the data, though the actual ShinyHunters group says they're not involved. Vercel published their incident bulletin the same day.

Wait, it wasn't actually plaintext

Okay, I need to correct something I got wrong in my initial read of this. My first reaction was "they stored env vars in plaintext??" but that's not exactly what's happening. All Vercel env vars are encrypted at rest. The "sensitive" checkbox doesn't toggle encryption on and off. What it does is change how the decryption works.

Non-sensitive vars can be decrypted by the dashboard backend. You can view them, edit them, copy them from the UI. Sensitive vars can only be decrypted at build time. Write-only: once you set them you can't see the value again, only the app can read them at runtime.
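The distinction is easier to see as a toy model than as prose. This is my own sketch of the access pattern, not Vercel's actual implementation (the class and method names are mine, and there's no real crypto here):

```python
class EnvStore:
    """Toy model of the two decryption scopes. Not real crypto, just the access rules."""

    def __init__(self):
        self._vars = {}  # name -> (value, sensitive flag)

    def set(self, name, value, sensitive=False):
        self._vars[name] = (value, sensitive)

    def read_from_dashboard(self, name):
        # The dashboard backend can only decrypt non-sensitive vars.
        value, sensitive = self._vars[name]
        if sensitive:
            raise PermissionError(f"{name} is write-only outside of builds")
        return value

    def read_at_build_time(self, name):
        # Builds can decrypt everything, sensitive vars included.
        value, _ = self._vars[name]
        return value


store = EnvStore()
store.set("ANALYTICS_ID", "ua-123", sensitive=False)
store.set("STRIPE_KEY", "sk_live_abc", sensitive=True)

print(store.read_from_dashboard("ANALYTICS_ID"))  # readable in the UI
print(store.read_at_build_time("STRIPE_KEY"))     # only a build sees this
```

An attacker who owns the dashboard backend gets `read_from_dashboard` for free. That's the whole incident in two methods.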

So when the attacker got into Vercel's internal systems, they could access the backend that decrypts the non-sensitive vars. The sensitive ones appear to be safe. Vercel says they have no evidence the sensitive vars were accessed.

This is actually worse than a simple "plaintext" screwup because it's more subtle. The encryption existed. The infrastructure was there. But the default was the less protected option, and most developers never changed it, because why would you. You see a text field, you paste your API key, you hit save. Nobody goes hunting for a checkbox that changes the decryption scope of their environment variable. You just assume the platform handles that.

Vercel has since changed the default to sensitive. Which is an admission that the old default was wrong. But every env var created before that change is still sitting there in the less protected state unless someone manually went back and toggled each one.
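Finding those pre-change vars is scriptable. Here's a sketch that triages a dump of env var records; I'm assuming a JSON shape like what `vercel env ls` or the project API gives you, so check the field names against your own output before trusting this:

```python
import json

# Hypothetical dump of env var records. The "type" values and field names
# are my assumption about the API's shape, not confirmed from the docs.
records = json.loads("""[
  {"key": "DATABASE_URL", "type": "encrypted", "target": ["production"]},
  {"key": "STRIPE_KEY",   "type": "sensitive", "target": ["production"]},
  {"key": "ANALYTICS_ID", "type": "encrypted", "target": ["preview"]}
]""")

def needs_review(record):
    # Anything not explicitly marked sensitive predates the new default
    # (or was created without the checkbox) and is dashboard-readable.
    return record["type"] != "sensitive"

for r in filter(needs_review, records):
    print(f"rotate + re-add as sensitive: {r['key']} ({', '.join(r['target'])})")
```

Two minutes of scripting beats clicking through every project's settings page by hand.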

Every AI tool you authorized is a door you left open

I've been watching the AI tooling space for two years and there's a pattern that bugs me. Every AI productivity tool requires broad access to function. That's the whole point. They need your docs, your emails, your code, your workspace. The value proposition is the access.

Every AI tool you plug into your workflow is an attack surface multiplier. Context.ai wasn't some fly-by-night operation. It was a Y Combinator company. Enterprise customers. Supposedly SOC 2 compliant. And one employee downloading game cheats on a work machine turned the whole thing into a supply chain weapon.

After reading this, I went through about a dozen AI tools I've personally authorized in the last year. Nine of them have Google Workspace OAuth permissions that include reading all emails and accessing all Drive files. Nine. I authorized every one of them without reading the permissions, because the onboarding flow asked and I was in a hurry.

Actually, I started counting how many OAuth apps I had authorized total and stopped at 23 because it was getting depressing. I don't even remember what half of them do. A meeting summarizer I used twice in January still has full email access. That's on me, but it's also on every OAuth dialog ever designed, because they're all terrible.

6 hours to rotate one project. Now multiply.

Vercel's incident page says "limited customer credentials" were compromised. BleepingComputer says the attacker is actively selling data. Crypto developers are scrambling because wallet infrastructure ran through Vercel env vars. The immediate damage is bad enough.

But the part I keep coming back to is the trust cost. Every developer on Vercel now has to go through every env var they ever set, figure out which ones weren't marked sensitive, rotate every credential, and decide whether they still trust the platform. That's hundreds of thousands of projects. Some people report it took them 6+ hours just to rotate everything on a single project.
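Part of those hours is pure mechanical churn, which you can at least script. This sketch emits the rotation commands for one project instead of running them, so you can eyeball the list first. `vercel env rm` and `vercel env add` are real CLI subcommands; the `--sensitive` flag is my reading of the current CLI, so verify it against `vercel env --help` before running anything:

```python
# Vars flagged by an earlier audit as not marked sensitive. Hypothetical list.
to_rotate = [("DATABASE_URL", "production"), ("STRIPE_KEY", "production")]

commands = []
for name, env in to_rotate:
    # Remove the old var, then re-add it (you'll be prompted for the new,
    # freshly rotated value) with the sensitive flag set this time.
    commands.append(f"vercel env rm {name} {env} --yes")
    commands.append(f"vercel env add {name} {env} --sensitive")

print("\n".join(commands))
```

Rotating the credentials at the upstream providers is still manual, which is where most of those 6+ hours actually go.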

Multiply that by the active Vercel userbase and you're looking at millions of developer-hours spent on credential rotation this week. Nobody at Vercel wants anyone doing that math right now.

The breach-rotate-forget cycle

Honestly? Probably not much changes for most people. I've watched this pattern enough times. Breach happens. Posts get written. Keys get rotated for about a week. Then everyone goes back to pasting secrets into platform dashboard text fields because it's convenient and the alternatives require actual work.

AWS Secrets Manager. HashiCorp Vault. SOPS + age. Self-hosted infrastructure. Real options that real teams use. All require more setup than a text field. The gap between knowing what's secure and doing what's secure is measured entirely in convenience.
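Of those, SOPS + age is the lowest-lift. The whole workflow is three commands; a sketch assuming both tools are installed, with a placeholder public key you'd replace with your own:

```shell
# One-time: generate an age keypair. The public key is printed to stdout;
# keep key.txt out of the repo.
age-keygen -o key.txt

# Encrypt your .env against the public key (placeholder shown here) and
# commit only the encrypted file.
sops --encrypt --age age1yourpublickeyhere .env > .env.enc

# Decrypt locally when you need it.
SOPS_AGE_KEY_FILE=key.txt sops --decrypt .env.enc > .env
```

The secrets live encrypted in git, decryption needs a key that never touches a dashboard, and there's no backend anywhere that can decrypt them on your behalf. That last property is exactly what the Vercel incident was missing.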

I started looking into how many YC companies have had security incidents tied to... actually thats a different rabbit hole for a different post.

The one thing I am doing differently is what I'm calling the 12x audit. For every AI tool I authorize, I'm spending 12x the time I used to spend clicking "Allow" on actually reading what it requests. That's still only about two minutes per tool, since I was spending roughly ten seconds before. But two minutes would have caught the exact permission pattern that made this whole chain possible. Ten seconds didn't.

A Roblox cheat brought down one of the biggest deployment platforms on the internet. Not a zero-day. Not a nation-state. A game cheat that a Context.ai employee probably downloaded for their kid. The attack surface wasn't sophisticated. It was convenient. And convenience is the only product the entire AI tooling industry is actually selling.
