Vercel’s AI Tool “v0” Massively Abused to Create Fraudulent Pages
Researchers at Okta have uncovered a disturbing new trend: unknown threat actors are abusing Vercel's generative AI tool, v0, to create fraudulent webpages that mimic legitimate login portals, part of a broader campaign to scale phishing attacks with generative AI.
What Is v0 and How It’s Being Misused
Vercel’s v0.dev is designed to let developers and designers generate landing pages and full applications using natural language prompts. The tool is intended to streamline front-end design, but threat actors are now exploiting its simplicity to generate near-flawless replicas of login pages from well-known brands—including, in at least one case, a customer of Okta.
Researchers also found that attackers hosted additional malicious resources on Vercel’s infrastructure, including forged brand logos, suggesting a calculated effort to exploit the platform’s trusted reputation to evade security filters and user suspicion.
Upon notification, Vercel removed access to the identified phishing pages, but the findings raise concerns about how generative AI is being repurposed for cybercrime.
A Shift in Phishing Tactics
Unlike traditional phishing kits—which require coding skills, time, and effort to configure—v0 and similar open-source tools allow attackers to create fake pages with a simple prompt. This shift significantly lowers the technical barrier for would-be cybercriminals.
“The observed activity confirms that modern threat actors are actively experimenting with generative AI tools, weaponizing them to optimize and enhance their phishing capabilities,” Okta researchers wrote.
“Exploiting a platform like Vercel’s v0.dev allows attackers to rapidly generate high-quality, deceptive phishing pages, increasing the speed and scale of their operations.”
In short, cybercriminals are now scaling their operations with AI, using design-grade tools to automate and professionalize fraud.
AI’s Growing Appeal in the Cybercrime World
Vercel’s v0 is just the tip of the iceberg. Last week, Cisco’s security team released a separate report outlining how cybercriminals are increasingly using large language models (LLMs) to support a range of malicious activities.

Their research highlighted the rise of “uncensored” LLMs, specifically trained or fine-tuned for hacking tasks. One model, WhiteRabbitNeo, is being promoted as an AI assistant for (Dev)SecOps teams, allegedly capable of performing both offensive and defensive cybersecurity tasks.
Other LLMs now circulating in underground forums include:
- WormGPT
- FraudGPT
- GhostGPT
- DarkGPT
- DarkestGPT
These models are being marketed to cybercriminals as multi-purpose tools that can:
- Write and obfuscate malware
- Generate phishing emails and fake websites
- Create “invisible” malware payloads
- Locate security vulnerabilities and leaked data
- Automate reconnaissance and exploitation tasks
The common theme is speed, scale, and simplicity: advanced capabilities placed in the hands of lower-skilled attackers.
Why This Matters
This trend reveals a broader transformation in the cybercrime landscape. Generative AI is not just being explored by attackers—it’s being operationalized. By reducing the skill threshold, AI tools empower novice actors to launch highly convincing attacks once reserved for advanced threat groups.
The abuse of platforms like Vercel also highlights a growing challenge for tech providers: how to balance innovation and open access with security and misuse prevention.
As researchers continue to track these developments, one thing is clear: the weaponization of generative AI is well underway, and the security community must adapt quickly to confront its implications.