Vibe Break Chapter IV: The Lovable Inadvertence
The $1.8 billion AI coding unicorn that made security optional

TL;DR: Lovable (formerly GPT Engineer) reached unicorn status in record time by letting anyone build full-stack apps through chat. But in May 2025, security researchers revealed 170 of their users' apps were leaking sensitive data—names, emails, API keys, financial records, even personal debt amounts. The culprit? AI-generated code with broken security policies that novice developers couldn't spot. With 40-48% of AI-generated code containing vulnerabilities and vibe coding platforms racing to add millions of users, the Lovable incident exposes a fundamental tension: democratizing development means democratizing security failures. For tech leaders, the lesson is stark—speed without guardrails creates existential risk.
The 47-minute hack that exposed everything
On April 14, 2025, Daniel Asaria, an engineer at Palantir Technologies, decided to test Lovable's "top launched" showcase sites during his lunch break. Armed with just 15 lines of Python, he systematically probed endpoints like /admin, /data, /api/keys, and /prompts. What he found shocked him enough to post on X/Twitter to his 741,000+ followers:
"I just hacked multiple @lovable_dev 'top launched' sites. In less time than it took me to finish my lunch (47 mins), I extracted from live production apps: 💰 Personal debt amounts, 🏠 Home Addresses, 🗝️ API keys (admin access), 🔥 Spicy Prompts. Not as a hacker - as a curious dev with 15 lines of Python."
No brute force. No firewall evasion. No sophisticated exploits. Just basic HTTP requests to apps that anyone could access.
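To make the mechanics concrete, here is a minimal sketch in the spirit of the probe Asaria describes. The target URL is a placeholder and his actual script was never published, so treat this as an assumption-laden reconstruction rather than his code.

```python
# Hypothetical reconstruction of a "curious dev" probe: plain GET requests
# against common sensitive paths, no authentication, no exploit code.
import requests

SITES = ["https://example-app.lovable.app"]           # placeholder target
PATHS = ["/admin", "/data", "/api/keys", "/prompts"]  # paths named in the tweet

for site in SITES:
    for path in PATHS:
        resp = requests.get(site + path, timeout=5)
        if resp.ok and resp.text.strip():
            print(f"{site}{path} -> HTTP {resp.status_code}, "
                  f"{len(resp.text)} bytes returned with no credentials")
```

Anything that answers a request like this with real data is, by definition, readable by the entire internet.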
But Asaria wasn't the first to discover the problem. A month earlier, Matt Palmer, an engineer at competitor Replit, had identified the same vulnerability and tried to warn Lovable. What followed exposed not just technical failures, but organizational ones—and revealed the darker side of AI-powered "vibe coding" platforms racing toward hypergrowth.
CVE-2025-48757: A vulnerability affecting 10% of scanned Lovable apps
On March 20, 2025, Palmer was testing Linkable, a LinkedIn profile generator built with Lovable, when he noticed something alarming. By simply modifying REST API requests and removing authorization headers, he could access the entire user database—email addresses, personal information, everything. The app's Row Level Security (RLS) policies, which should have protected user data, were either missing or fatally misconfigured.
Palmer and his colleague Kody Low built an automated scanner to test the problem's scope. They analyzed 1,645 Lovable-created applications from the platform's showcase. The results were devastating: 170 apps (10.3%) had critical security flaws, exposing 303 vulnerable endpoints.
What was exposed
The leaked data wasn't trivial. Palmer's analysis uncovered:
- Personally identifiable information: Full names, email addresses, phone numbers, home addresses
- Financial data: Payment information, transaction histories, subscription details, personal debt amounts
- Developer credentials: API keys for Google Maps, Gemini, eBay, Stripe
- Business data: Customer records, admin access tokens
The vulnerability earned a CVSS score of 8.26 (High severity) and was assigned CVE-2025-48757. The official description: "An insufficient database Row-Level Security policy in Lovable through 2025-04-15 allows remote unauthenticated attackers to read or write to arbitrary database tables of generated sites."
The response that wasn't
Palmer followed responsible disclosure practices. On March 21, he emailed Lovable CEO Anton Osika with detailed vulnerability reports. Lovable confirmed receipt on March 24 but provided no substantive response. When Asaria independently discovered and publicly tweeted about the same issue on April 14, Palmer re-notified Lovable and initiated a 45-day coordinated disclosure window.
On April 24, Lovable released "Lovable 2.0" with a security scanner. But according to security analysis firm Superblocks, the scanner only checks whether RLS policies exist—not whether they're configured correctly. It was security theater, creating a false sense of protection while the fundamental architectural problem remained.
After the 45-day window expired with no meaningful fix, Palmer published the full CVE on May 29, 2025. The same day, Semafor broke the story with the headline: "The hottest new vibe coding startup Lovable is a sitting duck for hackers."
The architectural flaw at the heart of vibe coding
Understanding why 170 Lovable apps leaked data requires understanding how the platform works—and why that architecture shifts security responsibility to users who often lack the expertise to handle it.
Lovable generates full-stack web applications through natural language prompts. Users describe what they want in chat, and AI produces production-ready code using React for frontend and Supabase for backend services. The code deploys instantly with one click.
Here's the problem: Lovable's client-driven architecture makes direct REST API calls to Supabase databases from the browser using a public anon_key. Security relies exclusively on RLS policies—database-level rules that determine what data users can access. There's no server-side validation layer, no API gateway enforcing business logic, no defense in depth.
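To see what "direct REST API calls ... using a public anon_key" means in practice, here is a hedged sketch that follows Supabase's documented REST conventions. The project URL, key, and table name are placeholders; the point is that every value in this request ships to the browser, so anyone can replay it.

```python
# Sketch of the client-to-database pattern described above. With a correct
# RLS policy this request returns nothing sensitive; with a missing or
# broken policy it can return every row in the table.
import requests

SUPABASE_URL = "https://xyzcompany.supabase.co"  # visible in the JS bundle
ANON_KEY = "eyJhbGciOi..."                       # public by design

resp = requests.get(
    f"{SUPABASE_URL}/rest/v1/profiles",          # hypothetical table name
    headers={
        "apikey": ANON_KEY,
        # Note what is absent: no user JWT in an Authorization header.
    },
    params={"select": "*"},
    timeout=5,
)
print(resp.status_code, resp.json())
```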
For experienced developers who deeply understand database security, this architecture can work, but it requires RLS policies that perfectly anticipate every edge case, every attack vector, and every way an authenticated user's session could be manipulated to access unauthorized data. As Alex Stamos, CISO at SentinelOne, told Semafor: "You can do it correctly. The odds of doing it correctly are extremely low."
For novice developers—Lovable's target audience—the odds approach zero. The AI generates code that looks professional and works functionally. But the RLS policies it creates are often subtly broken. A policy might check that auth.uid() = user_id without validating that user_id hasn't been manipulated. Or it might protect reads but not writes. Or it might have logical gaps that only become apparent under adversarial testing.
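Such gaps usually surface only under adversarial testing. The hypothetical probe below checks whether writes are policed as strictly as reads; the endpoint, table, identifiers, and tokens are invented for the example.

```python
# Hedged sketch: an ordinary authenticated user tries to read and then
# modify another user's row through the same public REST API.
import requests

SUPABASE_URL = "https://xyzcompany.supabase.co"
ANON_KEY = "eyJhbGciOi..."
MY_JWT = "eyJ0eXAiOi..."   # a normal user's session token

headers = {
    "apikey": ANON_KEY,
    "Authorization": f"Bearer {MY_JWT}",
    "Content-Type": "application/json",
}

# A SELECT policy may block reading someone else's row...
read = requests.get(
    f"{SUPABASE_URL}/rest/v1/profiles",
    headers=headers,
    params={"user_id": "eq.victim-uuid", "select": "*"},
    timeout=5,
)

# ...while an UPDATE on the same row slips through if no write policy exists.
write = requests.patch(
    f"{SUPABASE_URL}/rest/v1/profiles",
    headers=headers,
    params={"user_id": "eq.victim-uuid"},
    json={"email": "attacker@example.com"},
    timeout=5,
)
print("read:", read.status_code, "write:", write.status_code)
```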
Simon Willison, a veteran software developer, summarized the dilemma: "This is the single biggest challenge with vibe coding. The most obvious problem is that they're going to build stuff insecurely."
The amplification problem: How AI learns from broken code
The Lovable vulnerability highlights a broader issue with AI-generated code that affects all platforms, not just vibe coding tools. Security research consistently shows that 40-48% of AI-generated code contains vulnerabilities. But the problem is worse than random chance—AI actively learns from and amplifies existing security flaws.
The numbers are damning
Academic research from multiple institutions paints a troubling picture:
- Stanford University study (2022): Developers using AI code generation were MORE likely to write insecure code and MORE likely to rate their insecure code as secure
- Georgetown CSET analysis (2024): Tested five major LLMs and found at least 48% of generated code contained vulnerabilities; manual verification revealed 73% had flaws
- Pearce et al. research (2022): Concluded 40% of GitHub Copilot suggestions had vulnerabilities, with SQL injection rates of 75% in some scenarios
- ChatGPT security study: Only 5 of 21 programs (24%) were initially secure; even after prompting for corrections, only 57% achieved basic security standards
The vulnerability patterns are consistent across platforms: SQL injection, cross-site scripting (XSS), broken authentication, hard-coded credentials, buffer overflows, insecure deserialization, and missing input validation dominate.
Real case studies of AI security failures
Security researchers have documented specific examples that go beyond statistics:
The Snake Game vulnerability (Databricks): Researchers asked an AI to create a multiplayer snake game. The generated code used Python's pickle module for network serialization—a well-known security anti-pattern that enables arbitrary remote code execution. An attacker could inject malicious serialized objects and execute code on the server.
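The anti-pattern itself is easy to demonstrate. The snippet below is not the Databricks code; it is a minimal illustration of why deserializing untrusted bytes with pickle hands the sender code execution.

```python
# Why pickle is unsafe for network data: whatever callable __reduce__
# returns is executed during deserialization on the receiving side.
import pickle


class MaliciousPayload:
    def __reduce__(self):
        import os
        return (os.system, ("echo arbitrary code ran during unpickling",))


wire_bytes = pickle.dumps(MaliciousPayload())  # attacker crafts the bytes

# A "game server" that trusts network input runs the command the moment
# it deserializes the message.
pickle.loads(wire_bytes)
```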
The GGUF parser disaster (Databricks): When asked to create a C/C++ parser for GGUF files, ChatGPT generated code with unchecked buffer reads, type confusion, and heap buffer overflow vulnerabilities. The code compiled and ran but would crash or allow memory corruption when given crafted input.
The Copilot amplification effect (Snyk): Security researcher Randall Degges demonstrated that GitHub Copilot replicates vulnerabilities from surrounding code. When he asked Copilot to generate a SQL query in a clean project, it produced secure parameterized queries. But after opening a file with SQL injection vulnerabilities in a neighboring tab, the same prompt now generated vulnerable code. "We've just gone from one SQL injection in our project to two, because Copilot has used our vulnerable code as context to learn from."
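The two patterns Degges contrasts look roughly like this; the sqlite3 snippet below illustrates the general vulnerability class and is not Snyk's test code.

```python
# String interpolation vs. parameterized queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input rewrites the query and returns every row.
vulnerable = f"SELECT * FROM users WHERE email = '{user_input}'"
print(conn.execute(vulnerable).fetchall())

# Safe: the input is treated purely as data and matches nothing.
safe = "SELECT * FROM users WHERE email = ?"
print(conn.execute(safe, (user_input,)).fetchall())
```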
The business case for security: What breaches actually cost
For tech leaders evaluating vibe coding platforms or AI development tools, the security discussion often gets treated as a technical concern—something for the security team to worry about later. But the business impact of security failures is immediate and existential.
The direct financial hit
The 2024 IBM Cost of a Data Breach Report found the global average breach cost reached $4.88 million—a 10% increase from 2023. The per-record cost hit $164, the highest in seven years. For US companies, the average jumped to $9.44 million.
But averages obscure the real distribution. Small businesses face disproportionate impact:
- Businesses with fewer than 500 employees: $2.98 million average cost
- Small business typical range: $120,000 to $1.24 million
- Critical statistic: 60% of small businesses close within six months of a cyberattack
The Lovable vulnerability exposed data across 170 applications. If even 10% of those businesses (roughly 17) experienced breach-related costs, aggregate losses across the ecosystem could plausibly reach $20-35 million.
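For transparency, here is the rough arithmetic behind that estimate; the per-incident cost range is an assumption chosen between the top of the small-business range and below the sub-500-employee average cited above.

```python
# Back-of-the-envelope check of the $20-35M figure.
exposed_apps = 170
breach_rate = 0.10
affected = exposed_apps * breach_rate            # ~17 businesses

low_cost, high_cost = 1.2e6, 2.0e6               # assumed cost per incident
print(f"${affected * low_cost / 1e6:.0f}M to ${affected * high_cost / 1e6:.0f}M")
# -> roughly $20M to $34M
```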
The lost revenue you can't recover
Beyond immediate costs, breaches destroy customer relationships. Research shows:
- 65% of data breach victims lose trust in the organization
- 31% discontinue their relationship with the breached organization
- 80% of consumers will abandon a business if data is compromised
- Customer churn from breaches drives $2.6 to $4 million in lost revenue
For a fast-growing startup—Lovable reached $100 million ARR within eight months—reputation damage can halt momentum completely. The Semafor headline alone—"sitting duck for hackers"—reached millions of readers during the company's hypergrowth phase.
Practical recommendations for technical leaders
If you're evaluating AI development tools, vibe coding platforms, or already using them in your organization, here's what the Lovable incident teaches:
1. Treat AI-generated code as untrusted by default
The Stanford study showed developers using AI were MORE likely to rate insecure code as secure. Don't rely on intuition. Implement mandatory security review gates (a minimal automation sketch follows this list):
- Static Application Security Testing (SAST) on every commit (tools: Snyk, Semgrep, SonarQube)
- Dynamic Application Security Testing (DAST) before production deployment (tools: OWASP ZAP, Burp Suite)
- Human security review for authentication, authorization, payment processing, PII handling
- Penetration testing before customer launch, then annually
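As one way to wire the first gate into a pipeline, here is a minimal sketch that fails a build when Semgrep reports findings. It assumes Semgrep is installed (pip install semgrep), and the flag names follow its documented CLI at the time of writing.

```python
# Minimal CI gate: block the commit/deploy if SAST findings exist.
import subprocess
import sys

result = subprocess.run(
    ["semgrep", "scan", "--config", "auto", "--error"],  # --error: exit 1 on findings
    capture_output=True,
    text=True,
)
print(result.stdout)

if result.returncode != 0:
    print("Security findings detected: blocking this build.")
    sys.exit(1)
```

A similar wrapper can front Snyk or SonarQube; the essential property is that the gate is mandatory, not advisory.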
Budget for this upfront. Security reviews cost $10,000-$50,000 for small applications but prevent multi-million dollar breaches.
2. Architectural security beats bolt-on security
When choosing AI development tools, evaluate architecture:
- Server-side API layers: Good security architecture
- Direct client-to-database connections: High risk requiring expert configuration
- Defense in depth: Multiple security layers catch failures
- Single point of failure: One misconfiguration compromises everything
If you're using Lovable or similar platforms, consider:
- Building only frontend UI with AI, implementing backend APIs traditionally (see the sketch after this list)
- Adding API gateway layers (AWS API Gateway, Kong, Tyk) between client and database
- Using platforms with secure-by-default architectures (Retool, Budibase for internal tools)
- Keeping sensitive data and privileged operations in non-AI-generated code
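As a sketch of what keeping privileged operations server-side can look like, here is a minimal FastAPI layer that scopes every query to the authenticated caller. The token verification and data-access helpers are stubs you would replace; none of this is Lovable's or Supabase's API.

```python
# Hedged sketch: a thin server-side API between the browser and the database,
# so authorization lives in code you control rather than only in RLS policies.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

FAKE_DB = {"user-123": [{"id": 1, "amount": 42.0}]}  # stand-in data store


def verify_token(token: str) -> str | None:
    # Stub: replace with real JWT/session verification.
    return "user-123" if token == "demo-token" else None


def current_user_id(authorization: str = Header(...)) -> str:
    user_id = verify_token(authorization.removeprefix("Bearer ").strip())
    if user_id is None:
        raise HTTPException(status_code=401, detail="invalid token")
    return user_id


@app.get("/api/invoices")
def list_invoices(user_id: str = Depends(current_user_id)):
    # The caller never chooses whose rows to read; the server scopes the
    # query to the authenticated user before touching the data store.
    return FAKE_DB.get(user_id, [])
```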
3. Implement security training for AI-assisted development
Update security training for the GenAI era:
- Train developers on common AI code vulnerabilities (SQL injection, XSS, broken auth, hard-coded credentials)
- Teach security-focused prompting: Include explicit security requirements in prompts ("use parameterized queries," "enforce least privilege," "validate all inputs")
- Create security-focused prompt libraries: Maintain templates that generate secure code patterns (see the sketch after this list)
- Require AI literacy: Developers must understand that AI doesn't reason about security—it pattern-matches
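A security-focused prompt library can be as simple as a dictionary of reusable requirement snippets appended to every generation request. The wording below is illustrative, not a vetted standard.

```python
# Minimal prompt-library sketch for security-focused prompting.
SECURITY_REQUIREMENTS = {
    "sql": "Use parameterized queries only; never interpolate user input into SQL.",
    "authz": "Enforce least privilege; every endpoint must verify the caller owns the data it returns.",
    "input": "Validate and sanitize all inputs on the server, not just in the UI.",
    "secrets": "Never hard-code credentials; read secrets from environment variables or a secrets manager.",
}


def secure_prompt(task: str, *keys: str) -> str:
    """Compose a generation prompt from a task plus explicit security requirements."""
    reqs = [SECURITY_REQUIREMENTS[k] for k in keys]
    return task + "\n\nSecurity requirements:\n- " + "\n- ".join(reqs)


print(secure_prompt("Build a user profile API.", "sql", "authz", "input"))
```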
The future of vibe coding: Can it be secured?
The Lovable incident doesn't mean AI-powered development is inherently broken. But it does mean the current generation of vibe coding platforms is racing ahead of their security maturity.
Some encouraging signs exist. Lovable has implemented SOC 2 Type 2 and ISO 27001 certifications for its infrastructure. The platform blocks 1,200 API key insertions and 1,000 policy-violating projects daily. Research from Databricks shows security-focused prompting reduces vulnerabilities by 30-40%.
But fundamental tensions remain unresolved:
Tension #1: Democratization vs. Expertise
Making development accessible to everyone means many developers lack security expertise. Platforms must provide foolproof guardrails or accept that some percentage of applications will be insecure.
Tension #2: Speed vs. Safety
Vibe coding's value proposition is building apps in hours instead of weeks. But security review, penetration testing, and proper architecture require time. Platforms must balance velocity with validation.
Tension #3: AI Capability vs. AI Limitations
Current AI can generate syntactically correct code that passes functional tests but fails security tests. Until AI can reason about adversarial scenarios and threat models, human security expertise remains essential.
Conclusion: The security test vibe coding must pass
Lovable's story is still being written. The company raised $200M in August 2025 at a $1.8B valuation and continues rapid growth. They've added security features and earned compliance certifications. Some of the 170 vulnerable applications may have been fixed.
But the CVE-2025-48757 vulnerability stands as a warning sign for the entire AI development industry. When 10% of showcase applications—the ones selected as examples of what the platform can do—had critical security flaws, it suggests systemic rather than isolated problems.
For tech leaders, the lesson is clear: AI-powered development is a lever that amplifies both productivity and risk. Used with proper guardrails—security review, architectural discipline, developer training, defense in depth—it can accelerate time-to-market while maintaining security. Used without those constraints, it rapidly builds applications that look professional but leak data when probed by curious engineers on lunch breaks.
The vibe coding revolution promises to democratize software development. That's genuinely valuable if it means more people can bring ideas to life. But democratizing development also means democratizing security responsibility. Until platforms solve the security architecture problem, the gap between "looks like it works" and "actually secure" will remain measured in vulnerabilities per hundred applications.
For more insights on vibe coding challenges, check out our previous chapters: Chapter I: The Vibe Abstraction, Chapter II: The Expensive Canary Divergence, and Chapter III: The Replit Regression. For comprehensive testing strategies, explore our Test Wars series and Foundation series.
The question for your organization: Are you ready to bridge that gap, or are you building your next data breach in the vibes?
References
- Matt Palmer CVE Disclosure - CVE-2025-48757
- Matt Palmer Statement on Disclosure Timeline
- Semafor Investigation - "The hottest new vibe coding startup Lovable is a sitting duck for hackers"
- Superblocks Technical Analysis - Lovable Vulnerabilities
- CVE Details - Official CVE-2025-48757 Entry
- Daniel Asaria X/Twitter Post on 47-Minute Hack
- Anton Osika CEO Response
- The Hacker News - Lovable AI Found Most Vulnerable to VibeScamming Attacks
- Lovable Official Security Documentation
- Lovable 2.0 Announcement with Security Features
- Frontiers Systematic Literature Review - AI-Generated Code Security
- Stanford Study on AI Code Security via TechCrunch
- Georgetown CSET Report - Cybersecurity Risks of AI-Generated Code
- Databricks Research - Passing the Security Vibe Check (Dangers of Vibe Coding)
- Snyk Research - Copilot Amplifies Insecure Codebases
- IBM Cost of a Data Breach Report 2024 & 2025
- Verizon Data Breach Investigations Report (DBIR) 2024 & 2025
- OWASP Top 10 2021
- NIST Cybersecurity Framework 2.0