OraCore Editors

Why vibe coding is broken until security comes first

Vibe coding is shipping insecure software by default, and Lovable’s crisis proves the category needs security before scale.


Lovable’s security failures show vibe coding ships insecure software by default.

I think vibe coding is broken until security is treated as a first-order product requirement, because Lovable’s recent incidents show that speed without controls turns software generation into a mass-exposure machine.

Lovable is not facing a one-off embarrassment. It has now been tied to three documented security incidents that exposed source code, database credentials, chat histories, and user records, and the most recent flaw stayed open for 48 days after a researcher reported it. That is not the profile of a platform that merely missed an edge case. It is the profile of a system whose default path from prompt to production leaves users one mistake away from public exposure.

The first argument: the failure is structural, not accidental


The strongest evidence against the “this is just growing pains” defense is the pattern in the incidents themselves. In April, a researcher showed that a broken object-level authorization flaw in Lovable’s API let a free user reach another user’s profile, public projects, source code, and database credentials in as few as five API calls. The company patched the issue for new projects but left existing ones exposed, which means the system did not merely have a bug. It had a deployment model that allowed the bug to persist in live customer projects long after discovery.
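To make that failure mode concrete, here is a minimal sketch of what a broken object-level authorization flaw looks like in a typical generated API, alongside a handler that checks ownership before returning anything. The route names, data model, and header-based auth are hypothetical illustrations of the vulnerability class, not Lovable's actual code.

```typescript
// Sketch of a broken object-level authorization (BOLA) flaw and its fix.
// Endpoint, model, and helper names are hypothetical; this is not Lovable's code.
import express from "express";

interface Project {
  id: string;
  ownerId: string;
  sourceCode: string;
  dbCredentials: string;
}

const projects = new Map<string, Project>(); // stand-in for a real database
const app = express();

// VULNERABLE: any user, including a free-tier one, can fetch any project by
// guessing or enumerating its id -- no ownership check is ever performed.
app.get("/api/projects/:id", (req, res) => {
  const project = projects.get(req.params.id);
  if (!project) return res.status(404).end();
  res.json(project); // leaks source code and database credentials to anyone
});

// FIXED: the handler verifies that the requester owns the object it returns.
app.get("/api/v2/projects/:id", (req, res) => {
  const userId = req.header("x-user-id"); // placeholder for real session auth
  const project = projects.get(req.params.id);
  if (!project) return res.status(404).end();
  if (!userId || project.ownerId !== userId) return res.status(403).end();
  res.json(project);
});

app.listen(3000);
```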


The February case is even more damning because it shows the same failure mode in a different form. A single app hosted on Lovable, with more than 100,000 views on its Discover page, contained 16 vulnerabilities, six of them critical, and exposed 18,697 user records. The app’s authentication logic was inverted, so anonymous users got access while authenticated users were blocked. That is not normal application drift. That is what happens when generated software is shipped before it is understood.
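The inversion is easy to picture in code. Below is a hedged sketch of what a flipped guard looks like next to a correct deny-by-default check; the middleware shape and session object are assumptions made for illustration, not the hosted app's real logic.

```typescript
// Sketch of an inverted authentication check of the kind described above.
// The middleware and session shape are hypothetical, not the actual app's code.
import type { Request, Response, NextFunction } from "express";

interface AuthedRequest extends Request {
  user?: { id: string };
}

// INVERTED: the guard lets requests through when there is NO user attached,
// and rejects the ones that actually carry a valid session.
function brokenGuard(req: AuthedRequest, res: Response, next: NextFunction) {
  if (req.user) {
    return res.status(401).json({ error: "unauthorized" }); // blocks real users
  }
  next(); // anonymous traffic sails straight through to the protected route
}

// CORRECT: deny by default, allow only when a session is present.
function requireAuth(req: AuthedRequest, res: Response, next: NextFunction) {
  if (!req.user) {
    return res.status(401).json({ error: "unauthorized" });
  }
  next();
}
```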

The second argument: the category’s incentives reward insecurity

Lovable’s public response matters because it reveals the incentives behind the product. The company first denied a breach, then blamed its documentation, then blamed its bug bounty partner, then issued a partial apology. That sequence is not just bad messaging. It shows a platform optimized to protect growth metrics and narrative control before it protects the people whose data is at risk. When a security report can be marked as a duplicate and closed while the underlying exposure remains live, the process is already weighted toward speed over remediation.

The market around Lovable reinforces that bias. The company hit $4 million in ARR in four weeks, then $10 million in two months, and later raised at a $6.6 billion valuation. That kind of growth creates a brutal product incentive: ship more, onboard more, monetize faster. Security work is slower, costlier, and less visible. So the platform’s commercial success becomes part of the danger, because the very thing investors celebrate is the thing that makes it harder to slow down and secure the stack.

The counter-argument

The best defense of vibe coding is that the technology is still young and that every new platform goes through a painful hardening phase. Supporters can also point out that the problem is not unique to Lovable. AI-generated code across the industry has been shown to contain vulnerabilities at high rates, and many traditional software teams also ship broken access controls, leaked secrets, and misconfigured databases. In that reading, Lovable is a visible example of a broader software quality problem, not proof that vibe coding itself is doomed.


That argument has real force, and it is right about one thing: the category is not going away. But the conclusion is still wrong. The issue is not whether AI can generate useful code. The issue is whether a platform can let non-experts create production systems without forcing security into the workflow by default. Lovable’s incidents show that it cannot be trusted to do that yet. If a platform can expose credentials, leave old projects vulnerable, and close reports without escalation, then the burden cannot sit on users who were never given the tools to secure what they built.

What to do with this

If you are an engineer, stop treating vibe-coded output as a draft that can be cleaned up later. Put authentication, row-level security, secret scanning, and dependency checks into the first review gate, not the last. If you are a PM or founder, do not measure success by how fast users can launch. Measure it by how many unsafe defaults the product removes before code reaches production. And if you are buying or approving these tools for a company, require independent security testing, incident disclosure rules, and a hard ban on shipping any app with exposed secrets or disabled access controls. The category will grow anyway. The only question is whether your team will be the one that learns the lesson from someone else’s breach.
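As one concrete example of a first review gate, here is a minimal sketch of a pre-merge secret scan that fails the build when obvious hardcoded credentials appear in the tree. The patterns and file walk are illustrative assumptions only; a real team would reach for a dedicated scanner, but the point is where the check sits: before code reaches production, not after.

```typescript
// Minimal sketch of a pre-merge secret scan for a CI "review gate".
// Patterns are illustrative; real pipelines should use a dedicated scanner.
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                       // AWS access key id
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // private key material
  /(password|secret|api[_-]?key)\s*[:=]\s*['"][^'"]{8,}['"]/i, // inline creds
];

// Walk the repository, skipping dependency and VCS directories.
function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry === ".git") continue;
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      yield* walk(path);
    } else {
      yield path;
    }
  }
}

let findings = 0;
for (const file of walk(process.cwd())) {
  const text = readFileSync(file, "utf8");
  for (const pattern of SECRET_PATTERNS) {
    if (pattern.test(text)) {
      console.error(`possible secret in ${file} (matched ${pattern})`);
      findings++;
    }
  }
}

// A non-zero exit fails the gate and blocks the merge in CI.
process.exit(findings > 0 ? 1 : 0);
```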