Vibe coding is changing who can build software
Harvard’s Karen Brennan says vibe coding may make software creation widely accessible, while raising new questions about quality, ethics, and skill.

In a six-week Harvard course, 92 students built software with AI tools like Replit, Figma Make, and Claude Code without needing prior coding experience. That is the basic promise of vibe coding: describe what you want in plain English, and let an AI agent draft the app, site, or prototype.
Harvard Graduate School of Education professor Karen Brennan has now seen that idea in practice. She taught the course, studied how students used generative AI in self-directed projects, and then used v0 to build a research website of her own. Her takeaway is simple: the hard part is shifting from writing code to describing intent, judging output, and deciding what to trust.
What vibe coding actually means
“Vibe coding” is a term popularized by AI researcher Andrej Karpathy in February 2025. In Brennan’s framing, it means creating software with AI while not necessarily understanding the code being produced line by line.

That distinction matters. Professional software teams increasingly use AI assistance, but they still carry responsibility for the codebase, the tests, the security model, and the maintenance burden. Vibe coding lowers the entry bar much further. It lets someone with an idea move from blank page to working prototype in minutes, sometimes without knowing how the thing is assembled under the hood.
Brennan told the Harvard Gazette that she first tried the approach in December 2024 after seeing students use generative AI in self-directed work. When she needed a site for that same project, she built it with v0 and was struck by how fast she could get something usable.
- 92 students took part in Brennan’s six-week course.
- The class used multiple tools, including Replit, Figma Make, and Claude Code.
- The course ran in late fall 2025, with no prior AI or coding experience required.
- One student-led research project introduced Brennan to v0, which she later used herself.
The classroom experiment that made it real
Brennan and doctoral student Jacob Wolf designed the course around a simple question: how do we think about AI as a creative partner? Each week had a theme such as building something that tells a story, something that makes life easier, or something playful. Each week also introduced a different tool, so students could see how the experience changed from one system to another.
That matters because vibe coding is often discussed as if all AI builders are using the same workflow. They are not. A design-first tool like Figma Make encourages one kind of interaction, while a coding assistant like Claude Code pushes students closer to implementation details. Replit sits somewhere in between, giving learners a quicker path from prompt to running app.
The course also paired hands-on building with critical reading. Brennan’s team asked students to read one classic computer science text and one contemporary critical piece every week. That mix kept the class from turning into a pure demo of AI novelty.
“The central question motivating the course was: How do we think about AI as a creative partner?” — Karen Brennan, Harvard Graduate School of Education
That quote gets at the real educational value here. The point was never to train students to worship the tool. It was to teach them how to ask better questions, notice failure modes, and explain what the system got wrong.
Students responded well. Brennan said the course drew positive feedback because it let students build things for themselves while also giving them language to critique what the tools were doing. She also said the team was figuring out details in real time, which feels honest in a field where the tools change faster than syllabi.
Why the appeal is so strong
The strongest argument for vibe coding is access. If software creation starts with English instead of syntax, then more people can test ideas without waiting for a technical cofounder or a big budget. That changes the economics of experimentation. You can make a thing to understand it, then improve it if the first version is clumsy.

That speed matters for people who want to prototype a class project, a small business site, or a personal tool that solves one annoying problem. It also matters in education, where building something often teaches faster than reading about it. Brennan’s point is that creation itself becomes a form of learning.
She also sees a second benefit: the tools can expose code, not hide it. Users can inspect what the model produced and ask for explanations at different levels of depth. In practice, that means a student can ask for a plain-language summary, then ask for a more technical breakdown if they want to understand how the parts fit together.
- Vibe coding can produce a prototype in minutes, while traditional development may take hours or days before a first draft is useful.
- It works well for personal projects and quick experiments, not production systems that need long-term support.
- It favors people who can describe goals clearly in natural language.
- It can also help people with no formal CS training test whether an idea is worth pursuing.
The limits are where the real story is
Brennan is careful not to oversell the idea. She points to environmental cost, tool expense, and the fact that natural language is a poor substitute for detailed technical specification. If you can’t describe what you want clearly, the model can wander. If you can describe it but don’t know how to evaluate it, you may accept output that looks polished and behaves badly.
That problem showed up in her class. Students sometimes got stuck in a loop: they asked for something, the AI generated something generic or slightly off, and then they could not articulate the fix precisely enough to move forward. In other words, the bottleneck shifted from coding syntax to problem description.
There is also an equity question. Brennan notes that vibe coding privileges strong verbal communicators. Students with design training or CS knowledge could push the tools further because they knew how to explain intent, spot errors, and iterate with more precision. That means access is wider, but capability is still uneven.
And then there is responsibility. Vibe coding is fine for a weekend project. It is a different matter when software affects money, health, identity, or safety. Reliability, security, and maintainability do not disappear just because a prompt produced a working demo.
If you want a useful comparison, think about the difference between a polished mockup and a shipping product. A mockup can impress in an hour. A product has to survive users, updates, bugs, abuse, and time.
- Quick prototype: good for testing ideas and getting feedback fast.
- Production software: needs testing, logging, security reviews, and maintenance plans.
- Natural-language prompting: good for direction, weak for precision.
- Code literacy: still matters when the tool makes a mistake or the stakes are high.
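One way to see the precision gap is to compare a prompt with a test. A plain-English request leaves edge cases undecided, while a few lines of code state the intent exactly. Here is a minimal, hypothetical sketch (the `sort_dates` function and its behavior are illustrative assumptions, not from Brennan’s course):

```python
# A plain-English request such as "sort these dates" is ambiguous:
# ascending or descending? What happens to invalid or mixed-format strings?
# A small test pins the intent down in a way the prompt never did.
from datetime import datetime

def sort_dates(dates):
    # Hypothetical spec: parse ISO-style "YYYY-MM-DD" strings, sort
    # ascending, and silently drop anything that fails to parse.
    parsed = []
    for d in dates:
        try:
            parsed.append(datetime.strptime(d, "%Y-%m-%d"))
        except ValueError:
            pass
    return [dt.strftime("%Y-%m-%d") for dt in sorted(parsed)]

# The assertion is the precise specification; prose alone would not
# tell an AI assistant what to do with "not-a-date".
assert sort_dates(["2025-03-01", "not-a-date", "2024-12-31"]) == [
    "2024-12-31",
    "2025-03-01",
]
```

This is the sense in which code literacy still matters: the person who can write the assertion can tell the tool, unambiguously, when its output is wrong.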
What this says about AI in everyday life
Brennan thinks vibe coding points toward a broader shift in how people will use AI. The core habits are the same ones people will need across many tasks: imagine what you want, describe it clearly, judge the output, and revise your request. That sounds small, but it may matter more than memorizing a specific tool name.
She even floated a broader idea: maybe this becomes less about vibe coding and more about “vibe everything.” That does not mean humans stop thinking. It means more work will begin with asking, prompting, and evaluating rather than typing every step from scratch.
For now, the most useful takeaway is practical. If you are a developer, educator, founder, or product manager, the skill to build may shift from writing every line yourself to knowing when to trust AI, when to inspect its output, and when to throw it away. If you are teaching, the challenge is even sharper: students need both freedom to create and enough technical skepticism to see where the machine is bluffing.
My bet is that the next wave of AI tools will reward people who can explain ideas well, notice bad output quickly, and keep a human standard for quality. The question is whether schools and teams will teach those habits before the tools make them look obvious.
For more on how AI is changing software work, see our coverage of AI coding tools and the first draft of software.