What Is Vibe Coding (and How to Actually Do It at Scale)
A year ago, Andrej Karpathy posted a thread about a new way he was building software.
He described accepting every diff without reading it, copy-pasting error messages straight into the chat, and watching an AI fix bugs he didn’t fully understand.
He called it vibe coding. The term spread faster than almost any concept in recent software development history, and for good reason.
It named something developers were already doing, and it captured both the appeal and the anxiety of it.
This post covers what vibe coding actually means, where it works, where it breaks down, and what teams building with AI agents have figured out about making it production-viable.
Where the Term Came From (and Why It Spread So Fast)
Karpathy coined the term in February 2025, describing a workflow where you fully give in to the vibes and forget that the code even exists. His framing was intentionally casual.
He was talking about weekend projects, not production systems: barely touching the keyboard, describing what he wanted by voice, and letting the model handle the rest.
The reaction was immediate. Collins English Dictionary named vibe coding its 2025 Word of the Year. Searches for the term reportedly jumped more than 6,700% in the spring of 2025.
By summer, The Wall Street Journal was reporting on professional engineers adopting the approach for commercial use cases. Y Combinator disclosed that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated.
None of that happened because Karpathy discovered something new. It happened because he gave a name to a shift that was already underway.
The tools had crossed a threshold where you could describe an outcome in plain language, watch working code appear, and ship it faster than writing it by hand.
The question that followed was obvious — how far can you actually take this?
What Vibe Coding Actually Means in Practice
Strip the hype and the mechanics are simple. You describe what you want, an AI agent reads that description and writes code, you review the result and give feedback, and the cycle repeats until you have something that works.
The original Karpathy framing treated this as a nearly hands-off process. Accept all, don’t read the diffs, just keep prompting until it works.
That is genuinely useful for throwaway experiments, quick internal tools, and rapid prototyping. You can go from idea to working demo in an afternoon without writing a single line yourself.
For teams, the practice looks more disciplined. Experienced developers use it to clear the low-value repetitive work that fills backlogs without moving the product forward. That means things like:
- Database migrations and schema updates
- Boilerplate and standard CRUD endpoints
- Documentation and inline code comments
- Test scaffolding for new features
They stay closely involved in architecture decisions, review outputs before merging, and treat the agent as a fast but imperfect collaborator rather than a replacement. The output pace accelerates without the engineering judgment going anywhere.
The Part Nobody Talks About: Context Is Everything
Here is what most vibe coding explainers miss. The quality of what an agent produces is almost entirely a function of what you tell it before you start.
Karpathy himself has since moved on, calling the original vibe coding framing obsolete and replacing it with agentic engineering.
His argument is that the bottleneck is no longer AI capability — it is context. Agents need to understand the codebase, the business logic, the architecture constraints, and the intended behavior before they can produce anything worth shipping.
Generic prompts produce generic code. A prompt that says “build me a user authentication flow” gives the agent almost nothing to work with.
A prompt embedded in a ClickUp task that includes a product requirements document, linked design specs, acceptance criteria in the comments, and previous discussion about edge cases gives the agent everything it needs to write something that fits your actual system.
This is the structural insight that changes how teams think about vibe coding.
The teams getting real results have usually already done the work of organizing their plans, docs, and goals in one place, which means every task becomes a ready-made prompt.
You do not have to write a different kind of prompt. You just have to assign the task.
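The difference is easy to see in code. The sketch below is illustrative only: the field names (`description`, `linked_docs`, `acceptance_criteria`, `comments`) are hypothetical stand-ins for structured task context, not a real ClickUp or Codegen API.

```python
# Minimal sketch: folding a task's structured context into one agent prompt.
# All field names are hypothetical illustrations, not a real API.

def build_prompt(task: dict) -> str:
    """Assemble an agent prompt from whatever context the task carries."""
    sections = [f"Goal: {task['description']}"]
    if task.get("linked_docs"):
        sections.append("Requirements:\n" + "\n".join(task["linked_docs"]))
    if task.get("acceptance_criteria"):
        sections.append(
            "Acceptance criteria:\n"
            + "\n".join(f"- {c}" for c in task["acceptance_criteria"])
        )
    if task.get("comments"):
        sections.append("Prior discussion:\n" + "\n".join(task["comments"]))
    return "\n\n".join(sections)

# Intent only: the "build me a user authentication flow" case.
bare = build_prompt({"description": "Build a user authentication flow"})

# The same intent, embedded in a task with docs, criteria, and discussion.
rich = build_prompt({
    "description": "Build a user authentication flow",
    "linked_docs": ["PRD: email + password login now, OAuth later"],
    "acceptance_criteria": [
        "Lock the account after 5 failed attempts",
        "Sessions expire after 24 hours",
    ],
    "comments": ["Edge case: legacy users still have unhashed passwords"],
})
```

The two prompts go to the same model; only the context differs, which is the whole point.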
How Codegen Makes Vibe Coding Work at the Team Level
Codegen operates inside ClickUp as an AI developer teammate.
Any workspace member can assign a task directly to the Codegen agent, @mention it in a task comment, or trigger it through ClickUp Automations.
When the task is assigned, Codegen reads the full context (description, linked docs, and comment thread), writes the code, opens a pull request, and reports progress back into ClickUp.
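That loop can be sketched in a few lines. Every helper below is an invented stand-in for illustration, not Codegen's real implementation or the ClickUp API:

```python
# Illustrative sketch of the assign -> read context -> open PR loop.
# write_code, open_pull_request, and post_status are hypothetical stubs.

def write_code(context: dict) -> dict:
    """Stand-in for the agent's code-generation step."""
    return {"patch": f"// change implementing: {context['description']}"}

def open_pull_request(diff: dict) -> dict:
    """Stand-in for the version-control integration."""
    return {"number": 101, "diff": diff}

def post_status(task_id: str, message: str) -> None:
    """Stand-in for reporting progress back to the task."""
    print(f"[task {task_id}] {message}")

def handle_assignment(task_event: dict) -> dict:
    # Gather everything the task carries, not just its title.
    context = {
        "description": task_event["description"],
        "docs": task_event.get("linked_docs", []),
        "comments": task_event.get("comments", []),
    }
    diff = write_code(context)        # agent produces a change set
    pr = open_pull_request(diff)      # the change goes up for human review
    post_status(task_event["id"], f"Opened PR #{pr['number']}")
    return pr

pr = handle_assignment({"id": "T-1", "description": "Fix login redirect bug"})
```

The human stays in the loop at the pull-request stage; the agent never merges its own work in this sketch.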
You can explore pre-built agent templates to see what this looks like across common workflows before setting up your own.
A few scenarios where this plays out well in practice:
Customer support to bug fix
A support rep tags Codegen in a task that describes a reported bug. Codegen reads the ticket, pulls the relevant code, implements a fix, and opens a PR for engineering review. The developer who reviews it did not have to spend time diagnosing the issue or writing the initial patch.
Product manager to prototype
A PM finishes a PRD in a ClickUp Doc, links it to a development task, and assigns the task to Codegen. The agent reads the requirements, scaffolds the feature, and hands back a working first pass. What used to take a week of back-and-forth before the first line was written happens the same afternoon.
QA to automated fix
A failing test gets logged as a task. Codegen reads the test output and the relevant code, proposes a fix, and opens a PR. The QA engineer reviews the change rather than writing it.
The Codegen agent page shows how to connect it to your workspace and configure it for your team’s workflow. Setup takes minutes.
What changes when context is structured:
| Input type | What the agent has to work with | Typical result |
|---|---|---|
| Freeform chat prompt | Intent only, no system context, no constraints | Generic scaffold, requires heavy rework |
| Task with description | Defined scope, some business logic | Closer to usable, still needs review |
| ClickUp task with linked docs and comments | Full requirements, edge cases, architecture context | PR-ready output that fits the existing system |
What Vibe Coding Is Not Ready For
The honest answer is that the risks are real and well-documented. Security researchers have found vulnerabilities in code generated by vibe coding platforms.
A December 2025 analysis of open-source pull requests found that code co-authored by AI contained roughly 1.7 times more major issues than human-written code.
The Wall Street Journal has covered engineering teams describing what happens when vibe-coded systems scale past their initial simplicity — mounting technical debt, code that nobody fully understands, and regressions that are difficult to trace.
None of that means vibe coding is broken. It means the rule experienced engineers apply to any fast-moving approach applies here too: the velocity is real, and the governance layer has to keep up with it.
For production systems, some things remain non-negotiable regardless of how the code was written. Before anything ships, teams that are getting this right consistently hold to three practices:
- Human review before merging, with a clear owner accountable for the output
- Automated test coverage that the agent did not both write and approve itself
- An architectural layer that someone on the team genuinely understands, not just the agent
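Those three practices can be expressed as a simple merge gate. The PR record shape below is invented for illustration; real tooling would pull these fields from your version-control system:

```python
# Sketch of a pre-merge gate enforcing the three practices above.
# The pr dict structure is hypothetical, chosen only for illustration.

def merge_allowed(pr: dict) -> bool:
    # A human other than the authoring agent approved the change.
    human_reviewed = any(
        r["author"] != pr["agent"] and r["approved"] for r in pr["reviews"]
    )
    # At least some tests were written independently of the agent.
    independent_tests = any(t["author"] != pr["agent"] for t in pr["tests"])
    # Someone on the team owns the change and its architecture.
    has_owner = bool(pr.get("owner"))
    return human_reviewed and independent_tests and has_owner

ok = merge_allowed({
    "agent": "codegen", "owner": "alice",
    "reviews": [{"author": "alice", "approved": True}],
    "tests": [{"author": "bob"}],
})

blocked = merge_allowed({
    "agent": "codegen", "owner": "alice",
    "reviews": [{"author": "codegen", "approved": True}],
    "tests": [{"author": "codegen"}],
})
```

A change the agent wrote, tested, and approved entirely on its own fails the gate; one with independent review and tests passes.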
Codegen’s PR review capabilities address the first layer directly, giving teams a code review agent that provides line-by-line feedback before any human spends time on it.
Where the Practice Is Heading
MIT Technology Review and Thoughtworks have both written about the shift from pure vibe coding toward what some are calling context engineering, a more deliberate approach to managing what the agent knows before it acts.
The direction of travel is toward more structure, not less, which is why teams that already operate in a unified workspace have a meaningful head start.
When your tasks, docs, and goals live in one place, context engineering is not extra work. It is already done.
Frequently Asked Questions
Is vibe coding real programming?
That depends on how you define programming. If programming means writing code line by line, vibe coding is something different. If programming means translating a goal into working software, vibe coding qualifies.
The skill it demands has shifted from syntax mastery toward prompt construction, architectural judgment, and output review. Many experienced engineers argue those latter skills matter more as AI handles more of the implementation work.
Can you vibe code a production application?
Teams are doing it, with caveats. The research suggests AI-generated code requires more rigorous review than human-written code before it is production-ready.
The teams getting the best results treat the agent as a fast first-pass collaborator and invest in code review, test coverage, and architectural oversight rather than bypassing them. Vibe coding accelerates the work. It does not replace the discipline that makes code maintainable.
What tools do engineering teams use for vibe coding?
Individual developers tend to use editor-integrated tools like Cursor or Claude Code for in-context assistance. Teams operating inside ClickUp can assign tasks directly to the Codegen agent, which reads the full task context and produces a PR without leaving the workspace.
For teams that want to start with pre-built workflows, agent templates cover common scenarios across engineering, QA, and product.
What is the difference between vibe coding and agentic engineering?
Karpathy introduced the term agentic engineering to describe the more mature evolution of vibe coding.
Where vibe coding is about giving an AI a prompt and accepting whatever comes back, agentic engineering is about giving an agent a structured goal, a rich context, and a defined workflow, then letting it execute autonomously across multiple steps.
For teams, the distinction matters. Vibe coding is a mindset. Agentic engineering is an infrastructure question.
Getting Started
Vibe coding is not hype. It is a real shift in how software gets built, and the teams moving fastest have figured out that context is the leverage point — the prompt matters far less than the system surrounding it.
If your team is already in ClickUp, you have more infrastructure in place than you might realize. Your tasks already carry the context agents need to do real work.
Try Codegen for free, or request a demo to see how it fits into your existing workflow.
