Glossary
Plain-English definitions of AI, automation, and software development terms used throughout the Codegen Blog.
Agentic Coding
A development approach where AI agents autonomously plan, write, test, and iterate on code with minimal human intervention.
API Orchestration
The coordination layer that manages how multiple APIs communicate, sequence calls, and handle failures in complex workflows.
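A minimal sketch of the idea in Python, sequencing three hypothetical services with a simple retry policy (the URLs and payloads below are illustrative, not real APIs):

```python
import time
import requests

def call_with_retry(url, payload, retries=3, backoff=1.0):
    """Call one API, retrying on failure with exponential backoff."""
    for attempt in range(retries):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)

def run_workflow(order):
    # Each step feeds the next; a failure anywhere stops the workflow.
    inventory = call_with_retry("https://inventory.example.com/reserve", order)
    payment = call_with_retry("https://payments.example.com/charge", inventory)
    return call_with_retry("https://shipping.example.com/schedule", payment)
```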
AST (Abstract Syntax Tree)
A tree representation of source code structure used by compilers and analysis tools to understand program logic.
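Python's built-in ast module makes the concept concrete: the parser turns source text into a tree of nodes that tools can walk and inspect.

```python
import ast

source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

# Walk the tree and report every function definition it contains.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print(node.name, [arg.arg for arg in node.args.args])
# Prints: add ['a', 'b']
```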
Autonomous Agent
An AI system capable of independently completing multi-step tasks by planning, executing, and self-correcting without human prompting.
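At its core this is a plan-act-check loop. The sketch below is a toy illustration: the "plan" and "execute" functions are stand-ins for LLM calls and tool use, not a real agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    steps: list
    completed: list = field(default_factory=list)

def make_plan(goal):
    # Stand-in for asking an LLM to break a goal into ordered steps.
    return Plan(steps=[f"step {i + 1} toward: {goal}" for i in range(3)])

def execute(action):
    # Stand-in for running a tool, editing a file, or calling an API.
    return f"did {action}"

def run_agent(goal, max_steps=10):
    plan = make_plan(goal)
    for _ in range(max_steps):
        if not plan.steps:                    # nothing left to do: goal reached
            return plan.completed
        result = execute(plan.steps.pop(0))
        plan.completed.append(result)         # a real agent would re-plan on failures here
    raise RuntimeError("goal not reached within the step budget")

print(run_agent("fix the failing unit test"))
```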
CI/CD
Continuous Integration / Continuous Delivery: automated practices for building, testing, and deploying code changes.
Code Review Agent
An AI agent that automatically reviews pull requests, checking for bugs, security issues, and style violations before human review.
Context Window
The maximum amount of text (measured in tokens) that an LLM can process in a single request, including both input and output.
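A quick budget check makes the definition concrete; the window size and token counts below are assumed values for illustration only.

```python
# Assumed numbers for illustration only.
context_window = 128_000          # model's maximum tokens per request
system_prompt = 1_200             # tokens
repository_context = 95_000       # tokens of retrieved code
reserved_for_output = 8_000       # tokens left for the model's reply

available_for_input = context_window - reserved_for_output
used = system_prompt + repository_context
print(f"input budget remaining: {available_for_input - used} tokens")
# input budget remaining: 23800 tokens
```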
Fine-tuning
The process of training a pre-trained model on a smaller, task-specific dataset to adapt its behavior for a particular use case.
LLM (Large Language Model)
A neural network trained on massive text datasets to predict and generate human-like text; the technology underlying modern AI coding tools.
MCP (Model Context Protocol)
An open protocol by Anthropic that standardizes how AI clients communicate with external services, tools, and data sources.
Multi-file Editing
The ability of a coding agent to make coordinated changes across multiple files in a repository in a single operation.
Prompt Engineering
The practice of designing and refining instructions given to an AI model to produce accurate, relevant, and useful outputs.
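One common pattern is templating a prompt so the role, constraints, and context stay explicit and repeatable; the wording below is purely illustrative.

```python
def build_review_prompt(diff: str, style_guide: str) -> str:
    """Assemble a structured code-review prompt (illustrative example only)."""
    return (
        "You are a senior engineer reviewing a pull request.\n"
        f"Follow this style guide:\n{style_guide}\n\n"
        f"Diff to review:\n{diff}\n\n"
        "Respond with: (1) bugs, (2) security issues, (3) style violations. "
        "If none are found, say so explicitly."
    )
```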
Pull Request
A method for submitting code changes for review before merging into a shared codebase, often serving as the handoff point between AI-generated changes and human review.
RAG (Retrieval-Augmented Generation)
A technique that enhances LLM responses by retrieving relevant documents from an external knowledge base before generating output.
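A minimal sketch of the retrieve-then-generate flow, using a toy word-overlap retriever in place of a real vector database; the documents and prompt format are made up for illustration.

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def build_augmented_prompt(query, documents):
    """Prepend the retrieved context to the user's question before generation."""
    context = "\n".join(retrieve(query, documents))
    return f"Use this context to answer:\n{context}\n\nQuestion: {query}"

docs = [
    "The deploy script lives in scripts/deploy.sh and requires the PROD flag.",
    "Unit tests run with pytest via `make test`.",
    "The API rate limit is 100 requests per minute per key.",
]
prompt = build_augmented_prompt("How do I run the unit tests?", docs)
print(prompt)   # this augmented prompt is what gets sent to the LLM
```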
Sandboxing
Running AI agent code in an isolated environment that cannot affect live systems, enabling safe parallel execution.
SWE-bench
A benchmark for evaluating AI coding agents against real GitHub issues, measuring whether agents can produce correct fixes.
Technical Debt
The accumulated cost of shortcuts and deferred maintenance in a codebase that slows future development.
Telemetry
The collection and analysis of performance data from AI agent runs, including cost, execution time, and success rate metrics.
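A small sketch of what per-run telemetry might record; the field names and numbers are assumptions, not any specific product's schema.

```python
from dataclasses import dataclass

@dataclass
class AgentRunMetrics:
    run_id: str
    succeeded: bool
    duration_seconds: float
    prompt_tokens: int
    completion_tokens: int
    cost_usd: float

runs = [
    AgentRunMetrics("run-1", True, 312.4, 45_000, 6_200, 0.83),
    AgentRunMetrics("run-2", False, 1050.9, 120_000, 14_500, 2.17),
]
success_rate = sum(r.succeeded for r in runs) / len(runs)
print(f"success rate: {success_rate:.0%}, "
      f"total cost: ${sum(r.cost_usd for r in runs):.2f}")
# success rate: 50%, total cost: $3.00
```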
Tokenization
The process of converting text into smaller units called tokens that a language model can process and reason over.
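For example, OpenAI's tiktoken library (assumed installed) shows how a sentence maps to token IDs; the exact IDs and counts depend on which tokenizer a model uses.

```python
import tiktoken  # assumes `pip install tiktoken`

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Tokenization splits text into model-readable pieces.")
print(len(tokens), tokens[:5])   # token count and the first few token IDs
print(enc.decode(tokens))        # decoding recovers the original text
```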
Vibe Coding
A term coined by Andrej Karpathy for a development style in which developers defer to AI, accepting generated code without detailed review.