Glossary Term

Context Window

By The Codegen Team · Updated March 26, 2026

The maximum amount of text (measured in tokens) that an LLM can process in a single request, including both input and output.

A context window is the maximum amount of text, measured in tokens, that a large language model can process in a single request. This includes both the input (the prompt, plus any code or documents provided) and the output (the model's response).
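The budget arithmetic above can be sketched in a few lines. This is a rough illustration, not a real tokenizer: the ~4-characters-per-token ratio is a crude heuristic for English text, and production code would count tokens with the provider's own tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (heuristic only)."""
    return len(text) // 4

def fits_in_window(prompt: str, max_output_tokens: int, context_window: int) -> bool:
    """The input AND the space reserved for the output must both fit in the window."""
    return estimate_tokens(prompt) + max_output_tokens <= context_window

# A ~40K-character prompt (~10K estimated tokens) with 4K tokens reserved for output:
prompt = "x" * 40_000
print(fits_in_window(prompt, max_output_tokens=4_000, context_window=8_000))    # → False
print(fits_in_window(prompt, max_output_tokens=4_000, context_window=200_000))  # → True
```

Note that the output reservation matters: a prompt that technically fits can still fail if there is no room left for the model to respond.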

Context window size directly impacts what a coding agent can do. A small context window means the agent can only see a few files at a time. A large context window (100K+ tokens) allows the agent to ingest entire codebases, understand cross-file dependencies, and make coordinated changes across dozens of files in a single pass.

As of 2026, context windows range from 8K tokens (older models) to 1M+ tokens (Claude, Gemini). The practical utility depends not just on size but on how well the model retrieves and reasons over information within that window.

In plain English

How much text an AI can hold in its working memory at once — the bigger the window, the more of your codebase it can read before answering.

Why it matters

A small context window means the AI only sees part of the picture. It might fix a function while missing the dependency three files away that breaks when the fix runs. Context window size is what determines whether an agent can reason about a whole system or just a slice of it.

In practice

Two agents tackle the same refactor. The first has an 8K-token window — it reads the file it was given, makes the change, and misses that six other services import the function it just renamed. The second has a 200K-token window — it maps all 23 references across the codebase before touching anything, then updates them consistently.
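The "map before you touch" step can be sketched as a reference scan. This is an illustration only: the file contents are an in-memory dict standing in for a real codebase, and plain regex matching stands in for the language-aware parsing a real agent would need.

```python
import re

def find_references(files: dict[str, str], symbol: str) -> dict[str, int]:
    """Return {path: reference_count} for every file that mentions the symbol."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\b")
    return {
        path: len(pattern.findall(source))
        for path, source in files.items()
        if pattern.search(source)
    }

# Hypothetical mini-codebase: the agent scans every file before renaming.
codebase = {
    "billing/service.py": "from utils import parse_invoice\nparse_invoice(raw)",
    "reports/export.py": "result = parse_invoice(data)",
    "docs/readme.md": "Nothing relevant here.",
}
print(find_references(codebase, "parse_invoice"))
# → {'billing/service.py': 2, 'reports/export.py': 1}
```

An agent with a large enough window can hold all of these files at once and apply the rename consistently; a small-window agent only sees whichever file it was handed.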

How Codegen uses Context Window

Codegen runs on Claude, which supports 200K tokens on Sonnet and 1M on Opus — enough for large codebases in a single session. But raw context window size is not the full story. Codegen also passes structured task context from ClickUp into the agent at the start of every session: the ticket, linked specs, and conversation history. That is a qualitatively different kind of context than just reading files — the agent knows the business reason for the change, not just the current state of the code.

Frequently Asked Questions