Technology

Your AI Is Writing Bad Docs Because It Lacks Context

How static analysis + AI can transform your code documentation workflow

It’s a truth universally acknowledged: every engineering team wrestles with the problem of documentation.

  • “Our documentation sucks!”
  • “But our codebase changes so fast—why waste time documenting it?”
  • <irritating smug voice> “Actually, my code is self-documenting.” </irritating smug voice>

So it goes.

Obviously, AI can help document your code. But only if you use it strategically.

At one extreme, you hand over your GitHub repo to an LLM and find that it’s added a long docstring on top of every function, full of the sort of vapid, vague text that LLMs love to generate.

At the other extreme, there’s the status quo: documentation that’s sparse, time-consuming to write, and quickly outdated.

Ideally, we want documentation that’s both useful and maintainable. And, for the first time in history, you can accomplish this effortlessly by combining AI with static code analysis tools.

Here’s how AI and static analysis can transform your documentation workflow.

1. Cut out the fluff

If you just feed a function into an LLM and ask it to write some documentation, the LLM will probably generate something very annoying to read.

A nightmare example of a ChatGPT-generated docstring that bloats the codebase with useless fluff.

Good documentation in a codebase should be specific. It should highlight any weird exceptions or edge cases about the function or module. It should contain examples of how it is used, if and only if it’s not obvious.
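
To make “specific” concrete, here’s a hypothetical before-and-after (the function and its behavior are invented for illustration):

```python
# The kind of filler an LLM writes without context: it restates the
# signature and adds no information.
def parse_config(path):
    """Parse the configuration file.

    This function takes a path to a configuration file, parses it,
    and returns the parsed configuration.
    """
    ...

# The specific, context-aware version we actually want.
def parse_config(path):
    """Load a TOML config, merging [defaults] into every other section.

    Raises FileNotFoundError before parsing so CLI callers can print a
    clean error. Relative paths resolve against the current working
    directory, not the config file's location, which is a common gotcha.
    """
    ...
```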

The thing is: LLMs are capable of generating genuinely good documentation. You just need to give them enough context about your function or module, and how it’s used in the codebase.

That’s where static analysis comes in. Tools like Codegen analyze your codebase first to understand how functions and modules depend on each other. Then Codegen can use bidirectional usages to inform the documentation: the prompt for the LLM includes the places where the function being documented is called, as well as the whole chain of functions it calls. That allows the LLM to produce a far more informed docstring than it could from the source code alone.

A Codegen-generated graph of the report_install_progress function’s bidirectional usages. In yellow are all functions that call report_install_progress; in green are all functions that it calls. Given this context, the LLM can understand the function much better. As linguist J. R. Firth said: “You shall know a word [or a function!] by the company it keeps.”
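
To make the idea concrete, here’s a minimal sketch of that context-gathering step using Python’s built-in ast module. This is an illustration of the technique, not Codegen’s actual API, and it only resolves bare function names within a single module; real call resolution (methods, imports, aliases) needs much more care.

```python
import ast

def gather_context(source: str, target: str) -> dict:
    """Find the callers and callees of `target` within one module's source."""
    tree = ast.parse(source)
    callers: set[str] = set()
    callees: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Bare names called anywhere inside this function's body.
            called = {
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            }
            if node.name == target:
                callees = called          # everything the target calls
            elif target in called:
                callers.add(node.name)    # this function calls the target
    return {"callers": sorted(callers), "callees": sorted(callees)}

def build_prompt(source: str, target: str) -> str:
    """Assemble an LLM prompt that includes the target's usage context."""
    ctx = gather_context(source, target)
    return (
        f"Write a short, specific docstring for `{target}`.\n"
        f"Called by: {', '.join(ctx['callers']) or '(nothing in this module)'}\n"
        f"Calls: {', '.join(ctx['callees']) or '(nothing)'}\n"
        "Skip generic filler; mention only real edge cases.\n\n" + source
    )
```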

With the help of some static analysis, Codegen can give an LLM the context it needs to generate helpful, no-BS documentation.

An example of a context-aware docstring, written by Codegen's AI assistant, in the Codegen source code.

So: static analysis is pretty good at helping AI document functions and modules.

But the best documentation—especially for complex services, modules, or even large PRs—should provide context that isn’t captured in the code alone.

As many engineers have noted, simply feeding ChatGPT a function or a diff and asking it to generate docs doesn’t produce anything useful.

A future evolution of Codegen might feed the LLM even more context by integrating data sources like Slack threads or Notion design docs.

2. Be strategic about the level of detail

Not every function deserves a detailed docstring. You should prioritize writing detailed, comprehensive documentation only in the areas where it delivers the most value.

Examples:

  • Code that is touched by multiple teams — e.g. backend endpoints that are called by frontend developers.
  • External-facing APIs or SDKs where clear explanations are critical for consumers.

Again through static analysis, tools like Codegen can identify which areas of the codebase are most trafficked, highlight which functions are actually used outside their module (versus only inside it), and add extra detail only in those key areas.
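
As a rough illustration of how “most trafficked” might be measured, here’s a sketch using Python’s ast module. It naively matches bare function names across files, so treat it as a starting point rather than real name resolution:

```python
import ast
from collections import Counter
from pathlib import Path

def cross_module_usage(pkg_dir: str) -> Counter:
    """Count calls to each function from files other than its defining file."""
    files = list(Path(pkg_dir).rglob("*.py"))
    defined_in: dict[str, Path] = {}
    for f in files:
        for node in ast.walk(ast.parse(f.read_text())):
            if isinstance(node, ast.FunctionDef):
                defined_in[node.name] = f
    counts: Counter = Counter()
    for f in files:
        for node in ast.walk(ast.parse(f.read_text())):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                home = defined_in.get(node.func.id)
                if home is not None and home != f:
                    counts[node.func.id] += 1  # called outside its own module
    return counts

# Functions with the highest counts are the documentation priorities:
# for name, n in cross_module_usage("src").most_common(10): print(n, name)
```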

3. Dynamically update documentation

Great, now you have all this highly nuanced, context-aware documentation… but… what do you do when the code inevitably changes? In the example above, maybe you modify the format of the string that codemodFactToString returns. Are you really going to check the docstrings of all 12 functions that reference codemodFactToString to make sure they’re still up to date?

Instead, with a tool like Codegen, you can imagine creating a lint rule that has the AI update all relevant documentation every time a PR is created, so that your docs stay in lockstep with your code.
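
As one hypothetical shape for that lint rule, this sketch flags functions whose definitions a PR touches, so their docstrings (and their callers’ docs) can be regenerated; a real system would hand the flagged functions back to the LLM along with the same bidirectional context as before:

```python
import ast
import subprocess
from pathlib import Path

def changed_functions(base: str = "origin/main") -> set[str]:
    """Names of functions whose `def` line is added or modified in the diff."""
    diff = subprocess.run(
        ["git", "diff", base, "--unified=0", "--", "*.py"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        line.split("def ", 1)[1].split("(", 1)[0].strip()
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++") and "def " in line
    }

def stale_doc_warnings(path: str, changed: set[str]) -> list[str]:
    """Warn about docstrings that may be stale because their function changed."""
    tree = ast.parse(Path(path).read_text())
    return [
        f"{path}: `{node.name}` changed; regenerate its docstring and its callers' docs"
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and node.name in changed
        and ast.get_docstring(node) is not None
    ]
```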

Looking ahead

Good documentation will be increasingly important as humans and AI agents collaborate on writing code.

In a pre-AI world, it was still feasible for a few engineers to intimately understand a codebase without needing much documentation. But as we increasingly bring in AI agents to help write parts of the code, it won’t be so easy anymore to keep track of exactly what’s going on. In a world where humans and AIs collaborate on code, well-written inline documentation will be crucial—not only to help humans navigate and remember the intricate details of a codebase, but also to provide helpful additional context to AI assistants as they debug and generate code.

And, as AI tools help us ship more and more quickly, it’ll be even more important to ensure that documentation evolves with the codebase.

By combining AI with code analysis tools, we can finally resolve the age-old tension between documenting well and shipping fast.

If this sounds cool, request to try Codegen!

Leverage the world's most advanced static analysis to strengthen your code.