
codoc-ai -- Collaborative documentation with AI -- is a first application of corun-ai (collaborative runner with AI). corun-ai is an AI harness in which humans assemble the inputs and tools that configure hybrid AI/programmatic workflows, a scheduler runs them, and humans and AIs curate the results.

codoc-ai composes LLM specs (sysprompt, model, effort level, MCP tools, max runtime) together with a user prompt into document generation runs that explore a code base indexed by LXR. The two essential MCP tools used are LXR and GitHub; more can be added as options. Doc generation can explore different aspects of the software, the relative performance of models, the effects of different system prompts and user prompts, etc. Submissions can be original or developed from previous ones. Output docs can all be browsed and compared, assembled into curated collections (eventually), and commented on in prompt-level chat threads.
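The composition described above can be sketched as data: a definition bundles the LLM spec, and a run pairs it with a user prompt. This is an illustrative sketch only; the field names, defaults, and classes are assumptions, not the actual codoc-ai schema.

```python
# Hypothetical sketch of assembling a generation run from an LLM spec
# plus a user prompt. Field names are illustrative, not the real schema.
from dataclasses import dataclass, field


@dataclass
class Definition:
    model: str
    sysprompt: str
    effort: str = "medium"
    # LXR and GitHub are the two essential MCP tools; more are optional.
    mcp_tools: list = field(default_factory=lambda: ["LXR", "GitHub"])
    max_runtime_s: int = 600


@dataclass
class Run:
    definition: Definition
    user_prompt: str


defn = Definition(model="example-model", sysprompt="Document the code base...")
run = Run(definition=defn, user_prompt="Explain the reconstruction flow.")
print(run.definition.mcp_tools)  # ['LXR', 'GitHub']
```

The point is the separation of concerns: the reusable definition is chosen independently of the one-off user prompt, which is what lets the same prompt be rerun under different definitions.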

How To Use It

Browsing Documents

The Documents page is the main view. The left panel shows prompts organized by section (EICrecon, Simulation, etc.), with generated documents nested underneath each prompt.

  • Click a prompt to see its text, metadata and associated discussions on the right
  • Click the ▶ triangle to expand and see generated documents
  • Click a document to read the AI-generated content
  • Use arrow keys to navigate; left/right to expand/collapse
  • Use Open All / Close All to manage the tree
  • Use Search to filter (Esc to clear)

Creating and Running Prompts

  1. Click New Prompt on the Documents page
  2. Write your prompt — describe what you want documented
  3. Select a Section (e.g. EICrecon, Simulation) and a Definition (model + tools config)
  4. Click Generate to queue the job, or Save to save for later

You can also go to the Prompts page to manage prompts directly. From there, click Generate to open the generation panel with definition selection, or Edit to revise the prompt text.

Comparing Model Performance

The same prompt can be run with different definitions (e.g. codoc-SONNET vs codoc-OPUS). Each generation appears as a separate document under the prompt, making it easy to compare how different models handle the same question.
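The comparison workflow amounts to mapping one prompt over several definitions and keeping one document per definition. A minimal sketch, assuming hypothetical definition names and a stand-in `generate()` function (not a real codoc-ai API):

```python
# Hypothetical sketch: one document per definition for the same prompt.
# generate() is a stand-in, not a real codoc-ai API.
definitions = ["codoc-SONNET", "codoc-OPUS"]


def generate(prompt, definition):
    # In the real system this queues a generation run; here we fake it.
    return f"[doc for {definition!r}]"


prompt = "Describe the tracking geometry."
docs = {d: generate(prompt, d) for d in definitions}
print(sorted(docs))  # ['codoc-OPUS', 'codoc-SONNET']
```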

Definitions and System Prompts

  • Definitions configure a generation run: which model, effort level, MCP tools, timeout, and system prompt
  • System Prompts are the reusable instructions that tell the LLM how to approach documentation

You can create new sysprompts for use in definitions, and you can create new definitions.

Both are versioned — every edit creates a new version, and you can view or revert to older versions.
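The versioning behavior described above can be sketched as an append-only history, where a revert re-appends an older version rather than rewriting history. This is a minimal illustrative model, not the actual codoc-ai storage implementation:

```python
# Minimal sketch of append-only versioning with revert
# (illustrative only; not the actual codoc-ai storage model).
class Versioned:
    def __init__(self, text):
        self.versions = [text]          # v1 is index 0

    @property
    def current(self):
        return self.versions[-1]

    def edit(self, new_text):
        self.versions.append(new_text)  # every edit creates a new version

    def revert(self, n):
        # Reverting re-appends version n's text as a new version,
        # so earlier history is never rewritten.
        self.versions.append(self.versions[n - 1])


sp = Versioned("v1 instructions")
sp.edit("v2 instructions")
sp.revert(1)
print(sp.current)        # v1 instructions
print(len(sp.versions))  # 3
```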

Assessment and Discussion

Each prompt has an Assessment & Discussion section where experts can post comments evaluating the generated results — comparing model performance, noting strengths and weaknesses, suggesting prompt refinements, etc. Anyone can read; login to post. This human curation is a key part of the workflow: AI generates, experts assess. Please contribute your comments!

Monitoring

The Queue page shows active and completed jobs with timing, model info, and status. The Logs page shows system events.

Submitting new prompt runs

Currently, at most two jobs run at the same time; the rest are queued. The system presently operates on a personal Claude subscription. Feel free to submit thoughtful prompts (that's what it's for), but be gentle.
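The queueing behavior above (two concurrent jobs, the rest waiting) is the standard bounded worker-pool pattern. A minimal sketch, assuming nothing about the real scheduler beyond the concurrency limit:

```python
# Sketch of the described queueing: at most two jobs run concurrently,
# extra submissions wait their turn (illustrative only).
from concurrent.futures import ThreadPoolExecutor
import threading
import time

running = set()
peak = 0
lock = threading.Lock()


def job(i):
    """Stand-in for a generation run; tracks peak concurrency."""
    global peak
    with lock:
        running.add(i)
        peak = max(peak, len(running))
    time.sleep(0.1)  # simulate work
    with lock:
        running.discard(i)


with ThreadPoolExecutor(max_workers=2) as pool:
    for i in range(5):
        pool.submit(job, i)

print(peak)  # never exceeds 2
```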

Version History

v3 · Apr 2 12:15 · wenaus · current
v2 · Apr 2 12:15 · wenaus
v1 · Apr 2 12:05 · wenaus