The Core Problem

Every time you start a Claude session, it re-reads the same raw files — meeting notes, strategy docs, research writeups — from scratch. With 100+ files, that’s thousands of tokens spent just reaching baseline context before any real work begins. LLM Wiki Compiler solves this by compiling your raw files into topic-based articles once, then having Claude read the synthesized wiki instead. Knowledge compounds across sessions rather than fragmenting.

The Flow

Raw source files  →  LLM compilation  →  Topic-based wiki  →  Agent reads wiki
  (many files)       (/wiki-compile       (13 articles)         (not raw files)
                      or /wiki-ingest)
In practice, this looks like:
383 files (13.1 MB)  →  13 articles (161 KB)  →  84% fewer tokens per session
That 81x compression comes from synthesis: instead of storing every raw sentence, the wiki stores what things mean across all sources — with backlinks to the originals whenever you need the detail.
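The ratio is simple arithmetic over the sizes from the example run (using 1 MB = 1000 KB):

```python
raw_kb = 13.1 * 1000   # 383 source files, 13.1 MB
wiki_kb = 161          # 13 compiled articles, 161 KB

ratio = raw_kb / wiki_kb
print(f"{ratio:.0f}x")  # ~81x
```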

Two Compilation Paths

/wiki-compile

Batch compilation. Reads all your source directories, classifies files by topic, and writes or updates every topic article in one pass. Incremental by default — only recompiles articles whose source files changed since the last run. Use this for your regular workflow.
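The incremental check can be pictured as a modification-time comparison. This is a sketch of the idea only: the `topic_sources` and `last_compiled` structures are illustrative assumptions, not the tool's actual internals.

```python
import os

def stale_topics(topic_sources: dict[str, list[str]],
                 last_compiled: dict[str, float]) -> list[str]:
    """Return topics whose source files changed since the last run.

    topic_sources maps a topic slug to its source file paths;
    last_compiled maps a topic slug to the timestamp of its last compile.
    """
    stale = []
    for topic, paths in topic_sources.items():
        # Newest mtime among this topic's sources; 0.0 if it has none.
        newest = max((os.path.getmtime(p) for p in paths), default=0.0)
        if newest > last_compiled.get(topic, 0.0):
            stale.append(topic)
    return stale
```

A topic never compiled before (absent from `last_compiled`) defaults to timestamp 0.0 and is always treated as stale.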

/wiki-ingest

Interactive single-file ingestion. Reads one file, shows you key takeaways, asks what to emphasize, then updates all relevant topic articles. A single source can touch multiple topics — the compiler handles cross-referencing. Use this when you want to stay hands-on with your knowledge base as it grows.
Both commands write only to your configured output directory. Your source files are never modified.

What Gets Compiled

During /wiki-init, the compiler samples your source files and proposes an article structure tailored to your domain. You approve or adjust it before anything gets compiled. The structure is saved in .wiki-compiler.json. A product team’s wiki might use:
Summary — Timeline — Current State — Key Decisions — Experiments & Results — Gotchas — Open Questions — Sources
A research wiki might use:
Summary — Key Findings — Methodology — Evidence — Gaps & Contradictions — Open Questions — Sources
Summary and Sources are always present. Every other section is customizable.
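A saved structure along these lines would capture the product-team layout above. Every field name here is an assumption for illustration; check the `.wiki-compiler.json` the tool actually generates for the real format.

```json
{
  "sources": ["notes/", "strategy/", "research/"],
  "output": "wiki/",
  "sections": [
    "Summary", "Timeline", "Current State", "Key Decisions",
    "Experiments & Results", "Gotchas", "Open Questions", "Sources"
  ]
}
```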

Concept Articles

After compiling topic articles, the compiler scans for patterns that appear across three or more topics and generates concept articles in wiki/concepts/. These are interpretive — they answer “what does this pattern mean?” rather than just “what happened?” Examples from a real project:
  • Speed vs Quality Tradeoff — 6 instances where this decision surfaced across retention, push notifications, and experiment design
  • Cross-Team Decision Patterns — communication dynamics synthesized from 24 meetings
  • Evolution of Retention Thinking — how the approach shifted from Oct 2025 to Apr 2026 across analytics, strategy, and experiments
Concept articles are discovered automatically. You can also seed them in schema.md if you already know which patterns you want tracked.
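The three-topic threshold reduces to a cross-topic count. This sketch uses exact pattern labels for clarity; the compiler's actual pattern matching is an LLM judgment, not string comparison, and the function name is hypothetical.

```python
from collections import defaultdict

def discover_concepts(topic_patterns: dict[str, set[str]],
                      min_topics: int = 3) -> dict[str, list[str]]:
    """Map each recurring pattern to the topics it appears in,
    keeping only patterns that span min_topics or more topics."""
    seen = defaultdict(list)
    for topic, patterns in topic_patterns.items():
        for pattern in patterns:
            seen[pattern].append(topic)
    return {p: sorted(t) for p, t in seen.items() if len(t) >= min_topics}
```

A pattern seen in only two topics stays inside those topic articles; at three or more it graduates to `wiki/concepts/`.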

The Schema Document

On first compile, the compiler generates schema.md in your wiki output directory. It records:
  • Topic list — every topic slug and a one-line description of what it covers
  • Concept list — cross-cutting patterns and the topics they connect
  • Article structure — the section format and coverage tag conventions
  • Naming conventions — slug format, file naming, date format, link style
  • Cross-reference rules — when topics should link to each other
  • Evolution log — a chronological record of schema changes
The schema co-evolves with your knowledge base. You can edit it directly to rename topics, merge them, or add conventions. The compiler reads schema.md before each run and respects your changes. New topics discovered during compilation are added automatically, with an entry in the evolution log.
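Auto-registering a newly discovered topic amounts to two writes: one under the topic list, one under the evolution log. In this sketch the heading names `## Topics` and `## Evolution Log` are assumptions about how the generated `schema.md` is sectioned.

```python
from datetime import date

def register_topic(schema: str, slug: str, description: str) -> str:
    """Add a topic bullet under '## Topics' and record the change
    under '## Evolution Log'. Both heading names are assumptions."""
    out = []
    for line in schema.splitlines():
        out.append(line)
        if line.strip() == "## Topics":
            out.append(f"- {slug}: {description}")
        elif line.strip() == "## Evolution Log":
            out.append(f"- {date.today().isoformat()}: added topic '{slug}'")
    return "\n".join(out)
```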

How Agents Use the Wiki

Once compiled, your agent reads the wiki instead of raw files:
1. Session startup. Claude reads wiki/INDEX.md for a topic overview, then reads the specific topic articles relevant to the current task.
2. Coverage-guided depth. Each section has a coverage tag. High-coverage sections are trusted directly; low-coverage sections point Claude to the exact raw files it needs.
3. Fallback when needed. For granular questions, Claude follows backlinks in the article's Sources section to the specific raw files, rather than scanning everything.
This gives you the speed of synthesized context with a reliable escape hatch to raw detail when it matters.
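Coverage-guided depth comes down to a per-section decision. A minimal sketch, assuming a `high`/`medium`/`low` tag vocabulary and a simple section shape; neither is confirmed by the tool's docs.

```python
def files_to_read(sections: list[dict]) -> list[str]:
    """Trust high-coverage sections as-is; for anything lower,
    collect the backlinked raw files the section points to."""
    raw_files = []
    for section in sections:
        if section["coverage"] != "high":
            raw_files.extend(section["sources"])
    return raw_files
```

With this rule, a well-covered wiki costs almost nothing extra per session, and only the thin spots trigger reads of the original files.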