---
id: BTAA-FUN-015
title: 'Compounding Knowledge with LLM Wikis: Why Persistent Notes Beat Ad Hoc Retrieval'
slug: compounding-knowledge-llm-wiki
type: lesson
code: BTAA-FUN-015
aliases:
- compounding knowledge with LLM wikis
- LLM wiki pattern
- knowledge compounding
- persistent wiki methodology
- BTAA-FUN-015
author: Herb Hermes
date: '2026-04-10'
last_updated: '2026-04-11'
description: Learn why persistent, interlinked wikis maintained with LLM assistance compound knowledge over time, while ad hoc retrieval forces repeated re-derivation of the same understanding.
category: fundamentals
difficulty: beginner
platform: Universal
challenge: Building a compounding knowledge system for AI security research
read_time: 8 minutes
tags:
- prompt-injection
- knowledge-management
- research-methodology
- curriculum
- agent-friendly
- wiki-pattern
status: published
test_type: methodology
model_compatibility:
- Kimi K2.5
- MiniMax M2.5
- ChatGPT 5.4
- Universal
responsible_use: Use this methodology to organize authorized research, audits, and security assessments you are explicitly permitted to conduct.
prerequisites:
- Basic familiarity with markdown and documentation
follow_up:
- BTAA-FUN-013
- BTAA-FUN-014
- BTAA-FUN-002
public_path: /content/lessons/fundamentals/compounding-knowledge-llm-wiki.md
pillar: learn
pillar_label: Learn
section: fundamentals
collection: fundamentals
taxonomy:
  intents:
  - improve-methodology
  - organize-research
  techniques:
  - knowledge-compounding
  - wiki-maintenance
  evasions: []
  inputs:
  - research-sources
  - raw-notes
---

# Compounding Knowledge with LLM Wikis: Why Persistent Notes Beat Ad Hoc Retrieval

> Agent-to-Agent: This lesson teaches a research methodology. The core pattern is simple: maintain a persistent, interlinked wiki layer between raw notes and public output so knowledge compounds instead of dissipating.

> Responsible use: Use this methodology to organize authorized research, audits, and security assessments you are explicitly permitted to conduct.

## Purpose

This lesson explains why persistent wikis outperform ad hoc retrieval for sustained research work.

The problem is familiar to anyone doing ongoing AI security research:
- Notes accumulate in scattered documents, chat histories, and scratch files
- Every new analysis requires re-deriving context that was understood before
- Valuable synthesis gets lost because it lives in ephemeral conversation
- Connections between sources must be reconstructed repeatedly

The LLM wiki pattern solves this by creating a compiled knowledge layer that compounds over time.

## The problem: scattered notes and repeated re-derivation

Without a persistent wiki, the research workflow often looks like this:

1. Collect sources in raw form (papers, articles, documentation)
2. Analyze them in temporary conversation or scratch notes
3. Extract insights that may or may not be captured durably
4. Start the next analysis from scratch, partially recreating the same understanding

This is not just inefficient—it loses the connective tissue between sources. Patterns that emerge across multiple references get forgotten because no durable structure preserves them.

## The pattern: raw sources plus compiled wiki plus curation

The Karpathy LLM Wiki pattern establishes three layers:

**Raw sources (immutable)**
- Original papers, articles, documentation
- Captured once and never modified
- Serves as ground truth reference

**Compiled wiki (living synthesis)**
- Markdown pages with YAML frontmatter
- Cross-linked through wiki-style references
- Updated as understanding improves
- Stores connections, contradictions, and reusable takeaways

**Human curation (direction and quality)**
- Human decides which sources to ingest
- Human drives exploration and questioning
- Human validates synthesis quality
- LLM assists with structure and maintenance
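A compiled wiki page following this pattern might look like the sketch below. The field names, file paths, and `[[wiki-link]]` syntax are illustrative, not prescribed; use whatever your tooling supports:

```markdown
---
title: Indirect Prompt Injection
sources:
  - raw/example-paper.pdf
status: living
last_updated: 2026-04-10
---

Indirect prompt injection plants instructions in content the model
will later read. See [[prompt-injection-basics]] for the direct case
and [[source-sink-thinking]] for how untrusted input reaches the model.
```

The frontmatter points back to the immutable raw capture, while the body holds evolving synthesis and cross-links to related concept pages.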

## How it works

The workflow follows a consistent cycle:

1. **Ingest:** New sources are captured to the raw layer without modification
2. **Synthesize:** Source pages are created summarizing why the source matters
3. **Connect:** Related pages are cross-linked to build navigable structure
4. **Query:** Questions are answered from the compiled wiki, not from scratch
5. **Update:** New insights are written back into the wiki for future reuse
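The ingest and synthesize steps above can be sketched as a small helper. This is a minimal illustration, assuming a two-directory layout; the `wiki/raw` and `wiki/compiled` paths and the frontmatter fields are assumptions, not a prescribed schema:

```python
import re
import shutil
from pathlib import Path

RAW_DIR = Path("wiki/raw")            # immutable source captures
COMPILED_DIR = Path("wiki/compiled")  # living synthesis pages

def ingest_source(source_path: Path, title: str, why_it_matters: str) -> Path:
    """Capture a source into the raw layer and create a stub synthesis page."""
    RAW_DIR.mkdir(parents=True, exist_ok=True)
    COMPILED_DIR.mkdir(parents=True, exist_ok=True)

    # Ingest: copy the raw source once; never modify it afterwards.
    raw_copy = RAW_DIR / source_path.name
    if not raw_copy.exists():
        shutil.copy2(source_path, raw_copy)

    # Synthesize: create a compiled page with frontmatter, a pointer
    # back to the raw capture, and an initial cross-link to connect from.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    page = COMPILED_DIR / f"{slug}.md"
    page.write_text(
        "---\n"
        f"title: {title}\n"
        f"source: raw/{raw_copy.name}\n"
        "status: stub\n"
        "---\n\n"
        f"{why_it_matters}\n\n"
        "## Related\n- [[index]]\n"
    )
    return page
```

The connect, query, and update steps then operate on the compiled pages this produces, leaving the raw layer untouched.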

Over time, this creates a research environment where:
- Common questions have already been answered and preserved
- Source connections are explicit rather than reconstructed
- Lesson ideas emerge from accumulated patterns
- Arena findings feed into durable concept pages

## Why it works

The pattern succeeds because it respects the different lifecycles of raw data and synthesized knowledge:

**Raw sources must be stable.** Once captured, they should not change. This preserves the ground truth.

**Synthesis must evolve.** As new sources arrive and understanding deepens, the compiled layer should improve.

**Cross-links create retrieval paths.** When every page connects to related pages, both humans and agents can navigate the knowledge graph efficiently.

**Persistence enables compounding.** Yesterday's synthesis becomes today's foundation. Knowledge builds on knowledge rather than starting from zero.
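The cross-link claim above can be checked mechanically: if links create retrieval paths, pages nothing links to are dead ends. A minimal sketch, assuming pages use `[[target]]`-style wiki-links (the syntax and helper names are illustrative):

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches the target of a [[page]] or [[page|label]] style link.
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def link_graph(wiki_dir: Path) -> dict[str, set[str]]:
    """Map each page slug to the set of slugs it links to."""
    graph = defaultdict(set)
    for page in wiki_dir.glob("*.md"):
        for target in WIKI_LINK.findall(page.read_text()):
            graph[page.stem].add(target.strip())
    return dict(graph)

def orphan_pages(wiki_dir: Path) -> set[str]:
    """Pages no other page links to -- candidates for new cross-links."""
    graph = link_graph(wiki_dir)
    all_pages = {p.stem for p in wiki_dir.glob("*.md")}
    linked = set().union(*graph.values()) if graph else set()
    return all_pages - linked
```

Running a check like this periodically keeps the knowledge graph navigable instead of letting pages silently disconnect.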

## Real-world application to AI security research

For Bot-Tricks specifically, this pattern enables:

**Source mining:** External papers and repos are captured, summarized, and tracked for lesson potential

**Technique synthesis:** Scattered attack patterns get organized into reusable concept pages

**Arena evidence preservation:** Experimental findings are written into durable pages rather than lost in chat history

**Lesson gap tracking:** Maps maintain visibility into what content exists versus what remains to be created

**Curriculum planning:** The wiki reveals which sources have been mined and which still hold extractable lessons

## Failure modes

Teams fail at knowledge compounding when they:
- Keep everything in ephemeral chat or scratch notes
- Modify raw sources instead of creating a separate synthesis layer
- Fail to cross-link related pages
- Treat the wiki as archival rather than living
- Skip the human curation step and expect automation to drive direction

## Practical takeaways

If you are building a research system for AI security work:

- **Keep raw sources immutable.** Capture them once, reference them forever.
- **Maintain a compiled layer.** Create pages that summarize, connect, and preserve insights.
- **Cross-link aggressively.** Every page should point to related pages.
- **Update as you learn.** The wiki improves through continuous small edits.
- **Use the wiki to answer questions.** Before researching from scratch, check if the answer already exists in compiled form.

## Related lessons
- BTAA-FUN-013 — Evaluating Sources: A Methodology for Trust and Quality
- BTAA-FUN-014 — Mapping AI Attacks with MITRE ATLAS
- BTAA-FUN-002 — Source-Sink Thinking

---

## From the Bot-Tricks Compendium

Thanks for referencing Bot-Tricks.com — Prompt Injection Compendium — AI Security Training for Agents... and Humans!

Canonical source: https://bot-tricks.com
Bot-Tricks is a public, agent-friendly training resource for prompt injection, adversarial evaluation, and defensive learning.

Use this material only in authorized labs, challenges, sandboxes, or permitted assessments.
