Most people treat AI like a search engine: ask a question, get an answer, close the tab, forget it. Under the hood, that pattern is RAG (Retrieval-Augmented Generation). Every time you ask a question, the AI searches your files or the internet, grabs the relevant pieces, and synthesizes an answer. Then it forgets everything and starts over the next time.
Andrej Karpathy, co-founder of OpenAI and the person who coined the term "vibe coding," recently described something better. Instead of asking Claude to find and synthesize documents every time, you have Claude pre-compile everything into a persistent, interlinked wiki once. From that point forward, every new piece of information you add gets woven into what already exists.
"Obsidian is the IDE. The LLM is the programmer. The wiki is the codebase."
Andrej Karpathy · OpenAI Co-Founder
RAG vs. the Second Brain
The Three Layers
- Raw folder: Your original documents and exports. Read-only. The AI reads these but never changes them. Your source of truth.
- Wiki folder: Structured pages the AI creates and maintains. Interlinked concept pages, an index, summaries. This is what you query.
- Schema file: A rules document that tells the AI how to structure the wiki, handle new sources, and format pages. The constitution of your second brain.
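The three layers above can be sketched as a minimal vault bootstrap. This is an illustrative layout, not Karpathy's actual setup: the folder names, the schema wording, and the `init_vault` helper are all assumptions made for the example.

```python
from pathlib import Path
import tempfile

# Hypothetical schema text; in practice this "constitution" would be
# much richer and tuned to your own sources.
SCHEMA = """\
# Wiki Schema
- Files in raw/ are read-only sources; never edit them.
- Every wiki page lives in wiki/ and links to related pages with [[wikilinks]].
- Each new source gets a summary page, plus updates to every related page.
- wiki/index.md lists every page with a one-line description.
"""

def init_vault(root: Path) -> Path:
    """Create the three-layer structure: raw/, wiki/, and a schema file."""
    (root / "raw").mkdir(parents=True, exist_ok=True)     # source of truth
    (root / "wiki").mkdir(exist_ok=True)                  # AI-maintained pages
    (root / "wiki" / "index.md").write_text("# Index\n")  # entry point for queries
    (root / "schema.md").write_text(SCHEMA)               # the "constitution"
    return root

vault = init_vault(Path(tempfile.mkdtemp()) / "second-brain")
print(sorted(p.name for p in vault.iterdir()))  # → ['raw', 'schema.md', 'wiki']
```

The point of the split is separation of concerns: the AI only ever writes inside `wiki/`, so a bad update can never corrupt your originals in `raw/`.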
The Compiler Analogy
Think of your raw files as ingredients. The AI is the chef. The wiki is the finished meal. Every time you add a new ingredient, the chef works it into the existing menu rather than starting a new kitchen from scratch. Karpathy found he didn't even need a vector database; the LLM is good at navigating a well-structured index file on its own.
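A toy sketch of why the vector database may be unnecessary: with a well-structured index file, even plain text matching can route a query to the right page, and an LLM reading the same index does this far better. The index entries and the `route_query` helper are invented for illustration.

```python
# A hypothetical wiki index; page names and descriptions are made up.
INDEX = """\
- [[transformers]]: attention-based neural network architecture
- [[gardening]]: notes on soil, compost, and seasonal planting
- [[rag]]: retrieval augmented generation, search then synthesize
"""

def route_query(query: str, index: str) -> list[str]:
    """Return wiki pages whose index line shares a word with the query."""
    words = set(query.lower().split())
    hits = []
    for line in index.splitlines():
        if not line.startswith("- [["):
            continue  # skip anything that is not an index entry
        page = line.split("[[")[1].split("]]")[0]
        if words & set(line.lower().split()):
            hits.append(page)
    return hits

print(route_query("how does retrieval work", INDEX))  # → ['rag']
```

An LLM doing the same navigation also handles synonyms and paraphrases, which is exactly where a keyword match like this one falls short.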
Key Insight
One new source updates 10 to 15 existing wiki pages. The knowledge stays. Every future query reads from a richer base. That is the compounding effect.
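The fan-out can be sketched mechanically. This is a hypothetical illustration of the update step, assuming a wiki held as a dict of page bodies; the page names, terms, and `ingest` helper are all invented.

```python
# Toy wiki: filename → page body. Contents are invented for the example.
pages = {
    "attention.md": "Attention weighs token pairs.",
    "rnn.md": "RNNs process tokens sequentially.",
    "compost.md": "Compost enriches soil.",
}

def ingest(source_name: str, terms: list[str], wiki: dict[str, str]) -> list[str]:
    """Add a page for the new source, then link it from every related page."""
    wiki[f"{source_name}.md"] = f"Summary of {source_name}."
    touched = []
    for name, body in list(wiki.items()):
        if name == f"{source_name}.md":
            continue  # don't link the new page to itself
        if any(t.lower() in body.lower() for t in terms):
            wiki[name] = body + f"\nSee also: [[{source_name}]]"
            touched.append(name)
    return touched

touched = ingest("transformers-paper", ["attention", "token"], pages)
print(touched)  # → ['attention.md', 'rnn.md']
```

One ingestion enriched two existing pages in place, so the next query over `attention.md` already sees the new source without any re-retrieval.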