Karpathy’s LLM Wiki, 100% local with Ollama. Drop Markdown notes → AI extracts concepts → your Obsidian wiki auto-links and grows. Zero sharing. Your notes stay yours.
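A pipeline like this typically reduces to one LLM call per note plus a link-rendering step. The sketch below is an illustration only, not this repo's code: it targets Ollama's default local endpoint, and the model name, prompt, and helper names (`extract_concepts`, `to_wikilinks`) are assumptions.

```python
# Minimal sketch of the "note -> concepts -> wiki links" step against a local
# Ollama server; model name and prompt are placeholder assumptions.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def extract_concepts(note_text: str, model: str = "llama3") -> list[str]:
    """Ask the local model for a JSON list of concepts mentioned in a note."""
    prompt = (
        "List the key concepts in the note below as a JSON array of strings.\n\n"
        + note_text
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "format": "json", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    data = json.loads(resp.json()["response"])
    # Some models wrap the array in an object; accept either shape.
    return data if isinstance(data, list) else list(data.values())[0]

def to_wikilinks(concepts: list[str]) -> str:
    """Render concepts as Obsidian-style [[wiki links]]."""
    return "\n".join(f"- [[{c}]]" for c in concepts)

if __name__ == "__main__":
    note = "Backpropagation computes gradients of the loss with respect to weights."
    print(to_wikilinks(extract_concepts(note)))
```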
Local-first RAG platform — Ollama + Qdrant + Redis + MinIO. No cloud, no API keys, runs entirely on your machine.
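A stack like this usually splits into an embed step (Ollama) and an index/search step (Qdrant). The sketch below covers only that path, under assumed names: the `nomic-embed-text` model, the `docs` collection, and the default local ports are illustrative, and the Redis and MinIO pieces are left out.

```python
# Sketch of the embed-and-index path for a local Ollama + Qdrant stack.
# Model and collection names are illustrative; Redis (caching) and MinIO
# (object storage) are omitted here.
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Embed text with the local Ollama embeddings endpoint."""
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["embedding"]

client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

chunks = ["Qdrant stores the vectors.", "MinIO holds the original files."]
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=i, vector=embed(c), payload={"text": c})
        for i, c in enumerate(chunks)
    ],
)

hits = client.search(
    collection_name="docs",
    query_vector=embed("where are vectors stored?"),
    limit=2,
)
for h in hits:
    print(h.score, h.payload["text"])
```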
Build a personal wiki from your files, messages, and bookmarks using an automated LLM pipeline for knowledge management.
Local, reproducible corporate RAG (Retrieval-Augmented Generation) lab in Spanish, with synthetic document ingestion, hybrid retrieval, cited answers, traceability, offline evaluation, and a local demo.
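"Hybrid retrieval" in setups like this commonly means fusing a lexical ranking with a vector ranking. One standard way to combine them is reciprocal rank fusion; the function below is a generic illustration, not taken from the repo.

```python
# Illustrative reciprocal rank fusion (RRF), a common way to merge the lexical
# and vector rankings in a hybrid retriever; k=60 is the usual default.
def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Fuse several ranked lists of document ids into one scored ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

bm25_hits  = ["doc3", "doc1", "doc7"]   # lexical ranking
dense_hits = ["doc1", "doc9", "doc3"]   # vector ranking
print(rrf([bm25_hits, dense_hits]))     # doc1 and doc3 rise to the top
```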
Local RAG (FAISS + SQLite + Ollama) for offline querying of sensitive documentation with evidence citations. PySide6 UI.
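In a FAISS + SQLite layout, the usual split is vectors in FAISS and chunk text plus source paths in SQLite, so each answer can point back to its evidence. The sketch below assumes that layout; the schema, the 384-dimension vectors, and the stub embeddings are illustrative, not this repo's actual code.

```python
# Sketch of the FAISS + SQLite side: vectors in a FAISS index, chunk text and
# source paths in SQLite so answers can cite their evidence. A real pipeline
# would use a local embedding model; here embeddings are stubbed with random
# float32 vectors.
import sqlite3
import faiss
import numpy as np

DIM = 384  # depends on the embedding model you run locally

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chunks (id INTEGER PRIMARY KEY, source TEXT, text TEXT)")

# IndexIDMap lets FAISS ids match the SQLite primary keys.
index = faiss.IndexIDMap(faiss.IndexFlatL2(DIM))

def add_chunk(chunk_id: int, source: str, text: str, vec: np.ndarray) -> None:
    db.execute("INSERT INTO chunks VALUES (?, ?, ?)", (chunk_id, source, text))
    index.add_with_ids(vec.reshape(1, DIM), np.array([chunk_id], dtype=np.int64))

def cite(query_vec: np.ndarray, k: int = 3) -> list[tuple[str, str]]:
    """Return (source, text) pairs for the top-k chunks as citations."""
    _, ids = index.search(query_vec.reshape(1, DIM), k)
    rows = []
    for i in ids[0]:
        if i == -1:  # FAISS pads with -1 when fewer than k results exist
            continue
        rows.append(
            db.execute("SELECT source, text FROM chunks WHERE id = ?", (int(i),)).fetchone()
        )
    return rows

rng = np.random.default_rng(0)
add_chunk(0, "handbook.pdf", "Backups run nightly at 02:00.", rng.random(DIM, dtype=np.float32))
add_chunk(1, "runbook.md", "Restore from the latest snapshot first.", rng.random(DIM, dtype=np.float32))
print(cite(rng.random(DIM, dtype=np.float32)))
```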
Automate your documentation by using LLMs to read files, structure a searchable wiki, and keep content current with source tracking.
Maintain a centralized knowledge wiki for LLMs by compiling raw documentation into actionable, versioned markdown files.
Query your Obsidian vault using local LLMs to generate text and retrieve information.
Build a structured knowledge base with LLM agents to automate documentation, link concepts, and retain information long-term in Markdown.