llm-wiki is an FS-first wiki and memory interface for coding agents.
It keeps remote nodes in an IC canister and exposes the same VFS through canister queries, a CLI, a shared client library, and validation workflows.
It also includes a DB-backed Skill Knowledge Base for teams that want to find, evaluate, and grow agent skills from real task evidence.
```mermaid
flowchart LR
  A["Agent or CLI"] --> B["kinic-vfs-cli / shared client"]
  B --> C["IC canister"]
  C --> D["SQLite store + FTS"]
```
Detailed structure map:
- Source of truth: remote `/Wiki/...` and `/Sources/...` nodes
- Conflict control: file-level `etag`
- Search: SQLite FTS on current node content
- Agent memory: task-scoped context, provenance, and local graph queries
- FS-first remote node API backed by the IC
- Rust CLI for direct path-based operations
- Search, snapshot export, and delta reads
- Link graph and node-context queries for wiki navigation
- Agent Memory API v1 for canister-backed long-term context reads
- Skill Knowledge Base paths for private/team `SKILL.md` packages plus public catalog nodes
- Benchmark and validation workflows for VFS behavior
Current scope:
- single-tenant
- text-first
- `/Wiki/...` as the primary durable wiki root
- `/Sources/...` for raw and session source nodes
Storage constraints:
- User databases consume stable-memory mount IDs `11..=32767`, so one canister has 32757 lifetime database slots in v1.
- Archived or deleted databases clear their active mount ID, but v1 does not recycle historical mount IDs.
- Deleting, archiving, and restoring databases still consume cumulative lifetime mount IDs (see the sketch after this list).
- See `docs/DB_LIFECYCLE.md` for DB status, slot reuse, archive, and restore behavior.
- Link graph queries are backed by `fs_links`; SQLite size grows with stored link edges and two link indexes.
- Node writes update the link index in the same transaction as node content and FTS updates.
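A hedged lifecycle sketch of the mount-ID cost, using only command names that appear in the command list later in this README; argument shapes beyond `--canister-id` and `--database-id` are assumptions:

```bash
# Each create or restore consumes a fresh lifetime mount ID; v1 never
# recycles them, so repeated cycles burn slots from the 11..=32767 range.
kinic-vfs-cli --canister-id <canister-id> database create
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  database archive-export   # clears the active mount ID
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  database archive-restore  # consumes a new lifetime mount ID
```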
```bash
cargo test --workspace
cargo clippy --workspace --all-targets -- -D warnings
```

```bash
bash scripts/build-vfs-canister.sh
icp network start -d -e local-wiki
icp deploy -e local-wiki
```

If you need to install the Rust target manually first, use `rustup target add wasm32-unknown-unknown`.
Resolve the target canister with one of:
- `--canister-id`
- `VFS_CANISTER_ID`
- `~/.config/kinic-vfs-cli/config.toml`
- `~/.kinic-vfs-cli.toml`
Minimal config:
```toml
canister_id = "aaaaa-aa"
```

Use `--local` to target `http://127.0.0.1:8000`, or `--replica-host http://127.0.0.1:8001` for a project-local network on another port. Otherwise the default host is `https://icp0.io`.
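A minimal sketch of seeding the XDG config location, assuming `canister_id` is the only required key (as in the minimal config above):

```bash
# Write the one documented key to the documented XDG config path.
mkdir -p ~/.config/kinic-vfs-cli
cat > ~/.config/kinic-vfs-cli/config.toml <<'EOF'
canister_id = "aaaaa-aa"
EOF
```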
Authenticated CLI commands require `icp-cli` on `PATH` and use `icp identity default`. `--identity-mode auto` is the default: private reads and member public reads use the selected identity, while public non-member reads stay anonymous.
The fastest product path is the Skill KB quickstart:
```bash
CANISTER_ID=<canister-id> scripts/demo_skill_kb.sh
```

For a local replica:

```bash
CANISTER_ID=<canister-id> LOCAL=1 scripts/demo_skill_kb.sh
```

See docs/QUICKSTART_SKILL_KB.md for the manual 5-minute flow.
The sample under examples/skill-kb shows the intended loop: upload a skill package, find it from task context, inspect package files and evidence, record run evidence, then promote it.
The demo script can be rerun; if the database already exists, it links and continues.
DB-backed commands require `--database-id` or `VFS_DATABASE_ID`; no production default DB is created implicitly. Older single-DB commands such as `kinic-vfs-cli read-node --path /Wiki/index.md` must now select a DB:
```bash
cargo run -p kinic-vfs-cli --bin kinic-vfs-cli -- --canister-id <canister-id> database create
cargo run -p kinic-vfs-cli --bin kinic-vfs-cli -- --canister-id <canister-id> --database-id <database-id> write-node --path /Wiki/index.md --input index.md
cargo run -p kinic-vfs-cli --bin kinic-vfs-cli -- --canister-id <canister-id> database grant <database-id> 2vxsx-fae reader
```

`database create` prints the generated DB ID. Use that ID for `--database-id` and grants. Public browser reads use the anonymous principal `2vxsx-fae`, so public DBs must grant that principal `reader`. Publicly readable DBs also expose the database member list, including principals and roles, through the public dashboard.
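In scripts, the documented environment variables can carry the same selection so the flags are not repeated per command; this sketch assumes an installed `kinic-vfs-cli` binary on PATH rather than `cargo run`:

```bash
# Select canister and database once via the documented env vars.
export VFS_CANISTER_ID=<canister-id>
export VFS_DATABASE_ID=<database-id>
kinic-vfs-cli read-node --path /Wiki/index.md
```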
Use `kinic-vfs-cli` when working from a shell or script. The Browser is the primary public UI; `kinic-vfs-cli` is the single operator and power-user binary with two command surfaces:
- wiki/database operations: database setup, grants, scripted node reads and writes, search, and archive/restore.
- skill registry operations: package upsert/import, discovery, inspection, run evidence, proposals, status changes, and lockfile-only install.
See docs/CLI.md for wiki/database flags, search preview modes, and operator examples.
See docs/SKILL_REGISTRY.md for kinic-vfs-cli skill ..., Skill Knowledge Base layout, manifest fields, database-role access, and Browser support.
See docs/RELEASE.md for GitHub Release artifacts, Homebrew install, and fallback Cargo install. See docs/PUBLIC_SMOKE.md for the local public-read smoke flow.
Main commands:
Wiki and database operations:
`rebuild-index`, `rebuild-scope-index`, `read-node`, `read-node-context`, `list-nodes`, `write-node`, `append-node`, `edit-node`, `delete-node`, `delete-tree`, `mkdir-node`, `move-node`, `glob-nodes`, `recent-nodes`, `graph-neighborhood`, `graph-links`, `incoming-links`, `outgoing-links`, `multi-edit-node`, `search-remote`, `search-path-remote`, `status`, `database archive-export`, `database archive-restore`, `database archive-cancel`, `database restore-cancel`
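A short hedged walkthrough of the write/read path, using only flags that appear elsewhere in this README; `<canister-id>` and `<database-id>` are placeholders:

```bash
# Write a node, then read it back; --path and --input are the flags shown
# in the database examples above.
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  write-node --path /Wiki/notes/example.md --input example.md
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  read-node --path /Wiki/notes/example.md
```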
Skill registry operations:
`skill upsert`, `skill find`, `skill inspect`, `skill install`, `skill import github`, `skill propose-improvement`, `skill approve-proposal`, `skill record-run`, `skill set-status`, `github ingest`
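The find, inspect, record-run loop could look roughly like the sketch below; the subcommand names come from the list above, but every flag (`--query`, `--skill-id`) is a hypothetical placeholder rather than the documented surface, which lives in docs/SKILL_REGISTRY.md:

```bash
# Illustrative only: flag names here are assumptions, not documented.
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  skill find --query "rotate API keys"    # discover candidate skills
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  skill inspect --skill-id <skill-id>     # read SKILL.md and evidence
kinic-vfs-cli --canister-id <canister-id> --database-id <database-id> \
  skill record-run --skill-id <skill-id>  # capture run evidence
```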
Use the shared Rust library when embedding VFS tool calling into an OpenAI-compatible client. This does not shell out to the CLI; it uses the same canister-backed VFS through the shared client and tool dispatcher.
```rust
use anyhow::Result;
use vfs_cli::agent_tools::{create_openai_tools, handle_openai_tool_call};
use vfs_client::CanisterVfsClient;
async fn run() -> Result<()> {
let client = CanisterVfsClient::new(
"http://127.0.0.1:8000",
"aaaaa-aa",
)
.await?;
let tools = create_openai_tools();
// Pass `tools` into your OpenAI-compatible SDK request.
// When the model returns a tool call:
let result = handle_openai_tool_call(
&client,
"append",
r#"{"path":"/Wiki/memory.md","content":"remember this"}"#,
)
.await?;
println!("{}", result.text);
Ok(())
}
```

Current tool names:
`read`, `read_context`, `write`, `append`, `edit`, `ls`, `mkdir`, `mv`, `glob`, `recent`, `graph_neighborhood`, `graph_links`, `incoming_links`, `outgoing_links`, `multi_edit`, `rm`, `search`, `search_paths`, `skill_find`, `skill_inspect`, `skill_read`, `skill_record_run`
Skill discovery and read tools are read-only runtime helpers.
Agents should call `skill_find` at task start, inspect promising candidates, read `SKILL.md` and package-local helper files, then apply those instructions to the current task.
They do not require shelling out to the CLI.
`skill_record_run` is a write tool for agent-side evidence capture and is excluded from read-only tool sets.
Use the CLI for operational writes such as `skill upsert`, `database link`, imports, and improvement-proposal approval.
Use the read-only Agent Memory API when an agent talks directly to the canister rather than through CLI commands.
Primary methods:
- `memory_manifest`: discover roots, capabilities, limits, and memory API policy
- `query_context`: primary task-scoped context bundle with search hits, canonical pages, local graph, and optional evidence
- `source_evidence`: source-path evidence lookup for a known wiki node
Auxiliary methods:
`read_node_context`, `search_nodes`, `search_node_paths`, `graph_neighborhood`, `recent_nodes`
`recent_changes` and `memory_summary` are not part of v1. Use `recent_nodes` for recent live nodes, and use `query_context` with a summary-style task for maintained overview context.
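As a rough sketch of direct canister access, the call below assumes `icp-cli` exposes a `canister call` subcommand and that `memory_manifest` takes no arguments; neither is documented in this README, so verify against the canister's candid interface:

```bash
# Illustrative only: subcommand shape and zero-argument call are assumptions.
icp canister call <canister-id> memory_manifest '()' -e local-wiki
```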
The public validation docs live under docs/validation/.
- overview: docs/validation/VFS_VALIDATION_PLAN.md
- coverage matrix: docs/validation/VFS_CORRECTNESS_CHECKLIST.md
- deployed canister benchmark contract: docs/validation/VFS_DEPLOYED_CANISTER_BENCHMARKS.md
Minimum validation commands:
```bash
cargo test --workspace
bash scripts/build-vfs-canister-canbench.sh
```

If the fixed canbench runtime is available, also run:

```bash
bash scripts/run_canbench_guard.sh
```

Documentation policy:

- Public entry docs stay in English
- Validation docs describe VFS behavior, not product marketing
- Internal operating notes stay repo-local and are not part of the public entry path
- Historical or exploratory material is removed or archived instead of being linked from the README