A sovereign agentic intelligence system built on local hardware with persistent memory, curated knowledge, and user-defined worldview alignment.
Most AI agents are stateless: they forget who you are and what you've decided once a conversation ends. Logos is a sovereign knowledge system that builds a permanent, local library of truth, anchored in your worldview.
- Sovereignty: All memory and knowledge live on your hardware. No cloud sync, no external API dependencies for recall.
- Truth Hierarchy: The system prioritizes curated, verified facts (Reference Library) over the volatile noise of the open web.
- Continuity: It remembers not just what was said, but the decisions made and the tasks deferred, ensuring work continues exactly where it left off across sessions.
This isn't a chatbot. It's a research tool with memory.
The distinguishing features:
- Frame-Stripping Skill: 10 rules for separating facts from framing. Strips loaded language, extracts verifiable claims, cross-references independent sources, and presents findings through your stated worldview.
- Narrative-Control-Detection: identifies six-phase information warfare patterns (e.g., initial break → narrative shift → article removal → flood the zone → entrenchment) when they appear in research results.
- SourceAnalyzer (`agent/source_analysis.py`): Phase 3.5 in the research pipeline. Builds and updates source dossiers automatically, flagging ideological markers and consistent omission patterns.
- Nightly Learning Loop: scheduled jobs run deep research, apply frame-stripping, and distill findings into the Reference Library. The knowledge base grows through use.
The system runs locally. No cloud APIs for memory or retrieval. No moral relativism baked in.
For a comprehensive architectural deep-dive covering all subsystems, the epistemic framework, and the design philosophy in detail, see WHITEPAPER.md.
Logos is an agent with persistent memory, curated knowledge, and proactive retrieval. The key subsystems:
SQLite-backed conversation archive with FTS5 full-text search and optional semantic embeddings (ONNX/SentenceTransformers). Every turn is stored locally; no cloud sync, no external API calls. Hybrid retrieval: keyword + vector similarity in a single query.
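The hybrid keyword-plus-vector idea can be sketched with the standard library alone. The table name, toy embeddings, and two-step candidate/re-rank flow below are illustrative assumptions, not the plugin's actual schema:

```python
import math
import sqlite3

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE msgs USING fts5(body)")
conn.executemany(
    "INSERT INTO msgs(body) VALUES (?)",
    [("we decided to use sqlite for archival",),
     ("the weather was nice today",)],
)

# Toy embeddings keyed by rowid; a real system would store them as BLOBs.
embeddings = {1: [0.9, 0.1], 2: [0.1, 0.9]}
query_vec = [1.0, 0.0]

# Step 1: FTS5 keyword candidates. Step 2: re-rank by vector similarity.
rows = conn.execute(
    "SELECT rowid, body FROM msgs WHERE msgs MATCH ? ORDER BY rank",
    ("sqlite",),
).fetchall()
ranked = sorted(rows, key=lambda r: cosine(embeddings[r[0]], query_vec),
                reverse=True)
print(ranked[0][1])
```

The two signals complement each other: FTS5 narrows by exact terms, the vector pass recovers semantic closeness among those candidates.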
Full plugin suite providing perpetual memory tools to the agent:
- Hybrid search: `perpetual_search` (FTS5 + semantic), `query_messages` (SQL-style filtering), `get_messages` (exact pattern matching)
- Smart retrieval: auto-routes queries to the optimal strategy (recent, topic-specific, decision trace, file history)
- Context bridge builder: extracts active tasks, errors, and decisions for injection at archival boundaries
- Source analysis: `source_analyze` examines web search results for ideological alignment, omissions, and deviations. `deep=true` mode extracts full article content via Firecrawl before analyzing. Auto-creates source dossiers in `sources/` for new domains.
- Logos Deep Research & Continuity: sovereign knowledge acquisition pipeline:
  - Three-Tier Web Stack: SearXNG (Discovery) → Firecrawl (Extraction) → Camofox (Anti-detection Browser)
  - Epistemic Filtering: an integrated scrutiny gate that filters raw web data through a user-defined worldview baseline (built via the `worldview-profile-builder` skill) before RL ingestion
  - Adaptive Retrieval Cascade: a reasoning-driven flow (Immediate Context → PM Recall → RL Authority → Deep Research) that ensures the most accurate source is used for every query
Two pluggable engines work together:
- Semantic Vector (primary): tracks conversation topics via local embeddings. Prunes only dormant or resolved turns, preserving active topics in full. Injects a conversation state map for model awareness. CPU-only, no GPU contention.
- Rolling Window (fallback): incremental tail-off that drops the oldest unprotected messages. Fires when the semantic engine isn't aggressive enough and context nears the hard ceiling (~85%).
Both are deterministic: no LLM calls, no semantic clustering overhead. State continuity through retrieval, not retention.
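A sketch of the fallback logic under assumed parameters and message shapes (the real engine's interface and thresholds may differ):

```python
def rolling_window_prune(messages, budget_tokens, protect_last_n=20,
                         ceiling=0.85, target_ratio=0.20):
    """Drop the oldest unprotected messages once usage nears the hard ceiling."""
    used = sum(m["tokens"] for m in messages)
    if used < ceiling * budget_tokens:
        return messages, []  # semantic engine was aggressive enough; do nothing
    target = target_ratio * budget_tokens
    kept = list(messages)
    archived = []
    # Never touch the protected tail; archive from the oldest end first.
    while (sum(m["tokens"] for m in kept) > target
           and len(kept) > protect_last_n):
        archived.append(kept.pop(0))
    return kept, archived

msgs = [{"id": i, "tokens": 100} for i in range(50)]
kept, archived = rolling_window_prune(msgs, budget_tokens=5000)
print(len(kept), len(archived))  # 20 30: pruning stops at the protected tail
```

Note how the protected tail can keep usage above the target ratio; determinism comes from the fixed oldest-first order, with no model calls involved.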
These core files were modified from the Hermes Agent base. The diffs are committed in this repo; you can see exactly what changed with `git log -- <file>`. Key changes:
| File | What Changed | Why It Matters |
|------|-------------|----------------|
| run_agent.py | Renamed "compression" to "archiving"; update_model() called during init | Config key matches the archiving terminology (old key still accepted) |
| agent/prompt_builder.py | Skills section changed from mandatory to on-demand loading with validation | Prevents context bloat β only loads skills actually relevant to the task |
| agent/context_engine.py | ABC for context engines, removed SemanticVectorEngine (now a plugin) | Clean separation: base defines the interface, plugins provide implementations |
| agent/context_compressor.py | Plugin context engine hooks, on_session_reset() callback for engine state | Engine state properly resets on /new β no stale vectors or leaked memory |
| plugins/context_engine/__init__.py | Loader for semantic vector and rolling window engines with config passthrough | Pluggable context archiving strategy |
| model_tools.py | Added get_selective_tool_definitions() and deferred tools index | Essential tools loaded inline, deferred tools listed for RL lookup β saves context tokens |
| cli.py | Perpetual memory CLI commands (hermes pm search, etc.) | Query your conversation history from the terminal |
| tools/skill_manager_tool.py | Fork-aware skill path resolution | Skills find custom categories correctly |
| .gitignore | Patterns for plugin artifacts and cache files | Keeps git clean |
Config note: The config key changed from `compression:` to `archiving:`. Your config.yaml should use:

```yaml
archiving:
  enabled: true
  threshold: 0.50
  target_ratio: 0.20
  protect_last_n: 20
```

(The old `compression:` key still works for backward compatibility.)
The agent uses a tiered priority system to determine where to find truth:
- Bedrock Topics (Worldview, Theology, Core Logic): RL → PM → Web. The Reference Library is the ultimate authority; web search is only for filling gaps.
- Volatile Topics (News, Tech Specs, Pricing): Web → RL. Fresh data is prioritized over archived knowledge.
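The tiering above can be sketched as a simple routing table. The topic labels and tier names come from the text; the classification and matching logic here are illustrative assumptions:

```python
BEDROCK = {"worldview", "theology", "core-logic"}
VOLATILE = {"news", "tech-specs", "pricing"}

def retrieval_order(topic: str):
    """Return the source-priority chain for a classified topic."""
    if topic in BEDROCK:
        # Reference Library is the ultimate authority; web only fills gaps.
        return ["reference_library", "perpetual_memory", "web_search"]
    if topic in VOLATILE:
        # Freshness wins; archived knowledge is the fallback.
        return ["web_search", "reference_library"]
    # Unclassified topics get the conservative, library-first chain.
    return ["reference_library", "perpetual_memory", "web_search"]

print(retrieval_order("theology")[0])  # reference_library
print(retrieval_order("pricing")[0])   # web_search
```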
This repository provides the engine, not the knowledge. To duplicate this system, you must provide your own local data and worldview.
- Local Embeddings: For hybrid search to function without cloud APIs, you must install `onnxruntime` and `sentence-transformers`. The system defaults to `all-MiniLM-L6-v2` for local, private vectorization.
- Hardware: A machine capable of running LLMs locally (via LM Studio or vLLM) is strongly recommended to maintain total data sovereignty.
- Storage: All memory and the Reference Library are stored in your local home directory (`~/.hermes/`), ensuring that your personal knowledge base never leaves your hardware.
- Local Services: The deep research pipeline requires three local services (all self-hosted, no cloud API calls):
  - SearXNG: local meta-search engine (Docker container)
  - Firecrawl: local web content extraction (Docker container)
  - Camofox: anti-detection browser for sites that block scrapers
If you are an AI agent tasked with setting up Logos, read this before following the Quick Start. The infrastructure alone is not enough: the prompt structure is what makes the system work.
The code provides the plumbing. SOUL.md tells the agent how to use it. Without these specific sections in SOUL.md, the plugins load but the agent never calls them:
- Knowledge Architecture section: must instruct the agent to:
  - Check `reference_library_search` BEFORE `web_search` for any knowledge question
  - Use `recent_messages` for immediate context before acting
  - Use `perpetual_search` / `smart_retrieve` for cross-session recall
  - Treat the Reference Library as authoritative truth, training data as suggestion
- Worldview Baseline section: must define the user's epistemic framework:
  - What truth is (not relativism)
  - How to handle contradictory worldviews (state the difference, don't false-balance)
  - Tone and style preferences
- Operational Discipline section: must include:
  - Anti-loop rules (check recent context before repeating work)
  - Memory save rules (what to persist vs. what to discard)
  - Skill loading protocol (scan available skills, load only relevant ones)
  - Three-tier web escalation (SearXNG → Firecrawl → Camofox)
- Active Retrieval section: must instruct the agent to:
  - Call `source_analyze` after `web_search` for substantive topics (politics, religion, economics, culture)
  - Skip `source_analyze` for utility queries (weather, code docs, recipes)
  - Use `deep=true` when source analysis is critical
See extras/system-prompt-guide.md for the exact SOUL.md template with all required sections.
The agent must follow this order for every query:
- Immediate context: what the user just said
- Perpetual Memory: past conversations on this topic
- Reference Library: curated, verified knowledge (authoritative)
- Web Research: only when local knowledge is insufficient
- Training data: last resort, always suspect
This is not optional. If the agent skips ahead to web search before checking the Reference Library, the system fails.
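The mandatory order can be sketched as a fall-through cascade, where each source is consulted only if the previous ones return nothing. The source callables here are hypothetical stand-ins for the real tools:

```python
def answer(query, sources):
    """Walk the cascade in order; stop at the first source that answers."""
    for name, lookup in sources:
        result = lookup(query)
        if result is not None:
            return name, result
    return "training_data", None  # last resort, always suspect

cascade = [
    ("immediate_context", lambda q: None),   # nothing relevant just said
    ("perpetual_memory",  lambda q: None),   # no past discussion found
    ("reference_library", lambda q: "curated answer"),
    ("web_research",      lambda q: "web answer"),
]
print(answer("what did we decide?", cascade))  # stops at reference_library
```

The point of the structure is that web research can never be reached while the Reference Library still has an answer.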
The system runs autonomous jobs that maintain and improve itself. These are configured as cron jobs and should be set up after initial install:
- PM Signal Scanner (2:00 AM): scans for high-signal conversation clusters
- Nightly Distillation (3:00 AM): processes clusters through Synthesis → Audit → Commit
- RL Growth (3:00 AM): expands the Reference Library based on gaps
- Logos Intelligence Scout (4:00 AM): builds source intelligence dossiers
- System Backup (4:00 AM): backs up data to the Windows host
See WHITEPAPER.md Section 4.6 and Section 8 for full details.
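As a sketch, the schedule above maps onto crontab entries like these. The script paths are placeholders, not the repo's actual job entry points:

```
# m  h  dom mon dow  command                    (paths are illustrative)
0    2  *   *   *    /path/to/pm_signal_scanner
0    3  *   *   *    /path/to/nightly_distillation
0    3  *   *   *    /path/to/rl_growth
0    4  *   *   *    /path/to/intelligence_scout
0    4  *   *   *    /path/to/system_backup
```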
The README gives you the structure. WHITEPAPER.md gives you the why. Read it before configuring SOUL.md: it explains the epistemic framework, the Sovereign Sieve, and the design philosophy that makes everything work together.
Follow these steps to set up Logos with perpetual memory, reference library, and skills. Each step builds on the previous one.
```bash
# Clone
git clone https://github.com/cluricaun28/logos.git
cd logos

# Install in development mode
pip install -e ".[dev]"

# Run Hermes setup to configure model, gateway, and plugins
hermes setup
```

Add to your ~/.hermes/config.yaml:

```yaml
plugins:
  enabled:
    - perpetual_context
```

The plugin initializes the SQLite database (~/.hermes/perpetual_context.db) on first run. No manual DB creation needed.
This is the most critical step. The code provides infrastructure; your SOUL.md tells the agent how to use it. Without these prompt sections, the plugins load but the agent never calls them proactively.
Copy the template and customize it:
```bash
cp extras/soul-template.md ~/.hermes/SOUL.md
```

Then edit ~/.hermes/SOUL.md:
- Replace all `[YOUR NAME]` with your actual name
- Customize the Worldview Baseline section with your values and beliefs
- Customize the Tone & Style section with your preferred communication style
- Keep all Knowledge Architecture, Operational Discipline, and Active Retrieval sections; these are what make perpetual memory work
For detailed explanations of each system prompt section, see extras/system-prompt-guide.md.
The Reference Library is your agent's curated knowledge base. Start with the template structure:
```bash
# Copy the template to create your reference library skeleton
cp -r extras/reference-library-template ~/.hermes/reference-library
```

This creates:

```
~/.hermes/reference-library/
├── index.md       ← Master index (update as you add entries)
├── topics/        ← System docs, workflows, research
│   └── context-window-management.md   ← Example entry
├── tools/         ← Tool schemas and usage guides
│   └── tool-system.md                 ← Explains how to document tools here
├── entities/      ← People, organizations, publications
│   └── README.md                      ← Instructions for building entity pages
└── sources/       ← Source intelligence dossiers (auto-created by source_analyze)
    └── state.gov.md                   ← Example: domain, alignment, truthful_on, omits
```
How it grows: Your agent will automatically create new RL entries as you work. When researching a topic, the agent documents findings in topics/. When encountering tools, it schemas them in tools/. The index stays current because the agent updates it.
Skills are reusable procedures stored in ~/.hermes/skills/. They load on demand, only when relevant to your current task. This keeps the context window lean.
The template shows the structure:
```
extras/skills-template/
├── README.md            ← How skills work, frontmatter format
└── devops/
    └── codebase-backup/ ← Example skill
        └── SKILL.md     ← Complete example of a well-structured skill
```

How it works: The system prompt includes a list of available skill names and descriptions. Before replying, the agent scans this list and loads only the skills directly relevant to your task via `skill_view(skill_name)`.
For web search with source extraction (SearXNG + Firecrawl), see extras/deep-research-setup.md. This enables the agent to:
- Search the web via a local meta-search engine
- Extract full content from URLs using Firecrawl
- Store research results in Perpetual Memory with source tracking
```
┌─────────────────────────────────────────────┐
│                 Logos Core                  │
│  ┌─────────┐   ┌─────────┐   ┌─────────┐    │
│  │   CLI   │   │ Gateway │   │  Tools  │    │
│  └────┬────┘   └────┬────┘   └────┬────┘    │
│       └─────────────┼─────────────┘         │
│                     ▼                       │
│      ┌──────────────────────────────┐       │
│      │  Context Engine (modified)   │       │
│      │  - Rolling window archiving  │       │
│      │  - Context Bridge injection  │       │
│      └──────────────┬───────────────┘       │
│                     ▼                       │
│      ┌──────────────────────────────┐       │
│      │  Perpetual Context Plugin    │       │
│      │  - Hybrid search (FTS5+vec)  │       │
│      │  - Smart retrieval routing   │       │
│      │  - Deep Research Pipeline    │       │
│      └──────────────┬───────────────┘       │
│                     ▼                       │
│      ┌──────────────────────────────┐       │
│      │  SQLite Database (local)     │       │
│      │  - messages table + FTS5 idx │       │
│      │  - embeddings BLOB column    │       │
│      │  - topic relationships      │        │
│      └──────────────────────────────┘       │
└─────────────────────────────────────────────┘
```
```
User asks question → Agent calls recent_messages (last 10 turns)
  → Agent reads current prompt + recent context
  → Agent checks Reference Library for curated knowledge on topic
  → Agent calls perpetual_search for historical discussion on topic
  → Agent formulates response using retrieved info + present context
  → Completed turns are archived to permanent storage
  → Context window stays lean for next turn
```
Key design decisions:
- `recent_messages` is the mandatory first step: prevents loops, maintains continuity
- Search during thinking, not after: tool use IS the reasoning process
- Archive aggressively: if a task is done, it belongs in permanent storage, not working memory
- Print task status every turn: `[Tasks: 3/5 complete]` becomes searchable PM data
For hybrid search with vector similarity alongside FTS5 keyword search:
```bash
pip install onnxruntime sentence-transformers
```

The default model is all-MiniLM-L6-v2: lightweight and fully local. Embeddings are stored as BLOBs in SQLite alongside the FTS5 indexes. No configuration is needed beyond installing the packages; the plugin auto-detects and enables them.
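The BLOB storage idea can be sketched with only the standard library; the table schema and float32 packing here are illustrative assumptions, not necessarily the plugin's actual layout:

```python
import array
import sqlite3

def to_blob(vec):
    """Pack a list of floats as a float32 BLOB."""
    return array.array("f", vec).tobytes()

def from_blob(blob):
    """Unpack a float32 BLOB back into a list of floats."""
    a = array.array("f")
    a.frombytes(blob)
    return list(a)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, embedding BLOB)")
conn.execute("INSERT INTO messages (embedding) VALUES (?)",
             (to_blob([0.25, -1.0, 3.5]),))
(blob,) = conn.execute("SELECT embedding FROM messages").fetchone()
print(from_blob(blob))  # [0.25, -1.0, 3.5]
```

Storing embeddings as BLOBs in the same database as the FTS5 index keeps everything in one local file, with no external vector store to run.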
- Topics: created when you research something or solve a complex problem. The agent documents findings in `~/.hermes/reference-library/topics/`.
- Tools: created when the agent encounters deferred tools. Schemas are documented in `~/.hermes/reference-library/tools/` for future lookup.
- Entities: created when researching people, organizations, or publications. Tracks credibility and behavior patterns in `~/.hermes/reference-library/entities/`.
- Sources: auto-created by `source_analyze` when new domains are encountered. Each dossier tracks alignment, truthful_on, and omits in `~/.hermes/reference-library/sources/`, and compounds over time: each analysis enriches the dossier.
- Index: updated automatically by the agent as new entries are created.
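The compounding behavior can be sketched as a merge of new observations into an existing dossier. The field names follow the alignment / truthful_on / omits shape above; the merge logic itself is an illustrative assumption:

```python
def update_dossier(dossier, observation):
    """Fold one analysis pass into a domain's dossier; sets grow, never shrink."""
    dossier.setdefault("truthful_on", set()).update(
        observation.get("truthful_on", []))
    dossier.setdefault("omits", set()).update(observation.get("omits", []))
    if observation.get("alignment"):
        dossier["alignment"] = observation["alignment"]  # latest assessment wins
    dossier["analyses"] = dossier.get("analyses", 0) + 1
    return dossier

d = {"domain": "example.org"}
update_dossier(d, {"alignment": "unknown", "omits": ["topic-a"]})
update_dossier(d, {"alignment": "centrist", "truthful_on": ["topic-b"]})
print(d["analyses"], d["alignment"])  # 2 centrist
```

This is why an early "unknown" alignment is normal: the dossier only becomes informative after several analyses have accumulated.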
- Skills are created when you solve a complex problem (5+ tool calls) that you'll likely face again.
- Stored in `~/.hermes/skills/CATEGORY/skill-name/SKILL.md`.
- The system prompt's skills list grows as you add more skills.
- See `extras/skills-template/` for format examples.
- Every conversation turn is stored automatically; no action needed.
- Grows continuously across sessions. Search it with `perpetual_search`, `query_messages`, or `recent_messages`.
- Old turns are pruned from the context window but remain fully searchable in the database.
```bash
hermes gateway start   # Start the agent gateway
hermes                 # Open interactive session
```

The agent will:
- Load your SOUL.md (persona + behavioral rules)
- Check `recent_messages` for session continuity
- Scan the available skills list (on-demand loading, not pre-loaded)
- Wait for your input
- Checks recent conversation history before taking action (anti-loop discipline)
- Consults Reference Library before answering factual questions
- Searches Perpetual Memory when topics reference past work
- Loads skills on demand only when relevant to your task
- Analyzes web sources for bias and omissions via `source_analyze` (mandatory for substantive topics)
- Auto-creates source dossiers for new domains encountered during research
- Creates new RL entries when learning something new
- Archives completed turns to keep context window lean
- Customize SOUL.md with your values, preferences, and communication style
- Review saved memories and skills periodically; the agent will ask if anything should be saved
- Keep config.yaml updated as you add tools or services (SearXNG, Firecrawl, etc.)
Logos is a fully independent project, detached from upstream Hermes Agent on 2026-05-11. The upstream remote is retained for selective cherry-picking of useful improvements:
```bash
# Fetch latest from upstream
git fetch upstream main

# Review changes before selectively applying
git log upstream/main --oneline -10

# Apply specific commits that add value
git cherry-pick <commit-hash>
```

Do NOT merge blindly: custom plugin files may conflict with upstream changes. Cherry-pick selectively and test after each change. For a documented record of cherry-picked commits and the rationale, see docs/upstream_tracking.md.
Other projects (OpenClaw, Claude Code, Codex) may also yield useful patterns. The same approach applies: review, cherry-pick what's useful, test, commit.
Your data lives in three places:
```
~/.hermes/perpetual_context.db    # Perpetual Memory database (conversation history)
~/.hermes/reference-library/      # Reference Library (curated knowledge)
~/.hermes/skills/                 # Skills (reusable procedures)
```

Back them up regularly:

```bash
# Quick backup script
tar czf hermes-data-$(date +%Y%m%d).tar.gz \
  ~/.hermes/perpetual_context.db \
  ~/.hermes/reference-library/ \
  ~/.hermes/skills/ \
  ~/.hermes/SOUL.md \
  ~/.hermes/config.yaml
```

| Problem | Solution |
|---------|----------|
| Agent doesn't use perpetual memory tools | Check SOUL.md has "Knowledge Architecture" and "Active Retrieval" sections |
| Plugin not loading | Verify perpetual_context is in config.yaml plugins.enabled list |
| Semantic search not working | Install onnxruntime and sentence-transformers |
| Context window too full | Check rolling window config, verify archiving is enabled |
| Agent loops on same task | SOUL.md should have "Anti-Loop Discipline" section; agent checks recent_messages first |
| source_analyze returns "unknown" alignment | Normal for new domains; dossiers compound over time. Use deep=true for substantive topics. |
| source_analyze deep mode fails | Check that Firecrawl is running (curl -s localhost:3002/health). If Camofox /tabs/create returns 404, fall back to browser_navigate + browser_console. |
This project does not include:
- Personal data, tokens, or credentials: those live in your local `~/.hermes/` directory
- Reference library content: starts as a template, grows through use (including `sources/` dossiers created by `source_analyze`)
- Skill definitions: starts with examples, grows as you solve problems
- SQLite database files: created on first run, persist locally
The philosophy: Ship the system that builds knowledge, not the knowledge itself. Your agent should grow its own reference library and skills through your usage patterns.
| File | Purpose |
|------|---------|
| system-prompt-guide.md | Exact SOUL.md sections needed for perpetual memory to work |
| soul-template.md | Ready-to-use SOUL.md template with all system prompt additions |
| deep-research-setup.md | SearXNG + Firecrawl setup guide for web research |
| reference-library-template/ | Empty RL structure to copy as your starting point |
| skills-template/ | Example skill showing proper format and structure |
MIT. All custom additions are MIT licensed.
Provenance: Detached from NousResearch/hermes-agent on 2026-05-11.
Project: cluricaun28/Logos β fully independent, selective cherry-picking only.