Logos — Sovereign Knowledge Management System

A sovereign agentic intelligence system built on local hardware with persistent memory, curated knowledge, and user-defined worldview alignment.

Most AI agents are stateless — they forget who you are and what you've decided once a conversation ends. Logos is a sovereign knowledge system that builds a permanent, local library of truth, anchored in your worldview.

The Core Philosophy

  • Sovereignty: All memory and knowledge live on your hardware. No cloud sync, no external API dependencies for recall.
  • Truth Hierarchy: The system prioritizes curated, verified facts (Reference Library) over the volatile noise of the open web.
  • Continuity: It remembers not just what was said, but the decisions made and the tasks deferred, ensuring work continues exactly where it left off across sessions.

What Makes Logos Different

This isn't a chatbot. It's a research tool with memory.

The distinguishing features:

  • Frame-Stripping Skill — 10 rules for separating facts from framing. Strips loaded language, extracts verifiable claims, cross-references independent sources, and presents findings through your stated worldview.
  • Narrative-Control-Detection — Identifies multi-phase information warfare patterns (initial break → narrative shift → article removal → flood the zone → entrenchment) when they appear in research results.
  • SourceAnalyzer (agent/source_analysis.py) — Phase 3.5 in the research pipeline. Builds and updates source dossiers automatically, flagging ideological markers and consistent omission patterns.
  • Nightly Learning Loop — Scheduled jobs run deep research, apply frame-stripping, and distill findings into the Reference Library. The knowledge base grows through use.

The system runs locally. No cloud APIs for memory or retrieval. No moral relativism baked in.


📄 White Paper

For a comprehensive architectural deep-dive covering all subsystems, the epistemic framework, and the design philosophy in detail, see WHITEPAPER.md.


What Logos Is

Logos is an agent with persistent memory, curated knowledge, and proactive retrieval. The key subsystems:

1. Perpetual Context Database (agent/perpetual_context_db.py)

SQLite-backed conversation archive with FTS5 full-text search and optional semantic embeddings (ONNX/SentenceTransformers). Every turn is stored locally — no cloud sync, no external API calls. Hybrid retrieval: keyword + vector similarity in a single query.
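The hybrid retrieval described above can be sketched with the stdlib sqlite3 module: an FTS5 keyword leg and a cosine-similarity semantic leg, blended into one score. The schema, corpus, and weighting below are illustrative assumptions, not the actual perpetual_context_db.py implementation.

```python
import math
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE turns USING fts5(content)")
vectors = {}  # rowid -> embedding (a BLOB column in the real database)
for text, vec in [("raise the archiving threshold", [1.0, 0.0]),
                  ("sourdough starter ratios", [0.0, 1.0])]:
    cur = con.execute("INSERT INTO turns(content) VALUES (?)", (text,))
    vectors[cur.lastrowid] = vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_search(query, query_vec, w_keyword=0.5):
    # bm25() ranks "lower is better", so negate it to get a keyword score.
    keyword = {rowid: -rank for rowid, rank in con.execute(
        "SELECT rowid, bm25(turns) FROM turns WHERE turns MATCH ?", (query,))}
    # Blend keyword and semantic scores over every stored turn.
    scored = {rid: w_keyword * keyword.get(rid, 0.0)
                   + (1 - w_keyword) * cosine(query_vec, vec)
              for rid, vec in vectors.items()}
    top = max(scored, key=scored.get)
    return con.execute("SELECT content FROM turns WHERE rowid = ?",
                       (top,)).fetchone()[0]
```

A query whose keywords and embedding both point at the same turn will dominate either leg alone; the 50/50 weight is just a starting point.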

2. Perpetual Context Plugin (plugins/memory/perpetual_context/)

Full plugin suite providing perpetual memory tools to the agent:

  • Hybrid search — perpetual_search (FTS5 + semantic), query_messages (SQL-style filtering), get_messages (exact pattern matching)
  • Smart retrieval — auto-routes queries to optimal strategy (recent, topic-specific, decision trace, file history)
  • Context bridge builder — extracts active tasks, errors, and decisions for injection at archival boundaries
  • Source analysis — source_analyze examines web search results for ideological alignment, omissions, and deviations. deep=true mode extracts full article content via Firecrawl before analyzing. Auto-creates source dossiers in sources/ for new domains.
  • Logos Deep Research & Continuity — Sovereign knowledge acquisition pipeline:
    • Three-Tier Web Stack: SearXNG (Discovery) → Firecrawl (Extraction) → Camofox (Anti-detection Browser).
    • Epistemic Filtering: Integrated scrutiny gate that filters raw web data through a user-defined worldview baseline (built via the worldview-profile-builder skill) before RL ingestion.
    • Adaptive Retrieval Cascade: A reasoning-driven flow (Immediate Context → PM Recall → RL Authority → Deep Research) to ensure the most accurate source is used for every query.
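The Adaptive Retrieval Cascade amounts to a first-confident-hit walk over ordered tiers. A minimal sketch: the tier names come from this README, while the lookup stubs and the 0.7 confidence threshold are illustrative assumptions, not the plugin's actual logic.

```python
def retrieval_cascade(query, tiers, min_confidence=0.7):
    """Walk the tiers in order and stop at the first confident hit.

    Each tier is a (name, lookup) pair; lookup(query) returns
    (answer, confidence), with (None, 0.0) meaning "no local answer".
    """
    for name, lookup in tiers:
        answer, confidence = lookup(query)
        if answer is not None and confidence >= min_confidence:
            return name, answer
    return "deep_research", None  # escalate to the web pipeline

tiers = [
    ("immediate_context", lambda q: (None, 0.0)),   # nothing in the prompt
    ("perpetual_memory",  lambda q: (None, 0.0)),   # no past discussion
    ("reference_library", lambda q: ("curated answer", 0.9)),
]
```

Because cheaper, more trusted tiers come first, web research only fires when every local source has declined to answer.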

3. Context Archiving

Two pluggable engines work together:

  • Semantic Vector (primary) — Tracks conversation topics via local embeddings. Prunes only dormant or resolved turns, preserving active topics in full. Injects a conversation state map for model awareness. CPU-only, no GPU contention.

  • Rolling Window (fallback) — Incremental tail-off that drops the oldest unprotected messages. Fires when the semantic engine isn't aggressive enough and context nears the hard ceiling (~85%).

Both are deterministic — no LLM calls, no semantic clustering overhead. State continuity through retrieval, not retention.
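The rolling-window fallback can be sketched deterministically: walk from the oldest message, dropping unprotected ones until the total fits the budget. Field names and the token accounting below are illustrative assumptions; the real engine lives in the context_engine plugin.

```python
def rolling_window_prune(messages, max_tokens, protect_last_n=20):
    """Drop the oldest unprotected messages until the window fits the budget."""
    # The newest `protect_last_n` messages are never dropped.
    protected = set(range(max(0, len(messages) - protect_last_n), len(messages)))
    kept = list(messages)
    idx = 0
    while sum(m["tokens"] for m in kept) > max_tokens and idx < len(messages):
        if idx not in protected and messages[idx] in kept:
            kept.remove(messages[idx])  # oldest unprotected first
        idx += 1
    return kept

# Ten turns of 10 tokens each, a 50-token budget, last four turns protected:
messages = [{"id": i, "tokens": 10} for i in range(10)]
survivors = rolling_window_prune(messages, max_tokens=50, protect_last_n=4)
```

No model call is needed anywhere in this path, which is what makes the fallback cheap and predictable.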

Modified Core Files

These core files were modified from the Hermes Agent base. The diffs are committed in this repo — you can see exactly what changed with git log -- <file>. Key changes:

| File | What Changed | Why It Matters |
|------|--------------|----------------|
| run_agent.py | Renamed "compression" → "archiving"; plugin context engine loading; selective tool loading | Enables semantic vector + rolling window integration. Plugin engines receive update_model() during init |
| agent/prompt_builder.py | Skills section changed from mandatory to on-demand loading with validation | Prevents context bloat — only loads skills actually relevant to the task |
| agent/context_engine.py | ABC for context engines; removed SemanticVectorEngine (now a plugin) | Clean separation: the base defines the interface, plugins provide implementations |
| agent/context_compressor.py | Plugin context engine hooks; on_session_reset() callback for engine state | Engine state properly resets on /new — no stale vectors or leaked memory |
| plugins/context_engine/__init__.py | Loader for semantic vector and rolling window engines with config passthrough | Pluggable context archiving strategy |
| model_tools.py | Added get_selective_tool_definitions() and a deferred tools index | Essential tools loaded inline, deferred tools listed for RL lookup — saves context tokens |
| cli.py | Perpetual memory CLI commands (hermes pm search, etc.) | Query your conversation history from the terminal |
| tools/skill_manager_tool.py | Fork-aware skill path resolution | Skills find custom categories correctly |
| .gitignore | Patterns for plugin artifacts and cache files | Keeps git clean |

Config note: The config key changed from compression: to archiving:. Your config.yaml should use:

archiving:
  enabled: true
  threshold: 0.50
  target_ratio: 0.20
  protect_last_n: 20

(Old compression: key still works for backward compatibility.)
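A minimal sketch of how that backward-compatible key lookup might work; the key names and defaults come from the config snippet above, while the loader function itself is a hypothetical illustration.

```python
def load_archiving_config(cfg):
    """Read archiving settings, preferring the new key over the legacy one."""
    # Prefer the new `archiving:` key, fall back to legacy `compression:`.
    section = cfg.get("archiving") or cfg.get("compression") or {}
    return {
        "enabled": section.get("enabled", True),
        "threshold": section.get("threshold", 0.50),
        "target_ratio": section.get("target_ratio", 0.20),
        "protect_last_n": section.get("protect_last_n", 20),
    }
```

Old configs keep working unchanged, and any setting missing from either key falls back to the documented default.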


🧠 Research Priority Matrix (Epistemic Routing)

The agent uses a tiered priority system to determine where to find truth:

  • Bedrock Topics (Worldview, Theology, Core Logic): RL → PM → Web. The Reference Library is the ultimate authority; web search is only for filling gaps.
  • Volatile Topics (News, Tech Specs, Pricing): Web → RL. Fresh data is prioritized over archived knowledge.
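The priority matrix above amounts to a small routing table. A sketch, with illustrative topic keywords (the real system classifies topics by reasoning; these sets are stand-ins):

```python
# Illustrative topic buckets; not the agent's actual classifier.
BEDROCK = {"worldview", "theology", "logic"}
VOLATILE = {"news", "pricing", "specs"}

def route(topic):
    """Return the ordered list of knowledge sources to consult for a topic."""
    if topic in BEDROCK:
        return ["reference_library", "perpetual_memory", "web"]
    if topic in VOLATILE:
        return ["web", "reference_library"]
    return ["reference_library", "web"]  # assumed default ordering
```

The ordering, not the sources themselves, is what the matrix specifies: bedrock questions never hit the web first.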

🛡️ Sovereignty & System Requirements

This repository provides the engine, not the knowledge. To duplicate this system, you must provide your own local data and worldview.

Technical Requirements

  • Local Embeddings: For hybrid search to function without cloud APIs, you must install onnxruntime and sentence-transformers. The system defaults to all-MiniLM-L6-v2 for local, private vectorization.
  • Hardware: A machine capable of running LLMs locally (via LM Studio or vLLM) is strongly recommended to maintain total data sovereignty.
  • Storage: All memory and the Reference Library are stored in your local home directory (~/.hermes/), ensuring that your personal knowledge base never leaves your hardware.
  • Local Services: The deep research pipeline requires three local services (all self-hosted, no cloud API calls):
    • SearXNG — Local meta-search engine (Docker container)
    • Firecrawl — Local web content extraction (Docker container)
    • Camofox — Anti-detection browser for sites that block scrapers

For Agent Operators — Critical Setup Details

Before following the Quick Start, read this if you are an AI agent tasked with setting up Logos. The infrastructure alone is not enough — the prompt structure is what makes the system work.

SOUL.md — The Most Critical File

The code provides the plumbing. SOUL.md tells the agent how to use it. Without these specific sections in SOUL.md, the plugins load but the agent never calls them:

  1. Knowledge Architecture section — Must instruct the agent to:

    • Check reference_library_search BEFORE web_search for any knowledge question
    • Use recent_messages for immediate context before acting
    • Use perpetual_search / smart_retrieve for cross-session recall
    • Treat the Reference Library as authoritative truth, training data as suggestion
  2. Worldview Baseline section — Must define the user's epistemic framework:

    • What truth is (not relativism)
    • How to handle contradictory worldviews (state the difference, don't false-balance)
    • Tone and style preferences
  3. Operational Discipline section — Must include:

    • Anti-loop rules (check recent context before repeating work)
    • Memory save rules (what to persist vs. what to discard)
    • Skill loading protocol (scan available skills, load only relevant ones)
    • Three-tier web escalation (SearXNG → Firecrawl → Camofox)
  4. Active Retrieval section — Must instruct the agent to:

    • Call source_analyze after web_search for substantive topics (politics, religion, economics, culture)
    • Skip source_analyze for utility queries (weather, code docs, recipes)
    • Use deep=true when source analysis is critical

See extras/system-prompt-guide.md for the exact SOUL.md template with all required sections.

Epistemic Hierarchy — The Truth Chain

The agent must follow this order for every query:

  1. Immediate context — What the user just said
  2. Perpetual Memory — Past conversations on this topic
  3. Reference Library — Curated, verified knowledge (authoritative)
  4. Web Research — Only when local knowledge is insufficient
  5. Training data — Last resort, always suspect

This is not optional. If the agent skips ahead to web search before checking the Reference Library, the system fails.

Nightly Automation

The system runs autonomous jobs that maintain and improve itself. These are configured as cron jobs and should be set up after initial install:

  • PM Signal Scanner (2:00 AM) — Scans for high-signal conversation clusters
  • Nightly Distillation (3:00 AM) — Processes clusters through Synthesis → Audit → Commit
  • RL Growth (3:00 AM) — Expands Reference Library based on gaps
  • Logos Intelligence Scout (4:00 AM) — Builds source intelligence dossiers
  • System Backup (4:00 AM) — Backs up data to Windows host
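The schedule above maps onto standard crontab entries. The times come from this README; the script paths are hypothetical placeholders, so substitute whatever entry points your install actually uses.

```
# m h dom mon dow  command            (paths below are illustrative placeholders)
0 2 * * *  /path/to/pm_signal_scanner.sh
0 3 * * *  /path/to/nightly_distillation.sh
0 3 * * *  /path/to/rl_growth.sh
0 4 * * *  /path/to/intelligence_scout.sh
0 4 * * *  /path/to/system_backup.sh
```

Install with crontab -e on the machine that hosts the agent, so the jobs run even when no session is open.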

See WHITEPAPER.md Section 4.6 and Section 8 for full details.

Read the White Paper

The README gives you the structure. WHITEPAPER.md gives you the why. Read it before configuring SOUL.md — it explains the epistemic framework, the Sovereign Sieve, and the design philosophy that makes everything work together.


Quick Start — Full Setup Guide

Follow these steps to set up Logos with perpetual memory, reference library, and skills. Each step builds on the previous one.

Step 1: Install

# Clone
git clone https://github.com/cluricaun28/logos.git
cd logos

# Install in development mode
pip install -e ".[dev]"

# Run Hermes setup to configure model, gateway, and plugins
hermes setup

Step 2: Enable Perpetual Memory Plugin

Add to your ~/.hermes/config.yaml:

plugins:
  enabled:
    - perpetual_context

The plugin initializes the SQLite database (~/.hermes/perpetual_context.db) on first run. No manual DB creation needed.

Step 3: Set Up Your Persona (SOUL.md)

This is the most critical step. The code provides infrastructure — your SOUL.md tells the agent how to use it. Without these prompt sections, the plugins load but the agent never calls them proactively.

Copy the template and customize it:

cp extras/soul-template.md ~/.hermes/SOUL.md

Then edit ~/.hermes/SOUL.md:

  1. Replace all [YOUR NAME] with your actual name
  2. Customize the Worldview Baseline section with your values and beliefs
  3. Customize the Tone & Style section with your preferred communication style
  4. Keep all Knowledge Architecture, Operational Discipline, and Active Retrieval sections — these are what make perpetual memory work

For detailed explanations of each system prompt section, see extras/system-prompt-guide.md.

Step 4: Initialize Your Reference Library

The Reference Library is your agent's curated knowledge base. Start with the template structure:

# Copy the template to create your reference library skeleton
cp -r extras/reference-library-template ~/.hermes/reference-library

This creates:

~/.hermes/reference-library/
├── index.md              ← Master index (update as you add entries)
├── topics/               ← System docs, workflows, research
│   └── context-window-management.md  ← Example entry
├── tools/                ← Tool schemas and usage guides
│   └── tool-system.md    ← Explains how to document tools here
├── entities/             ← People, organizations, publications
│   └── README.md         ← Instructions for building entity pages
└── sources/              ← Source intelligence dossiers (auto-created by source_analyze)
    └── state.gov.md      ← Example: domain, alignment, truthful_on, omits

How it grows: Your agent will automatically create new RL entries as you work. When researching a topic, the agent documents findings in topics/. When encountering tools, it documents their schemas in tools/. The index stays current because the agent updates it.

Step 5: Understand Skills On-Demand Loading

Skills are reusable procedures stored in ~/.hermes/skills/. They load on demand — only when relevant to your current task. This keeps the context window lean.

The template shows the structure:

extras/skills-template/
├── README.md                    ← How skills work, frontmatter format
└── devops/
    └── codebase-backup/         ← Example skill
        └── SKILL.md             ← Complete example of a well-structured skill

How it works: The system prompt includes a list of available skill names and descriptions. Before replying, the agent scans this list and loads only skills directly relevant to your task via skill_view(skill_name).

Step 6 (Optional): Set Up Deep Research

For web search with source extraction (SearXNG + Firecrawl), see extras/deep-research-setup.md. This enables the agent to:

  • Search the web via a local meta-search engine
  • Extract full content from URLs using Firecrawl
  • Store research results in Perpetual Memory with source tracking

Architecture Overview

┌─────────────────────────────────────────────┐
│  Logos Core                                 │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │   CLI    │  │ Gateway  │  │  Tools   │   │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘   │
│       └─────────────┼─────────────┘         │
│                     ▼                       │
│  ┌──────────────────────────────┐           │
│  │  Context Engine (modified)   │           │
│  │  - Rolling window archiving  │           │
│  │  - Context Bridge injection  │           │
│  └──────────────┬───────────────┘           │
│                 ▼                           │
│  ┌──────────────────────────────┐           │
│  │  Perpetual Context Plugin    │           │
│  │  - Hybrid search (FTS5+vec)  │           │
│  │  - Smart retrieval routing   │           │
│  │  - Deep Research Pipeline    │           │
│  └──────────────┬───────────────┘           │
│                 ▼                           │
│  ┌──────────────────────────────┐           │
│  │  SQLite Database (local)     │           │
│  │  - messages table + FTS5 idx │           │
│  │  - embeddings BLOB column    │           │
│  │  - topic relationships       │           │
│  └──────────────────────────────┘           │
└─────────────────────────────────────────────┘

How the Dual-Memory System Works

User asks question → Agent calls recent_messages (last 10 turns)
→ Agent reads current prompt + recent context
→ Agent checks Reference Library for curated knowledge on topic
→ Agent calls perpetual_search for historical discussion on topic
→ Agent formulates response using retrieved info + present context
→ Completed turns are archived to permanent storage
→ Context window stays lean for next turn

Key design decisions:

  • recent_messages is the mandatory first step — prevents loops, maintains continuity
  • Search during thinking, not after — tool use IS the reasoning process
  • Archive aggressively — if a task is done, it belongs in permanent storage, not working memory
  • Print task status every turn — [Tasks: 3/5 complete] becomes searchable PM data

Semantic Embeddings (Optional)

For hybrid search with vector similarity alongside FTS5 keyword search:

pip install onnxruntime sentence-transformers

The default model is all-MiniLM-L6-v2 — lightweight, runs locally. Embeddings are stored as BLOBs in SQLite alongside the FTS5 indexes. No configuration is needed beyond installing the packages; the plugin auto-detects and enables them.


How Things Grow Over Time

Reference Library Growth

  • Topics: Created when you research something or solve a complex problem. The agent documents findings in ~/.hermes/reference-library/topics/.
  • Tools: Created when the agent encounters deferred tools. Schemas are documented in ~/.hermes/reference-library/tools/ for future lookup.
  • Entities: Created when researching people, organizations, or publications. Tracks credibility and behavior patterns in ~/.hermes/reference-library/entities/.
  • Sources: Auto-created by source_analyze when new domains are encountered. Each dossier tracks alignment, truthful_on, omits in ~/.hermes/reference-library/sources/. Compounds over time — each analysis enriches the dossier.
  • Index: Updated automatically by the agent as new entries are created.

Skills Growth

  • Skills are created when you solve a complex problem (5+ tool calls) that you'll likely face again.
  • Stored in ~/.hermes/skills/CATEGORY/skill-name/SKILL.md.
  • The system prompt's skills list grows as you add more skills.
  • See extras/skills-template/ for format examples.

Perpetual Memory Growth

  • Every conversation turn is stored automatically — no action needed.
  • Grows continuously across sessions. Search it with perpetual_search, query_messages, or recent_messages.
  • Old turns are pruned from the context window but remain fully searchable in the database.

Daily Usage

Starting a Session

hermes gateway start    # Start the agent gateway
hermes                  # Open interactive session

The agent will:

  1. Load your SOUL.md (persona + behavioral rules)
  2. Check recent_messages for session continuity
  3. Scan available skills list (on-demand loading, not pre-loaded)
  4. Wait for your input

What the Agent Does Automatically

  • Checks recent conversation history before taking action (anti-loop discipline)
  • Consults Reference Library before answering factual questions
  • Searches Perpetual Memory when topics reference past work
  • Loads skills on demand only when relevant to your task
  • Analyzes web sources for bias and omissions via source_analyze (mandatory for substantive topics)
  • Auto-creates source dossiers for new domains encountered during research
  • Creates new RL entries when learning something new
  • Archives completed turns to keep context window lean

What You Should Do

  • Customize SOUL.md with your values, preferences, and communication style
  • Review saved memories/skills periodically — the agent will ask if anything should be saved
  • Keep config.yaml updated as you add tools or services (SearXNG, Firecrawl, etc.)

Maintenance

Upstream Changes

Logos is a fully independent project, detached from upstream Hermes Agent on 2026-05-11. The upstream remote is retained for selective cherry-picking of useful improvements:

# Fetch latest from upstream
git fetch upstream main

# Review changes before selectively applying
git log upstream/main --oneline -10

# Apply specific commits that add value
git cherry-pick <commit-hash>

Do NOT merge blindly — custom plugin files may conflict with upstream changes. Cherry-pick selectively and test after each change. For a documented record of cherry-picked commits and the rationale, see docs/upstream_tracking.md.

Other projects (OpenClaw, Claude Code, Codex) may also yield useful patterns. The same approach applies: review, cherry-pick what's useful, test, commit.

Backup Your Data

Your data lives in three places:

# Perpetual Memory database (conversation history)
~/.hermes/perpetual_context.db

# Reference Library (curated knowledge)
~/.hermes/reference-library/

# Skills (reusable procedures)
~/.hermes/skills/

Back them up regularly:

# Quick backup script
tar czf hermes-data-$(date +%Y%m%d).tar.gz \
  ~/.hermes/perpetual_context.db \
  ~/.hermes/reference-library/ \
  ~/.hermes/skills/ \
  ~/.hermes/SOUL.md \
  ~/.hermes/config.yaml

Troubleshooting

| Problem | Solution |
|---------|----------|
| Agent doesn't use perpetual memory tools | Check that SOUL.md has "Knowledge Architecture" and "Active Retrieval" sections |
| Plugin not loading | Verify perpetual_context is in the config.yaml plugins.enabled list |
| Semantic search not working | Install onnxruntime and sentence-transformers |
| Context window too full | Check the rolling window config; verify archiving is enabled |
| Agent loops on same task | SOUL.md should have an "Anti-Loop Discipline" section; the agent checks recent_messages first |
| source_analyze returns "unknown" alignment | Normal for new domains — dossiers compound over time. Use deep=true for substantive topics |
| source_analyze deep mode fails | Check that Firecrawl is running (curl -s localhost:3002/health). If Camofox /tabs/create returns 404, fall back to browser_navigate + browser_console |


What's Not Here (By Design)

This project does not include:

  • Personal data, tokens, or credentials — those live in your local ~/.hermes/ directory
  • Reference library content — starts as a template, grows through use (including sources/ dossiers created by source_analyze)
  • Skill definitions — starts with examples, grows as you solve problems
  • SQLite database files — created on first run, persist locally

The philosophy: Ship the system that builds knowledge, not the knowledge itself. Your agent should grow its own reference library and skills through your usage patterns.


Extras Directory

| File | Purpose |
|------|---------|
| system-prompt-guide.md | Exact SOUL.md sections needed for perpetual memory to work |
| soul-template.md | Ready-to-use SOUL.md template with all system prompt additions |
| deep-research-setup.md | SearXNG + Firecrawl setup guide for web research |
| reference-library-template/ | Empty RL structure to copy as your starting point |
| skills-template/ | Example skill showing proper format and structure |


License

MIT. All custom additions are MIT licensed.


Provenance: Detached from NousResearch/hermes-agent on 2026-05-11.
Project: cluricaun28/Logos — fully independent, selective cherry-picking only.
