hiai-opencode

hiai-opencode is an OpenCode plugin that turns vanilla OpenCode into an opinionated multi-agent cockpit.

What you get on top of plain OpenCode:

  • 14-agent canonical model (9 visible + 5 hidden) with peer-aware prompts — Bob orchestrates, Coder/Sub implement, Strategist plans, Critic gates, Researcher discovers via Context7/Firecrawl/grep_app/MemPalace, Designer drives Stitch UI generation, Writer owns copy/SEO, Vision extracts PDFs/images, Manager keeps memory, Quality Guardian reviews, Sandbox sandboxes bash.
  • Mode → agent routing for task() delegation — quick/bounded/unspecified-low → Sub, deep/cross-module → Coder, ultrabrain → Strategist, visual-engineering/artistry → Designer, writing → Writer, git-ops → Manager. No more "everything routes to coder".
  • MCP wiring out of the box — Stitch, Firecrawl, Context7, grep_app, MemPalace, Sequential-Thinking. Each agent's prompt knows which MCP servers it owns.
  • LSP defaults for TypeScript, Svelte, ESLint, Bash, Pyright. Coder must run lsp_diagnostics after every edit.
  • Permission discipline — read-only agents cannot delegate; write-capable agents have explicit file-scope limits.

This repository is intended to be usable by someone who clones it from GitHub without any internal context. External MCP servers, skills, model providers, and auxiliary OpenCode plugins remain their own upstream projects; this plugin only provides OpenCode wiring, defaults, prompts, launchers, and documentation around them.

Why This Exists

I wanted the "one install, give me the whole cockpit" setup. More agents, more MCP servers, more skills, better defaults, fewer tiny config islands. 🚀

The problem: all those great tools do not magically become friends just because you installed them. Some are npm packages, some are Python tools, some need API keys, some need a local service, and some only wake up after the first run. Meanwhile your main agent can waste half the context window just reading everything you bolted on. Not chill. 😅

So hiai-opencode is my attempt to wire the best pieces I use into one OpenCode-friendly shape: agents, prompts, skills, MCP launchers, LSP defaults, and a clean config surface. It does not claim ownership of the upstream tools. It just tries to make them cooperate.

After the first install, a few MCP services may still need local dependencies or keys. You have two options: work through AGENTS.md yourself, or hand OpenCode a short bootstrap prompt such as:

Read AGENTS.md and finish hiai-opencode setup for this workspace. Keep OpenCode plugins separate from MCP servers; do not add MCP server packages to the OpenCode plugin list. Check which MCP services can run on this machine, update hiai-opencode.json, install only missing user-level or project-local dependencies, and report missing API keys without printing secret values. Then run `hiai-opencode doctor`, `hiai-opencode mcp-status`, and `opencode debug config`.

For the full operator playbook, see AGENTS.md. 🤖

Agents

9 Visible Primary Agents:

| Agent | Role | When to use |
| --- | --- | --- |
| Bob | Orchestrator, router, distributor | Entry point; complex tasks needing multi-agent coordination |
| Coder | Deep implementation, focused execution | Complex features, refactors, deep work |
| Strategist | Planning, architecture, pre-check | Scope definition, architectural decisions |
| Manager | Delegation orchestrator, TODO tracker | Coordinates specialists, maintains session continuity |
| Critic | Review gate, high-accuracy verification | Plan review, code review, regression catch |
| Designer | UI/visual direction via Stitch MCP | Visual problems, design systems, screen generation |
| Researcher | Local + external search | Codebase exploration, documentation discovery |
| Writer | Content, copy, positioning, SEO | Website copy, product messaging, naming |
| Vision | PDF/image/diagram extraction, browser UI verification | Multimodal analysis, live browser verification |

5 Hidden System Agents (used internally, not for direct selection):

  • Agent Skills — Skill registry, discovery, and capability orchestration
  • Sub — Compatibility wrapper for bounded execution (folded into Coder's contour)
  • build — Default executor (OpenCode system agent)
  • plan — Plan mode agent (OpenCode system agent)
  • Quality Guardian — Post-implementation review and bug investigation (compatibility alias for Critic)

Modes (Task Routing)

Mode determines prompt append, variant, and reasoning effort. The executor agent is selected via mode → agent mapping.

| Mode | Agent | Prompt variant | When to use |
| --- | --- | --- | --- |
| quick | coder (via fast bounded contour) | Fast bounded | Small targeted changes |
| writing | writer | Docs/prose | Content, i18n, copy |
| deep | coder | Deep reasoning | Complex implementation |
| ultrabrain | strategist | Plan-only | Architecture, hard logic |
| visual-engineering | designer | UI/visual | Visual problems |
| artistry | designer | Creative | Brand, SEO, creative |
| git | platform-manager | Git ops | Version control operations |
| bounded | coder (via fast bounded contour) | Mid-tier bounded | Moderate effort changes |
| cross-module | coder | Deep substantial | Multi-component changes |
| unspecified-low | coder (via fast bounded contour) | Bounded | Unclassified small tasks |
| unspecified-high | coder | Deep | Unclassified substantial tasks |
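Conceptually, this routing is a lookup table with a fallback. Here is a minimal TypeScript sketch of the mapping above; the names (`MODE_AGENTS`, `resolveAgent`) are illustrative, not the plugin's real internals:

```typescript
// Hypothetical sketch of the mode → agent mapping described in the table.
const MODE_AGENTS: Record<string, string> = {
  quick: "coder",
  writing: "writer",
  deep: "coder",
  ultrabrain: "strategist",
  "visual-engineering": "designer",
  artistry: "designer",
  git: "platform-manager",
  bounded: "coder",
  "cross-module": "coder",
  "unspecified-low": "coder",
  "unspecified-high": "coder",
};

// Unknown modes fall back to the general-purpose executor (an assumption
// here; the actual fallback behavior may differ).
function resolveAgent(mode: string): string {
  return MODE_AGENTS[mode] ?? "coder";
}
```

The point of a flat table like this is that adding a mode never touches routing logic, only data.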

Integrations

MCP integrations and which agents use them:

| Service | Key env var | Agent(s) | What it's for |
| --- | --- | --- | --- |
| Stitch | STITCH_AI_API_KEY | Designer | UI generation, design systems, screen variants |
| Firecrawl | FIRECRAWL_API_KEY | Researcher | CLI skill (not MCP) for web scraping, crawl, extract, search |
| Context7 | CONTEXT7_API_KEY | Researcher, Coder | Library API documentation |
| grep_app | (none) | Researcher | GitHub OSS code pattern search |
| MemPalace | MEMPALACE_PYTHON (optional) | Manager (primary), all agents | Project memory and past decisions |
| Sequential-Thinking | (none) | Strategist, Critic | Deep reasoning for planning/review |

What You Get

  • 9 visible primary agents + 5 hidden system agents (Agent Skills, Sub, build, plan, Quality Guardian)
  • Mode-based task routing via task(category=..., ...) or task(mode=..., ...)
  • Skill materialization into OpenCode's skills/ view
  • MCP wiring for stitch, sequential-thinking, firecrawl-cli, mempalace, context7, and grep_app
  • LSP wiring for TypeScript, Svelte, Python, Bash, and ESLint

Continuation, Ralph-Loop, And Auto-Start

The plugin runs three layered mechanisms so a session does not give up halfway through a TODO list.

| Mechanism | Trigger | What it does |
| --- | --- | --- |
| Todo continuation enforcer | session.idle while open todos remain | Injects a continuation prompt after a 2s countdown. Backs off on stagnation, abort, token-limit, and pending-question. |
| Ralph-loop | `/ralph-loop <goal>` or `/ulw-loop <goal>` | Runs an explicit completion loop. Stops on `<promise>DONE</promise>`. Cancel with `/cancel-ralph`. |
| Auto ralph-loop | N+ open todos in one session | Auto-starts ralph-loop in ULTRAWORK mode so each iteration is forced to delegate to specialist agents (researcher / strategist / coder / critic). |

Tune the auto-start threshold in hiai-opencode.json:

{
  "ralph_loop": {
    "enabled": true,
    "auto_start_threshold": 5
  }
}

auto_start_threshold: 0 disables auto-start. The enforcer always yields to ralph-loop while it owns the session, so the two never inject duplicate prompts.
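The auto-start decision reduces to a few guards. A hypothetical TypeScript sketch of that logic, based only on the behavior described above (the real plugin code may structure this differently):

```typescript
// Illustrative sketch of the auto ralph-loop trigger; names are hypothetical.
interface RalphLoopConfig {
  enabled: boolean;
  auto_start_threshold: number; // 0 disables auto-start
}

function shouldAutoStart(
  cfg: RalphLoopConfig,
  openTodos: number,
  loopActive: boolean
): boolean {
  if (!cfg.enabled || loopActive) return false; // ralph-loop already owns the session
  if (cfg.auto_start_threshold <= 0) return false; // 0 disables auto-start
  return openTodos >= cfg.auto_start_threshold;
}
```

The `loopActive` guard is what keeps the enforcer and ralph-loop from injecting duplicate prompts.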

Requirements

Minimum:

  • OpenCode installed
  • Node.js 18+
  • Bun 1.1+

Usually required:

  • at least one model provider connected in OpenCode for the model IDs you configure

Optional, depending on which services you want:

  • FIRECRAWL_API_KEY for Firecrawl
  • STITCH_AI_API_KEY for Stitch
  • CONTEXT7_API_KEY for Context7
  • Python 3.9+ or uv for MemPalace
  • local language servers if you want LSP beyond the npm-bootstrapped helpers

Install

1. Register the OpenCode plugin

opencode plugin @hiai-gg/hiai-opencode@latest --global

Optional Dynamic Context Pruning plugin:

opencode plugin @tarquinen/opencode-dcp@latest --global

Do not put MCP server packages such as @modelcontextprotocol/server-sequential-thinking into the OpenCode plugin array. They are MCP servers, not OpenCode plugins. Install the CLI skill separately using opencode plugin or npm install -g @mendableai/firecrawl.

Manual OpenCode config equivalent:

{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@hiai-gg/hiai-opencode",
    "@tarquinen/opencode-dcp@latest"]
}

The packaged minimal OpenCode example lives in config/opencode.json.

2. Create project config

Create a project-level config file at hiai-opencode.json in the project root or at .opencode/hiai-opencode.json.

Bash:

mkdir -p .opencode
cp hiai-opencode.json .opencode/hiai-opencode.json

PowerShell:

New-Item -ItemType Directory -Force .opencode
Copy-Item .\hiai-opencode.json .\.opencode\hiai-opencode.json

If you installed only from npm/OpenCode and do not have this repository checked out, create .opencode/hiai-opencode.json with the shape below and adjust it later.

{
  "models": {
    "bob": { "model": "kimi-for-coding/k2p6", "recommended": "xhigh" },
    "coder": { "model": "minimax-coding-plan/MiniMax-M2.7", "recommended": "high" },
    "strategist": { "model": "deepseek/deepseek-v4-pro", "recommended": "high" },
    "manager": { "model": "opencode-go/qwen3.6-plus", "recommended": "middle" },
    "critic": { "model": "opencode-go/mimo-v2.5-pro", "recommended": "high" },
    "designer": { "model": "openrouter/google/gemini-3.1-pro-preview", "recommended": "design" },
    "researcher": { "model": "openrouter/deepseek/deepseek-v4-flash", "recommended": "fast" },
    "writer": { "model": "openrouter/mistralai/mistral-small-2603", "recommended": "writing" },
    "vision": { "model": "openrouter/google/gemma-4-26b-a4b-it", "recommended": "vision" }
  },
  "mcp": {
    "sequential-thinking": { "enabled": true },
    "mempalace": { "enabled": true, "pythonPath": "{env:MEMPALACE_PYTHON:-./.venv/bin/python}" },
    "stitch": { "enabled": false },
    "context7": { "enabled": true }
  }
}
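The `pythonPath` value uses a shell-style placeholder, `{env:NAME:-default}`. Assuming it resolves like POSIX `${NAME:-default}` (environment value if set and non-empty, otherwise the fallback), expansion could be sketched like this — a hypothetical helper, not the plugin's actual resolver:

```typescript
// Expands "{env:NAME:-default}" placeholders against a given environment.
function expandEnvPlaceholder(
  value: string,
  env: Record<string, string | undefined>
): string {
  return value.replace(
    /\{env:([A-Za-z0-9_]+):-([^}]*)\}/g,
    (_match: string, name: string, fallback: string) => {
      const v = env[name];
      // Empty or unset variables fall back to the default, POSIX-style.
      return v !== undefined && v !== "" ? v : fallback;
    }
  );
}
```

So with `MEMPALACE_PYTHON` unset, the sample config resolves to `./.venv/bin/python`.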

By default, skill discovery is deterministic: hiai-opencode skills plus project-local .opencode/skills only. Global Claude/OpenCode/Agents skill folders are opt-in.

3. Connect models and add service keys

Model provider credentials belong to OpenCode Connect, not to hiai-opencode. This plugin only reads the 10 model IDs in models. Internal routing derives hidden agents and task categories from those 10 choices.

Use OpenCode Connect to authorize the providers behind your configured model IDs. Then add only the service keys for MCP or search integrations you actually use:

opencode models

Use the exact model IDs printed by OpenCode in hiai-opencode.json. For example, openrouter/minimax/minimax-m2.7 routes through OpenRouter, while minimax/minimax-m2.7 routes through a direct Minimax provider only if that provider is connected in OpenCode.

export FIRECRAWL_API_KEY=...
export STITCH_AI_API_KEY=...
export CONTEXT7_API_KEY=...

See Environment Variables And Keys for the full list.

4. Start and verify

opencode
hiai-opencode doctor
hiai-opencode mcp-status
hiai-opencode export-mcp .opencode/.mcp.json
opencode debug config
opencode mcp list --print-logs --log-level INFO

opencode mcp list reads static .mcp.json files in many OpenCode versions. Runtime MCP servers launched by plugins may work but not appear there. If you want opencode mcp list visibility, run hiai-opencode export-mcp .opencode/.mcp.json first.

hiai-opencode mcp-status is the fastest visibility check. It does not change OpenCode config; it reports config location, enabled MCP services, missing keys, and basic local runtime availability.

hiai-opencode doctor is the broader install/runtime diagnostic. It includes MCP status, static .mcp.json freshness, OpenCode Connect visibility, skill materialization, agent naming/count checks, LSP runtime checks, MemPalace Python source selection, and real MCP tool probes.

Development Install

Installing the package directly (rather than registering it as an OpenCode plugin) is only needed for development or inspection:

bun install

Local development:

git clone https://github.com/HiAi-gg/hiai-opencode.git
cd hiai-opencode
bun install
bun run build

Post-Install Bootstrap Prompt

After installing the plugin, you can ask OpenCode to finish local setup with this prompt:

Inspect this OpenCode workspace and finish hiai-opencode setup.

Do not add MCP server packages to the OpenCode plugin list. Keep OpenCode plugins separate from MCP servers.

Check that @hiai-gg/hiai-opencode is registered. If Dynamic Context Pruning is requested, install @tarquinen/opencode-dcp as a separate OpenCode plugin.

Find or create hiai-opencode.json in the project root or .opencode/. Use its mcp object as the single switchboard for enabling or disabling MCP services.

Keep skill discovery deterministic unless I explicitly ask for external skills. Leave global_opencode, project_claude, global_claude, project_agents, and global_agents disabled by default.

Enable only services that can run on this machine:
- sequential-thinking: requires node/npx.
- firecrawl-cli: uses CLI skill at `skills/firecrawl-cli/` and requires `FIRECRAWL_API_KEY` for web scraping and extraction tasks.
- mempalace: requires uv or Python 3.9+ with pip; set `mcp.mempalace.pythonPath` (or `MEMPALACE_PYTHON`) if needed. Leave `HIAI_MCP_AUTO_INSTALL` enabled unless the user forbids package installation.
- stitch: requires STITCH_AI_API_KEY.
- context7: works without a key but use CONTEXT7_API_KEY if available.

Check .env.example, report missing keys without printing secret values, and never invent or hardcode API keys.

Run verification commands where available:
- opencode debug config
- hiai-opencode mcp-status
- hiai-opencode export-mcp .opencode/.mcp.json
- opencode mcp list --print-logs --log-level INFO

If a dependency is missing, install only user-level or project-local dependencies, explain every command before running it, and do not use sudo/admin rights unless the user explicitly asks.

Where To Change Things

Models

Canonical user-facing config for 10 primary agent models, MCP/LSP switches, service auth placeholders, and skill discovery:

Runtime loader for the bundled canonical config:

If you want to change model selection, edit the 10 entries in models. Do not add category-specific model choices unless you are intentionally developing the plugin internals.

Use fully qualified model IDs. Do not introduce local aliases like hiai-fast, sonnet, fast, or high. After connecting providers in OpenCode, run opencode models and copy the exact model IDs from that output into hiai-opencode.json.
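A quick sanity check for "fully qualified" can be sketched as follows. This is a hypothetical validator, not part of the plugin; the alias list comes straight from the guidance above:

```typescript
// Local aliases that must not appear where a model ID is expected.
const FORBIDDEN_ALIASES = new Set(["hiai-fast", "sonnet", "fast", "high"]);

// A qualified ID has at least a provider segment and a model segment,
// e.g. "kimi-for-coding/k2p6" or "openrouter/deepseek/deepseek-v4-flash".
function isQualifiedModelId(id: string): boolean {
  if (FORBIDDEN_ALIASES.has(id)) return false;
  const parts = id.split("/");
  return parts.length >= 2 && parts.every((p) => p.length > 0);
}
```

Anything that fails this shape is a hint that an alias leaked into hiai-opencode.json.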

Prompting

Prompting has two main layers:

  1. source prompt files in src/agents
  2. runtime assembly in src/plugin-handlers/agent-config-handler.ts

Important prompt entrypoints:

Name mapping and visibility:

Skills

Project skill definitions live under:

Skill materialization logic:

Built-in helper skills include browser automation, frontend UI/UX, review, git workflow, hiai-opencode setup, AI slop cleanup, and website-copywriting.

Website/product copy should use:

task(subagent_type="writer", load_skills=["website-copywriting"], ...)

writer, copywriter, and content-writer are aliases for writer.
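Alias resolution like this is just a normalization map. A small illustrative sketch (the function name is hypothetical; only the writer aliases are taken from the README):

```typescript
// Maps documented aliases onto the canonical agent name.
const AGENT_ALIASES: Record<string, string> = {
  copywriter: "writer",
  "content-writer": "writer",
};

function canonicalAgent(name: string): string {
  return AGENT_ALIASES[name] ?? name; // unknown names pass through unchanged
}
```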

Manager memory stewardship:

  • Use task(subagent_type="platform-manager", ...) or task(subagent_type="manager", ...) for MemPalace cleanup, session ledgers, TODO hygiene, and architecture decision handoff.
  • Manager writes only durable decisions and important project state. It should not dump raw chat logs into memory.

Skill discovery defaults:

{
  "skill_discovery": {
    "config_sources": true,
    "project_opencode": true,
    "global_opencode": false,
    "project_claude": false,
    "global_claude": false,
    "project_agents": false,
    "global_agents": false
  }
}

This keeps clean installs reproducible and avoids accidentally mixing Codex, Claude, Antigravity, and global OpenCode skill collections. To opt into external skill folders, enable the specific source you want instead of turning everything on.
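In effect, the flags select an ordered subset of known source locations. A hypothetical sketch of that filtering (source names mirror the config keys above; the ordering is an assumption):

```typescript
// Candidate skill sources, highest priority first (assumed order).
const DISCOVERY_ORDER = [
  "config_sources",
  "project_opencode",
  "global_opencode",
  "project_claude",
  "global_claude",
  "project_agents",
  "global_agents",
] as const;

// Returns only the sources explicitly enabled in skill_discovery.
function activeSkillSources(flags: Record<string, boolean>): string[] {
  return DISCOVERY_ORDER.filter((s) => flags[s] === true);
}
```

With the defaults shown above, only `config_sources` and `project_opencode` remain active.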

MCP

Default MCP registry:

OpenCode MCP config assembly:

Runtime helper launchers:

LSP

LSP defaults are defined in:

Environment Variables And Keys

Model provider keys are handled by OpenCode Connect. Do not add OPENROUTER_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY to hiai-opencode config for normal model usage.

Important service variables:

  • STITCH_AI_API_KEY

  • FIRECRAWL_API_KEY

  • CONTEXT7_API_KEY

  • OLLAMA_BASE_URL

  • OLLAMA_MODEL

  • MEMPALACE_PYTHON

  • MEMPALACE_PALACE_PATH

  • HIAI_MCP_AUTO_INSTALL

  • HIAI_OPENCODE_AUTO_EXPORT_MCP

  • HIAI_OPENCODE_MCP_EXPORT_PATH

Optional headless or non-Connect fallback variables are documented in .env.example, but they are not required for normal OpenCode model auth.

Use .env.example as the reference template. Create a local .env in your OpenCode environment or export these variables in your shell before startup.

MCP Service Notes

The user-facing MCP switchboard is the mcp object in hiai-opencode.json:

{
  "mcp": {
    "mempalace": { "enabled": false },
    "grep_app": { "enabled": true }
  }
}

The source of truth for default MCP wiring is src/mcp/registry.ts. Change that file when adding a new MCP integration or changing its launch command, required env vars, or install strategy.

Works well as remote MCP

  • stitch
  • context7
  • grep_app

Works with local helper bootstrap

  • sequential-thinking: launches @modelcontextprotocol/server-sequential-thinking through the helper npm runner.
  • firecrawl: uses the CLI skill at skills/firecrawl-cli/ and requires FIRECRAWL_API_KEY for web scraping and extraction tasks.

Browser Automation

For browser automation, use the /agent-browser skill instead of an MCP server. The CLI uses native Chrome via CDP — no Playwright.

Install (Bun):

bun add -g agent-browser && agent-browser install

Or via npm:

npm i -g agent-browser && agent-browser install

Repo: https://github.com/vercel-labs/agent-browser

Key environment variables (AGENT_BROWSER_*):

  • AGENT_BROWSER_HEADED=1 — show browser window
  • AGENT_BROWSER_SESSION=name — isolated session
  • AGENT_BROWSER_PROFILE=path — persistent profile
  • AGENT_BROWSER_PROVIDER=name — cloud provider (browserbase, browseruse, kernel)
  • AGENT_BROWSER_AUTO_CONNECT=1 — auto-discover running Chrome
  • AGENT_BROWSER_EXECUTABLE_PATH — custom browser binary

Use /agent-browser skill in OpenCode for browser tasks — navigation, snapshots, screenshots, form filling, console/network inspection.

Needs upstream runtime or extra setup

  • mempalace: prefers uv; otherwise uses Python. You can force interpreter selection via mcp.mempalace.pythonPath or MEMPALACE_PYTHON. If HIAI_MCP_AUTO_INSTALL is not 0, false, or no, the launcher can run python -m pip install --user mempalace on first start.
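The "not 0, false, or no" convention is a common env-flag pattern. A sketch of how such a flag could be parsed (hypothetical helper; the launcher's actual parsing may differ, e.g. in case handling):

```typescript
// Interprets HIAI_MCP_AUTO_INSTALL-style flags: enabled unless
// explicitly set to "0", "false", or "no" (case-insensitive assumed).
function autoInstallEnabled(raw: string | undefined): boolean {
  if (raw === undefined) return true; // unset means enabled
  return !["0", "false", "no"].includes(raw.trim().toLowerCase());
}
```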

Important Windows note

On some Windows/OpenCode environments, local MCP process spawning can fail with EPERM for cmd or node. If you see that:

  • the plugin config is likely correct
  • the remaining issue is the host runtime's local process spawn behavior

This most often affects sequential-thinking and mempalace, and sometimes local npx-backed tools.

Troubleshooting MCP Servers

Diagnostics

The plugin now emits startup warnings for common misconfiguration, including:

  • .opencode/opencode.json containing plugin: ["list"]
  • enabled MCP integrations with missing required env vars such as FIRECRAWL_API_KEY or STITCH_AI_API_KEY

Available CLI:

hiai-opencode doctor
hiai-opencode mcp-status
hiai-opencode export-mcp .opencode/.mcp.json

By default, the plugin auto-exports .opencode/.mcp.json at workspace startup when the file is missing. This closes the visibility gap where runtime plugin MCP works but opencode mcp list only reads static files. Control it with:

export HIAI_OPENCODE_AUTO_EXPORT_MCP=if-missing  # default
export HIAI_OPENCODE_AUTO_EXPORT_MCP=always      # overwrite only managed hiai-opencode exports
export HIAI_OPENCODE_AUTO_EXPORT_MCP=force       # force overwrite even non-managed files
export HIAI_OPENCODE_AUTO_EXPORT_MCP=0           # disable auto-export
export HIAI_OPENCODE_MCP_EXPORT_PATH=.opencode/.mcp.json   # override path
export HIAI_OPENCODE_EXPORT_MCP_MODE=safe        # export-mcp command mode: safe|force
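The four auto-export modes boil down to a small decision over file existence and the hiai-managed marker. A hypothetical sketch of that decision, mirroring the semantics listed above:

```typescript
type ExportMode = "if-missing" | "always" | "force" | "0";

// Decides whether to write .opencode/.mcp.json at startup.
// fileIsManaged reflects the hiai-managed marker tag described below.
function shouldWriteExport(
  mode: ExportMode,
  fileExists: boolean,
  fileIsManaged: boolean
): boolean {
  if (mode === "0") return false; // auto-export disabled
  if (mode === "force") return true; // overwrite even non-managed files
  if (mode === "always") return !fileExists || fileIsManaged; // managed only
  return !fileExists; // "if-missing": only create, never overwrite
}
```

The `always`/`force` split is what protects a hand-written `.mcp.json` from being clobbered by default.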

Inside OpenCode, use the slash command:

/doctor
/mcp-status

hiai-opencode export-mcp writes a standard .mcp.json so hosts whose mcp list ignores plugin runtime MCP can still show the same servers statically. Exports are marker-tagged as hiai-managed; by default, the command avoids overwriting non-managed files unless HIAI_OPENCODE_EXPORT_MCP_MODE=force is set.

Use:

hiai-opencode mcp-status
hiai-opencode export-mcp .opencode/.mcp.json
opencode debug config
opencode mcp list --print-logs --log-level INFO

Core Components And Upstream Projects

hiai-opencode wires these projects and ideas into an OpenCode-friendly setup. Upstream projects remain independent; this table is attribution and orientation, not an ownership claim.

| Component | Upstream | Notes |
| --- | --- | --- |
| OpenCode host/runtime | anomalyco/opencode | plugin host and runtime target |
| Core orchestration influences | code-yeongyu/oh-my-openagent | architectural influence |
| Planning / workflow influences | obra/superpowers | planning, review, and debugging ideas |
| Specialist / platform influences | vtemian/micode | platform-style specialist behavior |
| Agent skill ecosystem | addyosmani/agent-skills | tactical workflow skill ideas |
| Supabase Postgres skill | supabase/agent-skills | Postgres best practices skill |
| Browser automation | vercel-labs/agent-browser | CLI-based browser automation via CDP |
| Optional external plugin | Opencode-DCP/opencode-dynamic-context-pruning | installed separately |
| MemPalace | MemPalace/mempalace | external MCP/runtime |
| Sequential Thinking | modelcontextprotocol/servers | external MCP |
| Firecrawl CLI skill | firecrawl/firecrawl | CLI-based web scraping, crawl, extract, search |
| Context7 MCP | upstash/context7 | external MCP |
| bun-pty / PTY ecosystem | shekohex/opencode-pty | PTY/runtime integration influence |

Build And Publish

Build:

bun run build

Typecheck:

bun run typecheck

Prompt snapshots:

bun run prompts:measure

Before publishing:

  1. run bun run build
  2. run npm pack --dry-run
  3. verify debug config
  4. run hiai-opencode export-mcp .opencode/.mcp.json if you need static mcp list visibility

Publish:

npm publish --access public

User install after publish:

{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@hiai-gg/hiai-opencode"]
}

Documentation Map

  • AGENTS.md: instructions for autonomous agents or tooling that need to install or modify the plugin
  • ARCHITECTURE.md: runtime wiring, prompting layers, and modification map
  • LICENSE.md: licensing and attribution

