hiai-opencode is an OpenCode plugin that turns vanilla OpenCode into an opinionated multi-agent cockpit.
What you get on top of plain OpenCode:
- 14-agent canonical model (9 visible + 5 hidden) with peer-aware prompts — Bob orchestrates, Coder/Sub implement, Strategist plans, Critic gates, Researcher discovers via Context7/Firecrawl/grep_app/MemPalace, Designer drives Stitch UI generation, Writer owns copy/SEO, Vision extracts PDFs/images, Manager keeps memory, Quality Guardian reviews, Sandbox sandboxes bash.
- Mode → agent routing for `task()` delegation — `quick`/`bounded`/`unspecified-low` → Sub, `deep`/`cross-module` → Coder, `ultrabrain` → Strategist, `visual-engineering`/`artistry` → Designer, `writing` → Writer, `git-ops` → Manager. No more "everything routes to coder".
- MCP wiring out of the box — Stitch, Firecrawl, Context7, grep_app, MemPalace, Sequential-Thinking. Each agent's prompt knows which MCP servers it owns.
- LSP defaults for TypeScript, Svelte, ESLint, Bash, Pyright. Coder must run `lsp_diagnostics` after every edit.
- Permission discipline — read-only agents cannot delegate; write-capable agents have explicit file-scope limits.
This repository is intended to be usable by someone who clones it from GitHub without any internal context. External MCP servers, skills, model providers, and auxiliary OpenCode plugins remain their own upstream projects; this plugin only provides OpenCode wiring, defaults, prompts, launchers, and documentation around them.
I wanted the "one install, give me the whole cockpit" setup. More agents, more MCP servers, more skills, better defaults, fewer tiny config islands. 🚀
The problem: all those great tools do not magically become friends just because you installed them. Some are npm packages, some are Python tools, some need API keys, some need a local service, and some only wake up after the first run. Meanwhile your main agent can waste half the context window just reading everything you bolted on. Not chill. 😅
So hiai-opencode is my attempt to wire the best pieces I use into one OpenCode-friendly shape: agents, prompts, skills, MCP launchers, LSP defaults, and a clean config surface. It does not claim ownership of the upstream tools. It just tries to make them cooperate.
After the first install, a few MCP services may still need local dependencies or keys. You have two options:
- Follow the setup sections below: Install, Environment, and MCP Service Notes.
- Or ask OpenCode to do the boring part for you. Paste this after installing:
Read AGENTS.md and finish hiai-opencode setup for this workspace.
Keep OpenCode plugins separate from MCP servers. Do not add MCP server packages to the OpenCode plugin list.
Check which MCP services can run on this machine, update hiai-opencode.json, install only missing user-level or project-local dependencies, and report missing API keys without printing secret values.
Then run `hiai-opencode doctor`, `hiai-opencode mcp-status`, and `opencode debug config`.
For the full operator playbook, see AGENTS.md. 🤖
9 Visible Primary Agents:
| Agent | Role | When to use |
|---|---|---|
| Bob | Orchestrator, router, distributor | Entry point; complex tasks needing multi-agent coordination |
| Coder | Deep implementation, focused execution | Complex features, refactors, deep work |
| Strategist | Planning, architecture, pre-check | Scope definition, architectural decisions |
| Manager | Delegation orchestrator, TODO tracker | Coordinates specialists, maintains session continuity |
| Critic | Review gate, high-accuracy verification | Plan review, code review, regression catch |
| Designer | UI/visual direction via Stitch MCP | Visual problems, design systems, screen generation |
| Researcher | Local + external search | Codebase exploration, documentation discovery |
| Writer | Content, copy, positioning, SEO | Website copy, product messaging, naming |
| Vision | PDF/image/diagram extraction, browser UI verification | Multimodal analysis, live browser verification |
5 Hidden System Agents (used internally, not for direct selection):
- Agent Skills — skill registry, discovery, and capability orchestration
- Sub — compatibility wrapper for bounded execution (folded into Coder's contour)
- build — default executor (OpenCode system agent)
- plan — plan mode agent (OpenCode system agent)
- Quality Guardian — post-implementation review and bug investigation (compatibility alias for Critic)
Mode determines prompt append, variant, and reasoning effort. The executor agent is selected via mode → agent mapping.
| Mode | Agent | Prompt variant | When to use |
|---|---|---|---|
| quick | coder (via fast bounded contour) | Fast bounded | Small targeted changes |
| writing | writer | Docs/prose | Content, i18n, copy |
| deep | coder | Deep reasoning | Complex implementation |
| ultrabrain | strategist | Plan-only | Architecture, hard logic |
| visual-engineering | designer | UI/visual | Visual problems |
| artistry | designer | Creative | Brand, SEO, creative |
| git | platform-manager | Git ops | Version control operations |
| bounded | coder (via fast bounded contour) | Mid-tier bounded | Moderate effort changes |
| cross-module | coder | Deep substantial | Multi-component changes |
| unspecified-low | coder (via fast bounded contour) | Bounded | Unclassified small tasks |
| unspecified-high | coder | Deep | Unclassified substantial tasks |
MCP integrations and which agents use them:
| Service | Key env var | Agent(s) | What it's for |
|---|---|---|---|
| Stitch | STITCH_AI_API_KEY | Designer | UI generation, design systems, screen variants |
| Firecrawl | FIRECRAWL_API_KEY | Researcher | CLI skill (not MCP) for web scraping, crawl, extract, search |
| Context7 | CONTEXT7_API_KEY | Researcher, Coder | Library API documentation |
| grep_app | — | Researcher | GitHub OSS code pattern search |
| MemPalace | MEMPALACE_PYTHON (optional) | Manager (primary), all agents | Project memory and past decisions |
| Sequential-Thinking | — | Strategist, Critic | Deep reasoning for planning/review |
- 9 visible primary agents + 5 hidden system agents (Agent Skills, Sub, build, plan, Quality Guardian)
- Mode-based task routing via `task(category=..., ...)` or `task(mode=..., ...)`
- Skill materialization into OpenCode's `skills/` view
- MCP wiring for `stitch`, `sequential-thinking`, `firecrawl-cli`, `mempalace`, `context7`, and `grep_app`
- LSP wiring for TypeScript, Svelte, Python, Bash, and ESLint
The plugin runs three layered mechanisms so a session does not give up halfway through a TODO list.
| Mechanism | Trigger | What it does |
|---|---|---|
| Todo continuation enforcer | `session.idle` while open todos remain | Injects a continuation prompt after a 2s countdown. Backs off on stagnation, abort, token-limit, and pending-question. |
| Ralph-loop | `/ralph-loop <goal>` or `/ulw-loop <goal>` | Runs an explicit completion loop. Stops on `<promise>DONE</promise>`. Cancel with `/cancel-ralph`. |
| Auto ralph-loop | N+ open todos in one session | Auto-starts ralph-loop in ULTRAWORK mode so each iteration is forced to delegate to specialist agents (researcher / strategist / coder / critic). |
Tune the auto-start threshold in hiai-opencode.json:
```json
{
  "ralph_loop": {
    "enabled": true,
    "auto_start_threshold": 5
  }
}
```

`auto_start_threshold: 0` disables auto-start. The enforcer always yields to ralph-loop while it owns the session, so the two never inject duplicate prompts.
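The auto-start decision can be sketched as a small predicate. This is an assumed reading of the documented behavior, not the plugin's actual source:

```shell
# Hypothetical sketch of the auto ralph-loop trigger.
# A threshold of 0 disables auto-start entirely.
should_auto_start() {
  open_todos=$1
  threshold=$2
  [ "$threshold" -gt 0 ] && [ "$open_todos" -ge "$threshold" ]
}

should_auto_start 6 5 && echo "start ralph-loop" || echo "stay idle"
# prints "start ralph-loop"
```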
Minimum:
- OpenCode installed
- Node.js 18+
- Bun 1.1+
Usually required:
- at least one model provider connected in OpenCode for the model IDs you configure
Optional, depending on which services you want:
- `FIRECRAWL_API_KEY` for Firecrawl
- `STITCH_AI_API_KEY` for Stitch
- `CONTEXT7_API_KEY` for Context7
- Python 3.9+ or `uv` for MemPalace
- local language servers if you want LSP beyond the npm-bootstrapped helpers
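A quick preflight sketch for the minimum requirements above. This is a convenience script, not part of the plugin; the parsing assumes `vMAJOR.MINOR.PATCH`-style version output:

```shell
# Check that a version string meets a minimum major version.
# Usage: check_major <tool-name> <version-string> <min-major>
check_major() {
  major=$(printf '%s' "$2" | sed 's/^v//; s/\..*//')
  if [ "${major:-0}" -ge "$3" ]; then
    echo "$1 $2: ok"
  else
    echo "$1 $2: too old (need major >= $3)"
  fi
}

# Fall back to a zero version when the tool is not on PATH.
check_major node "$(node --version 2>/dev/null || echo v0)" 18
check_major bun  "$(bun --version 2>/dev/null || echo 0)" 1
```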
```sh
opencode plugin @hiai-gg/hiai-opencode@latest --global
```

Optional Dynamic Context Pruning plugin:

```sh
opencode plugin @tarquinen/opencode-dcp@latest --global
```

Do not put MCP server packages such as `@modelcontextprotocol/server-sequential-thinking` into the OpenCode plugin array. They are MCP servers, not OpenCode plugins. Install the Firecrawl CLI skill separately using `opencode plugin` or `npm install -g @mendableai/firecrawl`.
Manual OpenCode config equivalent:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": [
    "@hiai-gg/hiai-opencode",
    "@tarquinen/opencode-dcp@latest"
  ]
}
```

The packaged minimal OpenCode example lives in config/opencode.json.
Create a project-level config file at hiai-opencode.json in the project root or at .opencode/hiai-opencode.json.
Bash:
mkdir -p .opencode
cp hiai-opencode.json .opencode/hiai-opencode.jsonPowerShell:
New-Item -ItemType Directory -Force .opencode
Copy-Item .\hiai-opencode.json .\.opencode\hiai-opencode.jsonIf you installed only from npm/OpenCode and do not have this repository checked out, create .opencode/hiai-opencode.json with the shape below and adjust it later.
```json
{
  "models": {
    "bob": { "model": "kimi-for-coding/k2p6", "recommended": "xhigh" },
    "coder": { "model": "minimax-coding-plan/MiniMax-M2.7", "recommended": "high" },
    "strategist": { "model": "deepseek/deepseek-v4-pro", "recommended": "high" },
    "manager": { "model": "opencode-go/qwen3.6-plus", "recommended": "middle" },
    "critic": { "model": "opencode-go/mimo-v2.5-pro", "recommended": "high" },
    "designer": { "model": "openrouter/google/gemini-3.1-pro-preview", "recommended": "design" },
    "researcher": { "model": "openrouter/deepseek/deepseek-v4-flash", "recommended": "fast" },
    "writer": { "model": "openrouter/mistralai/mistral-small-2603", "recommended": "writing" },
    "vision": { "model": "openrouter/google/gemma-4-26b-a4b-it", "recommended": "vision" }
  },
  "mcp": {
    "sequential-thinking": { "enabled": true },
    "mempalace": { "enabled": true, "pythonPath": "{env:MEMPALACE_PYTHON:-./.venv/bin/python}" },
    "stitch": { "enabled": false },
    "context7": { "enabled": true }
  }
}
```

By default, skill discovery is deterministic: hiai-opencode skills plus project-local .opencode/skills only. Global Claude/OpenCode/Agents skill folders are opt-in.
Model provider credentials belong to OpenCode Connect, not to hiai-opencode.
This plugin only reads the 10 model IDs in models. Internal routing derives hidden agents and task categories from those 10 choices.
Use OpenCode Connect to authorize the providers behind your configured model IDs. Then add only the service keys for MCP or search integrations you actually use:
```sh
opencode models
```

Use the exact model IDs printed by OpenCode in hiai-opencode.json. For example, openrouter/minimax/minimax-m2.7 routes through OpenRouter, while minimax/minimax-m2.7 routes through a direct Minimax provider only if that provider is connected in OpenCode.
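To see which of your IDs route through OpenRouter, you can filter the list. The model names below are placeholders; in practice pipe the real `opencode models` output instead of the `printf`:

```shell
# Simulated `opencode models` output; replace the printf with the real command.
printf '%s\n' \
  "openrouter/minimax/minimax-m2.7" \
  "minimax/minimax-m2.7" \
  "openrouter/deepseek/deepseek-v4-flash" \
  | grep '^openrouter/'
# keeps only the two openrouter/-prefixed IDs
```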
```sh
export FIRECRAWL_API_KEY=...
export STITCH_AI_API_KEY=...
export CONTEXT7_API_KEY=...
```

See Environment Variables And Keys for the full list.
```sh
opencode
hiai-opencode doctor
hiai-opencode mcp-status
hiai-opencode export-mcp .opencode/.mcp.json
opencode debug config
opencode mcp list --print-logs --log-level INFO
```

`opencode mcp list` reads static .mcp.json files in many OpenCode versions. Runtime MCP servers launched by plugins may work but not appear there. If you want `opencode mcp list` visibility, run `hiai-opencode export-mcp .opencode/.mcp.json` first.
hiai-opencode mcp-status is the fastest visibility check. It does not change OpenCode config; it reports config location, enabled MCP services, missing keys, and basic local runtime availability.
hiai-opencode doctor is the broader install/runtime diagnostic. It includes MCP status, static .mcp.json freshness, OpenCode Connect visibility, skill materialization, agent naming/count checks, LSP runtime checks, MemPalace Python source selection, and real MCP tool probes.
Direct npm install is only needed for development or inspection:

```sh
bun install
```

Local development:

```sh
git clone https://github.com/HiAi-gg/hiai-opencode.git
cd hiai-opencode
bun install
bun run build
```

After installing the plugin, you can ask OpenCode to finish local setup with this prompt:
Inspect this OpenCode workspace and finish hiai-opencode setup.
Do not add MCP server packages to the OpenCode plugin list. Keep OpenCode plugins separate from MCP servers.
Check that @hiai-gg/hiai-opencode is registered. If Dynamic Context Pruning is requested, install @tarquinen/opencode-dcp as a separate OpenCode plugin.
Find or create hiai-opencode.json in the project root or .opencode/. Use its mcp object as the single switchboard for enabling or disabling MCP services.
Keep skill discovery deterministic unless I explicitly ask for external skills. Leave global_opencode, project_claude, global_claude, project_agents, and global_agents disabled by default.
Enable only services that can run on this machine:
- sequential-thinking: requires node/npx.
- firecrawl-cli: uses CLI skill at `skills/firecrawl-cli/` and requires `FIRECRAWL_API_KEY` for web scraping and extraction tasks.
- mempalace: requires uv or Python 3.9+ with pip; set `mcp.mempalace.pythonPath` (or `MEMPALACE_PYTHON`) if needed. Leave `HIAI_MCP_AUTO_INSTALL` enabled unless the user forbids package installation.
- stitch: requires STITCH_AI_API_KEY.
- context7: works without a key but use CONTEXT7_API_KEY if available.
Check .env.example, report missing keys without printing secret values, and never invent or hardcode API keys.
Run verification commands where available:
- opencode debug config
- hiai-opencode mcp-status
- hiai-opencode export-mcp .opencode/.mcp.json
- opencode mcp list --print-logs --log-level INFO
If a dependency is missing, install only user-level or project-local dependencies, explain every command before running it, and do not use sudo/admin rights unless the user explicitly asks.
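The "report missing keys without printing secret values" step can be sketched like this. It is a convenience snippet, not a plugin command:

```shell
# Report which service keys are set without ever echoing their values.
for var in FIRECRAWL_API_KEY STITCH_AI_API_KEY CONTEXT7_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var: set"
  else
    echo "$var: MISSING"
  fi
done
```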
Canonical user-facing config for 10 primary agent models, MCP/LSP switches, service auth placeholders, and skill discovery:
Runtime loader for the bundled canonical config:
If you want to change model selection, edit the 10 entries in models. Do not add category-specific model choices unless you are intentionally developing the plugin internals.
Use fully qualified model IDs. Do not introduce local aliases like hiai-fast, sonnet, fast, or high.
After connecting providers in OpenCode, run opencode models and copy the exact model IDs from that output into hiai-opencode.json.
Prompting has two main layers:
- source prompt files in src/agents
- runtime assembly in src/plugin-handlers/agent-config-handler.ts
Important prompt entrypoints:
- Bob: `src/agents/bob.ts` and `src/agents/bob/*`
- Coder: `src/agents/coder/*`
- Strategist: `src/agents/strategist/*`
- Manager: `src/agents/manager/*` and `src/agents/platform-manager.ts`
- Critic: `src/agents/critic/*`
- Vision: `src/agents/ui.ts`
- Researcher: `src/agents/researcher.ts`
- Writer: `src/agents/writer.ts`
Name mapping and visibility:
Project skill definitions live under:
Skill materialization logic:
Built-in helper skills include browser automation, frontend UI/UX, review, git workflow, hiai-opencode setup, AI slop cleanup, and website-copywriting.
Website/product copy should use:
task(subagent_type="writer", load_skills=["website-copywriting"], ...)
`copywriter` and `content-writer` are aliases for `writer`.
Manager memory stewardship:
- Use `task(subagent_type="platform-manager", ...)` or `task(subagent_type="manager", ...)` for MemPalace cleanup, session ledgers, TODO hygiene, and architecture decision handoff.
- Manager writes only durable decisions and important project state. It should not dump raw chat logs into memory.
Skill discovery defaults:
```json
{
  "skill_discovery": {
    "config_sources": true,
    "project_opencode": true,
    "global_opencode": false,
    "project_claude": false,
    "global_claude": false,
    "project_agents": false,
    "global_agents": false
  }
}
```

This keeps clean installs reproducible and avoids accidentally mixing Codex, Claude, Antigravity, and global OpenCode skill collections. To opt into external skill folders, enable the specific source you want instead of turning everything on.
Default MCP registry:
OpenCode MCP config assembly:
Runtime helper launchers:
LSP defaults are defined in:
Model provider keys are handled by OpenCode Connect. Do not add OPENROUTER_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY to hiai-opencode config for normal model usage.
Important service variables:
- `STITCH_AI_API_KEY`
- `FIRECRAWL_API_KEY`
- `CONTEXT7_API_KEY`
- `OLLAMA_BASE_URL`
- `OLLAMA_MODEL`
- `MEMPALACE_PYTHON`
- `MEMPALACE_PALACE_PATH`
- `HIAI_MCP_AUTO_INSTALL`
- `HIAI_OPENCODE_AUTO_EXPORT_MCP`
- `HIAI_OPENCODE_MCP_EXPORT_PATH`
Optional headless or non-Connect fallback variables are documented in .env.example, but they are not required for normal OpenCode model auth.
Use .env.example as the reference template. Create a local .env in your OpenCode environment or export these variables in your shell before startup.
The user-facing MCP switchboard is the mcp object in hiai-opencode.json:
```json
{
  "mcp": {
    "mempalace": { "enabled": false },
    "grep_app": { "enabled": true }
  }
}
```

The source of truth for default MCP wiring is src/mcp/registry.ts. Change that file when adding a new MCP integration or changing its launch command, required env vars, or install strategy.
- `stitch`
- `context7`
- `grep_app`
- `sequential-thinking`: launches `@modelcontextprotocol/server-sequential-thinking` through the helper npm runner.
- `firecrawl`: uses the CLI skill at `skills/firecrawl-cli/` and requires `FIRECRAWL_API_KEY` for web scraping and extraction tasks.
For browser automation, use the /agent-browser skill instead of an MCP server. The CLI uses native Chrome via CDP — no Playwright.
Install (Bun):

```sh
bun add -g agent-browser && agent-browser install
```

Or via npm:

```sh
npm i -g agent-browser && agent-browser install
```

Repo: https://github.com/vercel-labs/agent-browser
Key environment variables (AGENT_BROWSER_*):
- `AGENT_BROWSER_HEADED=1` — show browser window
- `AGENT_BROWSER_SESSION=name` — isolated session
- `AGENT_BROWSER_PROFILE=path` — persistent profile
- `AGENT_BROWSER_PROVIDER=name` — cloud provider (browserbase, browseruse, kernel)
- `AGENT_BROWSER_AUTO_CONNECT=1` — auto-discover running Chrome
- `AGENT_BROWSER_EXECUTABLE_PATH` — custom browser binary
Use /agent-browser skill in OpenCode for browser tasks — navigation, snapshots, screenshots, form filling, console/network inspection.
- `mempalace`: prefers `uv`; otherwise uses Python. You can force interpreter selection via `mcp.mempalace.pythonPath` or `MEMPALACE_PYTHON`. If `HIAI_MCP_AUTO_INSTALL` is not `0`, `false`, or `no`, the launcher can run `python -m pip install --user mempalace` on first start.
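The `HIAI_MCP_AUTO_INSTALL` gate can be sketched as follows. This is an assumed reading of the documented launcher behavior, not its actual source:

```shell
# Treat anything other than 0/false/no (including unset) as "auto-install allowed".
auto_install_enabled() {
  case "${HIAI_MCP_AUTO_INSTALL:-}" in
    0|false|no) return 1 ;;
    *) return 0 ;;
  esac
}

if auto_install_enabled; then
  echo "would run: python -m pip install --user mempalace"
else
  echo "auto-install disabled"
fi
```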
On some Windows/OpenCode environments, local MCP process spawning can fail with EPERM for cmd or node. If you see that:
- the plugin config is likely correct
- the remaining issue is the host runtime's local process spawn behavior
This most often affects sequential-thinking and mempalace, and sometimes local npx-backed tools.
The plugin now emits startup warnings for common misconfiguration, including:
- `.opencode/opencode.json` containing `plugin: ["list"]`
- enabled MCP integrations with missing required env vars such as `FIRECRAWL_API_KEY` or `STITCH_AI_API_KEY`
Available CLI:
```sh
hiai-opencode doctor
hiai-opencode mcp-status
hiai-opencode export-mcp .opencode/.mcp.json
```

By default, the plugin auto-exports .opencode/.mcp.json at workspace startup when the file is missing. This closes the visibility gap where runtime plugin MCP works but `opencode mcp list` only reads static files. Control it with:
```sh
export HIAI_OPENCODE_AUTO_EXPORT_MCP=if-missing   # default
export HIAI_OPENCODE_AUTO_EXPORT_MCP=always       # overwrite only managed hiai-opencode exports
export HIAI_OPENCODE_AUTO_EXPORT_MCP=force        # force overwrite even non-managed files
export HIAI_OPENCODE_AUTO_EXPORT_MCP=0            # disable auto-export
export HIAI_OPENCODE_MCP_EXPORT_PATH=.opencode/.mcp.json   # override path
export HIAI_OPENCODE_EXPORT_MCP_MODE=safe         # export-mcp command mode: safe|force
```

Inside OpenCode, use the slash command:
```
/doctor
/mcp-status
```
hiai-opencode export-mcp writes a standard .mcp.json so hosts whose mcp list ignores plugin runtime MCP can still show the same servers statically. Exports are marker-tagged as hiai-managed; by default, the command avoids overwriting non-managed files unless HIAI_OPENCODE_EXPORT_MCP_MODE=force is set.
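The export modes can be sketched as a small decision function. This is an assumed reading of the documented behavior, not the plugin's source, and it omits the managed-marker check that distinguishes `always` from `force`:

```shell
# decide_export <mode> <target-path>
decide_export() {
  case "$1" in
    0)            echo "auto-export disabled" ;;
    if-missing)   if [ -f "$2" ]; then
                    echo "keep existing $2"
                  else
                    echo "export to $2"
                  fi ;;
    always|force) echo "export to $2" ;;
    *)            echo "unknown mode: $1" ;;
  esac
}

decide_export "${HIAI_OPENCODE_AUTO_EXPORT_MCP:-if-missing}" \
              "${HIAI_OPENCODE_MCP_EXPORT_PATH:-.opencode/.mcp.json}"
```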
Use:

```sh
hiai-opencode mcp-status
hiai-opencode export-mcp .opencode/.mcp.json
opencode debug config
opencode mcp list --print-logs --log-level INFO
```

hiai-opencode wires these projects and ideas into an OpenCode-friendly setup. Upstream projects remain independent; this table is attribution and orientation, not an ownership claim.
| Component | Upstream | Notes |
|---|---|---|
| OpenCode host/runtime | anomalyco/opencode | plugin host and runtime target |
| Core orchestration influences | code-yeongyu/oh-my-openagent | architectural influence |
| Planning / workflow influences | obra/superpowers | planning, review, and debugging ideas |
| Specialist / platform influences | vtemian/micode | platform-style specialist behavior |
| Agent skill ecosystem | addyosmani/agent-skills | tactical workflow skill ideas |
| Supabase Postgres skill | supabase/agent-skills | Postgres best practices skill |
| Browser automation | vercel-labs/agent-browser | CLI-based browser automation via CDP |
| Optional external plugin | Opencode-DCP/opencode-dynamic-context-pruning | installed separately |
| MemPalace | MemPalace/mempalace | external MCP/runtime |
| Sequential Thinking | modelcontextprotocol/servers | external MCP |
| Firecrawl CLI skill | firecrawl/firecrawl | CLI-based web scraping, crawl, extract, search |
| Context7 MCP | upstash/context7 | external MCP |
| bun-pty / PTY ecosystem | shekohex/opencode-pty | PTY/runtime integration influence |
Build:

```sh
bun run build
```

Typecheck:

```sh
bun run typecheck
```

Prompt snapshots:

```sh
bun run prompts:measure
```

Before publishing:
- run `bun run build`
- run `npm pack --dry-run`
- verify `opencode debug config`
- run `hiai-opencode export-mcp .opencode/.mcp.json` if you need static `mcp list` visibility
Publish:
```sh
npm publish --access public
```

User install after publish:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@hiai-gg/hiai-opencode"]
}
```

- AGENTS.md: instructions for autonomous agents or tooling that need to install or modify the plugin
- ARCHITECTURE.md: runtime wiring, prompting layers, and modification map
- LICENSE.md: licensing and attribution