MCP bridge: Windows support stubbed — needs named-pipe or TCP-loopback transport #1184

@benhoverter

Description

Summary

The openfang-mcp-bridge crate and the daemon-side bridge_ipc module are structurally unix-only: their IPC contract is a unix domain socket on a filesystem path. On Windows, all UnixStream/UnixListener-touching code is currently #[cfg(unix)]-gated so the workspace compiles cleanly, but the bridge is functionally absent: the daemon boots, but any MCP-routed tool invocation has no transport.

Net effect for Windows users: daemon runs; MCP tools do not.

Current state (post-fix)

  • crates/openfang-mcp-bridge/src/main.rs — entire body gated #[cfg(unix)]. Non-unix fn main() prints an unsupported-platform message and exits with code 1.
  • crates/openfang-api/src/lib.rs — pub mod bridge_ipc is #[cfg(unix)].
  • crates/openfang-api/src/server.rs — BridgeIpcServer::start() is #[cfg(unix)]; the non-unix shim logs an info line and returns, leaving the daemon's existing "no bridge socket" fallthrough to handle downstream callers.
  • Tests: gated #[cfg(all(test, unix))]. No coverage runs on Windows for these crates.

CI: Windows Check + Test now green; Linux/macOS unchanged.

Why this is a stub, not a fix

Unix domain sockets don't exist on Windows. The current code:

use tokio::net::{UnixStream, UnixListener};

…has no Windows analog at the tokio::net level. Two real options:

Option 1 — Named pipes (Windows-native)

Use tokio::net::windows::named_pipe::{NamedPipeServer, NamedPipeClient}. This is the closest semantic match to unix domain sockets:

  • Filesystem-namespace-like addressing (\\.\pipe\openfang-bridge)
  • Per-pipe ACLs (security model parallel to unix socket file perms)
  • Stream semantics; existing length-prefixed framing codec works unchanged

Requires abstracting the transport behind a trait (e.g. BridgeTransport: AsyncRead + AsyncWrite + Unpin + Send) with two backends:

  • unix::UdsTransport wrapping UnixStream
  • windows::PipeTransport wrapping NamedPipeClient/NamedPipeServer

Connection establishment differs slightly (named pipes use ClientOptions::open with retry rather than connect), so the connect path needs platform branches. Server-side: NamedPipeServer requires re-creating the server instance after each accept, vs. UnixListener::accept reusing the listener — known named-pipe quirk worth modeling explicitly.

Option 2 — TCP loopback (127.0.0.1:<ephemeral>)

Single-backend: TcpListener::bind("127.0.0.1:0"), write the bound port to a known file (~/.openfang/run/bridge.port or registry equivalent), bridge clients read it.

Pros: one code path, no cfg branches; works on every platform tokio supports.

Cons:

  • Weaker auth posture: any local process can connect() to the loopback port. Unix sockets get filesystem-perm-based isolation and named pipes get ACLs; loopback gets neither without app-layer auth.
  • Port-coordination protocol: writing the port to a file means handling stale files and races between concurrent daemons.
  • A token handshake on connect would mitigate the auth gap, but adds protocol surface.

Recommendation

We'd lean toward Option 1 (named pipes): auth-model parity with unix sockets keeps the trust story consistent across platforms, and abstracting transport behind a trait is the right shape regardless. It also leaves the door open to remote-bridge transports later (Option 3 territory: TCP+TLS for cross-host bridging) without further refactor. Happy to defer to the maintainers' call if there's context we're missing.

Acceptance criteria

  • Bridge transport abstracted behind a BridgeTransport trait (or equivalent).
  • Windows backend uses tokio::net::windows::named_pipe (or alternative chosen by maintainers).
  • Unix backend behavior unchanged.
  • openfang-mcp-bridge Windows build is functional, not stubbed; #[cfg(unix)] gates added in the prior fix are removed.
  • Test coverage runs on Windows for the bridge crate (smoke test: spawn daemon, spawn bridge client, exchange a frame).
  • Daemon's non-unix BridgeIpcServer::start shim removed.
  • CI: Windows Check + Test green with bridge tests enabled.

Out of scope

  • Cross-host / remote bridge transport (separate concern; would be Option 3).
  • MCP server/client semantics changes — this issue is purely about transport.

Context

The current #[cfg(unix)] gates were introduced in PRs #1182 and #1183 to unblock Windows CI. Those PRs' primary scope was capability enforcement and file policy, not platform support; the gates are intentionally a stub. This issue tracks turning the stub into actual Windows support.

Metadata

Labels: needs-design (Needs architecture discussion)