Summary
The openfang-mcp-bridge crate and the daemon-side bridge_ipc module are structurally unix-only: their IPC contract is a unix domain socket on a filesystem path. On Windows, all UnixStream/UnixListener-touching code is currently #[cfg(unix)]-gated so the workspace compiles cleanly, but the bridge is functionally absent: the daemon boots, but any MCP-routed tool invocation has no transport.
Net effect for Windows users: daemon runs; MCP tools do not.
Current state (post-fix)
- crates/openfang-mcp-bridge/src/main.rs — entire body gated #[cfg(unix)]. The non-unix fn main() prints an unsupported-platform message and exits with code 1.
- crates/openfang-api/src/lib.rs — pub mod bridge_ipc is #[cfg(unix)].
- crates/openfang-api/src/server.rs — BridgeIpcServer::start() is #[cfg(unix)]; the non-unix shim logs an info line and returns, leaving the daemon's existing "no bridge socket" fallthrough to handle downstream callers.
- Tests: gated #[cfg(all(test, unix))]. No coverage runs on Windows for these crates.
- CI: Windows Check + Test now green; Linux/macOS unchanged.
Why this is a stub, not a fix
Unix domain sockets don't exist on Windows. The current code:
use tokio::net::{UnixStream, UnixListener};
…has no Windows analog at the tokio::net level. Two real options:
Option 1 — Named pipes (Windows-native)
Use tokio::net::windows::named_pipe::{NamedPipeServer, NamedPipeClient}. This is the closest semantic match to unix domain sockets:
- Filesystem-namespace-like addressing (\\.\pipe\openfang-bridge)
- Per-pipe ACLs (security model parallel to unix socket file perms)
- Stream semantics; existing length-prefixed framing codec works unchanged
Requires abstracting the transport behind a trait (e.g. BridgeTransport: AsyncRead + AsyncWrite + Unpin + Send) with two backends:
- unix::UdsTransport wrapping UnixStream
- windows::PipeTransport wrapping NamedPipeClient/NamedPipeServer
Connection establishment differs slightly (named pipes use ClientOptions::open with retry rather than connect), so the connect path needs platform branches. Server-side: NamedPipeServer requires re-creating the server instance after each accept, vs. UnixListener::accept reusing the listener — known named-pipe quirk worth modeling explicitly.
Option 2 — TCP loopback (127.0.0.1:<ephemeral>)
Single-backend: TcpListener::bind("127.0.0.1:0"), write the bound port to a known file (~/.openfang/run/bridge.port or registry equivalent), bridge clients read it.
Pros: one code path, no cfg branches; works on every platform tokio supports.
Cons: weaker auth posture (any local process can connect() to the loopback port — unix sockets get filesystem-perm-based isolation, named pipes get ACLs, loopback gets neither without app-layer auth); port-coordination protocol (write port to file, handle stale files, race conditions on concurrent daemons); a token-handshake on connect would mitigate the auth gap but adds protocol surface.
Recommendation
We'd lean Option 1 (named pipes), because auth-model parity with unix sockets keeps the trust story consistent across platforms, and abstracting transport behind a trait is the right shape regardless — it also leaves the door open to remote-bridge transports later (Option 3 territory: TCP+TLS for cross-host bridging) without further refactor. Happy to defer to maintainers' call if there's context we're missing.
Acceptance criteria
- Transport abstracted behind a BridgeTransport trait (or equivalent).
- Windows backend implemented on tokio::net::windows::named_pipe (or alternative chosen by maintainers).
- openfang-mcp-bridge Windows build is functional, not stubbed; #[cfg(unix)] gates added in the prior fix are removed.
- Non-unix BridgeIpcServer::start shim removed.
Out of scope
- Cross-host / remote bridge transport (separate concern; would be Option 3).
- MCP server/client semantics changes — this issue is purely about transport.
Context
The current #[cfg(unix)] gates were introduced in PRs #1182 and #1183 to unblock Windows CI. Those PRs' primary scope was capability enforcement and file policy, not platform support; the gates are intentionally a stub. This issue tracks turning the stub into actual Windows support.