OpenAI-compatible custom base_url strips openai/ from Featherless model IDs #1195

@agentcypot

Description

When configuring Featherless as an OpenAI-compatible provider, OpenFang appears to send gpt-oss-120b to the provider instead of the exact configured model ID openai/gpt-oss-120b.

Featherless requires the full namespaced model ID, so the request fails with:

Error: Model not found: {"error":{"message":"The model `gpt-oss-120b` does not exist.","type":"invalid_request_error","code":"model_not_found"}}

Configuration:

[default_model]
provider = "openai"
model = "openai/gpt-oss-120b"
api_key_env = "FEATHERLESS_API_KEY"
base_url = "https://api.featherless.ai/v1"

Confirmed workaround:

Using a Featherless model ID that does not start with openai/, such as zai-org/GLM-5.1, works with the same provider, base_url, and api_key_env settings.

This may be related to #856, which also involved OpenFang rewriting provider-specific model IDs. This report is specifically about provider = "openai" with a custom base_url, where model IDs should be treated as opaque strings and passed through exactly.
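A minimal sketch of the suspected rewrite, to make the report concrete. The function names here are illustrative, not OpenFang's actual code: the symptom is consistent with a provider-prefix strip being applied before the request is sent, when a custom base_url should instead leave the ID untouched.

```python
def strip_provider_prefix(model_id: str) -> str:
    """Suspected current behavior: drop a leading 'openai/' namespace."""
    prefix = "openai/"
    if model_id.startswith(prefix):
        return model_id[len(prefix):]
    return model_id


def resolve_model_for_custom_base_url(model_id: str) -> str:
    """Expected behavior: with a custom base_url, the ID is opaque."""
    return model_id


# The mismatch this issue reports:
assert strip_provider_prefix("openai/gpt-oss-120b") == "gpt-oss-120b"
assert resolve_model_for_custom_base_url("openai/gpt-oss-120b") == "openai/gpt-oss-120b"
# IDs without the prefix are unaffected, which matches the workaround:
assert strip_provider_prefix("zai-org/GLM-5.1") == "zai-org/GLM-5.1"
```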

Expected Behavior

For OpenAI-compatible providers with a custom base_url, OpenFang should preserve the configured model string exactly, including slashes.

The value sent to the provider should be:

openai/gpt-oss-120b

not:

gpt-oss-120b

Steps to Reproduce

  1. Configure OpenFang with Featherless as an OpenAI-compatible provider:

[default_model]
provider = "openai"
model = "openai/gpt-oss-120b"
api_key_env = "FEATHERLESS_API_KEY"
base_url = "https://api.featherless.ai/v1"

  2. Start OpenFang.

  3. Send any request that uses the default model.

  4. Observe that Featherless returns a model-not-found error for gpt-oss-120b.

  5. Change only the model value to a Featherless model ID that does not start with openai/, for example:

model = "zai-org/GLM-5.1"

  6. Repeat the request.

  7. Observe that the request works with the same provider, base_url, and api_key_env settings.
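For clarity, a sketch of the request body expected to reach Featherless for the configuration above, assuming the standard OpenAI-compatible chat-completions payload shape (the builder function is hypothetical): the model field must carry the full namespaced ID verbatim.

```python
import json


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,  # passed through verbatim, slashes included
        "messages": [{"role": "user", "content": prompt}],
    }


body = build_chat_request("openai/gpt-oss-120b", "hello")
assert body["model"] == "openai/gpt-oss-120b"
print(json.dumps(body))
```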

OpenFang Version

0.6.9

Operating System

Linux (x86_64)

Logs / Screenshots

The relevant error is:

Error: Model not found: {"error":{"message":"The model `gpt-oss-120b` does not exist.","type":"invalid_request_error","code":"model_not_found"}}

The configured model was:

openai/gpt-oss-120b

The provider error indicates the request was made with:

gpt-oss-120b

Labels

bug (Something isn't working)