Description
When configuring Featherless as an OpenAI-compatible provider, OpenFang appears to send gpt-oss-120b to the provider instead of the exact configured model ID openai/gpt-oss-120b.
Featherless requires the full namespaced model ID, so the request fails with:
Error: Model not found: {"error":{"message":"The model `gpt-oss-120b` does not exist.","type":"invalid_request_error","code":"model_not_found"}}
Configuration:
[default_model]
provider = "openai"
model = "openai/gpt-oss-120b"
api_key_env = "FEATHERLESS_API_KEY"
base_url = "https://api.featherless.ai/v1"
Confirmed workaround:
Using a Featherless model ID that does not start with openai/, such as zai-org/GLM-5.1, works with the same provider, base_url, and api_key_env settings.
This may be related to #856, which also involved OpenFang rewriting provider-specific model IDs. This report is specifically about provider = "openai" with a custom base_url, where model IDs should be treated as opaque strings and passed through exactly.
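For illustration only, here is a minimal Python sketch of the kind of prefix normalization that would produce exactly this symptom. The function name and logic are guesses, not OpenFang's actual code:

```python
def normalize_model_id(model: str) -> str:
    """Hypothetical normalization consistent with the observed behavior.

    If the provider is "openai" and the configured model starts with
    "openai/", stripping that prefix would mangle Featherless IDs that
    are legitimately namespaced under openai/.
    """
    prefix = "openai/"
    if model.startswith(prefix):
        return model[len(prefix):]
    return model

# Matches the observed behavior:
assert normalize_model_id("openai/gpt-oss-120b") == "gpt-oss-120b"   # rejected by Featherless
assert normalize_model_id("zai-org/GLM-5.1") == "zai-org/GLM-5.1"    # passed through, works
```

If something like this runs whenever provider = "openai", regardless of base_url, it would explain why only Featherless IDs namespaced under openai/ break while zai-org/GLM-5.1 passes through untouched.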
Expected Behavior
For OpenAI-compatible providers with a custom base_url, OpenFang should preserve the configured model string exactly, including slashes.
The value sent to the provider should be openai/gpt-oss-120b, not gpt-oss-120b.
Steps to Reproduce
- Configure OpenFang with Featherless as an OpenAI-compatible provider:
  [default_model]
  provider = "openai"
  model = "openai/gpt-oss-120b"
  api_key_env = "FEATHERLESS_API_KEY"
  base_url = "https://api.featherless.ai/v1"
- Start OpenFang.
- Send any request that uses the default model.
- Observe that Featherless returns a model-not-found error for gpt-oss-120b.
- Change only the model value to a Featherless model ID that does not start with openai/, for example:
  model = "zai-org/GLM-5.1"
- Repeat the request.
- Observe that the request works with the same provider, base_url, and api_key_env settings.
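To isolate the failure from OpenFang, both model IDs can be replayed directly against the Featherless endpoint. Below is a minimal sketch assuming the standard OpenAI-compatible chat completions route and the Python requests library; the prompt content is arbitrary:

```python
import os
import requests

URL = "https://api.featherless.ai/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['FEATHERLESS_API_KEY']}"}

# Replay both the configured ID and the stripped ID that OpenFang
# apparently sends, and compare the provider's responses.
for model in ("openai/gpt-oss-120b", "gpt-oss-120b"):
    resp = requests.post(
        URL,
        headers=HEADERS,
        json={
            "model": model,  # passed through verbatim, slash intact
            "messages": [{"role": "user", "content": "ping"}],
        },
    )
    print(model, "->", resp.status_code, resp.text[:200])
```

The stripped ID is expected to reproduce the model_not_found error above, while the full namespaced ID should be accepted, which would confirm the rewrite happens inside OpenFang rather than at the provider.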
OpenFang Version
0.6.9
Operating System
Linux (x86_64)
Logs / Screenshots
The relevant error is:
Error: Model not found: {"error":{"message":"The model `gpt-oss-120b` does not exist.","type":"invalid_request_error","code":"model_not_found"}}
The configured model was openai/gpt-oss-120b; the provider error indicates the request was actually made with gpt-oss-120b.