feat: Add OpenAI-compatible API support to AI Writer operation #250

aurkinet wants to merge 2 commits into
Conversation
Pull request overview
Adds an “OpenAI Compatible (Custom)” provider option to the AI Writer operation so users can target OpenAI Chat Completions–compatible endpoints beyond OpenAI itself.
Changes:
- Add new UI options for “OpenAI Compatible (Custom)”, including custom endpoint, API key, and custom model name input.
- Extend provider selection + OpenAI provider implementation to support custom endpoints and custom model names.
- Expand Directus sandbox request scopes and update README documentation for the new provider.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| packages/ai-writer-operation/src/app.ts | Adds “OpenAI Compatible (Custom)” provider choice and related config fields in the operation UI. |
| packages/ai-writer-operation/src/api.ts | Extends operation options interface with custom endpoint/key/model fields. |
| packages/ai-writer-operation/src/Provider/ProviderFactory.ts | Routes openai-compatible provider to the OpenAI provider and fixes a typo in error text. |
| packages/ai-writer-operation/src/Provider/OpenAi.ts | Adds custom endpoint + API key handling and allows custom model name substitution. |
| packages/ai-writer-operation/package.json | Broadens sandbox request URL scopes to allow arbitrary OpenAI-compatible endpoints. |
| packages/ai-writer-operation/README.md | Updates docs and adds guidance for OpenAI-compatible providers. |
```ts
  throw new InvalidPayloadError({ reason: 'OpenAI API Key is missing' });
}
// Determine if this is a custom OpenAI-compatible provider or standard OpenAI
const isCustomProvider = options.aiProvider === 'openai-compatible';
```
`isCustomProvider` is checked with a case-sensitive equality (`options.aiProvider === 'openai-compatible'`), but `getProvider()` routes here using a case-insensitive comparison. If `aiProvider` ever comes through with different casing (e.g., via API/DB), this will incorrectly treat it as standard OpenAI and require `apiKeyOpenAi` / use the OpenAI endpoint. Normalize casing here (e.g., compare `options.aiProvider?.toLowerCase()`), or pass a canonical provider value into the provider classes.
Suggested change:

```diff
- const isCustomProvider = options.aiProvider === 'openai-compatible';
+ const isCustomProvider = options.aiProvider?.toLowerCase() === 'openai-compatible';
```
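Alternatively, as the comment notes, the casing could be normalized once in the factory so a canonical value is handed down to the provider classes. A minimal sketch, with the factory's exact signature, type names, and import paths assumed rather than taken from the PR:

```ts
import { InvalidPayloadError } from '@directus/errors';
import { OpenAi } from './OpenAi'; // export name/path assumed for illustration

// Sketch: canonicalize the provider id once in ProviderFactory so that
// downstream checks like isCustomProvider can rely on exact comparison.
export function getProvider(options: Options): Provider {
	const provider = options.aiProvider?.toLowerCase().trim();

	switch (provider) {
		case 'openai':
		case 'openai-compatible':
			// Pass the canonical value down; OpenAi never re-parses raw input.
			return new OpenAi({ ...options, aiProvider: provider });
		default:
			throw new InvalidPayloadError({ reason: `Unsupported AI Provider: ${options.aiProvider}` });
	}
}
```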
```ts
const endpoint = options.customEndpoint.endsWith('/chat/completions')
	? options.customEndpoint
	: `${options.customEndpoint.replace(/\/$/, '')}/chat/completions`;
```
The endpoint normalization can produce a duplicated path when the user provides a full `/chat/completions/` URL with a trailing slash (e.g., `.../chat/completions/` becomes `.../chat/completions/chat/completions`). Consider trimming trailing slashes before the `endsWith('/chat/completions')` check (or using `new URL()` path joining) to make this robust.
Suggested change:

```diff
- const endpoint = options.customEndpoint.endsWith('/chat/completions')
-   ? options.customEndpoint
-   : `${options.customEndpoint.replace(/\/$/, '')}/chat/completions`;
+ const baseEndpoint = options.customEndpoint.replace(/\/+$/, '');
+ const endpoint = baseEndpoint.endsWith('/chat/completions')
+   ? baseEndpoint
+   : `${baseEndpoint}/chat/completions`;
```
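If sturdier parsing is wanted, the `new URL()` route mentioned above could look roughly like this (a sketch only; the helper name is hypothetical):

```ts
// Sketch: use the WHATWG URL API so malformed endpoints throw early and
// trailing slashes are stripped before the endsWith check.
function buildChatCompletionsUrl(customEndpoint: string): string {
	const url = new URL(customEndpoint); // throws TypeError on invalid input
	const basePath = url.pathname.replace(/\/+$/, '');
	url.pathname = basePath.endsWith('/chat/completions')
		? basePath
		: `${basePath}/chat/completions`;
	return url.toString();
}

// 'https://api.example.com/v1/'                 -> '.../v1/chat/completions'
// 'https://api.example.com/v1/chat/completions/' -> '.../v1/chat/completions'
```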
```ts
// Use custom model name if provided, otherwise use the selected model
const modelName = this.options.model === 'custom' && this.options.customModelName
	? this.options.customModelName
```
When `model` is set to `'custom'` but `customModelName` is missing/empty, the request will be sent with `model: 'custom'`. Add server-side validation to require a non-empty `customModelName` when `model === 'custom'` (don't rely on the app UI's conditional required field).
Suggested change:

```diff
- // Use custom model name if provided, otherwise use the selected model
- const modelName = this.options.model === 'custom' && this.options.customModelName
-   ? this.options.customModelName
+ // Validate that a custom model name is provided when using a custom model
+ if (this.options.model === 'custom' && !this.options.customModelName) {
+   throw new InvalidPayloadError({
+     reason: 'Custom Model Name is missing',
+   });
+ }
+ // Use custom model name if provided, otherwise use the selected model
+ const modelName = this.options.model === 'custom' && this.options.customModelName
+   ? this.options.customModelName
```
```json
],
"urls": [
```
The new request scope patterns (`https://**/v1/**` and especially `http://**/v1/**`) allow this operation to make network calls to essentially any host, including over plaintext HTTP. If custom endpoints are required, consider restricting to HTTPS-only and/or limiting HTTP to localhost, and validate `customEndpoint` to reject localhost/private IP ranges by default to reduce SSRF risk.
| "urls": [ | |
| "https://**/v1/**" |
```diff
  }
- throw new Error(`Unsoported AI Provider ${options.aiProvider}`);
+ throw new Error(`Unsupported AI Provider ${options.aiProvider}`);
```
For unsupported providers, this throws a generic `Error` while other validation errors use `InvalidPayloadError`. Consider throwing `InvalidPayloadError` here as well (with a clear reason) so API consumers get consistent error types/status codes.
Suggested change:

```diff
- throw new Error(`Unsupported AI Provider ${options.aiProvider}`);
+ throw new InvalidPayloadError({ reason: `Unsupported AI Provider: ${options.aiProvider}` });
```
```md
The extension now supports any OpenAI-compatible API by selecting "OpenAI Compatible (Custom)" as the AI Provider. This allows you to use services like:

- Local models (Ollama, LM Studio, etc.)
- Cloud providers (Azure OpenAI, Google Vertex AI, etc.)
- Other OpenAI-compatible services (Groq, Together AI, etc.)

When using a custom provider, you'll need to:

1. Select "OpenAI Compatible (Custom)" as the AI Provider
2. Enter your custom API endpoint (e.g., `https://api.example.com/v1`)
3. Provide your API key
4. Choose a model from the dropdown or select "Custom Model" to enter a specific model name
```
The README implies Azure OpenAI / Vertex AI work directly as “OpenAI-compatible APIs”, but Azure’s endpoints/auth (api-key header + api-version + deployments path) and Vertex AI are not Chat Completions-compatible by default. Please clarify that this works with services that expose the OpenAI Chat Completions-compatible endpoint + Bearer auth (or mention that Azure/Vertex require a compatibility/proxy layer).
Suggested change:

```diff
- The extension now supports any OpenAI-compatible API by selecting "OpenAI Compatible (Custom)" as the AI Provider. This allows you to use services like:
- - Local models (Ollama, LM Studio, etc.)
- - Cloud providers (Azure OpenAI, Google Vertex AI, etc.)
- - Other OpenAI-compatible services (Groq, Together AI, etc.)
- When using a custom provider, you'll need to:
- 1. Select "OpenAI Compatible (Custom)" as the AI Provider
- 2. Enter your custom API endpoint (e.g., `https://api.example.com/v1`)
- 3. Provide your API key
- 4. Choose a model from the dropdown or select "Custom Model" to enter a specific model name
+ The extension supports providers that expose an **OpenAI Chat Completions-compatible HTTP API** (OpenAI-style paths and request/response format, using `Authorization: Bearer <api-key>`), by selecting "OpenAI Compatible (Custom)" as the AI Provider. This allows you to use services like:
+ - Local models (Ollama, LM Studio, etc.) when configured to expose an OpenAI-compatible endpoint
+ - Cloud providers (Azure OpenAI, Google Vertex AI, etc.) **via an OpenAI-compatible/translation layer or proxy** that normalizes their endpoints and auth to the OpenAI Chat Completions format
+ - Other OpenAI-compatible services (Groq, Together AI, etc.) that implement the Chat Completions API
+ When using a custom provider, you'll need to:
+ 1. Select "OpenAI Compatible (Custom)" as the AI Provider
+ 2. Enter your custom API endpoint (e.g., `https://api.example.com/v1`) that implements the OpenAI Chat Completions-compatible interface
+ 3. Provide your API key (sent as a Bearer token)
+ 4. Choose a model from the dropdown or select "Custom Model" to enter a specific model name
+ > Note: Azure OpenAI and Google Vertex AI do **not** use the OpenAI Chat Completions URL structure or authentication scheme by default. To use them here, you must either:
+ > - Use a compatibility/proxy layer that exposes an OpenAI Chat Completions-compatible endpoint, or
+ > - Use a provider-managed endpoint explicitly documented as OpenAI Chat Completions-compatible.
```
Adds support for any OpenAI-compatible API to the AI Writer operation, allowing users to use local models, cloud providers, and any service that implements the OpenAI Chat Completions API format.
To test:
- Endpoint: `https://router.huggingface.co/v1`
- Model: `Qwen/Qwen3-Next-80B-A3B-Instruct:novita`
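A minimal smoke test of that configuration could look like the sketch below (assumes Node 18+ for built-in `fetch` and an `HF_TOKEN` environment variable holding a Hugging Face API key):

```ts
// Sketch: call the Hugging Face router directly with the endpoint/model above
// to confirm it speaks the OpenAI Chat Completions format with Bearer auth.
const response = await fetch('https://router.huggingface.co/v1/chat/completions', {
	method: 'POST',
	headers: {
		Authorization: `Bearer ${process.env.HF_TOKEN}`,
		'Content-Type': 'application/json',
	},
	body: JSON.stringify({
		model: 'Qwen/Qwen3-Next-80B-A3B-Instruct:novita',
		messages: [{ role: 'user', content: 'Write one sentence about Directus.' }],
	}),
});

const data = await response.json();
console.log(data.choices?.[0]?.message?.content);
```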