
fix(azure-openai): restore tuple return signature in _handle_structured_response#1479

Open
sparkquest-claude-bot wants to merge 1 commit into
getzep:mainfrom
sparkquest-claude-bot:fix/azure-openai-tuple-return

Conversation

@sparkquest-claude-bot

Summary

`AzureOpenAILLMClient._handle_structured_response` overrides the base class method (`OpenAIBaseClient._handle_structured_response`) but returns a bare `dict[str, Any]` instead of the base's contract of `tuple[dict, int, int]` (`parsed_response, input_tokens, output_tokens`). Caller-side tuple unpacking in `OpenAIBaseClient._generate_response` (line 181) raises immediately, so any `add_episode` / `add_memory` call against an Azure OpenAI-backed Graphiti instance fails before the episode is ingested.

This PR restores the tuple signature on both response paths the subclass already supports.

Repro

```python
import os
from openai import AsyncAzureOpenAI
from graphiti_core import Graphiti
from graphiti_core.llm_client.azure_openai_client import AzureOpenAILLMClient
from graphiti_core.llm_client.config import LLMConfig

azure = AsyncAzureOpenAI(api_key=..., api_version=..., azure_endpoint=...)
client = AzureOpenAILLMClient(azure, LLMConfig(model="gpt-4o"))
g = Graphiti(uri=..., user=..., password=..., llm_client=client)
await g.add_episode(name="t", episode_body="hello", source=...)
# TypeError: cannot unpack non-iterable dict object
```

Fix

Both response shapes the Azure subclass already handles now return the tuple:

  • `ParsedChatCompletion` (from `beta.chat.completions.parse`) — token usage on `prompt_tokens` / `completion_tokens`
  • Reasoning model response (from `responses.parse`) — token usage on `input_tokens` / `output_tokens`

Token counts default to 0 when the response carries no `usage`, restoring the base-class signature.
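The shape of the fix can be sketched as follows. This is a hedged stand-in, not the actual graphiti-core code: `SimpleNamespace` objects replace the real OpenAI response types, and the branch condition on `output_parsed` is an assumption for illustration.

```python
# Hedged sketch of the fix: both branches return (parsed, input_tokens,
# output_tokens) instead of a bare dict, defaulting token counts to 0
# when the response carries no usage. Not the real implementation.
from types import SimpleNamespace
from typing import Any


def handle_structured_response(response: Any) -> tuple[dict[str, Any], int, int]:
    usage = getattr(response, 'usage', None)
    if hasattr(response, 'output_parsed'):
        # Reasoning-model path (responses.parse): input_tokens / output_tokens
        parsed = response.output_parsed
        input_tokens = getattr(usage, 'input_tokens', 0) if usage else 0
        output_tokens = getattr(usage, 'output_tokens', 0) if usage else 0
    else:
        # ParsedChatCompletion path: prompt_tokens / completion_tokens
        parsed = response.choices[0].message.parsed
        input_tokens = getattr(usage, 'prompt_tokens', 0) if usage else 0
        output_tokens = getattr(usage, 'completion_tokens', 0) if usage else 0
    return parsed, input_tokens, output_tokens


# A ParsedChatCompletion-shaped response with usage attached:
chat = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(parsed={'name': 't'}))],
    usage=SimpleNamespace(prompt_tokens=10, completion_tokens=5),
)
print(handle_structured_response(chat))  # ({'name': 't'}, 10, 5)

# A reasoning-model response with no usage falls back to zeros:
reasoning = SimpleNamespace(output_parsed={'name': 't'}, usage=None)
print(handle_structured_response(reasoning))  # ({'name': 't'}, 0, 0)
```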

Test plan

  • `make format` passes (ruff re-sorted imports in the touched chunk; nothing else changed)
  • `make lint` passes (ruff + pyright, no new warnings)
  • Add a unit test asserting the tuple shape for both response paths (happy to add if maintainers want — I don't see existing unit-level coverage of this method)
  • Manual: against a real Azure OpenAI endpoint, `add_episode` succeeds where it previously raised `TypeError: cannot unpack non-iterable dict object`
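The unit test proposed above could look roughly like this. The handler here is a hypothetical stand-in with the fixed contract; a real test would exercise `AzureOpenAILLMClient._handle_structured_response` with mocked OpenAI response objects instead.

```python
# Hedged sketch of a pytest-style test asserting the tuple shape for both
# response paths. The stand-in handler mirrors the fixed contract; names
# and attribute layout are assumptions, not graphiti-core internals.
from types import SimpleNamespace


def _handle_structured_response(response):
    """Stand-in with the fixed (parsed, input_tokens, output_tokens) contract."""
    usage = getattr(response, 'usage', None)
    if hasattr(response, 'output_parsed'):
        return (response.output_parsed,
                getattr(usage, 'input_tokens', 0) if usage else 0,
                getattr(usage, 'output_tokens', 0) if usage else 0)
    return (response.choices[0].message.parsed,
            getattr(usage, 'prompt_tokens', 0) if usage else 0,
            getattr(usage, 'completion_tokens', 0) if usage else 0)


def test_parsed_chat_completion_returns_tuple():
    response = SimpleNamespace(
        choices=[SimpleNamespace(message=SimpleNamespace(parsed={'a': 1}))],
        usage=SimpleNamespace(prompt_tokens=7, completion_tokens=3),
    )
    assert _handle_structured_response(response) == ({'a': 1}, 7, 3)


def test_reasoning_response_defaults_tokens_to_zero():
    response = SimpleNamespace(output_parsed={'a': 1}, usage=None)
    assert _handle_structured_response(response) == ({'a': 1}, 0, 0)
```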

Notes

  • No behavior change beyond the return shape — the existing branch logic for `ParsedChatCompletion` vs reasoning-model responses is preserved.
  • This is consistent with the base class's `_handle_structured_response` and `_handle_json_response`, which both return the same `(dict, int, int)` shape.

Happy to follow up with a unit test if that helps. Thanks for graphiti-core!

fix(azure-openai): restore tuple return signature in _handle_structured_response

`AzureOpenAILLMClient._handle_structured_response` overrides the base class
method (`OpenAIBaseClient._handle_structured_response`) but returns a bare
`dict[str, Any]` instead of the base's contract `tuple[dict, int, int]`
(parsed_response, input_tokens, output_tokens). Caller-side tuple unpacking
in `_generate_response` raises immediately, so any `add_episode` /
`add_memory` against an Azure OpenAI-backed Graphiti instance fails before
the episode is ingested.

Restore the tuple signature alongside the existing handling of both response
shapes the subclass supports:
- ParsedChatCompletion (from beta.chat.completions.parse): token usage on
  prompt_tokens / completion_tokens
- Reasoning model response (from responses.parse): token usage on
  input_tokens / output_tokens

Token counts default to 0 when the response carries no `usage`.

Repro:
```python
import os
from openai import AsyncAzureOpenAI
from graphiti_core import Graphiti
from graphiti_core.llm_client.azure_openai_client import AzureOpenAILLMClient
from graphiti_core.llm_client.config import LLMConfig

azure = AsyncAzureOpenAI(...)
client = AzureOpenAILLMClient(azure, LLMConfig(model="gpt-4o"))
g = Graphiti(..., llm_client=client)
await g.add_episode(name="t", episode_body="hello", source=...)
# TypeError: cannot unpack non-iterable dict object
```