fix(azure-openai): restore tuple return signature in `_handle_structured_response` (#1479)
Open
sparkquest-claude-bot wants to merge 1 commit into
Conversation
…ed_response

`AzureOpenAILLMClient._handle_structured_response` overrides the base class method (`OpenAIBaseClient._handle_structured_response`) but returns a bare `dict[str, Any]` instead of the base's contract `tuple[dict, int, int]` (parsed_response, input_tokens, output_tokens). Caller-side tuple unpacking in `_generate_response` raises immediately, so any `add_episode` / `add_memory` against an Azure OpenAI-backed Graphiti instance fails before the episode is ingested.

Restore the tuple signature alongside the existing handling of both response shapes the subclass supports:

- ParsedChatCompletion (from `beta.chat.completions.parse`): token usage on `prompt_tokens` / `completion_tokens`
- Reasoning model response (from `responses.parse`): token usage on `input_tokens` / `output_tokens`

Token counts default to 0 when the response carries no `usage`.

Repro:

```python
import os

from openai import AsyncAzureOpenAI

from graphiti_core import Graphiti
from graphiti_core.llm_client.azure_openai_client import AzureOpenAILLMClient
from graphiti_core.llm_client.config import LLMConfig

azure = AsyncAzureOpenAI(...)
client = AzureOpenAILLMClient(azure, LLMConfig(model="gpt-4o"))
g = Graphiti(..., llm_client=client)

await g.add_episode(name="t", episode_body="hello", source=...)
# TypeError: cannot unpack non-iterable dict object
```
Summary
`AzureOpenAILLMClient._handle_structured_response` overrides the base class method (`OpenAIBaseClient._handle_structured_response`) but returns a bare `dict[str, Any]` instead of the base's contract `tuple[dict, int, int]`: (parsed_response, input_tokens, output_tokens). Caller-side tuple unpacking in `OpenAIBaseClient._generate_response` (line 181) raises immediately, so any `add_episode` / `add_memory` against an Azure OpenAI-backed Graphiti instance fails before the episode is ingested. This PR restores the tuple signature on both response paths the subclass already supports.
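For illustration only, here is the contract mismatch in isolation (a standalone sketch with made-up function names and payloads, not graphiti-core code):

```python
from typing import Any


def returns_bare_dict() -> dict[str, Any]:
    # What the Azure override effectively returned: just the parsed payload.
    return {"entities": [], "edges": []}


def returns_tuple() -> tuple[dict[str, Any], int, int]:
    # The base-class contract: (parsed_response, input_tokens, output_tokens).
    return {"entities": [], "edges": []}, 12, 4


# _generate_response-style three-way unpacking only works against the tuple:
parsed, input_tokens, output_tokens = returns_tuple()
print(parsed, input_tokens, output_tokens)

try:
    parsed, input_tokens, output_tokens = returns_bare_dict()
except Exception:
    # Unpacking the bare dict into three names fails at the call site.
    print("unpacking failed")
```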
Repro

See the snippet in the commit message above; `add_episode` fails with `TypeError: cannot unpack non-iterable dict object`.
Fix
Both response shapes the Azure subclass already handles now return the tuple:

- `ParsedChatCompletion` (from `beta.chat.completions.parse`): token usage on `prompt_tokens` / `completion_tokens`
- Reasoning model response (from `responses.parse`): token usage on `input_tokens` / `output_tokens`

Token counts default to
0 when the response carries no `usage`. The base class signature is recovered.

Test plan
- `make format` passes (ruff re-sorted imports in the touched chunk; nothing else changed)
- `make lint` passes (ruff + pyright, no new warnings)
- `add_episode` succeeds where it previously raised `TypeError: cannot unpack non-iterable dict object`

Notes
`_handle_structured_response` and `_handle_json_response` both return the same `(dict, int, int)` shape.

Happy to follow up with a unit test if that helps. Thanks for graphiti-core!
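For reference, the two usage shapes described in the Fix section can be sketched like this (hypothetical helper names and stub objects; the real method lives on `AzureOpenAILLMClient` and parses actual openai SDK responses):

```python
from types import SimpleNamespace
from typing import Any


def token_counts(response: Any) -> tuple[int, int]:
    """Pull (input_tokens, output_tokens) from either usage shape, defaulting to 0."""
    usage = getattr(response, "usage", None)
    if usage is None:
        return 0, 0
    if hasattr(usage, "prompt_tokens"):
        # ParsedChatCompletion (beta.chat.completions.parse) shape.
        return usage.prompt_tokens, usage.completion_tokens
    # responses.parse (reasoning model) shape.
    return usage.input_tokens, usage.output_tokens


def handle_structured_response(
    parsed: dict[str, Any], response: Any
) -> tuple[dict[str, Any], int, int]:
    # Restore the base-class contract: (parsed_response, input_tokens, output_tokens).
    input_tokens, output_tokens = token_counts(response)
    return parsed, input_tokens, output_tokens


# Stub responses standing in for the SDK objects:
chat = SimpleNamespace(usage=SimpleNamespace(prompt_tokens=10, completion_tokens=5))
reasoning = SimpleNamespace(usage=SimpleNamespace(input_tokens=7, output_tokens=3))
bare = SimpleNamespace(usage=None)

print(handle_structured_response({"ok": True}, chat))       # ({'ok': True}, 10, 5)
print(handle_structured_response({"ok": True}, reasoning))  # ({'ok': True}, 7, 3)
print(handle_structured_response({"ok": True}, bare))       # ({'ok': True}, 0, 0)
```

Either shape unpacks cleanly at the call site, and a missing `usage` degrades to zero counts instead of raising.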