
fix: exclude thinking/reasoning chunks from chat title generation #12366

Open
brone1323 wants to merge 1 commit into continuedev:main from brone1323:fix/chat-title-includes-thinking-content

Conversation

brone1323 commented May 11, 2026

Summary

Fixes #12338 — chat titles were generated from thinking/reasoning content instead of the actual assistant response when using models with extended thinking (Qwen3, DeepSeek-R1, Anthropic extended thinking, etc.).

Root cause: BaseLLM.chat() in core/llm/index.ts streams all chunks and accumulates them with renderChatMessage(). This includes role: "thinking" chunks, so the returned completion string contained both the internal reasoning text and the actual response. ChatDescriber.describe() then passed this combined string to the title model, which would produce titles like "Here is Thinking Process" from the reasoning preamble.

Fix: Skip role: "thinking" chunks in chat() so callers receive only the visible assistant response. This is a one-liner change.
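The one-liner is easiest to see against a simplified version of the accumulation loop. The sketch below is illustrative only, not the actual core/llm/index.ts code; the chunk type, the stream shape, and the plain string concatenation (standing in for renderChatMessage()) are assumptions:

```typescript
// Simplified sketch of the BaseLLM.chat() accumulation loop.
// Hypothetical types and names; the real implementation differs.
type ChatChunk = { role: "assistant" | "thinking"; content: string };

// Stand-in for a model stream that emits reasoning before its reply.
async function* fakeStream(): AsyncGenerator<ChatChunk> {
  yield { role: "thinking", content: "Let me reason about this... " };
  yield { role: "assistant", content: "Here is the answer." };
}

async function chat(stream: AsyncGenerator<ChatChunk>): Promise<string> {
  let completion = "";
  for await (const chunk of stream) {
    // The fix: thinking chunks are internal reasoning and must not
    // leak into the completion string returned to callers.
    if (chunk.role === "thinking") {
      continue;
    }
    completion += chunk.content;
  }
  return completion;
}
```

With the skip in place, non-streaming callers such as ChatDescriber only ever see the visible assistant reply.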

Test: Added a test case in chatDescriber.test.ts that verifies thinking chunks are excluded when ChatDescriber.describe() calls model.chat().
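A sketch of what the new test asserts, with a stubbed stream standing in for a real model. All names here are illustrative, not the actual chatDescriber.test.ts code:

```typescript
// Illustrative check: a stub model streams a thinking chunk followed by
// the real reply, and the title prompt must only ever see the reply.
type Chunk = { role: "assistant" | "thinking"; content: string };

async function* stubModelStream(): AsyncGenerator<Chunk> {
  yield { role: "thinking", content: "The user wants a rename script..." };
  yield { role: "assistant", content: "Sure, here is a rename script." };
}

// Mirrors the fixed chat() behavior: accumulate only visible content.
async function collectVisible(stream: AsyncGenerator<Chunk>): Promise<string> {
  let out = "";
  for await (const c of stream) {
    if (c.role !== "thinking") {
      out += c.content;
    }
  }
  return out;
}
```

The real test exercises this through ChatDescriber.describe() calling model.chat(), rather than a standalone helper.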

Changes

  • core/llm/index.ts — skip role: "thinking" messages in BaseLLM.chat()
  • core/util/chatDescriber.test.ts — new test for thinking-chunk exclusion

Test plan

  • Run core/util/chatDescriber.test.ts — new test should pass
  • In VS Code with a thinking-mode model (e.g. Qwen3 via Ollama), verify chat titles are based on the actual response, not reasoning text

Summary by cubic

Stop including internal “thinking/reasoning” chunks in chat title generation. Titles now come from the visible assistant reply, fixing cases with models that emit reasoning (e.g., Qwen3, DeepSeek-R1, Anthropic extended thinking).

  • Bug Fixes
    • Skip role: "thinking" messages in BaseLLM.chat() when building the completion string.
    • Add a test in core/util/chatDescriber.test.ts to ensure ChatDescriber.describe() ignores thinking content.

Written for commit 618b592.

The BaseLLM.chat() method accumulated output from all streamed chunks
including role:"thinking" messages. This caused ChatDescriber to send
thinking/reasoning text to the title-generation model, producing titles
like "Here is Thinking Process" instead of titles based on the actual
assistant response.

Fix: skip role:"thinking" chunks when building the completion string
inside chat(), so callers that use this non-streaming helper (such as
ChatDescriber) only see the visible assistant response.

Also adds a test case covering this scenario.

Fixes continuedev#12338
brone1323 requested a review from a team as a code owner May 11, 2026 18:45
brone1323 requested review from sestinj and removed request for a team May 11, 2026 18:45
dosubot (Bot) added the size:XS label (This PR changes 0-9 lines, ignoring generated files) May 11, 2026
github-actions (Contributor)
Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below:

I have read the CLA Document and I hereby sign the CLA

You can retrigger this bot by commenting "recheck" in this Pull Request. Posted by the CLA Assistant Lite bot.

cubic-dev-ai (Bot) left a comment

No issues found across 2 files


Development

Successfully merging this pull request may close: Incorrect chat titles generated for reasoning/thinking mode chats (#12338)