
Conversation


@homorunner homorunner commented Dec 25, 2025

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 👷 build
  • ⚡️ perf
  • ✅ test
  • 📝 docs
  • 🔨 chore

🔗 Related Issue

Fixes #10864.

🔀 Description of Change

In the chat.agent.AgentExecutor loop, message.content is sometimes empty for the last tool_call message by the time call_llm() is invoked. This PR adds a cache of tool_call results and uses it as a fallback to backfill the missing content.

🧪 How to Test

  • Tested locally
  • Added/updated tests
  • No tests needed

Summary by Sourcery

Bug Fixes:

  • Preserve and backfill tool_call message content using a per-call results cache to avoid empty content in the chat agent execution loop.

vercel bot commented Dec 25, 2025

@homorunner is attempting to deploy a commit to the LobeHub OSS Team on Vercel.

A member of the Team first needs to authorize it.

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Dec 25, 2025

sourcery-ai bot commented Dec 25, 2025


Reviewer's Guide

Adds caching of tool_call results in the chat AgentExecutor loop and backfills empty tool_call message content from this cache before invoking the LLM, preventing missing content issues.

Sequence diagram for tool_call result caching in AgentExecutor loop

sequenceDiagram
    participant AgentExecutor
    participant Tool
    participant ToolCallResultsCache
    participant LLM

    AgentExecutor->>Tool: executeTool(chatToolPayload)
    Tool-->>AgentExecutor: result
    AgentExecutor->>ToolCallResultsCache: set(chatToolPayload.id, result)
    AgentExecutor->>AgentExecutor: push tool_result event

    loop Later LLM invocation
        AgentExecutor->>AgentExecutor: build llmPayload.messages
        AgentExecutor->>AgentExecutor: filter messages to exclude assistantMessageId
        AgentExecutor->>ToolCallResultsCache: get(message.tool_call_id) for each message
        ToolCallResultsCache-->>AgentExecutor: storedContent or undefined
        AgentExecutor->>AgentExecutor: if content is empty and storedContent exists
        AgentExecutor->>AgentExecutor: set message.content = storedContent
        AgentExecutor->>LLM: call_llm(llmPayload)
        LLM-->>AgentExecutor: assistant response
    end

Flow diagram for backfilling empty tool_call message content before LLM call

flowchart TD
    A[Start AgentExecutor LLM step] --> B[Build llmPayload.messages]
    B --> C[Filter out message with assistantMessageId]
    C --> D[Iterate messages]
    D --> E{message.tool_call_id exists?}
    E -- No --> F[Keep message as is]
    E -- Yes --> G{message.content is empty?}
    G -- No --> F
    G -- Yes --> H[Get storedContent from toolCallResults using tool_call_id]
    H --> I{storedContent exists?}
    I -- No --> F
    I -- Yes --> J[Set message.content = storedContent]
    J --> F
    F --> K{More messages?}
    K -- Yes --> D
    K -- No --> L[Invoke call_llm with updated messages]
    L --> M[End]

File-Level Changes

Cache tool_call results and backfill empty tool_call message content before LLM calls (src/store/chat/agents/createAgentExecutors.ts):

  • Introduce a Map to cache tool call results keyed by tool_call_id within createAgentExecutors.
  • Before calling the LLM, iterate over messages and, for any message with a tool_call_id and empty content, populate the content from the cached result if present.
  • When handling tool execution, store each tool result into the cache using the tool payload id so it can be reused for subsequent LLM invocations.
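
The steps above can be sketched roughly as follows. This is a minimal illustration of the cache-and-backfill idea, not the actual lobe-chat code; the message shape and function names here are assumptions.

```typescript
// Simplified message shape for illustration; the real store uses richer types.
interface ChatMessage {
  role: string;
  content: string;
  tool_call_id?: string;
}

// Cache of tool results keyed by tool_call_id, scoped to one executor run.
const toolCallResults = new Map<string, string>();

// Called when a tool finishes: remember its result for later backfill.
function recordToolResult(toolCallId: string, result: string): void {
  toolCallResults.set(toolCallId, result);
}

// Before invoking the LLM, restore any tool message whose content was lost.
function backfillToolMessages(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((message) => {
    if (message.tool_call_id && !message.content) {
      const stored = toolCallResults.get(message.tool_call_id);
      if (stored !== undefined) return { ...message, content: stored };
    }
    return message;
  });
}
```

Messages that already have content, or that carry no tool_call_id, pass through unchanged; only the empty tool messages with a cached result are rewritten.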

Assessment against linked issues

  • #10864: Ensure MCP/tool results (tool_result content) are correctly preserved and included in the messages sent to the API, instead of being empty.


gru-agent bot commented Dec 25, 2025

TestGru Assignment

Summary

  • Commit db11a42 — ✅ Finished

History Assignment

Files

  • src/store/chat/agents/createAgentExecutors.ts — ❌ Failed (I failed to setup the environment.)

Tip

You can @gru-agent and leave your feedback. TestGru will make adjustments based on your input


@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Consider aligning the type of toolCallResults with the actual result you store (or explicitly stringify it) so the Map<string, string> typing doesn’t drift from the real runtime shape.
  • The toolCallResults map is currently unbounded within the agent executor lifecycle; if agents can run long or process many tool calls, it may be worth adding a cleanup strategy (e.g., delete entries after they’re used or when a run completes) to avoid unnecessary memory growth.
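One way to address both points is to stringify results on insert and evict entries once consumed. This is an illustrative sketch of the reviewer's suggestions, not the PR's code; the function names are assumptions.

```typescript
// Map<string, string> whose declared type now matches what is stored.
const toolCallResults = new Map<string, string>();

function recordToolResult(toolCallId: string, result: unknown): void {
  // Normalize to string so the map's declared value type holds at runtime.
  const text = typeof result === 'string' ? result : JSON.stringify(result);
  toolCallResults.set(toolCallId, text);
}

function takeToolResult(toolCallId: string): string | undefined {
  const stored = toolCallResults.get(toolCallId);
  // Evict after first use so long-running agents don't accumulate entries.
  if (stored !== undefined) toolCallResults.delete(toolCallId);
  return stored;
}
```

Evicting on first read keeps memory bounded, but trades away reuse: if the same empty message must be backfilled on a later LLM call, the entry is gone, so clearing the whole map at the end of a run may be the safer variant.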
