🐛 fix: Fix empty tool_call content in chat agent execution loop. #10969
base: next
Conversation
@homorunner is attempting to deploy a commit to the LobeHub OSS Team on Vercel. A member of the Team first needs to authorize it.
Reviewer's Guide

Adds caching of tool_call results in the chat AgentExecutor loop and backfills empty tool_call message content from this cache before invoking the LLM, preventing missing-content issues.

Sequence diagram for tool_call result caching in the AgentExecutor loop:

```mermaid
sequenceDiagram
    participant AgentExecutor
    participant Tool
    participant ToolCallResultsCache
    participant LLM
    AgentExecutor->>Tool: executeTool(chatToolPayload)
    Tool-->>AgentExecutor: result
    AgentExecutor->>ToolCallResultsCache: set(chatToolPayload.id, result)
    AgentExecutor->>AgentExecutor: push tool_result event
    loop Later LLM invocation
        AgentExecutor->>AgentExecutor: build llmPayload.messages
        AgentExecutor->>AgentExecutor: filter messages to exclude assistantMessageId
        AgentExecutor->>ToolCallResultsCache: get(message.tool_call_id) for each message
        ToolCallResultsCache-->>AgentExecutor: storedContent or undefined
        AgentExecutor->>AgentExecutor: if content is empty and storedContent exists
        AgentExecutor->>AgentExecutor: set message.content = storedContent
        AgentExecutor->>LLM: call_llm(llmPayload)
        LLM-->>AgentExecutor: assistant response
    end
```
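The caching step shown in the sequence diagram above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `ChatToolPayload`, `toolCallResults`, and `recordToolResult` are simplified stand-ins for whatever types and names the executor really uses.

```typescript
// Simplified shape of a tool-call payload; the real type in the codebase
// almost certainly carries more fields (arguments, type, etc.).
type ChatToolPayload = { id: string; name: string };

// Cache of tool_call results, keyed by the tool_call id.
const toolCallResults = new Map<string, string>();

// After each tool execution, store the result so it can be used later to
// backfill a tool message whose content arrived empty.
function recordToolResult(payload: ChatToolPayload, result: string): void {
  toolCallResults.set(payload.id, result);
}
```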
Flow diagram for backfilling empty tool_call message content before the LLM call:

```mermaid
flowchart TD
    A[Start AgentExecutor LLM step] --> B[Build llmPayload.messages]
    B --> C[Filter out message with assistantMessageId]
    C --> D[Iterate messages]
    D --> E{message.tool_call_id exists?}
    E -- No --> F[Keep message as is]
    E -- Yes --> G{message.content is empty?}
    G -- No --> F
    G -- Yes --> H[Get storedContent from toolCallResults using tool_call_id]
    H --> I{storedContent exists?}
    I -- No --> F
    I -- Yes --> J[Set message.content = storedContent]
    J --> F
    F --> K{More messages?}
    K -- Yes --> D
    K -- No --> L[Invoke call_llm with updated messages]
    L --> M[End]
```
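The backfill loop in the flowchart can be sketched in TypeScript as below. The `LlmMessage` shape and the function name are assumptions for illustration; only the decision logic (tool message, empty content, cached result present) follows the diagram.

```typescript
// Simplified message shape; real payload messages carry more fields.
interface LlmMessage {
  role: string;
  content: string;
  tool_call_id?: string;
}

// Backfill empty tool messages from the cache filled during tool execution.
function backfillToolMessages(
  messages: LlmMessage[],
  toolCallResults: Map<string, string>,
): LlmMessage[] {
  return messages.map((message) => {
    if (!message.tool_call_id) return message; // not a tool message
    if (message.content !== "") return message; // content already present
    const stored = toolCallResults.get(message.tool_call_id);
    // Only backfill when a cached result exists; otherwise keep the message as is.
    return stored !== undefined ? { ...message, content: stored } : message;
  });
}
```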
File-Level Changes
Assessment against linked issues
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hey - I've left some high level feedback:

- Consider aligning the type of `toolCallResults` with the actual `result` you store (or explicitly stringify it) so the `Map<string, string>` typing doesn't drift from the real runtime shape.
- The `toolCallResults` map is currently unbounded within the agent executor lifecycle; if agents can run long or process many tool calls, it may be worth adding a cleanup strategy (e.g., delete entries after they're used or when a run completes) to avoid unnecessary memory growth.
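One possible shape for the cleanup strategy suggested in the second point is a small wrapper that consumes entries on read and can be cleared when a run completes. This is a sketch of the reviewer's suggestion, not the PR's implementation; the class and method names are made up for illustration.

```typescript
// A bounded alternative to a bare Map: entries are deleted once consumed,
// and everything is dropped when the agent run finishes.
class ToolCallResultCache {
  private results = new Map<string, string>();

  set(id: string, result: string): void {
    this.results.set(id, result);
  }

  // Return the cached result and delete it, so memory stays bounded by
  // the number of in-flight tool calls rather than the whole run.
  take(id: string): string | undefined {
    const value = this.results.get(id);
    this.results.delete(id);
    return value;
  }

  // Drop everything when an agent run completes.
  clear(): void {
    this.results.clear();
  }
}
```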
💻 Change Type
🔗 Related Issue
Fixes #10864.
🔀 Description of Change
In the `chat.agent.AgentExecutor` loop, `message.content` is sometimes empty for the last tool_call message when `call_llm()` is invoked. This PR adds a cache of tool_call results as a fallback.

🧪 How to Test
Summary by Sourcery
Bug Fixes: