
[AI SDK v5] I don't understand submit-tool-result follow-up requests #7449

@arielweinberger

Description


I'm integrating with the AI SDK v5. Sometimes my chat response finishes normally with the following stream:

data: {"type":"tool-output-available","toolCallId":"toolu_017dh84jSpAj4jLHKHUxh6hN","output":"Thanks for your summary!"}
data: {"type":"finish-step"}
data: {"type":"finish"}
data: [DONE]

Then a follow-up request of type submit-tool-result is sent for tool toolu_017dh84jSpAj4jLHKHUxh6hN. I could not find anything helpful in the v5 SDK docs (or maybe I missed it?) about what this call does.
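
For reference, this is how I'm observing that follow-up on the server (I'm on a Next.js route handler; the path and the field names `trigger` and `messages` are just what I see in the request body, so I may be misreading the intended shape):

```ts
// app/api/chat/route.ts -- trimmed down to the logging I added while debugging.
export async function POST(req: Request) {
  const body = await req.json();

  // First request: the normal user-message submission.
  // Follow-up request: body.trigger shows up as 'submit-tool-result', even
  // though the stream above already contained the tool output.
  console.log('trigger:', body.trigger);
  console.log('last message:', JSON.stringify(body.messages.at(-1), null, 2));

  // (the real handler continues with streamText; see the fuller sketch below)
  return new Response(null, { status: 204 });
}
```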

Should I let the generation run all the way through again? If I do, I sometimes get an error along the lines of "found a tool_use call without a tool_result" (Anthropic Claude). Why is this follow-up needed when the back-end has an execute function and already knows the result of the tool call (as seen in the stream)?
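
For context, my route handler looks roughly like this. The tool name acknowledgeSummary and the model id are stand-ins for my real setup, and stopWhen is how I understood multi-step continuation to be configured in v5, so maybe that's where I'm going wrong:

```ts
// app/api/chat/route.ts -- simplified sketch, not my exact code.
import { streamText, convertToModelMessages, tool, stepCountIs } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic('claude-sonnet-4-20250514'),
    messages: convertToModelMessages(messages),
    tools: {
      // Stand-in for my real tool; it has a server-side execute function,
      // so the result ('Thanks for your summary!') is produced right here
      // and shows up in the stream as the 'tool-output-available' part above.
      acknowledgeSummary: tool({
        description: 'Acknowledge the summary provided by the user.',
        inputSchema: z.object({ summary: z.string() }),
        execute: async () => 'Thanks for your summary!',
      }),
    },
    // My understanding is that this lets the model keep going after tool
    // results within the *same* request, which is why the extra
    // submit-tool-result round trip surprises me.
    stopWhen: stepCountIs(5),
  });

  return result.toUIMessageStreamResponse();
}
```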

It would make perfect sense to me in human-in-the-loop scenarios, where a tool call triggers something in the UI and the result is only reported back after the user confirms. But that's not the case here.
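
In that human-in-the-loop case I'd expect something like the sketch below on the client: a tool with no execute function, where a result only exists after the user clicks confirm (confirmDeletion is a made-up tool name, and I may have the exact addToolResult options slightly wrong):

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export function Chat() {
  const { messages, addToolResult } = useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.parts.map((part, i) => {
            // A client-side tool that has no server execute function and is
            // waiting for user input before any result exists at all.
            if (
              part.type === 'tool-confirmDeletion' &&
              part.state === 'input-available'
            ) {
              return (
                <button
                  key={i}
                  onClick={() =>
                    addToolResult({
                      tool: 'confirmDeletion',
                      toolCallId: part.toolCallId,
                      output: 'User confirmed the deletion.',
                    })
                  }
                >
                  Confirm
                </button>
              );
            }
            return null;
          })}
        </div>
      ))}
    </div>
  );
}
```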

Maybe it's just a concept I do not properly understand, so I'd appreciate some help here.

AI SDK Version

  • ai v5 beta
