Streaming Output for Multi-Agent Systems #34474

@Serein-Lm

Description

Checked other resources

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-cli
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-perplexity
  • langchain-prompty
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Feature Description

When developing against the framework described at
https://docs.langchain.com/oss/python/langchain/multi-agent/router-knowledge-base,
I found that our internal call to the LangChain agent (we mainly use LangChain's agent-construction capabilities to create agents quickly) is implemented as:

```python
def query_notion(state: AgentInput) -> dict:
    """Query the Notion agent."""
    result = notion_agent.invoke({
        "messages": [{"role": "user", "content": state["query"]}]
    })
    return {"results": [{"source": "notion", "result": result["messages"][-1].content}]}
```

Because `query_notion` is just a plain function, the graph only emits `graph_node_start` / `graph_node_end` events for it instead of treating it as an agent node, so the tool calls happening inside the agent are inaccessible during streaming output.

In other words, LangGraph's `astream()` cannot surface the streaming events of the agent that is invoked inside the node function.
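The root cause can be illustrated without LangChain at all: the node function consumes the inner agent's stream itself (or calls `invoke()`, which does so internally) and returns only the final value, so the outer stream never sees the intermediate events. A minimal, framework-free sketch of this behavior (all names are hypothetical stand-ins):

```python
def inner_agent():
    """Stands in for notion_agent: yields intermediate events, then a final result."""
    yield {"event": "agent_tool_start", "tool": "search"}
    yield {"event": "agent_tool_end", "tool": "search"}
    yield {"event": "agent_end", "result": "answer from notion"}

def query_notion_node():
    """Stands in for query_notion: consumes the inner stream, returns only the final value."""
    final = None
    for event in inner_agent():
        final = event  # intermediate events are swallowed right here
    return {"results": [{"source": "notion", "result": final["result"]}]}

def graph_stream():
    """Stands in for graph.astream(): it can only observe node boundaries."""
    yield {"event": "graph_node_start", "node": "query_notion"}
    yield {"event": "graph_node_end", "node": "query_notion",
           "output": query_notion_node()}

events = [e["event"] for e in graph_stream()]
print(events)  # ['graph_node_start', 'graph_node_end'] — no agent_tool_* events
```

The `agent_tool_*` events exist, but they are produced and discarded entirely inside the node function, which is exactly the visibility gap described above.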

Use Case

I ran into this while developing AI applications with both a frontend and a backend. The main requirement is streaming output over SSE, so that each agent's activity is pushed to the frontend and the overall multi-agent invocation process is visible in real time. However, with the current streaming calls I cannot obtain the internal agent call process. I hope this can be supported, or that a solution can be provided for directly obtaining an SSE structure like:

  • graph_start
  • graph_node_start
  • graph_node_end
  • agent_start
  • agent_tool_start
  • agent_tool_end
  • agent_content
  • agent_end
  • synthesis_start
  • synthesis_content
  • graph_end
  • error
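On the transport side, each of the event types above maps naturally onto a standard `text/event-stream` message. A minimal serializer sketch (the function name and payload fields are hypothetical, not part of any LangChain API):

```python
import json

def to_sse(event_type: str, data: dict) -> str:
    """Frame one event as a text/event-stream message:
    an 'event:' line, a 'data:' line, terminated by a blank line."""
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

frame = to_sse("agent_tool_start", {"agent": "notion", "tool": "search"})
print(frame)
# event: agent_tool_start
# data: {"agent": "notion", "tool": "search"}
```

A browser `EventSource` (or any SSE client) dispatches on the `event:` field, so the frontend can subscribe to `agent_tool_start`, `agent_content`, etc. individually.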

Proposed Solution

No response

Alternatives Considered

  • Structural events from LangGraph: node start, node end, etc.
  • Streaming events from inside the agent: tool calls, content streams, etc.

Concatenating the two would form a complete event stream.

The problem is that in the current architecture the agent is invoked inside the node function, so LangGraph cannot see the details inside the agent.

A possible workaround: manually yield the agent's streaming events inside each `query_xxx` function, then pass them through the state, or concatenate them directly at the stream-processing layer.
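One way to realize this concatenation without framework support is to have each `query_xxx` wrapper re-emit the inner agent's events instead of swallowing them, and have an outer loop splice graph-level events around them. A framework-free sketch of the idea (all names hypothetical; in real LangGraph code the forwarding step would live inside the node, e.g. via a custom stream mode):

```python
def notion_agent_stream(query):
    """Stands in for the inner agent's event stream."""
    yield {"event": "agent_start", "agent": "notion"}
    yield {"event": "agent_tool_start", "tool": "search", "input": query}
    yield {"event": "agent_tool_end", "tool": "search"}
    yield {"event": "agent_end", "agent": "notion", "result": "answer"}

def query_notion_streaming(query):
    """Node wrapper that forwards every inner event instead of discarding it."""
    result = None
    for event in notion_agent_stream(query):
        if event["event"] == "agent_end":
            result = event["result"]
        yield event  # forward the inner event to the outer stream
    yield {"event": "graph_node_end", "node": "query_notion",
           "output": {"source": "notion", "result": result}}

def combined_stream(query):
    """Splice graph-level events around the forwarded agent-level events."""
    yield {"event": "graph_start"}
    yield {"event": "graph_node_start", "node": "query_notion"}
    yield from query_notion_streaming(query)
    yield {"event": "graph_end"}

print([e["event"] for e in combined_stream("q")])
# ['graph_start', 'graph_node_start', 'agent_start', 'agent_tool_start',
#  'agent_tool_end', 'agent_end', 'graph_node_end', 'graph_end']
```

This produces exactly the interleaved graph/agent event order sketched in the Use Case section; the remaining work is wiring each event into the SSE transport.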

Additional Context

The core requirement here is really observability: the event stream should be quick to integrate into a frontend and transmitted as streaming output. Ideally, some key LangSmith APIs could be used directly (they are already stream-shaped), so that the agent- and LLM-related streaming output produced during a LangGraph or LangChain run could be obtained via the API and wired into the frontend quickly.

Metadata

Assignees

No one assigned

    Labels

    feature request (Request for an enhancement / additional functionality), langchain (`langchain` package issues & PRs)

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests