From the course: Build AI Agents and Chatbots with LangGraph

Debugging agents in Langchain

- [Instructor] When building any software application, debuggability is a key requirement and concern. How do we debug agents built with LangGraph? LangGraph provides a debug flag when creating agents. When this flag is set, it produces verbose output describing every step, along with all of its inputs and outputs. Let's set up the agent_graph again with the debug flag set to True, and execute the first prompt again. On invoking the graph, a verbose log is printed. Walking through the output, we start with checkpoint 1, which is the start of the graph; checkpoints indicate the state of the graph. Next, we see the input human message, and an ID is assigned to the prompt to track its state. The next step uses the LLM to come up with the execution plan, which shows what the LLM has decided to do. In this case, it decides to call the find_sum function. Metrics like token usage are printed as well. I recommend reading through each of the remaining steps to understand…
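To make the shape of that verbose output concrete, here is a minimal, self-contained sketch of the pattern the transcript describes: a run that records a checkpoint for each step, starting with the human message, then the LLM's plan (a call to a `find_sum` tool), then the tool result. This is a hypothetical illustration of the log structure, not LangGraph's actual debug implementation, and the token-usage numbers are made up for the example.

```python
def find_sum(a, b):
    """A simple tool the agent can call, as in the transcript's example."""
    return a + b

def run_with_debug(prompt):
    """Simulate a debug-mode run: return a log entry per step/checkpoint.

    Hypothetical sketch only — LangGraph's real debug output has its own
    schema; this just mirrors the step-by-step structure described above.
    """
    log = []
    # Checkpoint 1: start of the graph, with the input human message.
    log.append({"checkpoint": 1, "event": "input", "human_message": prompt})
    # Checkpoint 2: the LLM produces an execution plan. Here we hard-code
    # the decision the transcript describes: call the find_sum tool.
    plan = {"tool": "find_sum", "args": {"a": 2, "b": 3}}
    log.append({
        "checkpoint": 2,
        "event": "plan",
        "plan": plan,
        "token_usage": {"prompt_tokens": 12, "completion_tokens": 8},  # illustrative
    })
    # Checkpoint 3: execute the chosen tool and record its output.
    result = find_sum(**plan["args"])
    log.append({"checkpoint": 3, "event": "tool_result", "result": result})
    return log

for step in run_with_debug("What is 2 + 3?"):
    print(step)
```

Reading a real debug log the same way — checkpoint by checkpoint, inputs then decisions then outputs — is exactly the walkthrough the instructor performs.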
