From the course: Build with AI: Agentic Applications with LlamaIndex and MCP

Building a basic agent

Let's start with a very simple agent. We'll be building a tool-calling agent that has access to one tool: an e-commerce tool that can search through clothing items. To start, we've already provided you with a helper function that creates a queryable in-memory vector store populated with the names, descriptions, and prices of clothing items. Our aim will be to create a shopping assistant agent that can access our e-commerce search tool when required.

First, we'll define a tool. Let's start by defining a Python function called search_ecommerce_dataset. We'll make this function expect a query, which is a string, and return a string, which will be the response to the query. Later on, we'll provide this Python function as a tool to an agent class from LlamaIndex called FunctionAgent. When we do so, FunctionAgent will use the description of the function as the tool description. This is pretty important, because this description is what allows the LLM to decide when and if it should run the tool. So here, let's add a description that says: useful to search clothing items and prices in an e-commerce dataset. We can use our predefined query engine, which searches through our vector store of clothing items, to fill in this function.

Now that we have a tool, the only thing left to do is to actually create our agent and provide this tool to it. We can start by deciding what model we want to use. Here I'll be using an OpenAI model, but you can pick anything else from the model providers available with LlamaIndex. You can even use open-source models via Hugging Face or host them locally with Ollama. Next, our agent will need a system prompt. You can think of this prompt as the instruction that sets the scene for the LLM. It allows us to tell the LLM what its role is, and it also gives us a way to define some instructions on how it should respond to users.
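The tool described above might be sketched as follows. Note that in the exercise, the query engine comes from the provided helper function; here it is stubbed with a hypothetical placeholder so the snippet stands on its own:

```python
# Hypothetical stand-in for the query engine that the course's helper
# function builds over the in-memory vector store of clothing items.
class _StubQueryEngine:
    def query(self, query: str) -> str:
        return f"Search results for: {query}"

query_engine = _StubQueryEngine()

def search_ecommerce_dataset(query: str) -> str:
    """Useful to search clothing items and prices in an e-commerce dataset."""
    # The docstring above doubles as the tool description the LLM reads
    # when deciding whether to call this tool.
    return str(query_engine.query(query))
```

In the actual exercise, the function body simply forwards the query to the helper-built query engine and returns its response as a string.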
In this case, we'll be telling our LLM that it can answer simple questions or search through clothing items. Finally, we create our FunctionAgent. We provide a list of tools (in this case just one), the system prompt, and the LLM.

I'm defining a simple chat loop here, which will allow us to chat with our agent iteratively; this way, we'll be able to continue a conversation with our agent until we exit the loop. We'll ask for user input in the terminal, and we'll also make it so that if the user types "exit", they can exit the loop. While they are chatting, we can query the agent. Finally, we can print the response out.

Try chatting with your agent now, asking for various clothing suggestions. For example, let's ask what skirt might be appropriate for a wedding. I'll run this Python script, and when I'm asked for a query, I'll ask: what are some good skirts for a wedding? We get a recommendation here for the daisy chain swing skirt. But as a final step, try asking a follow-up question. For example, I'll ask: what is the price? You'll notice that the agent is unable to respond to this as you'd expect. You can fix this by creating a chat memory buffer and providing it to the agent. We'll do exactly that. Let's try this again. Now we're able to continue the conversation where we left off.

Coming up, we'll see how we can build the exact same agent, but instead of using FunctionAgent, we will use an agent workflow. That way, we'll have control over all the decision steps instead of letting the LLM decide when to search the database at each iteration.
