From the course: Hands-On AI: Building AI Agents with Model Context Protocol (MCP) and Agent2Agent (A2A)

Build the HR policy MCP server

- [Instructor] In this video, we will build the code for the MCP HR policy server. The code for this chapter is available in the chapter three folder. The server code is in the file hr_policy_server.py, and the policies document is in the same folder. Let's now review the code. We begin by setting up the MCP server using FastMCP. Then we proceed to set up the vector store using LangChain's InMemoryVectorStore. We first load the PDF file using PyPDFLoader. We will use the all-MiniLM-L6-v2 embedding model to embed the documents. We will use the LangChain helper function from_documents to create the vector database, providing the policies document and the embedding model as parameters. This code runs only once, when the MCP server starts up. Next, we set up the MCP tool to query the vector database. We use the tool decorator on the function that provides the retriever functionality. Do note that the MCP server instance name is used as part of the decorator. The function query_policies takes the query as input, does a similarity search on the vector database, and returns the top three chunks. The results are then returned to the client. We then set up the prompt for the agent. Here we use the prompt decorator. This function takes as input the query from the user, and the prompt template has a placeholder for the query. The get_llm_prompt function returns the final prompt after replacing the placeholder with the actual query. We also have some test code that can be run to execute the server standalone and retrieve the results. Finally, we run the MCP server using stdio as the transport. This server is now ready with the retriever tool and prompt.