From the course: Hands-On AI: Building AI Agents with Model Context Protocol (MCP) and Agent2Agent (A2A)

Build the code-of-conduct MCP client


- [Instructor] Let's now build the code-of-conduct MCP assistant application with the MCP client. This code is available in the file code_of_conduct_client.py, under the chapter2 directory. Here, we first load the .env environment variables that are needed by the client SDK. We will be using langchain_mcp_adapters for building the client. First, we configure the MCP server connection for stdio transport. For this, we use the StdioServerParameters class. The command parameter indicates the OS command that needs to be run to execute the MCP server. The args parameter contains the list of arguments that will be passed to the OS command. This contains the file path for the MCP server. If additional parameters are needed, they should also be provided in the arguments.

Next, we set up the Azure OpenAI model as the LLM in this example. The endpoint, deployment, subscription key, and api_version are configured in the .env file and will be reused across all examples in this course. Please replace them with the configuration for your own account. We then create the AzureChatOpenAI instance using this configuration. To get the list of resources, we create an asynchronous function called fetch_resource_content. This is boilerplate code that can be used to get resources for any MCP server.

We now proceed to create the MCP client. We start by creating the stdio client with the server parameters configured before. Then, we create a client session with the server, and await session initialization. These are boilerplate steps needed to connect to the MCP server. During initialization, because we are using stdio transport, the client will attempt to start the MCP server with the configuration provided. Once the server starts successfully, the client completes the initialization. We can get the list of resources provided by an MCP server using the load_mcp_resources method, and then print the metadata for all the resources. This will show all the information we configured when creating the server.
The data attribute for each resource contains the actual content of the resource. This data can be of any type, including images and audio. The content of the first resource found is then returned to the calling code. Now, let's create the app. We begin by retrieving the resource content from the server. Resources in MCP can also take parameters from the client if needed, and adapt their behavior accordingly. Next, we create a user query, simulating user input. We ask the question, "What are the privacy policies of the company?" Then, we create a prompt that contains the user query and the retrieved content, asking the LLM to answer the query using the retrieved content. Finally, we call the model and print the output. Let's now proceed to run this app and see MCP in action in the next video.
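The application flow just described can be sketched as follows. The function names build_prompt and main are illustrative, and the model and fetch_resource_content helper are assumed to be set up as described earlier; the prompt wording is one possible phrasing, not the course's exact text.

```python
def build_prompt(content: str, query: str) -> str:
    """Ask the LLM to answer the user's query using only the retrieved content."""
    return (
        "Answer the user's question using only the content below.\n\n"
        f"Content:\n{content}\n\n"
        f"Question: {query}"
    )

async def main(model, fetch_resource_content):
    # Retrieve the code-of-conduct resource content from the MCP server.
    content = await fetch_resource_content()

    # Simulated user input.
    query = "What are the privacy policies of the company?"

    # Call the model with the combined prompt and print the answer.
    response = await model.ainvoke(build_prompt(content, query))
    print(response.content)
```

Keeping prompt construction in its own small function makes it easy to inspect or unit-test the exact text sent to the model, independent of the MCP connection.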
