From the course: OpenAI API for Python Developers

Solution: Call functions and generate extended responses

(bright upbeat music) - [Instructor] So now we're going to look at the solution for this challenge, which is to connect to an API. We've already made this function available, get_current_weather, which calls the openweathermap.org API, and we've registered it right here in this object, available_functions. Let's look at the different steps, because what we want is to first check whether function calling is necessary, and then call the function. What happens here is that we iterate through the tool_calls, if they exist, in order to get the function name and also the function arguments, which are required to make the API call. So we need the information of the location and also the temperature unit. And you'll see that there is also the possibility to specify which parameters are required; so for example, we can also add units like this. Alright, so let's go back to the utils file here to check out how to perform the API request. Basically, we're going to run this function, get_current_weather, and pass the latitude and longitude in order to make the API call. But because that's not provided automatically, we first run this other function, geo_code, using the information of the location from the user query. After that we get all this information, and let's look at the documentation to understand how to make this API call. The API response is a big object, and we need to access the weather information, then the description, and also this key, the main object, and then the nested temp value, which is in Kelvin units. So we're going to need to convert that into either Celsius or Fahrenheit. And this is exactly what we do here: get the current temperature and also the description.
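The mapping step described above can be sketched as follows. This is a minimal sketch assuming the documented shape of the openweathermap.org current-weather response; the sample payload below is illustrative, not a live API result. The temp field arrives in Kelvin, so we convert it before handing it to the chat model.

```python
# Hedged sketch: map a raw openweathermap.org-style payload to the small
# object the chat model will read. Field names follow the API's documented
# shape; the sample payload is illustrative, not live data.

def kelvin_to_celsius(kelvin):
    return round(kelvin - 273.15, 1)

def kelvin_to_fahrenheit(kelvin):
    return round((kelvin - 273.15) * 9 / 5 + 32, 1)

def map_weather_response(payload, unit="celsius"):
    """Pull out just the description and temperature the model needs."""
    description = payload["weather"][0]["description"]
    temp_kelvin = payload["main"]["temp"]
    if unit == "fahrenheit":
        temperature = kelvin_to_fahrenheit(temp_kelvin)
    else:
        temperature = kelvin_to_celsius(temp_kelvin)
    return {"description": description, "temperature": temperature, "unit": unit}

# Shaped like the openweathermap response: weather -> description, main -> temp
sample = {"weather": [{"description": "broken clouds"}],
          "main": {"temp": 281.15}}
print(map_weather_response(sample))  # temperature: 8.0 (celsius)
```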
So whatever response we get from the API, we need to map it to this JSON object so we can use it as an input and send it to the chat model to generate an extended response. Let's go back here to implement that. That's going to be step four, where we make sure that we actually get the function response. But first, a quick demonstration: python main.py. I'm going to ask, "What is the weather in," this time, "Santa Monica." Let's try this. Okay, so it looks like one important piece of information is missing, which is the API key. And the reason is that we're using this library, load_dotenv, to load environment variables. So we're going to do the same here: from dotenv, import and call load_dotenv, to make sure we load the environment variables and can access the API key. Alright, so let's try that again and ask, "What is the weather like in Santa Monica?" There we go. This time it worked, and you can see that we got this object back. This is exactly the function response that we're going to use as input. So next, let's go back to main.py. We take the function response and append it to the list of messages, which allows the chat model to return a structured response the next time around. What we want is something that reads like natural language, such as "The temperature in Santa Monica is 16 degrees." So let's generate an extended response. If we go to the documentation for function calling, you're going to find this under step 4 right here. So that's going to be this last step.
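The "append the function response to the messages" step can be sketched as a "tool" message. The tool_call_id below is a made-up placeholder; in the exercise's main.py it comes from the model's tool_call object.

```python
import json

# In main.py the API key is loaded first via:
#   from dotenv import load_dotenv; load_dotenv()
# Sketch of step four: append the function's JSON result to the message list
# as a "tool" message so the model can use it on the next turn.
messages = [
    {"role": "user", "content": "What is the weather like in Santa Monica?"},
]

function_response = json.dumps(
    {"location": "Santa Monica", "temperature": 16, "unit": "celsius"}
)

messages.append({
    "role": "tool",
    "tool_call_id": "call_abc123",  # placeholder id, not a real call id
    "content": function_response,
})
```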
We're going to use this one to generate a second response. So after we've been able to make the API call, we generate a second, extended response, and we're going to print it. So that's going to be print, and let's just grab it from line 101; it's coming from the second response, and we must also remember the path: first choices, then message, then content. Okay, so let's try that again. I think it's going to work directly. This time we're going to ask, "What is the weather in Paris?" Let's try that. First we get the function response, and I think that it prints. So what I'm going to do is restart the app and ask the same question again: "What is the weather like in Paris?" Okay, let's try that. Okay, so that is printing here: first the function response, and then you can read the extended response from the bot, which is that "Currently in Paris the weather is eight degrees with broken clouds." So what happened for the second response? When we generated the completion, the assistant could include the information it got after making the API call. It included this JSON object as part of the input, so the result can be read as natural language. And if we check the current weather in Paris, we can see that it is exactly 8 degrees. Excellent. This last example helped us understand how to use the Chat Completions API in combination with function calling to extend the capabilities of the GPT models. It is also important to note that, under the hood, functions are also included in the count of tokens, so they will be billed as input tokens. That's important to keep in mind about function calling.
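The second-response step might look like the sketch below, assuming the openai>=1.0 Python SDK. The ids, arguments, and model name are illustrative; a real run needs OPENAI_API_KEY in the environment, as loaded with load_dotenv earlier.

```python
import json
import os

# After step four, the message list might look like this (values illustrative):
messages = [
    {"role": "user", "content": "What is the weather like in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_abc123",
            "type": "function",
            "function": {"name": "get_current_weather",
                         "arguments": '{"location": "Paris", "unit": "celsius"}'},
        }],
    },
    {
        "role": "tool",
        "tool_call_id": "call_abc123",
        "content": json.dumps({"temperature": 8, "unit": "celsius",
                               "description": "broken clouds"}),
    },
]

def generate_extended_response(messages):
    """Request the second completion and return its natural-language text."""
    from openai import OpenAI  # needs OPENAI_API_KEY in the environment
    client = OpenAI()
    second_response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=messages,
    )
    return second_response.choices[0].message.content

if os.environ.get("OPENAI_API_KEY"):
    # e.g. "Currently in Paris the weather is 8 degrees with broken clouds."
    print(generate_extended_response(messages))
```

Note that the assistant message carrying the tool_calls must precede the matching "tool" message, since the API pairs them by tool_call_id.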
And there are plenty of other interesting use cases, including performing multiple calls at the same time. Let's say, for example, that you want to know the weather in two locations at once: in New York, where you live, and in London, where you're traveling for a vacation. Then you'd execute multiple calls in one turn. And there are plenty of other options to explore and experiment with when integrating function calling. Let's say, for example, that you want to connect your assistant to a backend API, to train your assistant with custom knowledge about your products and services, to allow the chat model to generate personalized responses to your customers and provide an enhanced user experience. You can find more examples in the OpenAI cookbook.
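The two-locations scenario relies on parallel function calling: one model turn can return several tool_calls, and we run each one. A minimal sketch, where the tool_calls list and the stand-in weather function are illustrative rather than live API data:

```python
import json

# Stand-in for the real openweathermap.org call in utils.py
def get_current_weather(location, unit="celsius"):
    fake_temps = {"New York": 5, "London": 9}  # illustrative values
    return json.dumps({"location": location,
                       "temperature": fake_temps.get(location, 0),
                       "unit": unit})

available_functions = {"get_current_weather": get_current_weather}

# Shaped like the tool_calls a single completion might return for
# "What is the weather in New York and in London?"
tool_calls = [
    {"id": "call_1", "function": {"name": "get_current_weather",
                                  "arguments": '{"location": "New York"}'}},
    {"id": "call_2", "function": {"name": "get_current_weather",
                                  "arguments": '{"location": "London"}'}},
]

# One "tool" message per call, each tied back to its tool_call_id
tool_messages = []
for call in tool_calls:
    function_to_call = available_functions[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    tool_messages.append({"role": "tool",
                          "tool_call_id": call["id"],
                          "content": function_to_call(**args)})
```

All of these tool messages are then appended to the conversation before requesting the second completion, exactly as in the single-call case.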
