From the course: OpenAI API and MCP Development
Generating extended responses
So now let's see how to generate extended responses using the function calling feature. I'm going to start with a quick demonstration of this app. What I want is to display the current weather data, so I'm going to ask: what is the weather in Paris? All right, now we've got an extended response, and we can read that the current weather in Paris is 10 degrees Celsius with a few clouds. On top of that, we've got an image that illustrates the current weather in Paris. So that's great, and it's quite different from the previous demonstration, where we could only read: I'm sorry, I am just an AI assistant and I don't have access to real-time information. So what happens here? I'm going to take you to the code editor to look at the final version of this project. What we've done is add the function calling feature on top of the other features, so let's have a look; I'm going to walk you through the different steps. Step one, line 38: we ask the language model a simple question, like what's the weather like? Step two: the language model detects whether it needs to call a function, and the purpose of this function is to interface with an external system. In this example, that's a public API. Let's check out this file. Actually, not this one, these are other helper functions; I want to take you here instead, to the utility functions, where you have all the functions that allow you to interact with the public API, and for that you need an API key. One quick note again: you've got all the instructions in the readme file for setting up your secret keys, which is important for interacting with external APIs. All right, let's go back. I'm going to continue reviewing the final version of this challenge. After that, what we want is to call the function.
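The steps above can be sketched in plain Python. This is a minimal, hypothetical sketch of the function calling flow, not the project's actual code: the function name, its parameters, and the canned return data are all assumptions, and the round trip to the language model is simulated so the flow can be shown without a network call or an API key. The tool schema follows the JSON Schema format that OpenAI's function calling feature expects.

```python
import json

# Hypothetical tool schema (step 1/2): the model is given a JSON Schema
# description of each function it may decide to call.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city via a public weather API.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
            },
            "required": ["city"],
        },
    },
}

def get_current_weather(city: str) -> dict:
    """Stand-in for the project's utility function that calls the public
    weather API (which needs an API key). Returns canned data here so the
    flow can be demonstrated offline."""
    return {"city": city, "temp_c": 10, "description": "few clouds"}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Step 3: run the function the model asked for, with the arguments
    the model produced, and return the result as a string, ready to be
    appended to the conversation."""
    functions = {"get_current_weather": get_current_weather}
    args = json.loads(arguments_json)
    return json.dumps(functions[name](**args))

# Simulated tool call, shaped like the arguments the model would return
# for the question "What is the weather in Paris?".
result = dispatch_tool_call("get_current_weather", '{"city": "Paris"}')
print(result)  # the canned Paris weather as a JSON string
```

In the real app the model's `tool_calls` response supplies the function name and the arguments string; everything else in the flow looks like this dispatcher.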
We want to call the function to get information from the external system, the public API, and then incorporate that data into the next language model response. So this is what we do here: we append the result, line 109, after calling the function with its arguments. This is the function response, which is in string format. Then we can take the function arguments and use them for different purposes. Let's go back to the bottom, because on top of that I have added other features and functionality to this app. First, the app can generate an image that illustrates the weather data. You can also enable vocal features, which happens at line 157. The extra benefit here is that you can enable a few settings to change the behavior of your application. So let's go back and try this app again. Here you're going to see this little control; you can click on it, and a settings sidebar appears. From here, you can select a different language model to change the quality of the responses, and from this dropdown list you can also select DALL·E 3 as the model used to generate images. And the new thing here, something we have already worked with and set up previously, is the vocal feature: you can convert written text into speech using the text-to-speech audio API, also provided by OpenAI. You can select a female voice or a male voice; let's select male for the next demo. This time I'm going to ask: what is the weather in London? We're going to wait. All right, here is the result: the current weather in London is seven degrees Celsius with broken clouds, and on top of that, you can hear the male voice reading out this weather data. All right, so this is great.
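The sidebar settings (language model, image model, voice) can be sketched as a small settings object. This is a hypothetical sketch, not the project's code: the class name, the default model names, and the mapping from the app's "female"/"male" labels to OpenAI's named text-to-speech voices (such as "nova" and "onyx") are all assumptions about how such a sidebar might be wired up.

```python
from dataclasses import dataclass

# OpenAI's text-to-speech API offers a fixed set of named voices; which
# ones the app maps to its "female" / "male" labels is an assumption here.
TTS_VOICES = {"female": "nova", "male": "onyx"}

@dataclass
class AppSettings:
    """Hypothetical settings object mirroring the sidebar controls:
    chat model, image model, and voice for text-to-speech."""
    chat_model: str = "gpt-4o-mini"   # assumed default
    image_model: str = "dall-e-3"
    voice_label: str = "female"

    @property
    def tts_voice(self) -> str:
        """Resolve the sidebar label to an API voice name."""
        if self.voice_label not in TTS_VOICES:
            raise ValueError(f"voice must be one of {sorted(TTS_VOICES)}")
        return TTS_VOICES[self.voice_label]

# As in the London demo: the male voice is selected in the sidebar.
settings = AppSettings(voice_label="male")
print(settings.tts_voice)
```

Keeping the settings in one object like this makes it easy to pass the user's sidebar choices into the chat, image, and audio calls without scattering globals through the app.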
And on top of that, let's go back to the project. There is this media folder where all the images generated with AI are saved and stored, in this media directory. So this is up to you: feel free to develop these projects further. This is a live project, so you can use it as a personal application to keep yourself informed about the weather in your city and current location. And if you are planning a trip, you can also choose to get the weather forecast over a longer period of time, for example 14 days. The great thing with this weather API is that it provides different endpoints to access weather data for different time frames, such as 48 hours, 8 days, or even longer. So you can always check out the documentation here to update and upgrade your application to your liking and based on your needs.
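The 48-hour and 8-day timeframes mentioned match the shape of OpenWeatherMap's One Call API, where a single endpoint returns current, hourly, and daily data and an `exclude` parameter trims the timeframes you don't need. Assuming that service (the transcript doesn't name the provider), here is a minimal sketch of building such a request URL; the coordinates and the API key placeholder are illustrative only.

```python
from urllib.parse import urlencode

# Assumes an OpenWeatherMap-style One Call endpoint; adjust the base URL
# if your weather provider differs.
BASE_URL = "https://api.openweathermap.org/data/3.0/onecall"

def build_forecast_url(lat: float, lon: float, api_key: str,
                       exclude: tuple = ()) -> str:
    """Build a forecast request URL for the given coordinates, optionally
    excluding timeframes (e.g. "minutely", "hourly") you don't need."""
    params = {"lat": lat, "lon": lon, "units": "metric", "appid": api_key}
    if exclude:
        params["exclude"] = ",".join(exclude)
    return f"{BASE_URL}?{urlencode(params)}"

# Daily forecast only (up to 8 days), skipping minutely and hourly data:
url = build_forecast_url(48.85, 2.35, "YOUR_API_KEY",
                         exclude=("minutely", "hourly"))
print(url)
```

Fetching the URL and feeding the JSON response back through the function calling flow is then the same pattern as the current-weather case, just with a longer forecast window.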