From the course: OpenAI API and MCP Development
Challenge: Build a custom assistant and weather app
Modern chatbots powered by AI can hold a conversation with a user in natural language, closely simulating the way a human would interact. These bots can understand and respond to customer queries in a very natural way, provide instant assistance, and even escalate issues to human agents whenever necessary. But despite being powerful, these bots also have limitations. They are pre-trained on general data only, so they have no real-time awareness and no access to external or current information.

So let's make a quick demonstration. This is the starter project that you're going to be working with. I'm going to ask, what is the weather in Paris? And we're going to see what the answer is. Let's hit Send. All right, so this gives me an answer which I think is not completely right, and I'm going to check the weather in Paris right now to show you. Actually, right now it is 8 degrees Celsius, not 12 degrees Celsius, so this is not accurate. Let's ask again: what is the weather in London? Because I just want to show you that the language model, despite being powerful, can also be limited. Here we go. That's exactly what I wanted to show you. We can read: I'm sorry, but I am an AI assistant, and I don't have access to real-time information on the weather. You can check the weather in London by using a weather website or app.

So this is what we want to do: allow the language model to interact and interface with external systems and public APIs. The goal for you will be to combine the power of the Chat Completions API with the ability to extend the responses by adding the function calling feature. So here are the steps. You're going to have access to a starter project with instructions in a readme file, and you're going to need to define the secret keys as well.
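The function calling setup mentioned above starts with describing your function to the model. A minimal sketch, assuming the official `openai` Python library; the function name `get_current_weather` and its parameters are illustrative choices, not the starter project's exact names:

```python
# Tool schema the model uses to decide when to ask for weather data.
# Names and parameters here are illustrative; adapt them to your project.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g. Paris",
                    },
                    "units": {
                        "type": "string",
                        "enum": ["metric", "imperial"],
                    },
                },
                "required": ["city"],
            },
        },
    }
]

# The schema is passed on every chat completion call, e.g.:
# client.chat.completions.create(model="gpt-4o-mini",
#                                messages=messages, tools=tools)
```

When the user asks about the weather, the model responds with a tool call containing the arguments (like the city name) instead of guessing an answer.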
Just like with the OpenAI API, you need to add an API secret key in order to interact with one public API that I'm going to show you, which is very easy to use. You're also going to need to generate and display the outputs using a Streamlit application, the Python library for creating a nice and clean user interface.

So here it is: you have the documentation for function calling, with instructions on how it works and an example of how to set it up. You also have everything that you need in the starter project, and of course you can always use the guides and tutorials available from OpenAI to start building easily and fast. To make this example application as interesting as possible, you have access to all the core concepts and endpoints that you can use, and to the models as well, so you can check the knowledge cutoff date depending on what information you want to include. But of course, by adding the function calling feature, you allow the language model to access current and specific data, enhancing and leveraging its power and capabilities.

This is the weather API. Remember that you're going to need to create an account in order to create an API key and add it to your project, so that your requests to the weather service are authorized. And the endpoint that you're going to be using is this one: the current weather data endpoint.
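The weather call itself can be sketched with the standard library. The URL below assumes the OpenWeatherMap "current weather data" endpoint and an API key stored in an environment variable named `WEATHER_API_KEY`; adapt the URL, the key handling, and the JSON field names to whichever weather API you actually sign up for:

```python
import json
import os
import urllib.parse
import urllib.request

# Assumed endpoint: OpenWeatherMap's current weather data service.
API_URL = "https://api.openweathermap.org/data/2.5/weather"

def get_current_weather(city: str, units: str = "metric") -> str:
    """Fetch the current weather and return a short text summary
    that can be handed back to the language model."""
    params = urllib.parse.urlencode({
        "q": city,
        "units": units,
        "appid": os.environ["WEATHER_API_KEY"],  # your secret key
    })
    with urllib.request.urlopen(f"{API_URL}?{params}", timeout=10) as resp:
        return summarize(json.load(resp))

def summarize(data: dict) -> str:
    """Turn the API's JSON payload into one readable sentence."""
    desc = data["weather"][0]["description"]
    temp = data["main"]["temp"]
    return f"{data['name']}: {desc}, {temp} degrees"
```

Returning a plain sentence rather than raw JSON keeps the follow-up prompt small and makes it easy for the model to phrase its final answer.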
But again, everything is available in your starter project. Of course, you're going to need to make it your own, so feel free to develop this project as you wish; there is no limit to your creativity. You could even, if you'd like, add voice features to make this application speak, or why not generate an image to illustrate the description of the weather data. Afterwards, of course, you're going to have access to the final solution, and we're going to make a live demonstration as well. Good luck!
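To tie the pieces together, the app has to execute the tool call the model requests and send the result back on a follow-up request. A sketch of that dispatch step, shown with plain dicts for clarity (the `openai` library returns objects with equivalent attributes); `functions` is a hypothetical mapping from tool names to your own Python helpers:

```python
import json

def run_tool_call(tool_call: dict, functions: dict) -> dict:
    """Execute one tool call requested by the model and build the
    "tool" message to append before the follow-up completion call."""
    name = tool_call["function"]["name"]
    # Arguments arrive as a JSON string chosen by the model.
    args = json.loads(tool_call["function"]["arguments"])
    result = functions[name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": result,
    }

# Typical flow: if the first completion's message contains tool_calls,
# append that assistant message plus one run_tool_call(...) result per
# call to `messages`, then call the API again so the model writes the
# final natural-language answer for the Streamlit chat window.
```

In the weather app, `functions` would map `"get_current_weather"` to the helper that calls the weather API.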