From the course: OpenAI API for Python Developers

Defining prompts and making requests

- [Instructor] Now we want to define a prompt and be able to make API calls, and we're going to do that right here under the else statement. So I already have my app, which is up and running, and for the moment we can only exit the program, which is handled under the if statement. Next, we want to be able to ask questions, so we're going to define a prompt here. We can find an example of how to define a prompt and generate completions right here in the documentation. First, you need to create an OpenAI object, and then you can define a prompt and generate completions like in this example. So we're going to copy from line 2 to 9, go back to our project, and I'm going to paste this right here. This part I'm going to take and actually put at the top of my file, right under the load_dotenv statement. And I'm going to make a few adjustments, because we no longer need this part, and we're going to import this one from the openai library, like this. Let's go back, and we're going to continue to define our prompt, and I'm going to save this one under the name completion. And this is the completion that we're going to print right here.

Before we continue, let's go check out the documentation to understand how it works. When the API call is successful, we're going to get a response. This is an example of a response that we get back from the language model, right below, so this is what a response object looks like. You're going to have access to the key choices, which has an array as its value. So on line 7, choices, and inside you're going to have one object with the answer from the language model, which is the value of the key text right here on line 9. So we're going to do the same and access first choices, then the first object at position index 0, and then text. Let's try that, and remember that for now this is just a test, so we're going to actually use the test prompt that we had defined earlier. We're going to run this. I'm going to select the first option, ask a question, and it's going to prompt us to then ask our question. But for now this is just a test, so I'm going to type just "test," and we don't see anything. That is because, remember, you can always control how many tokens you can use for every prompt and every completion. So I'm going to make sure that I replace this value of 7 with 100 to leave enough room to actually be able to see the completion. Let's try that again, and I'm going to run it. Here we go. We're going to ask a question, I'm going to type "test," and here we go. Now we can read: sure, this is a test prompt, is there anything else you'd like me to say?

So we're going to, of course, continue, but instead of repeatedly exiting the program and starting it again like this, I'm going to allow the program to run in a loop. So I'm going to allow it to run forever, and I'm going to show you how. Right before user_input, I'm going to write a while loop. So I'm going to create a loop with a while statement that wraps all of this, and we're going to allow it to run forever until the user types "x" to exit the program. That's going to be more convenient. The other thing we're going to do is accept the user input as the query, so whatever text we type is going to be sent as the query to the language model. So let's try that again. We're going to ask a question this time.
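To make these steps concrete, here is a minimal sketch of what the loop described above might look like, assuming the v1 openai Python client and a .env file containing OPENAI_API_KEY; the exact menu text and variable names in the course files may differ:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()          # load OPENAI_API_KEY from the .env file
client = OpenAI()      # the client reads the key from the environment

while True:            # keep running until the user exits
    user_input = input("Type 'x' to exit, or '1' to ask a question: ")
    if user_input == "x":
        break          # exit the program
    else:
        question = input("Ask your question: ")
        completion = client.completions.create(
            model="gpt-3.5-turbo-instruct",
            prompt=question,   # the user's text is sent as the prompt
            max_tokens=100,    # raised from the documentation's default of 7
        )
        # choices is an array with one object; its text key holds the answer
        print(completion.choices[0].text)

Raising max_tokens matters because the documentation snippet's default of 7 leaves almost no room for the completion, which is why nothing appeared to print on the first run.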
So for example, "What is the capital of France?" An easy one, and we're going to get the answer back from the language model. Oh, looks like it didn't take into account, so maybe I needed to save again. So let me try that again. So I'm going to exit, run again. So I'm going to clear first my console, run again. I'm going to ask a question and we're going to start again. "What is the capital of France?" We're going to see that, and that is an easy one. The capital of France is Paris, and you see that the next step is to allow us to send another query. So what we're going to do also is to allow, so every time that we send a prompt, we're going to allow to also print the information of how many tokens we've been using. So that's going to be with this function that I have defined at the top with get tokens. I'm going to save this in response like this and pass this as a parameter, and I'm going to replace this as well with a response. So let's try that again. We're going to send another query, and this time I'm going to ask, "How many stars on the US flag?" For example. Let's see. So we know the answer. There are currently 50 stars on the US flag, and on top of that, it's going to give us like here a breakdown of how many tokens we've been using with a total of 22 tokens. So that is the end for our first example, which is the first of many examples to create Next Generation and AI driven applications.
