From the course: Level up LLM applications development with LangChain and OpenAI

Create a chain and interface with LLM

- [Instructor] So let's begin with the first steps: learning and understanding how to interface with the language model. We start with the basics of LangChain, Model In and Out. The purpose of every language model is to take an input in order to generate an output, and that is the model in and out. The steps here are, first, to format and define instructions to send to the language model; in turn, the language model is able to predict and generate an output that we can then format and parse. So let's look at one quick-start example below to see how to first initialize the language model, then provide it with a text input in natural language, and finally generate an output. We're going to use the same example for our first project.

But first, I'd like you to set up your project. We're going to create and activate a virtual environment. I'm going to use Python 3 because I work on a Mac. Once you see that this directory has been created, you can go ahead and activate your virtual environment. There we go. The next step is to install the dependencies and packages. We're going to install langchain, openai, and also python-dotenv, which is important for loading environment variables into your project, including the secret key that we have just set up with OpenAI. We also use Colorama, which lets us add some colors to the program. I'm going to show you that right now. If we go to main.py, you'll see that I've created this little user interface, a menu which is going to be displayed. We have two options: either ask a question or exit.
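Since main.py isn't shown in full in the transcript, here is a minimal sketch of what that Colorama-based menu might look like; the function name and exact colors are assumptions, and Colorama is treated as optional so the sketch still runs without it:

```python
# Sketch of the two-option menu described above (names are illustrative).
try:
    from colorama import Fore, init
    init(autoreset=True)   # reset the color after each print
    HIGHLIGHT = Fore.CYAN  # color applied to each menu entry
except ImportError:        # fall back to plain text if colorama is missing
    HIGHLIGHT = ""

def render_menu() -> str:
    """Build the two-option menu shown to the user."""
    return (
        f"{HIGHLIGHT}1. Ask a question\n"
        f"{HIGHLIGHT}2. Exit"
    )

if __name__ == "__main__":
    while True:
        print(render_menu())
        choice = input("> ").strip()
        if choice == "1":
            question = input("Your question: ")
            print(f"You asked: {question}")  # the LLM call is wired in next
        elif choice == "2":
            break
```

Keeping the menu rendering in its own function keeps the input loop readable once the language-model call is added.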
When you choose to ask a question, you can type in the user input and then interact with our program, which generates an output based on that input. Okay, so let's go ahead. The first step is to initialize the language model. I'm going to copy this line and add it here; we're going to use the OpenAI API. Next, we provide the text input and call invoke, the method provided by LangChain that lets us interface with the language model and have it predict and generate an output. So let's copy it. The text is passed as a parameter, and I'm going to return the result. One thing I need to tell you before we start the app: we use load_dotenv(), which loads the environment variables so we can access the OPENAI_API_KEY that lives in this .env file. All right, let's start the app with python main.py. I'm going to start with a basic question. First we make a selection, which is to ask a question, and I'm going to ask, "What are five vacation destinations to eat pasta?" That's going to be my question to the language model. All right, let's read the generated output, and it is very long. We have Rome in Italy, Bologna, again in Italy, Tuscany, Naples. Okay, so everything is in Italy, and you can go ahead and ask as many questions as you'd like. So we have just seen an example of how to interface with the language model: we provided a text input in natural language, and the language model predicted a response and generated the output. The next step will be to add more building blocks to this chain.
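The steps just described can be sketched as follows. This is a minimal version, not the course's exact main.py; note that the import path depends on your LangChain version (newer releases move the OpenAI class into the separate langchain_openai package), and running it requires a valid OPENAI_API_KEY in a .env file:

```python
# Minimal sketch of the walkthrough above: load the key, initialize the
# model, invoke it with a natural-language input, and return the output.
def generate(text: str) -> str:
    """Send a natural-language input to the LLM and return its output."""
    # Imports are local so this module loads even before the
    # third-party packages are installed.
    from dotenv import load_dotenv
    from langchain.llms import OpenAI  # path varies by LangChain version

    load_dotenv()            # load OPENAI_API_KEY from the .env file
    llm = OpenAI()           # initialize the model; reads the key from the env
    return llm.invoke(text)  # interface with the model and get a completion

if __name__ == "__main__":
    print(generate("What are five vacation destinations to eat pasta?"))
```

The invoke call is the standard LangChain runnable interface: it takes the input, sends it to the model, and returns the generated text.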
We're going to learn to structure a prompt with a prompt template, bind the model to that prompt, and see how to format the outputs.