From the course: OpenAI API for Python Developers
LangChain key concepts
- [Instructor] I'm going to give you an overview of the key concepts so you can get a high-level understanding of how LangChain works. So first, let's begin with installation, which is simply pip install langchain, and I'll give you the usual instructions to get started with the next project. So let's jump straight to the quick start guide. The first step will be to set up LangChain, and then we're going to discover the most basic and common components of the LangChain framework. These include the prompt templates, which give the language models instructions. We're going to see how to interface with the language models and also how to parse and format their outputs. And we're going to make use of the LangChain Expression Language, a built-in protocol that facilitates the chaining of all these components. And of course, we're going to apply these key concepts by creating our first application with LangChain. So let's go to this section, building with LangChain, where we discover the two types of language models. First, the LLM, which takes a string as an input and returns a string as an output, and then the chat model, the same one we've been using from the OpenAI API reference, which takes a list of messages as an input and returns a message object. And the message object format will be the same: we get the response from the AI as the value of the content key, and for the role you can have either a system message, an assistant, or a user. And you're going to see that LangChain provides several objects that we're going to discover with the different examples. And right below, we find a first example with a list of messages that includes a human message, and it could also include a system message.
And in order to trigger a response from the language model, we're going to use this method, this verb, which is invoke, with the text input as a parameter in order to get the answer from the AI. So let's go to our project, and you're going to find the instructions in the README file to get started. Of course, we have to make sure that all the packages are properly installed and that you've got an API key set up as well. So let's go back to the project. I'm going to give you a quick walkthrough. So first, we set up a system prompt; this is to give instructions to the language model. We have a string parser, which we'll get back to, and we also define an instance of ChatOpenAI. We use a temperature setting of 0.3, which keeps the behavior of the language model close to deterministic, so we can expect similar responses between runs. Next, we have a system message prompt that we create with this convenient method, which is from_template, and you see that we pass the system prompt as an argument. And for the human message prompt, we're going to take a question. So here you see the syntax, which is in curly braces, and this is to mark an input variable. So let me show you. Right here, we've got this example where we ask the AI to tell a joke about a topic, which is defined here in curly braces because it corresponds to the user input, which here is ice cream. And here you're going to get this response. All right, so let's go back. So we're going to do the same, and right below, we're going to get the list of messages, which will then be available as a prompt template, which will be the chat prompt. So let's see a quick example. So what we're going to do is print the value of this chat prompt template.
So for now, what I'm going to do is run python3 main.py, and you're going to be able to see that as an input we've got this list of messages, with the system message and the human message. And now we would like to see the response; we want to trigger the response from the model. So we're going to do here model, and again we're going to use this method, invoke, and pass the messages as an input. And I'm going to print the response from the language model. I'm just going to comment that out so we don't have it in the console. So let's run this. You've got this long message that was returned by the AI, and you see that it's the string value of the content key that we want to return. So for that, we're going to use the string output parser that we defined at the top. So I'm going to do content, and with the string parser I'm not going to call parse; I'm going to call invoke, same as with the response. And with that, we're going to be able to print the content of the response. So we parse the output and can then display the content of the response from the message object. Now, with the string parser, we can see an output that looks more user-friendly and reads in a more human-like fashion. Next, we're going to see how to use a much cleaner and shorter syntax by chaining all these components using the LangChain Expression Language.