From the course: OpenAI API for Python Developers
Chain components (LCEL)
- [Instructor] Now let's look at a basic and common use case: chaining a prompt template and a model together. So let's go ahead and create a new chain object that combines the chat prompt and the model. Invoking it generates a response, and we'll put that on one line; it returns a message object, like this. We want to access the content of that message so we can see the response from the AI. I'm going to put those two lines in comments, and comment this one out, so we only see the result of this demo. And you'll see that I've changed the user input here; it now reads, do you ship to Europe? So we're going to see the output from the language model. Here we go, and now we can read: I apologize for any confusion, but as an AI assistant, I don't have the capacity to ship any products. That's because the AI assistant only has access to its training data. So the way it works is that we run the chain and submit the user input. Every component is separated by a pipe operator, and the output of each component is passed as the input to the next one, on to the language model, in order to generate an output. Let's go back a little bit. You'll see that the user input is added to the list of messages, right here. So we've got the list of messages, and inside this list you'll find the human message prompt, which takes the question as the user input. This input variable corresponds to this one. So we trigger the chain with this user input, and every output is passed along to the next component, one after another. And now we're going to add another component right here, the string output parser, so that the chain returns a string and we no longer need to access content.
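Since the on-screen code isn't captured in the transcript, here is a minimal, self-contained sketch of the pipe pattern described above. The class and variable names (`Runnable`, `Message`, `str_parser`, the canned reply) are illustrative stand-ins, not LangChain's real API; real LCEL components behave this way but with a much richer interface.

```python
class Runnable:
    """Toy stand-in for an LCEL component: supports `|` chaining and invoke()."""
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # The pipe operator composes components: the output of this
        # component becomes the input of the next one.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.func(x)


class Message:
    """Toy message object, like the message a chat model returns."""
    def __init__(self, content):
        self.content = content


# Component 1: a prompt template that inserts the user input into a
# list of messages (system prompt + human message).
prompt = Runnable(lambda user_input: [
    ("system", "You are a helpful AI assistant."),
    ("human", user_input),
])

# Component 2: a fake model that returns a message object (no API call);
# the canned reply mimics the answer shown in the demo.
model = Runnable(lambda messages: Message(
    "I'm sorry, but as an AI assistant I don't have the capability to ship products."
))

# Component 3: a string parser that extracts the message content.
str_parser = Runnable(lambda message: message.content)

# Without the parser, the chain returns a message object and we must
# read .content; with the parser, the chain returns a plain string.
chain = prompt | model | str_parser
print(chain.invoke("Do you ship to Europe?"))
```

Each component only needs to accept the previous component's output, which is what lets a prompt, a model, and a parser snap together with `|`.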
The language model's output is automatically passed to this component, which returns a string. So let's try the same example again. We actually get the same answer, and here we go: I'm sorry, but as an AI assistant, I don't have the capability to ship anything. The LangChain Expression Language makes it super easy to build basic and also complex chains, and this is what makes LangChain so versatile and adaptable. For example, we could use LangChain in combination with other components, like a vector store, to expand the capabilities of the language model with the ability to search custom knowledge. Another example: we can make this AI assistant more knowledgeable about your products and services, so it can respond to user queries like, "Do you ship to Europe?" And that's going to be the next step: we're going to load documents as a source of information in order to create a custom chatbot.
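To make the vector-store idea concrete, here is a toy sketch of how a retrieval step could slot into the same pattern: look up relevant facts first, then build a prompt that includes them. Everything here is hypothetical (the knowledge base, `retrieve`, `build_prompt`); a real LangChain retriever does a vector similarity search over embedded documents rather than a keyword match.

```python
# Toy knowledge base standing in for documents loaded into a vector store.
FAKE_KNOWLEDGE_BASE = {
    "ship": "We ship to all EU countries within 5 business days.",
    "returns": "Items can be returned within 30 days of delivery.",
}

def retrieve(question: str) -> str:
    """Naive keyword lookup standing in for a vector-store similarity search."""
    for keyword, fact in FAKE_KNOWLEDGE_BASE.items():
        if keyword in question.lower():
            return fact
    return "No matching product information found."

def build_prompt(question: str) -> str:
    # The retrieved context is injected into the prompt, so the model can
    # answer from your documents instead of only its training data.
    context = retrieve(question)
    return f"Answer using this context: {context}\nQuestion: {question}"

print(build_prompt("Do you ship to Europe?"))
```

With real documents and a real vector store, this is the pattern that lets the assistant answer "Do you ship to Europe?" from your own product information, which is exactly what the next step builds.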