From the course: Level up LLM applications development with LangChain and OpenAI
Create a runnable to combine a prompt, a model, and output
- [Instructor] So let's create another route, another endpoint. Right below, I'm going to copy these lines from 41 to 45, but this time it's going to be with a runnable. We're going to compose a chain starting with a prompt, so we can send instructions to the model. The prompt will be the one we defined back up on line 19, where we ask the language model to tell a joke about a specific topic. Then comes the model, and finally we're going to parse the response using RunnableLambda, to which we pass the parse response function. I have provided this utility function here, and it allows us to return the response, or more specifically the content, the text value of the response. And here you have RunnableLambda, which is added to the scope. All right, so now that we have this new route, let's see if we can try it. I'm going to start the server again. So, my bad, we already have one endpoint with the same name…
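Below is a minimal sketch of the chain described in this lesson, assuming a FastAPI app served with LangServe. The function name parse_response, the prompt wording, and the /joke path are illustrative assumptions, not values taken from the course files.

```python
# Sketch of a LangServe route backed by a composed runnable:
# prompt -> model -> output parsing with RunnableLambda.
# Names like parse_response and the /joke path are assumptions.
from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

app = FastAPI(title="LangServe example")

# Prompt defined earlier in the lesson: ask the model for a joke about a topic.
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")

model = ChatOpenAI()

# Utility function that extracts the text content from the model's message.
def parse_response(message):
    return message.content

# Compose the runnable chain.
chain = prompt | model | RunnableLambda(parse_response)

# Expose the chain as its own endpoint (the path must not clash with existing routes).
add_routes(app, chain, path="/joke")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)
```

With the server running, the chain can be invoked at POST /joke/invoke with a JSON body such as {"input": {"topic": "cats"}}, and LangServe also generates a playground page at /joke/playground for quick testing.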
Contents
- Introducing LangServe: Installation and setup (3m 35s)
- Create a server (49s)
- Create the routes and the endpoints (5m 56s)
- Create a runnable to combine a prompt, a model, and output (3m 35s)
- Challenge: Deploy a RESTful API (1m 39s)
- Solution: Deploy a RESTful API (2m 51s)