From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson
Chain-of-thought prompting
Moving on to chain-of-thought prompting. Now, this isn't actually the first time we've talked about chain-of-thought prompting. We brought it up in an earlier section when we were looking at our RAG and our agent prompt a few lessons ago. But to define the concept a bit more formally, chain-of-thought prompting forces an LLM to generate the reasoning for an answer alongside the answer itself. And the goal of chain-of-thought prompting, like most prompting techniques to be fair, is to lead to a more actionable and frankly just better, more accurate result. So to see this in action, we'll take a bit of a history lesson, because this is a model that you can no longer actually use on OpenAI's Playground, but because I took the screenshot over a year ago, we can at least see the differences between then and now. So we're looking at a version of GPT-3 called DaVinci. There are similar models to this that you can use that OpenAI claims are just a little bit better. But just to show you…
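To make the idea concrete, here is a minimal sketch of what a chain-of-thought prompt can look like. The question and exact wording are hypothetical examples, not taken from the course; the point is simply that the prompt explicitly asks the model to produce its reasoning before the final answer.

```python
# Illustrative chain-of-thought prompt construction.
# The question below is a made-up example, not from the course.
question = (
    "A cafeteria had 23 apples. It used 20 to make lunch "
    "and bought 6 more. How many apples does it have now?"
)

# The key move: instruct the model to generate step-by-step
# reasoning alongside the answer, rather than the answer alone.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step, and then state the final answer."
)

print(cot_prompt)
```

This string would then be sent to whatever completion or chat API you are using; the reasoning instruction is what distinguishes it from a plain question-and-answer prompt.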