From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson


Assessing an LLM's encoded knowledge level

Now, few-shot learning, chain-of-thought prompting, batch prompting, prompt chaining: all of this is a lot, I know. But much of it boils down to this: you only need certain prompting techniques depending on your assessment of the LLM's built-in knowledge, specifically as it relates to your task. So when you ask yourself, does this LLM (Cohere, Anthropic, Flan-T5, GPT-J) know enough for my task?, the answer usually falls into one of three categories. A bit of a simplification, but bear with me. Class A is: yes, it has all the information encoded and it is ready to solve my task. This is generally going to be something like, all I want to do is classify this text into a binary classification, which GPT-4 should be more than willing to do and should have seen plenty of examples of during pre-training. So really, you only need some basic prompting: clear instructions, and maybe you format the output as JSON, because you can. This can be something like…
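As a minimal sketch of the "basic prompting" described above: clear instructions plus a requested JSON output format for a binary classification task. The helper name, the example review, and the sample model reply are hypothetical, and the actual API call to a model is omitted.

```python
import json

def build_classification_prompt(text: str) -> str:
    # Hypothetical helper: clear instructions plus a JSON output format,
    # the kind of basic prompt a "Class A" model like GPT-4 needs.
    return (
        "Classify the sentiment of the review as positive or negative.\n"
        'Respond with JSON only, e.g. {"label": "positive"}.\n\n'
        f"Review: {text}"
    )

prompt = build_classification_prompt("The battery died after two days.")
print(prompt)

# Hypothetical model reply, parsed as the JSON we asked for:
reply = '{"label": "negative"}'
label = json.loads(reply)["label"]
print(label)  # -> negative
```

Requesting JSON output makes the reply trivially machine-parseable, which is why it is worth doing even for a task this simple.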