From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson


Input/output validation

In this lesson, we are going to build on our previous lesson, the introduction to prompt engineering. I'm going to use terms like "just ask" or "few-shot" and expect that you more or less know what they mean, so if you need a refresher, go back. For now, we are going to move on to our first section: input and output validation. If you've ever used an LLM, you might have noticed that it's pretty easy to make a large language model say something not quite in line with what you expected it to do while solving a task. This is more true than it has ever been in the world of natural language processing. We have been building NLP models for decades, arguably, but we've never had AI models that were so free in their ability to generate. Now, we've done some prompting before where we got it to produce outputs, trying to classify whether or not something matches a star rating by fine-tuning an OpenAI model. We've done prompting to…
