From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson
Unlock this course with a free trial
Join today to access over 25,200 courses taught by industry experts.
Preventing prompt injection attacks
From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson
Preventing prompt injection attacks
Now, prompting is all well and good and fun and games until someone gets hurt. And people can get hurt. By injecting information into our prompts, we are inviting or perhaps at least open to the possibility that a malicious actor might inject something not so helpful and potentially harmful. Now, the idea of injecting personality or style into a prompt is fine, right? If I were to ask an LLM to be a chatbot and answer a question as if they were a store attendant on the top there, I might ask a question like where are the carrots, and the carrots are in the produce section, you know, the onions and potatoes, I don't care if that's right or wrong right now, it's just a response to the question. I could ask it to be rude, in which case the attendant might say points over there. I could ask it to be excitable, where the attendant says right this way, follow me, and I'll show you to where the carrots are. They're just over there, ready for you to grab. Or I could be absolutely horrible and…