LinkedIn Learning Certificate of Completion
Certificate recipient:
Completion date:

Details

Skills covered: Mitigating Prompt Injection and Prompt Hacking

Course: Mitigating Prompt Injection and Prompt Hacking (6m)
By: Ray Villalobos
As large language models such as ChatGPT, Bard, Claude, and others have entered mainstream use, hackers are busy attempting to manipulate the underlying models, such as GPT and PaLM 2, to change how they respond. In this course, Ray Villalobos explains the mechanisms behind prompt hacking and some of the techniques used to mitigate it. In a world where companies are rushing to build their own implementations of these popular models, it's important to understand how prompt hacking works and the defenses used to address its potential consequences.