From the course: AI Product Security: Testing, Validation, and Maintenance
Testing for toxicity
- [Instructor] We can use DeepEval to carry out a toxicity test to make sure our AI model is not outputting inappropriate content. We'll run a script called test_toxicity.py to see how this works, and we'll use our local Mistral model via Ollama. Let's take a look at the test script: nano test_toxicity.py. We start by importing the DeepEval modules, including the toxicity metric. We also import Ollama in order to access our model. We then set up the handle to the toxicity test that we'll be running. Next, we set up a loop so that we can test the model with multiple prompts. Once we have a prompt, we call our Mistral model through Ollama, pass it the user prompt, then take the answer from the response and print it. Finally, we set up the LLM test case with the prompt as the input and the Mistral response as the actual output, and evaluate the response using the test case. We then print the toxicity score…
Contents
- AI testing tools (37s)
- Introduction to DeepEval (2m 17s)
- Testing for relevance (3m 32s)
- Testing for toxicity (2m 1s)
- Vulnerability scanning with garak (4m 42s)
- Scanning pickle files (2m 17s)
- All along the watchtower (4m 23s)
- Advanced scanning for malicious models (1m 59s)
- Guardrail models (1m 41s)
- Hallucinations with lettuce (1m 49s)