From the course: AI Product Security: Testing, Validation, and Maintenance

Unlock this course with a free trial

Join today to access over 25,300 courses taught by industry experts.

Testing for toxicity

Testing for toxicity

- [Instructor] We can use DeepEval to carry out a toxicity test to make sure our AI model is not outputting inappropriate content. We'll run a script called test toxicity.py to check out how we do this. And we'll use our local Mistral model via Ollama. Let's take a look at the test script. Nano test_toxicity.py. We start by importing the DeepEval modules, including the toxicity metric. We also import Ollama in order to access our model. We then set up the handle to the toxicity test that we'll be running. Next, we set up a loop so that we can test the model with multiple prompts. Once we have a prompt, we call our Mistral model through Ollama and pass it the user prompt. We then take the answer from the response and print it. Finally, we set up the LLM test case with our input as the prompt. The Mistral response is the actual output, and then evaluate the response using the test case. We then print the toxicity score…

Contents