Agent Indoctrination – AI Safety, Bias, Fairness, Ethics & Compliance Testing Framework 🚀
Updated Nov 25, 2025 · Python
🎯 EquiLens – an AI bias detection platform for LLMs via Ollama. Interactive CLI with corpus generation, multi-metric auditing, statistical analysis, and visualization. Features enhanced auditors, dynamic concurrency, resume capability, and rich progress tracking. Alt links: https://equilens.pages.dev/ , https://life-experimentalist.github.io/EquiLens/
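To illustrate the kind of counterfactual bias audit such a tool performs, here is a minimal, hedged sketch (not EquiLens's actual API): swap a demographic term in otherwise-identical prompts, score each model response with a simple metric, and compare the two groups. The `fake_llm` and `score` functions are stand-ins of my own; a real audit would call an Ollama model and use sentiment or toxicity scorers.

```python
from statistics import mean

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g., via Ollama); returns canned text."""
    return f"Response to: {prompt}"

def score(response: str) -> float:
    """Toy metric: response length. Real auditors use sentiment/toxicity scores."""
    return float(len(response))

def audit(template: str, group_a: str, group_b: str, n: int = 3) -> float:
    """Mean score gap between completions for group_a vs. group_b prompts."""
    a = [score(fake_llm(template.format(name=group_a))) for _ in range(n)]
    b = [score(fake_llm(template.format(name=group_b))) for _ in range(n)]
    return mean(a) - mean(b)

gap = audit("Describe a typical day for {name}, an engineer.", "Alice", "Bob")
print(f"score gap: {gap:+.1f}")  # a gap near zero suggests no bias on this metric
```

In practice, multi-metric auditing would run many templates and scorers and apply statistical tests to the resulting gaps rather than eyeballing a single number.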