AI safety evaluation framework testing LLM epistemic robustness under adversarial self-history manipulation
Benchmark LLM jailbreak resilience across providers with standardized tests, adversarial mode, rich analytics, and a clean Web UI.
Adversarial MCP server benchmark suite for testing tool-calling security, drift detection, and proxy defenses
Analysis of a ChatGPT-5 reviewer failure: speculative reasoning disguised as certainty. Documents how an evidence-only review drifted into hypotheses, later admitted as a review-process failure. Includes logs, checksums, screenshots, and external video.
Independent research on ChatGPT-5 reviewer bias. Documents how the AI carried assumptions across PDF versions (v15→v16), wrongly denying evidence despite instructions. Includes JSONL logs, screenshots, checksums, and video evidence. Author: Priyanshu Kumar.
Investigation into ChatGPT-5 reviewer misalignment: the PDF presented screenshots as evidence, but the assistant denied being able to see them. Includes JSONL and human-readable logs, screenshots, checksums, and video. Highlights structural risks to AI reviewer reliability.
Extremely hard, multi-turn, open-source-grounded coding evaluations that reliably break every current frontier model (Claude, GPT, Grok, Gemini, Llama, etc.) on numerical stability, zero-allocation, autograd, SIMD, and long-chain correctness.
Forensic-style adversarial audit of Google Gemini 2.5 Pro revealing hidden cross-session memory. Includes structured reports, reproducible contracts, SHA-256 checksums, and video evidence of 28-day semantic recall and affective priming. Licensed under CC-BY 4.0.
A multi-agent safety engineering framework that subjects systems to adversarial audit. Orchestrates specialized agents (Engineer, Psychologist, Physicist) to surface process risks and human-factors issues.
Adversarial testing and robustness evaluation for the Crucible framework