Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
A framework that makes AI research transparent, traceable, and independently verifiable.
BLUX-cA — Clarity Agent core of the BLUX ecosystem. A constitutional, audit-driven AI helm that interprets intent, enforces governance, and routes tasks safely across models, tools, and agents.
A governed system for translating applied AI research into auditable, decision-ready artifacts.
omphalOS turns strategic trade-and-technology analyses into tamper-evident "run packets" for inspector, counsel, and oversight review.
Self-auditing governance framework that turns contradictions into verifiable, adaptive intelligence.
Deterministic, auditable ethical decision engine implementing the Sovereign Ethics Algebra (SEA).
Cairn is a local-first, cross-platform developer workbench for secure, auditable, AI-assisted software development. It helps with coding, debugging, auditing, and collaboration while keeping all projects private and under user control.
🔥 Emergent intelligence in autonomous trading agents through evolutionary algorithms. Testing zero-knowledge learning in cryptocurrency markets. Where intelligence emerges rather than being designed.
Reference implementation of the Spiral–HDAG–Coupling architecture. It combines a verifiable ledger, a tensor-based Hyperdimensional DAG, and Time Information Crystals to provide a new kind of memory layer for Machine Learning. With integrated Zero-Knowledge ML, the system enables trustworthy, auditable, and privacy-preserving AI pipelines.
Agentic RFP response system orchestrating Sales, Technical, and Pricing agents with human-in-the-loop governance for fast, auditable enterprise responses.
Clinical + Genomic **RAG evaluation (pro)** with hybrid retrieval (BM25 + embeddings), stronger faithfulness, YAML configs, and HTML dashboards. Python **3.10+**.
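For illustration, a minimal sketch of the hybrid-retrieval idea named above (BM25 + embeddings), assuming per-document scores from each retriever have already been computed; the `alpha` weight and min-max normalization are generic choices, not this repo's actual pipeline.

```python
# Hybrid score fusion sketch: blend lexical (BM25) and dense (embedding
# cosine) scores after normalizing each to [0, 1]. Purely illustrative.

def minmax(scores):
    """Normalize scores to [0, 1] so the two retrievers are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_rank(bm25_scores, dense_scores, alpha=0.5):
    """Blend normalized lexical and dense scores; return doc indices, best first."""
    b, d = minmax(bm25_scores), minmax(dense_scores)
    fused = [alpha * bi + (1 - alpha) * di for bi, di in zip(b, d)]
    return sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)

# Toy scores for three documents against one query.
print(hybrid_rank([2.1, 0.4, 1.7], [0.82, 0.91, 0.12], alpha=0.6))
```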
This repository defines the protocols for **Helix-TTD Identity & Custody**. It enforces a strict "No Orphaned Agents" policy by binding every AI agent to a cryptographic root held by a human custodian.
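A minimal sketch of the custody idea, assuming Ed25519 signatures via the third-party `cryptography` package; the record layout and the `bind_agent`/`is_orphaned` helpers are hypothetical illustrations, not the Helix-TTD protocol itself.

```python
# Sketch: a human custodian's key signs each agent identity, so no agent
# exists without a verifiable human root ("No Orphaned Agents").
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import json

custodian_key = Ed25519PrivateKey.generate()  # held by a human custodian
custodian_pub = custodian_key.public_key()

def bind_agent(agent_id: str) -> dict:
    """Return an identity record signed by the custodian's root key."""
    record = json.dumps({"agent_id": agent_id}).encode()
    return {"record": record, "signature": custodian_key.sign(record)}

def is_orphaned(binding: dict) -> bool:
    """An agent is 'orphaned' if its binding fails custodian verification."""
    try:
        custodian_pub.verify(binding["signature"], binding["record"])
        return False
    except InvalidSignature:
        return True

binding = bind_agent("agent-0001")
print(is_orphaned(binding))  # False: the agent has a valid human root
```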
A lightweight AI core combining fully local operation, visualized reasoning, and separation of powers. No network or external DB. State save/restore, 4D semantic distance (geo/strict), signed role cards, and a 𝒢 implementation.
PULSE: deterministic release gates for AI safety
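PULSE's actual gate schema is not shown here; the following is only a minimal sketch of what a deterministic, fail-closed release gate can look like, with hypothetical metric names and thresholds.

```python
# Deterministic release-gate sketch: fixed thresholds, sorted evaluation
# order, and fail-closed behavior on missing metrics. Names are hypothetical.
GATES = {"jailbreak_rate_max": 0.01, "refusal_accuracy_min": 0.95}

def release_allowed(metrics: dict) -> bool:
    for name in sorted(GATES):                     # fixed order => deterministic
        key = name[:-4]                            # strip "_max" / "_min"
        if name.endswith("_max"):
            passed = key in metrics and metrics[key] <= GATES[name]
        else:
            passed = key in metrics and metrics[key] >= GATES[name]
        if not passed:
            return False                           # fail closed on miss or breach
    return True

print(release_allowed({"jailbreak_rate": 0.004, "refusal_accuracy": 0.97}))  # True
print(release_allowed({"jailbreak_rate": 0.004}))                            # False
```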
Ethical AI governance framework for multi-model alignment, integrity, and enterprise oversight.
AILEE Trust Layer is a deterministic trust and safety middleware for AI systems. It mediates uncertainty using confidence thresholds, contextual grace, peer consensus, and fail-safe fallback—transforming raw model outputs into auditable, reliable decisions.
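A minimal sketch of threshold-plus-consensus mediation as described above; the threshold values and the `peer_votes` interface are assumptions for illustration, not the AILEE Trust Layer API.

```python
# Confidence-threshold mediation with peer consensus and fail-safe fallback.
ACCEPT, CONSENSUS = 0.90, 0.60   # hypothetical confidence thresholds

def mediate(output: str, confidence: float, peer_votes: list[bool]) -> str:
    if confidence >= ACCEPT:
        return output                                # confident: pass through
    if confidence >= CONSENSUS and peer_votes and sum(peer_votes) > len(peer_votes) / 2:
        return output                                # uncertain: peers agree
    return "DEFERRED: routed to fail-safe fallback"  # fail safe otherwise

print(mediate("approve", 0.95, []))                   # high confidence wins
print(mediate("approve", 0.70, [True, True, False]))  # consensus rescues it
print(mediate("approve", 0.40, [True, True, True]))   # below floor: fallback
```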
Open, verifiable agent framework — fail-closed orchestration, evidence-based verification, tamper-evident audit chains (SHA-256/HMAC).
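To illustrate the tamper-evident audit chain mentioned above, a minimal stdlib sketch: each entry's HMAC-SHA-256 tag covers the previous entry's tag, so editing any record breaks every later link. Only the SHA-256/HMAC primitive matches the description; the record layout is an assumption.

```python
import hmac, hashlib

KEY = b"audit-signing-key"  # in practice, a managed secret

def append(chain: list, payload: str) -> None:
    """Append a record whose tag chains to the previous entry."""
    prev = chain[-1]["tag"] if chain else b"\x00" * 32
    tag = hmac.new(KEY, prev + payload.encode(), hashlib.sha256).digest()
    chain.append({"payload": payload, "tag": tag})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record invalidates the chain."""
    prev = b"\x00" * 32
    for entry in chain:
        expect = hmac.new(KEY, prev + entry["payload"].encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(expect, entry["tag"]):
            return False
        prev = entry["tag"]
    return True

log: list = []
append(log, "agent started")
append(log, "tool call: search")
print(verify(log))              # True
log[0]["payload"] = "tampered"
print(verify(log))              # False: the chain detects the edit
```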
This repository provides a compact RAG evaluation harness tailored for clinical + genomic use cases. It operates on de-identified synthetic notes and curated genomic snippets, measures retrieval quality and grounding/faithfulness, and produces JSONL/CSV/HTML reports.