# Getting Started with AxonFlow
AxonFlow is an AI governance platform providing low-latency policy enforcement, multi-agent orchestration, and permission-aware data access for production AI systems.
## Quick Start (5 Minutes)
Get AxonFlow running locally with Docker Compose:
```bash
# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Set your OpenAI API key
export OPENAI_API_KEY=sk-your-key-here

# Start all services
docker-compose up -d

# Check all services are healthy
docker-compose ps

# Services available at:
# - Agent:        http://localhost:8080
# - Orchestrator: http://localhost:8081
# - Grafana:      http://localhost:3000
# - Prometheus:   http://localhost:9090
```
That's it! You now have a fully functional AxonFlow deployment with:
- Agent + Orchestrator + PostgreSQL + Redis
- Full policy enforcement engine
- MCP connector support
- Grafana dashboards for monitoring
## Verify Installation
Test that everything is working:
```bash
# Check agent health
curl http://localhost:8080/health
# Expected: {"service":"axonflow-agent","status":"healthy",...}

# Check orchestrator health
curl http://localhost:8081/health
# Expected: {"service":"axonflow-orchestrator","status":"healthy",...}

# Run the interactive demo
./examples/demo/demo.sh
```
The demo shows AxonFlow blocking SQL injection, detecting credit cards, and achieving single-digit millisecond latency:
```text
Demo 1: SQL Injection Blocking
🛡️ BLOCKED - SQL Injection Detected

Demo 2: Safe Query (Allowed)
✓ ALLOWED - No policy violations

Demo 3: Credit Card Detection
🛡️ POLICY TRIGGERED - Credit Card Detected

Demo 4: Fast Policy Evaluation
⚡ Latency: single-digit ms
```
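If you'd rather script the verification (for CI, for example), the same health endpoints can be polled programmatically. Below is a minimal TypeScript sketch, assuming only Node 18+ for the built-in `fetch`; the URLs and the `status` field come from the expected responses above.

```typescript
// check-health.ts - exits non-zero if either service reports unhealthy
const services = [
  { name: 'agent', url: 'http://localhost:8080/health' },
  { name: 'orchestrator', url: 'http://localhost:8081/health' },
];

for (const { name, url } of services) {
  const res = await fetch(url);
  const body = await res.json();
  if (!res.ok || body.status !== 'healthy') {
    console.error(`${name} unhealthy:`, body);
    process.exit(1);
  }
  console.log(`${name}: ${body.status}`);
}
```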
## What You Can Build
AxonFlow enables you to add governance to any AI application:
### Policy Enforcement
Define rules that control what your AI agents can do:
```yaml
# policies/customer-support.yaml
name: customer-support-policy
rules:
  - action: allow
    conditions:
      - field: user.role
        operator: in
        value: ["support", "admin"]
  - action: block
    conditions:
      - field: request.contains_pii
        operator: equals
        value: true
    message: "PII detected - request blocked"
```
### Multi-Agent Orchestration
Coordinate multiple AI agents working in parallel:
```python
from axonflow import AxonFlow

async with AxonFlow(base_url="http://localhost:8080") as client:
    # Get policy-approved context for your agent
    context = await client.get_policy_approved_context(
        user_id="user-123",
        action="query_customer_data",
        resource="orders",
    )

    if context.approved:
        # Your agent logic here
        result = await your_agent.run(context.data)

        # Audit the interaction
        await client.audit_llm_call(
            user_id="user-123",
            prompt=prompt,  # the prompt your agent sent to the LLM
            response=result,
        )
```
### MCP Connectors
Access external data sources with built-in permission controls. AxonFlow supports 15+ connectors, including PostgreSQL, MySQL, MongoDB, Redis, S3, and Snowflake.
See the MCP Connectors documentation for configuration and usage details.
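As a rough sketch of what permission-aware access looks like from the SDK, here is a hypothetical query against the PostgreSQL connector. The `queryConnector` method name and its parameters are illustrative assumptions, not the documented API; the real interface is in the MCP Connectors documentation.

```typescript
// Hypothetical sketch - the actual method name and fields may differ.
// The agent evaluates the caller's policies before any rows are returned.
const rows = await axonflow.queryConnector({
  userToken: 'user-123',
  connector: 'postgresql',
  query: 'SELECT id, status FROM orders LIMIT 10',
});
```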
## Core Concepts
| Concept | Description |
|---|---|
| Agent | Policy enforcement engine - evaluates requests with single-digit ms latency |
| Orchestrator | Coordinates multi-agent workflows and manages state |
| Policy | YAML rules defining what actions are allowed/blocked |
| MCP Connector | Permission-aware interface to external data sources |
| Audit Log | Immutable record of all AI interactions |
## Choose Your Integration Mode
AxonFlow offers two integration modes. Your choice depends on whether you're starting fresh or adding governance to an existing stack.
You can start directly with AxonFlow as your orchestration and governance layer — no other framework required. If you already use LangChain, CrewAI, or similar, Gateway Mode lets you adopt AxonFlow incrementally.
### Proxy Mode (Recommended for New Projects)
AxonFlow handles the full request lifecycle: policy → planning → routing → audit.
```typescript
// Single call - everything handled automatically
const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Analyze customer churn patterns',
  requestType: 'chat'
});
```
Why Proxy Mode:
- 100% automatic audit logging — no risk of missing calls
- Multi-Agent Planning (MAP) — only available in Proxy Mode
- Response filtering catches PII in LLM outputs
- Simpler code — one API call instead of three
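For a self-contained version of the call above, the sketch below also constructs the client, pointing it at the local agent from the Quick Start. The package name and constructor options are assumptions modeled on the `base_url` used in the Python example.

```typescript
import { AxonFlow } from '@axonflow/sdk'; // package name is an assumption

const axonflow = new AxonFlow({ baseUrl: 'http://localhost:8080' });

// Policy check, planning, routing, and audit all happen in this one call
const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Analyze customer churn patterns',
  requestType: 'chat',
});

console.log(response);
```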
### Gateway Mode (For Existing Stacks)
If you're already using LangChain, CrewAI, LlamaIndex, Lyzr, or similar frameworks, Gateway Mode lets you add governance without rewriting your LLM integration.
```typescript
// 1. Pre-check policies
const ctx = await axonflow.getPolicyApprovedContext({ userToken, query });
if (!ctx.approved) throw new Error(ctx.blockReason);

// 2. Your existing LLM call (unchanged)
const response = await langchain.invoke(query);

// 3. Audit the call
await axonflow.auditLLMCall({ contextId: ctx.contextId, ... });
```
Why Gateway Mode:
- No changes to your existing LLM calls
- Incremental adoption — add governance today, evaluate deeper integration later
- Works with any framework or direct API calls
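For instance, here is the three-step pattern wrapped around a LangChain.js model call. The AxonFlow calls mirror the snippet above; the audit payload fields beyond `contextId` (`prompt`, `response`) are assumptions.

```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({ model: 'gpt-4' });

async function governedInvoke(userToken: string, query: string) {
  // 1. Pre-check policies
  const ctx = await axonflow.getPolicyApprovedContext({ userToken, query });
  if (!ctx.approved) throw new Error(ctx.blockReason);

  // 2. The existing LLM call, unchanged
  const response = await model.invoke(query);

  // 3. Audit the call (prompt/response field names are assumptions)
  await axonflow.auditLLMCall({
    contextId: ctx.contextId,
    prompt: query,
    response: String(response.content),
  });

  return response.content;
}
```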
### Migration Path
Many teams start with Gateway Mode to get governance in place quickly, then evaluate moving to Proxy Mode based on:
| Factor | Gateway Mode | Proxy Mode |
|---|---|---|
| Integration effort | Low (wrap existing calls) | Medium (replace LLM calls) |
| Governance coverage | Manual audit calls | Automatic, 100% coverage |
| Multi-Agent Planning | Not available | Full MAP support |
| Latency overhead | ~10ms (policy check only) | ~30ms (full lifecycle) |
## Project Structure
After cloning, you'll find:
```text
axonflow/
├── docker-compose.yml       # Local deployment config
├── platform/
│   ├── agent/               # Policy enforcement engine (Go)
│   ├── orchestrator/        # Multi-agent coordinator (Go)
│   ├── connectors/          # MCP connector implementations
│   └── examples/
│       └── demo/            # Interactive demo script
├── examples/
│   ├── hello-world/         # Simple SDK usage examples
│   └── workflows/           # Multi-step workflow examples
├── sdk/
│   ├── golang/              # Go SDK
│   └── typescript/          # TypeScript SDK
├── migrations/              # Database migrations
└── docs/                    # Additional documentation
```
## Next Steps

### Learn the Basics
- Your First Agent - Build a policy-enforced AI agent
- Workflow Examples - Common patterns and recipes
- Policy Syntax - Write governance rules
### Explore Examples
- Trip Planner - Multi-agent travel planning
- Customer Support - Support ticket automation
- Healthcare - HIPAA-compliant medical AI
- E-Commerce - Product recommendations
### Integrate Your Stack
- Python SDK - Async-first Python client
- TypeScript SDK - Node.js and browser support
- Go SDK - Native Go client
- LangChain Integration - Use with LangChain agents
### Go Deeper
- Architecture Overview - How AxonFlow works
- API Reference - Full API documentation
- Local Development - Development setup
## System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Docker | 20.10+ | Latest |
| Docker Compose | 2.0+ | Latest |
| RAM | 4GB | 8GB |
| CPU | 2 cores | 4 cores |
| Disk | 10GB | 20GB |
Supported LLM Providers:
- OpenAI (GPT-4, GPT-4 Turbo)
- Anthropic (Claude 3)
- Local models via Ollama
## Enterprise Deployment
Need production-grade deployment with high availability, auto-scaling, and enterprise support?
AxonFlow Enterprise offers:
- One-click AWS deployment via CloudFormation
- Multi-region high availability
- AWS Bedrock integration
- Industry compliance frameworks (HIPAA, SOC2, PCI-DSS)
- 24/7 premium support
Learn about Enterprise Features | AWS Marketplace
## Get Help
- GitHub Issues: github.com/getaxonflow/axonflow/issues
- Documentation: docs.getaxonflow.com
- Email: [email protected]