Getting Started with AxonFlow

AxonFlow is an AI governance platform providing low-latency policy enforcement, multi-agent orchestration, and permission-aware data access for production AI systems.

Quick Start (5 Minutes)

Get AxonFlow running locally with Docker Compose:

# Clone the repository
git clone https://github.com/getaxonflow/axonflow.git
cd axonflow

# Set your OpenAI API key
export OPENAI_API_KEY=sk-your-key-here

# Start all services
docker-compose up -d

# Check all services are healthy
docker-compose ps

# Services available at:
# - Agent: http://localhost:8080
# - Orchestrator: http://localhost:8081
# - Grafana: http://localhost:3000
# - Prometheus: http://localhost:9090

That's it! You now have a fully functional AxonFlow deployment with:

  • Agent + Orchestrator + PostgreSQL + Redis
  • Full policy enforcement engine
  • MCP connector support
  • Grafana dashboards for monitoring
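Once the health checks below pass, you can send a first governed request through the TypeScript SDK (covered in detail under Choose Your Integration Mode). A minimal sketch: the package name and client constructor here are assumptions, while executeQuery and its fields follow the Proxy Mode example later in this guide:

// First governed request against the local stack.
// The package name and constructor options are assumptions;
// executeQuery and its fields follow the Proxy Mode example below.
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({ baseUrl: 'http://localhost:8080' });

const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Summarize open support tickets',
  requestType: 'chat'
});

console.log(response);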

Verify Installation

Test that everything is working:

# Check agent health
curl http://localhost:8080/health
# Expected: {"service":"axonflow-agent","status":"healthy",...}

# Check orchestrator health
curl http://localhost:8081/health
# Expected: {"service":"axonflow-orchestrator","status":"healthy",...}

# Run the interactive demo
./examples/demo/demo.sh

The demo shows AxonFlow blocking SQL injection, detecting credit cards, and achieving single-digit millisecond latency:

Demo 1: SQL Injection Blocking
🛡️ BLOCKED - SQL Injection Detected

Demo 2: Safe Query (Allowed)
✓ ALLOWED - No policy violations

Demo 3: Credit Card Detection
🛡️ POLICY TRIGGERED - Credit Card Detected

Demo 4: Fast Policy Evaluation
⚡ Latency: single-digit ms

What You Can Build

AxonFlow enables you to add governance to any AI application:

Policy Enforcement

Define rules that control what your AI agents can do:

# policies/customer-support.yaml
name: customer-support-policy
rules:
  - action: allow
    conditions:
      - field: user.role
        operator: in
        value: ["support", "admin"]
  - action: block
    conditions:
      - field: request.contains_pii
        operator: equals
        value: true
    message: "PII detected - request blocked"
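To see how a block surfaces at request time, here is a rough TypeScript sketch reusing getPolicyApprovedContext from the Gateway Mode example later in this guide; the client setup and the exact context fields (approved, blockReason) are assumptions based on that example:

// Sketch: how a block from customer-support-policy might surface.
// Package name and constructor are assumptions; approved/blockReason
// follow the Gateway Mode example later in this guide.
import { AxonFlow } from '@axonflow/sdk';

const axonflow = new AxonFlow({ baseUrl: 'http://localhost:8080' });

const ctx = await axonflow.getPolicyApprovedContext({
  userToken: 'user-123',
  query: 'Customer card number is 4111 1111 1111 1111', // contains PII
});

if (!ctx.approved) {
  // With the block rule above, expect the configured message:
  // "PII detected - request blocked"
  console.error(ctx.blockReason);
}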

Multi-Agent Orchestration

Coordinate multiple AI agents working in parallel:

import asyncio

from axonflow import AxonFlow

async def main():
    async with AxonFlow(base_url="http://localhost:8080") as client:
        # Get policy-approved context for your agent
        context = await client.get_policy_approved_context(
            user_id="user-123",
            action="query_customer_data",
            resource="orders",
        )

        if context.approved:
            # Your agent logic here (your_agent is a placeholder)
            result = await your_agent.run(context.data)

            # Audit the interaction (prompt is the text you sent to the LLM)
            await client.audit_llm_call(
                user_id="user-123",
                prompt=prompt,
                response=result,
            )

asyncio.run(main())

MCP Connectors

Access external data sources with built-in permission controls. AxonFlow ships 15+ connectors, including PostgreSQL, MySQL, MongoDB, Redis, S3, and Snowflake.

See the MCP Connectors documentation for configuration and usage details.
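As an illustration only, permission-aware access through a connector might look like the sketch below; queryConnector and its parameters are hypothetical stand-ins, not the documented API, so refer to the MCP Connectors documentation for the real interface:

// Hypothetical sketch: queryConnector is a stand-in, NOT the
// documented API; it only illustrates the permission-aware pattern.
import { AxonFlow } from '@axonflow/sdk'; // package name is an assumption

const axonflow = new AxonFlow({ baseUrl: 'http://localhost:8080' });

const rows = await axonflow.queryConnector({
  connector: 'postgres-orders',   // connector configured server-side
  userToken: 'user-123',          // permissions evaluated per user
  statement: 'SELECT id, status FROM orders WHERE customer_id = $1',
  params: [42],
});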

Core Concepts

Concept | Description
Agent | Policy enforcement engine - evaluates requests with single-digit ms latency
Orchestrator | Coordinates multi-agent workflows and manages state
Policy | YAML rules defining what actions are allowed/blocked
MCP Connector | Permission-aware interface to external data sources
Audit Log | Immutable record of all AI interactions

Choose Your Integration Mode

AxonFlow offers two integration modes. Your choice depends on whether you're starting fresh or adding governance to an existing stack.

Proxy Mode (For New Projects)

New to AI systems? You can start directly with AxonFlow as your orchestration and governance layer - no other framework required. If you already use LangChain, CrewAI, or similar, Gateway Mode (below) lets you adopt AxonFlow incrementally.

In Proxy Mode, AxonFlow handles the full request lifecycle: policy → planning → routing → audit.

// Single call - everything handled automatically
const response = await axonflow.executeQuery({
  userToken: 'user-123',
  query: 'Analyze customer churn patterns',
  requestType: 'chat'
});

Why Proxy Mode:

  • 100% automatic audit logging — no risk of missing calls
  • Multi-Agent Planning (MAP) — only available in Proxy Mode
  • Response filtering catches PII in LLM outputs
  • Simpler code — one API call instead of three

Gateway Mode (For Existing Stacks)

If you're already using LangChain, CrewAI, LlamaIndex, Lyzr, or similar frameworks, Gateway Mode lets you add governance without rewriting your LLM integration.

// 1. Pre-check policies
const ctx = await axonflow.getPolicyApprovedContext({ userToken, query });
if (!ctx.approved) throw new Error(ctx.blockReason);

// 2. Your existing LLM call (unchanged)
const response = await langchain.invoke(query);

// 3. Audit the call
await axonflow.auditLLMCall({ contextId: ctx.contextId, ... });

Why Gateway Mode:

  • No changes to your existing LLM calls
  • Incremental adoption — add governance today, evaluate deeper integration later
  • Works with any framework or direct API calls

Migration Path

Many teams start with Gateway Mode to get governance in place quickly, then evaluate moving to Proxy Mode based on:

Factor | Gateway Mode | Proxy Mode
Integration effort | Low (wrap existing calls) | Medium (replace LLM calls)
Governance coverage | Manual audit calls | Automatic, 100% coverage
Multi-Agent Planning | Not available | Full MAP support
Latency overhead | ~10ms (policy check only) | ~30ms (full lifecycle)
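One way to keep that migration cheap is to isolate the three Gateway Mode calls behind a single helper, so adopting Proxy Mode later means replacing one function body. A sketch under the same assumptions as the Gateway Mode example above; the audit fields beyond contextId are guesses:

// Gateway Mode isolated behind one helper; migrating to Proxy Mode
// later means swapping this body for a single executeQuery call.
// llm stands in for your existing framework client; audit fields
// other than contextId are assumptions.
declare const axonflow: any; // AxonFlow client from the examples above
declare const llm: { invoke(q: string): Promise<string> };

async function governedQuery(userToken: string, query: string) {
  const ctx = await axonflow.getPolicyApprovedContext({ userToken, query });
  if (!ctx.approved) throw new Error(ctx.blockReason);

  const response = await llm.invoke(query); // your existing LLM call, unchanged
  await axonflow.auditLLMCall({ contextId: ctx.contextId, prompt: query, response });
  return response;
}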

See the detailed mode comparison for a full feature-by-feature breakdown.

Project Structure

After cloning, you'll find:

axonflow/
├── docker-compose.yml     # Local deployment config
├── platform/
│   ├── agent/             # Policy enforcement engine (Go)
│   ├── orchestrator/      # Multi-agent coordinator (Go)
│   ├── connectors/        # MCP connector implementations
│   └── examples/
│       └── demo/          # Interactive demo script
├── examples/
│   ├── hello-world/       # Simple SDK usage examples
│   └── workflows/         # Multi-step workflow examples
├── sdk/
│   ├── golang/            # Go SDK
│   └── typescript/        # TypeScript SDK
├── migrations/            # Database migrations
└── docs/                  # Additional documentation

Next Steps

  • Learn the Basics
  • Explore Examples
  • Integrate Your Stack
  • Go Deeper

System Requirements

Requirement | Minimum | Recommended
Docker | 20.10+ | Latest
Docker Compose | 2.0+ | Latest
RAM | 4GB | 8GB
CPU | 2 cores | 4 cores
Disk | 10GB | 20GB

Supported LLM Providers:

  • OpenAI (GPT-4, GPT-4 Turbo)
  • Anthropic (Claude 3)
  • Local models via Ollama

Enterprise Deployment

Need production-grade deployment with high availability, auto-scaling, and enterprise support?

AxonFlow Enterprise offers:

  • One-click AWS deployment via CloudFormation
  • Multi-region high availability
  • AWS Bedrock integration
  • Industry compliance frameworks (HIPAA, SOC2, PCI-DSS)
  • 24/7 premium support

Learn about Enterprise Features | AWS Marketplace

Get Help