# CortexFlow

**Universal MCP Server for AI-to-AI Task Continuation**
CortexFlow is an MCP (Model Context Protocol) server that enables seamless handoff between AI agents. When you finish planning with ChatGPT, Claude Code can read the context and continue execution - without re-explaining the project.
```
 AI Agent A (Planner)                 AI Agent B (Executor)
 ┌─────────────────┐                  ┌─────────────────┐
 │     ChatGPT     │                  │   Claude Code   │
 │     Gemini      │                  │     Cursor      │
 │      Qwen       │                  │     VS Code     │
 └────────┬────────┘                  └────────┬────────┘
          │                                    │
          │ write_context()                    │ read_context()
          │ add_task()                         │ update_task()
          │ add_note()                         │ mark_task_complete()
          ▼                                    ▼
 ┌─────────────────────────────────────────────────────────┐
 │                  CortexFlow MCP Server                  │
 │                                                         │
 │   ┌─────────────────────────────────────────────────┐   │
 │   │              Shared Project Context             │   │
 │   │                                                 │   │
 │   │   • Project: "Todo API"                         │   │
 │   │   • Phase: execution                            │   │
 │   │   • Tasks: [Setup, Models, Routes, Tests]       │   │
 │   │   • Notes: "Use Express + TypeScript"           │   │
 │   │                                                 │   │
 │   └─────────────────────────────────────────────────┘   │
 │                                                         │
 │            Transport: stdio (MCP) | HTTP API            │
 └─────────────────────────────────────────────────────────┘
```
**Desktop apps**

| App | Platform | Config |
|---|---|---|
| Claude Desktop | macOS, Windows, Linux | claude_desktop_config.json |
| Cursor | macOS, Windows, Linux | Settings → MCP |
| VS Code + Continue | macOS, Windows, Linux | .continue/config.json |
| Antigravity | macOS, Windows, Linux | MCP settings |
| Zed | macOS, Linux | Settings |
| Jan | macOS, Windows, Linux | MCP settings |
| LM Studio | macOS, Windows, Linux | MCP settings |
| Msty | macOS, Windows, Linux | MCP settings |
**CLI agents**

| Agent | Transport | Config |
|---|---|---|
| Claude Code | stdio | ~/.claude/mcp.json |
| Gemini CLI | stdio | MCP config |
| Qwen CLI | stdio | MCP config |
| Aider | stdio | MCP config |
| Any MCP client | stdio | Generic config |
**Web and HTTP clients**

| App | Integration | Status |
|---|---|---|
| ChatGPT (Web/Desktop) | Custom GPT Actions | ✅ |
| Gemini (Web) | Function calling | ✅ |
| Typing Mind | Plugin/HTTP | ✅ |
| LibreChat | External tool | ✅ |
| Open WebUI | HTTP tools | ✅ |
| Any HTTP client | REST API | ✅ |
```sh
# Install globally
npm install -g cortexflow

# Or use directly with npx
npx cortexflow
```

From source:

```sh
git clone https://github.com/mithun50/CortexFlow
cd CortexFlow
npm install
npm run build
```

Add to `~/.claude/mcp.json`:
```json
{
  "mcpServers": {
    "cortexflow": {
      "command": "npx",
      "args": ["-y", "cortexflow"]
    }
  }
}
```

Or add to a project-level `.mcp.json` for project-specific config.
Add to the config file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/claude-desktop/config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "cortexflow": {
      "command": "npx",
      "args": ["-y", "cortexflow"]
    }
  }
}
```

For Cursor:

1. Open Settings → MCP Servers
2. Add a new server:
   - Name: `cortexflow`
   - Command: `npx -y cortexflow`
For VS Code + Continue, add to `.continue/config.json`:
```json
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "stdio",
          "command": "npx",
          "args": ["-y", "cortexflow"]
        }
      }
    ]
  }
}
```

For Antigravity, add to `~/.gemini/antigravity/mcp_config.json`:
```json
{
  "mcpServers": {
    "cortexflow": {
      "command": "npx",
      "args": ["-y", "cortexflow"]
    }
  }
}
```

Or access it via: Agent Options (...) → MCP Servers → Manage MCP Servers → View raw config.
For HTTP mode (remote):
```json
{
  "mcpServers": {
    "cortexflow": {
      "serverUrl": "http://localhost:3210"
    }
  }
}
```

Note: Antigravity uses `serverUrl` instead of `url` for HTTP-based MCP servers.
For ChatGPT:

1. Start the HTTP server: `cortexflow --http`
2. Create a Custom GPT with Actions using the OpenAPI spec at `http://localhost:3210/openapi.json`
For any MCP-compatible client, use stdio transport:
- Command: `npx`
- Args: `["-y", "cortexflow"]`
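Under the hood, every MCP tool invocation is a JSON-RPC 2.0 message on the chosen transport. As a rough sketch (the envelope comes from the MCP specification, not from CortexFlow itself), this is approximately what a client writes to the server's stdin to invoke `read_context`:

```typescript
// Sketch: the JSON-RPC 2.0 envelope an MCP client sends to invoke a tool.
// The "tools/call" method and params shape come from the MCP specification;
// buildToolCall is an illustrative helper, not part of any real client library.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown> = {}
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// One message, serialized and written to the server's stdin:
console.log(JSON.stringify(buildToolCall(1, "read_context")));
```

The server's reply travels back the same way over stdout, which is why any MCP-compatible client can drive CortexFlow with just the generic `npx -y cortexflow` command.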
**Context**

| Tool | Description |
|---|---|
| `read_context` | Read the active project: tasks, notes, phase, metadata |
| `write_context` | Create a new project with initial tasks |
**Tasks**

| Tool | Description |
|---|---|
| `add_task` | Add a new task to the project |
| `update_task` | Update task status or add notes |
| `mark_task_complete` | Mark a task as completed |
**Notes & phase**

| Tool | Description |
|---|---|
| `add_note` | Add a note for other AI agents |
| `set_phase` | Update the project phase (planning/execution/review/completed) |
**Dependencies**

| Tool | Description |
|---|---|
| `validate_task_dependencies` | Validate all task dependencies (detect circular dependencies, missing tasks) |
| `get_task_graph` | Get a visual representation of task dependencies and execution order |
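To see why dependency validation matters, consider task A depending on B while B depends on A: neither can ever start. A minimal sketch of such a cycle check over the tasks' `dependencies` arrays (illustrative only, not CortexFlow's actual implementation; `TaskLite` and `hasCycle` are made-up names):

```typescript
// Sketch: detect circular task dependencies with depth-first search.
// Tasks reference each other by id via their `dependencies` arrays.
interface TaskLite { id: string; dependencies: string[] }

function hasCycle(tasks: TaskLite[]): boolean {
  const deps = new Map(tasks.map(t => [t.id, t.dependencies]));
  // "visiting" = on the current DFS path, "done" = fully explored.
  const state = new Map<string, "visiting" | "done">();
  const visit = (id: string): boolean => {
    if (state.get(id) === "done") return false;
    if (state.get(id) === "visiting") return true; // back edge => cycle
    state.set(id, "visiting");
    for (const dep of deps.get(id) ?? []) if (visit(dep)) return true;
    state.set(id, "done");
    return false;
  };
  return tasks.some(t => visit(t.id));
}
```

Here `hasCycle` returns `true` for the A↔B example and `false` for any acyclic graph; per the table above, `validate_task_dependencies` also reports references to missing task ids.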
**Projects**

| Tool | Description |
|---|---|
| `list_projects` | List all projects |
| `set_active_project` | Switch the active project |
| `delete_project` | Delete a project |
**User to ChatGPT:** "Plan a REST API for todo management"

ChatGPT calls `write_context`:

```json
{
  "name": "Todo API",
  "description": "RESTful API with CRUD operations for todos",
  "phase": "planning",
  "tasks": [
    {"title": "Setup Express server", "description": "Initialize with TypeScript"},
    {"title": "Create Todo model", "description": "id, title, completed, createdAt"},
    {"title": "Implement CRUD routes", "description": "POST, GET, PUT, DELETE"},
    {"title": "Add input validation", "description": "Use Zod for validation"}
  ]
}
```

ChatGPT calls `add_note`:

```json
{
  "content": "Start with task 1-2 in parallel. Use in-memory storage for MVP.",
  "agent": "planner",
  "category": "decision"
}
```

ChatGPT calls `set_phase`:

```json
{"phase": "execution"}
```

**User to Claude Code:** "Continue the Todo API project"
Claude Code calls `read_context` and receives:

```
Project: Todo API
Phase: execution
Tasks: 0/4 completed, 4 pending

Tasks:
  [a1b2] PENDING: Setup Express server
  [c3d4] PENDING: Create Todo model
  [e5f6] PENDING: Implement CRUD routes
  [g7h8] PENDING: Add input validation

Recent Notes:
  [planner/decision] Start with task 1-2 in parallel. Use in-memory storage for MVP.
```
Claude Code understands the full context and starts implementation:

```jsonc
// update_task
{"task_id": "a1b2", "status": "in_progress"}
```

After completing:

```jsonc
// mark_task_complete
{"task_id": "a1b2", "note": "Express server with TypeScript, CORS, helmet configured"}
```

Any connected AI can call `read_context` to see the current state:
- Which tasks are done
- What notes were left
- Current project phase
- Full history of updates
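The progress summary an executor sees (e.g. `Tasks: 0/4 completed, 4 pending`) is derivable from the task list alone. A hypothetical helper illustrating the idea (`progressLine` is not a real CortexFlow API; the status values match the `Task` data model):

```typescript
// Sketch: derive the progress line shown in read_context output
// by counting task statuses.
type Status = "pending" | "in_progress" | "blocked" | "completed" | "cancelled";

function progressLine(tasks: { status: Status }[]): string {
  const done = tasks.filter(t => t.status === "completed").length;
  const pending = tasks.filter(t => t.status === "pending").length;
  return `Tasks: ${done}/${tasks.length} completed, ${pending} pending`;
}
```

For the four freshly created tasks above, this yields `Tasks: 0/4 completed, 4 pending`.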
For non-MCP clients, start the HTTP server:

```sh
cortexflow --http
```

Endpoints:

```
GET    /health                    Health check
GET    /openapi.json              OpenAPI spec (for ChatGPT Actions)

GET    /api/context               Read active project
PUT    /api/context               Update project metadata

GET    /api/projects              List all projects
POST   /api/projects              Create new project
GET    /api/projects/:id          Get specific project
DELETE /api/projects/:id          Delete project

GET    /api/tasks                 List tasks
POST   /api/tasks                 Add task
PUT    /api/tasks/:id             Update task
POST   /api/tasks/:id/complete    Complete task

GET    /api/notes                 List notes
POST   /api/notes                 Add note

POST   /api/active                Set active project
```
```sh
# Create project
curl -X POST http://localhost:3210/api/projects \
  -H "Content-Type: application/json" \
  -d '{"name":"My Project","description":"Building something"}'

# Read context
curl http://localhost:3210/api/context

# Add task
curl -X POST http://localhost:3210/api/tasks \
  -H "Content-Type: application/json" \
  -d '{"title":"First task","description":"Do the thing"}'

# Complete task
curl -X POST http://localhost:3210/api/tasks/abc123/complete
```

Projects are stored as JSON files:
```
~/.cortexflow/
└── data/
    ├── abc123.json   # Project file
    ├── def456.json   # Another project
    └── .active       # Active project ID
```
Configure the location with an environment variable:

```sh
export CORTEXFLOW_DATA_DIR=/custom/path
```

```typescript
interface ProjectContext {
  id: string;
  name: string;
  description: string;
  phase: "planning" | "execution" | "review" | "completed";
  version: number;
  createdAt: string;
  updatedAt: string;
  tasks: Task[];
  notes: AgentNote[];
  tags: string[];
}

interface Task {
  id: string;
  title: string;
  description: string;
  status: "pending" | "in_progress" | "blocked" | "completed" | "cancelled";
  priority: number; // 1-5
  assignedTo: "planner" | "executor" | "reviewer" | null;
  notes: string[];
  dependencies: string[];
}

interface AgentNote {
  id: string;
  agent: "planner" | "executor" | "reviewer";
  content: string;
  category: "general" | "decision" | "blocker" | "insight";
  timestamp: string;
}
```

Project layout:

```
cortexflow/
├── src/
│   ├── models.ts         # Data types and schemas
│   ├── storage.ts        # JSON file persistence
│   ├── server.ts         # MCP server (stdio)
│   ├── http-server.ts    # HTTP REST API
│   └── index.ts          # Entry point
├── config/
│   ├── claude-code/      # Claude Code config
│   ├── claude-desktop/   # Claude Desktop config
│   ├── cursor/           # Cursor config
│   ├── vscode/           # VS Code Continue config
│   └── generic-mcp.json
├── package.json
├── tsconfig.json
└── README.md
```
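For concreteness, here is a hypothetical project value shaped like the `ProjectContext`, `Task`, and `AgentNote` interfaces above (all field values are invented for illustration):

```typescript
// Hypothetical project object conforming to the ProjectContext data model.
const project = {
  id: "abc123",
  name: "Todo API",
  description: "RESTful API with CRUD operations for todos",
  phase: "execution",
  version: 1,
  createdAt: "2024-01-01T00:00:00Z",
  updatedAt: "2024-01-01T00:05:00Z",
  tags: ["api"],
  notes: [{
    id: "n1",
    agent: "planner",
    content: "Use in-memory storage for MVP.",
    category: "decision",
    timestamp: "2024-01-01T00:01:00Z",
  }],
  tasks: [{
    id: "a1b2",
    title: "Setup Express server",
    description: "Initialize with TypeScript",
    status: "pending",
    priority: 3,      // priority is 1-5 per the Task interface
    assignedTo: "executor",
    notes: [],
    dependencies: [], // ids of tasks this one depends on
  }],
};
```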
```sh
# MCP server (for Claude Code, Cursor, etc.)
cortexflow

# HTTP server (for ChatGPT, web clients)
cortexflow --http

# Both servers
cortexflow --both
```

| Variable | Default | Description |
|---|---|---|
| `CORTEXFLOW_PORT` | `3210` | HTTP server port |
| `CORTEXFLOW_DATA_DIR` | `~/.cortexflow/data` | Data directory |
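A sketch of how a server like this might resolve those settings at startup, with the documented defaults (`resolveConfig` is a hypothetical helper, not CortexFlow's actual code):

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Sketch: resolve configuration from the environment,
// falling back to the documented defaults.
function resolveConfig(env: Record<string, string | undefined> = process.env) {
  return {
    port: Number(env.CORTEXFLOW_PORT ?? 3210),
    dataDir: env.CORTEXFLOW_DATA_DIR ?? join(homedir(), ".cortexflow", "data"),
  };
}
```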
- The HTTP server binds to localhost only
- No authentication (designed for local use)
- For remote access, use a reverse proxy with auth
- Never expose it directly to the internet
- 📖 Full Documentation - Interactive docs website
- 📚 API Reference - MCP tools and HTTP endpoints
- 📘 Usage Guide - Workflows and best practices
- 🤝 Contributing - How to contribute
- 🔒 Security Policy - Reporting vulnerabilities
- 📜 Code of Conduct - Community guidelines
If CortexFlow helps your workflow, consider supporting:
Mithun Gowda B
- GitHub: @mithun50
- Email: mithungowda.b7411@gmail.com
MIT License - see LICENSE