A Python Deep Agent framework built on top of Pydantic-AI, designed to help you quickly build production-grade autonomous agents with planning, filesystem operations, subagent delegation, and skills.

pydantic-deep

Looking for a full-stack template? Check out fastapi-fullstack - a production-ready project generator for AI/LLM applications with FastAPI, Next.js, and pydantic-deep integration.

Need just the todo toolset? Check out pydantic-ai-todo - standalone task planning toolset that works with any pydantic-ai agent.

Need just the backends? Check out pydantic-ai-backend - file storage and sandbox backends that work with any pydantic-ai agent.

Requires Python 3.10+ · MIT License

Deep agent framework built on pydantic-ai with planning, filesystem, and subagent capabilities.

Demo

See the full demo application - a complete example showing how to build a chat interface with file uploads, skills, and streaming responses.

Features

  • Multiple Backends: StateBackend (in-memory), FilesystemBackend, DockerSandbox, CompositeBackend - via pydantic-ai-backend
  • Rich Toolsets: TodoToolset (via pydantic-ai-todo), FilesystemToolset, SubAgentToolset, SkillsToolset
  • File Uploads: Upload files for agent processing with run_with_files() or deps.upload_file()
  • Skills System: Extensible skill definitions with markdown prompts
  • Structured Output: Type-safe responses with Pydantic models via output_type
  • Context Management: Automatic conversation summarization for long sessions
  • Human-in-the-Loop: Built-in support for human confirmation workflows
  • Streaming: Full streaming support for agent responses (see the sketch after this list)
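
Streaming is not demonstrated in the sections below. Since create_deep_agent builds on pydantic-ai, a minimal sketch might look like the following; it assumes the returned agent exposes pydantic-ai's standard run_stream() interface, and the deps wiring mirrors the Quick Start:

import asyncio
from pydantic_ai_backends import StateBackend
from pydantic_deep import create_deep_agent, create_default_deps

async def main():
    agent = create_deep_agent()
    deps = create_default_deps(StateBackend())

    # Assumption: the agent returned by create_deep_agent exposes
    # pydantic-ai's standard run_stream() streaming interface.
    async with agent.run_stream("Summarize my open todos", deps=deps) as response:
        async for chunk in response.stream_text(delta=True):
            print(chunk, end="", flush=True)

asyncio.run(main())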

Modular Architecture

pydantic-deep is built with modular, reusable components:

Component       Package               Description
Backends        pydantic-ai-backend   File storage and Docker sandbox
Todo Toolset    pydantic-ai-todo      Task planning and tracking
Summarization   Built-in              Automatic context management*

*Note: Summarization will be added to pydantic-ai core in late January 2025 (pydantic-ai#3780). We will migrate to use it once available.
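
Because the components ship as separate packages, they can also be used with a plain pydantic-ai agent. A minimal sketch using the todo toolset standalone, assuming pydantic-ai-todo exposes a TodoToolset class importable from pydantic_ai_todo (the import path, constructor defaults, and model name are assumptions; check that package's docs):

from pydantic_ai import Agent
from pydantic_ai_todo import TodoToolset  # import path is an assumption

# Attach the standalone todo toolset to an ordinary pydantic-ai agent.
agent = Agent(
    "openai:gpt-4o",           # any pydantic-ai model identifier
    toolsets=[TodoToolset()],  # constructor defaults are an assumption
)

result = agent.run_sync("Plan the steps to ship the next release")
print(result.output)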

Installation

pip install pydantic-deep

Or with uv:

uv add pydantic-deep

Optional dependencies

# Docker sandbox support
pip install pydantic-deep[sandbox]
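
With the sandbox extra installed, the Docker-backed backend from pydantic-ai-backend can be swapped in for the in-memory StateBackend used in the Quick Start below. A minimal sketch, assuming DockerSandbox can be constructed with default arguments (its actual options are not documented here; check pydantic-ai-backend):

import asyncio
from pydantic_ai_backends import DockerSandbox  # requires pydantic-deep[sandbox]
from pydantic_deep import create_deep_agent, create_default_deps

async def main():
    backend = DockerSandbox()  # constructor arguments are an assumption
    deps = create_default_deps(backend)
    agent = create_deep_agent()

    result = await agent.run("List the files available in the sandbox", deps=deps)
    print(result.output)

asyncio.run(main())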

Quick Start

import asyncio
from pydantic_ai_backends import StateBackend
from pydantic_deep import create_deep_agent, create_default_deps

async def main():
    # Create a deep agent with state backend
    backend = StateBackend()
    deps = create_default_deps(backend)
    agent = create_deep_agent()

    # Run the agent
    result = await agent.run("Help me organize my tasks", deps=deps)
    print(result.output)

asyncio.run(main())

Structured Output

Get type-safe responses with Pydantic models:

from pydantic import BaseModel
from pydantic_deep import create_deep_agent, create_default_deps

class TaskAnalysis(BaseModel):
    summary: str
    priority: str
    estimated_hours: float

agent = create_deep_agent(output_type=TaskAnalysis)
deps = create_default_deps()

result = await agent.run("Analyze this task: implement user auth", deps=deps)
print(result.output.priority)  # Type-safe access

File Uploads

Process user-uploaded files with the agent:

from pydantic_ai_backends import StateBackend
from pydantic_deep import create_deep_agent, DeepAgentDeps, run_with_files

agent = create_deep_agent()
deps = DeepAgentDeps(backend=StateBackend())

# Upload and process files
with open("sales.csv", "rb") as f:
    result = await run_with_files(
        agent,
        "Analyze this sales data and find top products",
        deps,
        files=[("sales.csv", f.read())],
    )

Or upload files directly to deps:

deps.upload_file("config.json", b'{"key": "value"}')
# File is now at /uploads/config.json and agent sees it in system prompt

Context Management

Automatically summarize long conversations to manage token limits:

from pydantic_deep import create_deep_agent
from pydantic_deep.processors import create_summarization_processor

processor = create_summarization_processor(
    trigger=("tokens", 100000),  # Summarize when reaching 100k tokens
    keep=("messages", 20),       # Keep last 20 messages
)

agent = create_deep_agent(history_processors=[processor])

Note: This feature will be added to pydantic-ai core in late January 2025 (pydantic-ai#3780). Once available, we will migrate to use the upstream implementation.

Documentation

Quick Links

Related Projects

Development

# Clone the repository
git clone https://github.com/vstorm-co/pydantic-deepagents.git
cd pydantic-deepagents

# Install dependencies
make install

# Run tests
make test

# Run all checks (lint, typecheck, test, coverage)
make all

Star History

Star History Chart

License

MIT License - see LICENSE for details.