
Docker Deployment


Purpose and Scope

This document describes the containerized deployment architecture for the FastAPI Best Architecture project using Docker Compose. It covers the orchestration of eight services, multi-stage Docker build process, networking configuration, volume management, and service dependencies. For local development setup without Docker, see Development Environment Setup. For information about the Celery services being orchestrated, see Background Processing.


Architecture Overview

The deployment uses Docker Compose to orchestrate eight containerized services that together form the complete application stack. The architecture separates concerns into distinct services: a FastAPI application server, three Celery services for background processing, and four data/infrastructure services.

Service Topology

Sources: docker-compose.yml:1-242


Service Definitions

fba_server

The main FastAPI application server runs using the Granian ASGI server. It exposes port 8001 internally and is accessed through the Nginx reverse proxy.

Configuration:

  • Image: fba_server:latest (built from Dockerfile)
  • Ports: 8001:8001
  • Container Name: fba_server
  • Restart Policy: always
  • Environment: Loaded from deploy/backend/docker-compose/.env.server

Volume Mounts:

  • ./deploy/backend/docker-compose/.env.server:/fba/backend/.env - Environment configuration
  • fba_static:/fba/backend/app/static - Application static files
  • fba_static_upload:/fba/backend/static/upload - User-uploaded files

Startup Command:

The server waits for PostgreSQL and Redis to be ready (up to 300 seconds) before starting supervisord, which manages the Granian process defined in deploy/backend/fba_server.conf:1-12.
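A sketch of what this startup command plausibly looks like in the compose file. The script name and flags follow the Python `wait-for-it` CLI and are an assumption, not the verbatim docker-compose.yml entry:

```yaml
# Hypothetical sketch of the fba_server startup command.
# wait-for-it blocks until each host:port accepts TCP connections
# or the timeout (-t, in seconds) expires.
command:
  - sh
  - -c
  - |
    wait-for-it -s fba_postgres:5432 -s fba_redis:6379 -t 300 &&
    supervisord -c /fba/deploy/backend/supervisord.conf
```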

Sources: docker-compose.yml:24-50, deploy/backend/fba_server.conf:1-12


fba_postgres

PostgreSQL 16 serves as the primary relational database for storing application data, user accounts, roles, permissions, and audit logs.

Configuration:

  • Image: postgres:16
  • Ports: ${DOCKER_POSTGRES_MAP_PORT:-5432}:5432 (defaults to 5432)
  • Container Name: fba_postgres
  • Environment Variables:
    • POSTGRES_DB=fba - Default database name
    • POSTGRES_PASSWORD=123456 - Root password (should be changed in production)
    • TZ=Asia/Shanghai - Timezone

Volume Mount:

  • fba_postgres:/var/lib/postgresql/data - Persistent database storage

Note: The compose file includes a commented MySQL 8.0.41 alternative configuration at docker-compose.yml:67-86 for users who prefer MySQL over PostgreSQL.

Sources: docker-compose.yml:52-65


fba_redis

Redis provides caching, session storage, JWT token management, distributed locks, and plugin status tracking.

Configuration:

  • Image: redis:latest
  • Ports: ${DOCKER_REDIS_MAP_PORT:-6379}:6379 (defaults to 6379)
  • Container Name: fba_redis
  • Environment Variables:
    • TZ=Asia/Shanghai - Timezone

Volume Mount:

  • fba_redis:/data - Persistent cache storage

For detailed information on Redis usage patterns, see Caching Strategy.

Sources: docker-compose.yml:88-99


fba_nginx

Nginx serves as the reverse proxy, routing traffic to the FastAPI server and Flower monitoring interface. It also serves static files directly for performance.

Configuration:

  • Image: nginx:latest
  • Ports: 8000:80 - Main access point for the application
  • Container Name: fba_nginx
  • Dependencies: fba_server

Volume Mounts:

  • ./deploy/backend/nginx.conf:/etc/nginx/conf.d/default.conf:ro - Read-only Nginx configuration
  • fba_static:/www/fba_server/backend/static - Static files
  • fba_static_upload:/www/fba_server/backend/static/upload - Upload directory

Routing Configuration:

| Location | Upstream | Purpose |
| --- | --- | --- |
| / | http://fba_server:8001 | Main API endpoints |
| /flower/ | http://fba_celery_flower:8555 | Celery monitoring UI with WebSocket support |
| /static | /var/www/fba_server/backend/static | Static file serving |
| /static/upload | /var/www/fba_server/backend/static/upload | User uploads |

Request Settings:

  • client_max_body_size: 5M - Maximum upload size
  • keepalive_timeout: 300 - Long-lived connections for WebSocket support
  • Gzip compression enabled for text and JSON responses
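Based on the routing table and settings above, the `/flower/` location block plausibly carries WebSocket upgrade headers along these lines. This is a sketch, not the verbatim deploy/backend/nginx.conf:

```nginx
# Sketch of the /flower/ proxy location with WebSocket support.
location /flower/ {
    proxy_pass http://fba_celery_flower:8555;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade handshake
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 300;                   # matches keepalive_timeout 300
}
```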

Sources: docker-compose.yml:104-117, deploy/backend/nginx.conf:1-58


fba_rabbitmq

RabbitMQ serves as the message broker for Celery distributed task processing, replacing Redis for production deployments.

Configuration:

  • Image: rabbitmq:3.13.7
  • Hostname: fba_rabbitmq
  • Ports:
    • 5672:5672 - AMQP protocol
    • 15672:15672 - Management UI
  • Container Name: fba_rabbitmq
  • Environment Variables:
    • RABBITMQ_DEFAULT_USER=guest
    • RABBITMQ_DEFAULT_PASS=guest

Volume Mount:

  • fba_rabbitmq:/var/lib/rabbitmq - Message queue persistence

Sources: docker-compose.yml:153-167, backend/app/task/README.md:21-27


fba_celery_worker

Celery worker processes asynchronous tasks using a gevent pool for high I/O concurrency.

Configuration:

  • Image: fba_celery_worker:latest (built with SERVER_TYPE=fba_celery_worker)
  • Container Name: fba_celery_worker (must be removed for distributed worker deployment)
  • Dependencies: fba_rabbitmq
  • Environment: Loaded from deploy/backend/docker-compose/.env.server

Worker Configuration: The supervisord configuration at deploy/backend/fba_celery_worker.conf:3 runs the Celery worker command.

Parameters:

  • -P gevent - Use gevent pool for async I/O
  • -c 1000 - 1000 concurrent greenlets
  • --loglevel=INFO - Logging verbosity
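The parameters above can be assembled into a program section along these lines. This is a sketch; the Celery application path `app.task.celery` is an assumption:

```ini
; Sketch of deploy/backend/fba_celery_worker.conf; the -A module path is assumed.
[program:fba_celery_worker]
directory=/fba/backend
command=celery -A app.task.celery worker -P gevent -c 1000 --loglevel=INFO
autostart=true
autorestart=true
stdout_logfile=/var/log/fba/fba_celery_worker.log
```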

Startup: Waits for RabbitMQ to be ready on port 5672 before starting.

Sources: docker-compose.yml:169-191, deploy/backend/fba_celery_worker.conf:1-12


fba_celery_beat

Celery Beat scheduler manages periodic tasks based on crontab expressions.

Configuration:

  • Image: fba_celery_beat:latest (built with SERVER_TYPE=fba_celery_beat)
  • Container Name: fba_celery_beat
  • Dependencies: fba_rabbitmq, fba_celery_worker
  • Environment: Loaded from deploy/backend/docker-compose/.env.server

Beat Configuration: The supervisord configuration at deploy/backend/fba_celery_beat.conf:3 runs the Celery Beat scheduler.
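A plausible shape for this program section. This is a sketch; the Celery application path `app.task.celery` is an assumption:

```ini
; Sketch of deploy/backend/fba_celery_beat.conf; the -A module path is assumed.
[program:fba_celery_beat]
directory=/fba/backend
command=celery -A app.task.celery beat --loglevel=INFO
autostart=true
autorestart=true
stdout_logfile=/var/log/fba/fba_celery_beat.log
```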

Beat schedules tasks defined in backend/app/task/tasks/beat.py and submits them to the worker queue. Redis distributed locks prevent duplicate task execution in multi-instance deployments.

Sources: docker-compose.yml:193-215, deploy/backend/fba_celery_beat.conf:1-12, backend/app/task/README.md:6-8


fba_celery_flower

Flower provides real-time web-based monitoring of Celery workers and tasks.

Configuration:

  • Image: fba_celery_flower:latest (built with SERVER_TYPE=fba_celery_flower)
  • Ports: 8555:8555
  • Container Name: fba_celery_flower
  • Dependencies: fba_rabbitmq, fba_celery_worker
  • Environment: Loaded from deploy/backend/docker-compose/.env.server

Flower Configuration: The supervisord configuration at deploy/backend/fba_celery_flower.conf:3 runs the Flower command.

Parameters:

  • --port=8555 - Web UI port
  • --url-prefix=flower - URL prefix for reverse proxy compatibility
  • --basic-auth=admin:123456 - HTTP basic authentication (should be changed in production)
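Assembled from the parameters above, the program section likely resembles the following. This is a sketch; the Celery application path `app.task.celery` is an assumption:

```ini
; Sketch of deploy/backend/fba_celery_flower.conf; the -A module path is assumed.
[program:fba_celery_flower]
directory=/fba/backend
command=celery -A app.task.celery flower --port=8555 --url-prefix=flower --basic-auth=admin:123456
autostart=true
autorestart=true
stdout_logfile=/var/log/fba/fba_celery_flower.log
```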

Access Flower at http://localhost:8000/flower/ through the Nginx reverse proxy.

Sources: docker-compose.yml:217-241, deploy/backend/fba_celery_flower.conf:1-12, deploy/backend/nginx.conf:33-49


Docker Network Configuration

All services communicate through a custom Docker bridge network with a dedicated subnet.

Network Definition:
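A compose-file sketch consistent with the subnet described below. The network name `fba_network` is an assumption; the subnet value is taken from this document:

```yaml
# Sketch of the custom bridge network definition.
networks:
  fba_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.10.10.0/24
```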

This isolated network:

  • Enables service discovery via container names (e.g., fba_postgres, fba_redis)
  • Provides network isolation from other Docker networks
  • Uses the 172.10.10.0/24 subnet for IP address allocation

Services reference each other by container name in connection strings:

  • Database: postgresql+asyncpg://user:pass@fba_postgres:5432/fba
  • Redis: redis://fba_redis:6379/0
  • RabbitMQ: amqp://guest:guest@fba_rabbitmq:5672//

Sources: docker-compose.yml:1-8


Volume Management

Five named volumes provide persistent storage across container restarts and recreations.

Volume Definitions

| Volume Name | Purpose | Mount Point |
| --- | --- | --- |
| fba_postgres | PostgreSQL database files | /var/lib/postgresql/data |
| fba_redis | Redis persistence | /data |
| fba_static | Application static files | /fba/backend/app/static |
| fba_static_upload | User-uploaded files | /fba/backend/static/upload |
| fba_rabbitmq | RabbitMQ message persistence | /var/lib/rabbitmq |

Volume Lifecycle:

  • Created automatically on first docker-compose up
  • Persist data between container stops and starts
  • Survive docker-compose down (unless using -v flag)
  • Shared between services where needed (static volumes mounted to both server and nginx)

Database Switch Note: The compose file comments at docker-compose.yml:11, docker-compose.yml:33, and docker-compose.yml:43 indicate that MySQL users should rename fba_postgres to fba_mysql throughout the configuration.

Sources: docker-compose.yml:10-21


Multi-Stage Dockerfile Build

The Dockerfile implements a multi-stage build strategy to optimize image size and reuse common layers across four service images.

Build Stages

Stage 1: builder

Uses ghcr.io/astral-sh/uv:python3.10-bookworm-slim for fast dependency installation.

Process:

  1. Install system build dependencies (gcc, python3-dev) - Dockerfile:8-11
  2. Copy project files to /fba - Dockerfile:13
  3. Configure uv environment variables - Dockerfile:18-21:
    • UV_COMPILE_BYTECODE=1 - Pre-compile Python bytecode
    • UV_NO_CACHE=1 - Disable caching in image
    • UV_LINK_MODE=copy - Copy packages instead of symlinking
    • UV_PROJECT_ENVIRONMENT=/usr/local - Install to system location
  4. Install dependencies with layer caching - Dockerfile:24-25
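Steps 1-4 above might be expressed as follows. This is a sketch; the exact instructions in the project's Dockerfile may differ, and the `uv sync` flags are an assumption:

```dockerfile
# Sketch of the builder stage; exact instructions may differ.
FROM ghcr.io/astral-sh/uv:python3.10-bookworm-slim AS builder

RUN apt-get update && apt-get install -y --no-install-recommends gcc python3-dev

COPY . /fba
WORKDIR /fba

ENV UV_COMPILE_BYTECODE=1 \
    UV_NO_CACHE=1 \
    UV_LINK_MODE=copy \
    UV_PROJECT_ENVIRONMENT=/usr/local

# Installs dependencies into /usr/local (assumed uv sync invocation)
RUN uv sync --frozen --no-dev
```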

Stage 2: base_server

Creates a common runtime base for all service images using python:3.10-slim.

Process:

  1. Install supervisor for process management - Dockerfile:30-33
  2. Copy application code from builder - Dockerfile:35
  3. Copy installed dependencies from builder - Dockerfile:37
  4. Copy supervisord base configuration - Dockerfile:39
  5. Set working directory to /fba/backend - Dockerfile:41

Stages 3-6: Service-Specific Images

Each service extends base_server with its specific supervisord configuration:

| Image | Supervisord Config | Exposed Port |
| --- | --- | --- |
| fba_server | fba_server.conf | 8001 |
| fba_celery_worker | fba_celery_worker.conf | None |
| fba_celery_beat | fba_celery_beat.conf | None |
| fba_celery_flower | fba_celery_flower.conf | 8555 |

Each service image copies its dedicated supervisord program configuration into the container and starts supervisord as the entrypoint.

Build Target Selection

The final stage at Dockerfile:84 selects which service image to build.

The SERVER_TYPE build argument (default: fba_server) determines the output image. Docker Compose overrides this for Celery services using args: SERVER_TYPE=fba_celery_worker - see docker-compose.yml:174, docker-compose.yml:198, and docker-compose.yml:222.
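The build-target pattern described above can be sketched as follows. Note that an `ARG` used in a `FROM` line must be declared before that `FROM`:

```dockerfile
# Sketch: SERVER_TYPE selects which named stage becomes the final image.
ARG SERVER_TYPE=fba_server
FROM ${SERVER_TYPE}
```

Docker Compose then overrides the argument per service via `build.args`, producing four different images from one Dockerfile.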

Sources: Dockerfile:1-85


Supervisord Process Management

Each container uses supervisord to manage application processes, providing automatic restarts and log rotation.

Configuration Structure

Base Configuration: deploy/backend/supervisord.conf contains the supervisord daemon configuration (referenced at Dockerfile:39).

Service Configurations: Each service has a dedicated program configuration:

fba_server Process

The program is defined at deploy/backend/fba_server.conf:1-12.

Granian Parameters:

  • --interface asgi - ASGI server mode
  • --host 0.0.0.0 --port 8001 - Listen on all interfaces
  • --workers 1 - Single worker (scale via container replication)
  • --backlog 1024 - Socket backlog size
  • --workers-kill-timeout 120 - Graceful shutdown timeout
  • --backpressure 2000 - Max concurrent connections
  • --log-level debug - Verbose logging

Common Supervisord Settings:

  • autostart=true - Start on supervisord startup
  • autorestart=true - Restart on failure
  • startretries=5 - Retry up to 5 times
  • stdout_logfile=/var/log/fba/fba_server.log - Log output
  • stdout_logfile_maxbytes=5MB - Log rotation size
  • stdout_logfile_backups=5 - Keep 5 backup logs
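Combining the Granian parameters and common settings above, the program section plausibly reads as follows. This is a sketch; the ASGI application path `main:app` is an assumption:

```ini
; Sketch of deploy/backend/fba_server.conf; the ASGI app path is assumed.
[program:fba_server]
directory=/fba/backend
command=granian --interface asgi --host 0.0.0.0 --port 8001 --workers 1 --backlog 1024 --workers-kill-timeout 120 --backpressure 2000 --log-level debug main:app
autostart=true
autorestart=true
startretries=5
stdout_logfile=/var/log/fba/fba_server.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=5
```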

Celery Worker Process

The program is defined at deploy/backend/fba_celery_worker.conf:1-12.

Uses the same supervisord settings with logs written to /var/log/fba/fba_celery_worker.log.

Celery Beat Process

The program is defined at deploy/backend/fba_celery_beat.conf:1-12.

Uses the same supervisord settings with logs written to /var/log/fba/fba_celery_beat.log.

Celery Flower Process

The program is defined at deploy/backend/fba_celery_flower.conf:1-12.

Uses the same supervisord settings with logs written to /var/log/fba/fba_celery_flower.log.

Sources: deploy/backend/fba_server.conf:1-12, deploy/backend/fba_celery_worker.conf:1-12, deploy/backend/fba_celery_beat.conf:1-12, deploy/backend/fba_celery_flower.conf:1-12


Service Startup Orchestration

Docker Compose manages service startup order and health checking through depends_on and wait-for-it scripts.

Dependency Graph

Wait-for-it Script

Services use the wait-for-it script to verify that dependent services are accepting connections before starting their processes.

fba_server Wait:

  • Waits for PostgreSQL on port 5432
  • Waits for Redis on port 6379
  • 300-second timeout

Celery Services Wait:

  • Waits for RabbitMQ on port 5672
  • 300-second timeout

After successful connection checks, each service starts supervisord as its foreground process.

Startup Sequence

  1. Infrastructure Layer (parallel):

    • fba_postgres starts and initializes database
    • fba_redis starts and loads persistence
    • fba_rabbitmq starts and restores queue state
  2. Application Layer:

    • fba_server waits for postgres+redis, then starts FastAPI server
    • fba_celery_worker waits for rabbitmq, then starts worker pool
  3. Scheduling Layer:

    • fba_celery_beat waits for rabbitmq+worker, then starts scheduler
  4. Monitoring Layer:

    • fba_celery_flower waits for rabbitmq+worker, then starts UI
    • fba_nginx waits for server, then starts reverse proxy

Sources: docker-compose.yml:44-50, docker-compose.yml:185-191, docker-compose.yml:209-215, docker-compose.yml:235-241


Deployment Commands

Initial Deployment

Build and start all services:

The --build flag forces image rebuilding, and -d runs containers in detached mode.
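With those flags, the command is typically:

```shell
docker-compose up -d --build
```

Newer Docker installations ship Compose as a plugin, so `docker compose up -d --build` (no hyphen) may be the appropriate form.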

Viewing Logs

View logs for all services:

View logs for a specific service:
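Both commands follow the standard Compose CLI; the service name shown is one example:

```shell
# Follow logs for every service
docker-compose logs -f

# Follow logs for a single service, e.g. the API server
docker-compose logs -f fba_server
```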

Service Management

Stop all services:

Stop and remove containers (preserves volumes):

Stop and remove containers and volumes (destroys data):

Restart a specific service:
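The management commands above, using the standard Compose CLI:

```shell
docker-compose stop                  # stop all services
docker-compose down                  # stop and remove containers (volumes kept)
docker-compose down -v               # also remove volumes (destroys data!)
docker-compose restart fba_server    # restart one service
```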

Scaling Workers

Scale Celery workers for distributed processing:

  1. Remove container_name: fba_celery_worker from docker-compose.yml:177
  2. Run:

This starts three worker containers that share the RabbitMQ queue.
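The scale-out run described above uses the Compose `--scale` flag:

```shell
# Start (or scale to) three fba_celery_worker containers
docker-compose up -d --scale fba_celery_worker=3
```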

Sources: docker-compose.yml:176-177


Environment Configuration

Services load configuration from environment files mounted into containers.

Environment File Location

All application services mount:
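The mount, as described for fba_server earlier in this document (and analogous for the Celery services):

```yaml
volumes:
  - ./deploy/backend/docker-compose/.env.server:/fba/backend/.env
```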

This file must contain all required environment variables documented in Configuration System.

Critical Settings

Database Connection:

DB_HOST=fba_postgres
DB_PORT=5432
DB_USER=your_user
DB_PASSWORD=your_password
DB_DATABASE=fba

Redis Connection:

REDIS_HOST=fba_redis
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DATABASE=0

RabbitMQ Connection:

CELERY_BROKER=rabbitmq
RABBITMQ_HOST=fba_rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_USERNAME=guest
RABBITMQ_PASSWORD=guest

Security:

JWT_SECRET_KEY=your-secret-key-change-this
JWT_ALGORITHM=HS256

Note: Container names (fba_postgres, fba_redis, fba_rabbitmq) are used as hostnames due to Docker network service discovery.

Sources: docker-compose.yml:38, docker-compose.yml:182, docker-compose.yml:206, docker-compose.yml:232


Optional Frontend Deployment

The compose file includes a commented configuration for deploying the web UI alongside the backend at docker-compose.yml:119-151.

fba_ui Service

Notes:

  • Requires the frontend repository at the specified context path
  • Conflicts with fba_nginx - choose one or the other
  • Resource intensive - requires 4GB+ RAM and 4+ CPU cores for building
  • Consider building locally and pushing to a registry for production

SSL Certificate Mounting: The configuration includes volume mounts for SSL certificates to enable HTTPS:
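A plausible shape for these mounts. Both the host paths and the in-container destinations are placeholders, not the verbatim compose entries:

```yaml
# Sketch only; replace the host paths with real certificate
# locations on your server.
volumes:
  - /path/to/fullchain.pem:/etc/nginx/ssl/fullchain.pem:ro
  - /path/to/privkey.pem:/etc/nginx/ssl/privkey.pem:ro
```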

Replace placeholders with actual certificate paths on your server.

Sources: docker-compose.yml:119-151


Production Considerations

Security Hardening

  1. Change Default Passwords: update POSTGRES_PASSWORD for PostgreSQL, RABBITMQ_DEFAULT_PASS for RabbitMQ, and the Flower --basic-auth credentials before exposing the stack.

  2. Use Secrets Management:

    • Store credentials in Docker secrets or external secret managers
    • Avoid committing .env.server to version control
  3. Restrict Network Access:

    • Remove port mappings for PostgreSQL, Redis, and RabbitMQ if external access isn't needed
    • Configure firewall rules to limit access to port 8000

Resource Limits

Add resource constraints to services:
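For example, with illustrative limits (the `deploy.resources` key is honored by Swarm and by recent Compose versions running locally):

```yaml
services:
  fba_server:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
```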

Health Checks

Add health checks for automatic restart:
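An illustrative healthcheck for the database service, using PostgreSQL's `pg_isready` utility (user and database names shown are assumptions):

```yaml
services:
  fba_postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d fba"]
      interval: 10s
      timeout: 5s
      retries: 5
```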

Log Management

Configure log drivers for centralized logging:
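For example, capping the default `json-file` driver so container logs rotate instead of growing unbounded:

```yaml
services:
  fba_server:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```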

Backup Strategy

  1. Database Backups:

  2. Volume Backups:

  3. Automated Backups: Schedule backups using cron or add a backup service to docker-compose.yml
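Sketches of the database and volume backups above. The database user and output paths are illustrative:

```shell
# 1. Dump the fba database from the running postgres container
docker exec fba_postgres pg_dump -U postgres fba > fba_backup.sql

# 2. Archive a named volume via a throwaway container
docker run --rm -v fba_postgres:/data -v "$(pwd)":/backup alpine \
    tar czf /backup/fba_postgres_volume.tar.gz -C /data .
```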

Sources: Multiple files across deployment configuration