This document describes the containerized deployment architecture for the FastAPI Best Architecture project using Docker Compose. It covers the orchestration of eight services, multi-stage Docker build process, networking configuration, volume management, and service dependencies. For local development setup without Docker, see Development Environment Setup. For information about the Celery services being orchestrated, see Background Processing.
The deployment uses Docker Compose to orchestrate eight containerized services that together form the complete application stack. The architecture separates concerns into distinct services: a FastAPI application server, three Celery services for background processing, and four data/infrastructure services.
Sources: docker-compose.yml1-242
The main FastAPI application server runs using the Granian ASGI server. It exposes port 8001 internally and is accessed through the Nginx reverse proxy.
Configuration:
- **Image:** `fba_server:latest` (built from `Dockerfile`)
- **Ports:** `8001:8001`
- **Container name:** `fba_server`
- **Restart policy:** `always`
- **Env file:** `deploy/backend/docker-compose/.env.server`

Volume Mounts:
- `./deploy/backend/docker-compose/.env.server:/fba/backend/.env` - Environment configuration
- `fba_static:/fba/backend/app/static` - Application static files
- `fba_static_upload:/fba/backend/static/upload` - User-uploaded files

Startup Command:
The server waits for PostgreSQL and Redis to be ready (up to 300 seconds) before starting supervisord, which manages the Granian process defined in deploy/backend/fba_server.conf1-12
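A minimal sketch of that startup sequence, assuming the canonical `wait-for-it.sh` interface; the script location and supervisord config path are assumptions, not taken from the repository:

```shell
# Illustrative only: block until each dependency accepts connections
# (up to 300 s each), then hand off to supervisord.
wait-for-it.sh fba_postgres:5432 -t 300 -- \
  wait-for-it.sh fba_redis:6379 -t 300 -- \
  supervisord -c /etc/supervisor/supervisord.conf
```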
Sources: docker-compose.yml24-50 deploy/backend/fba_server.conf1-12
PostgreSQL 16 serves as the primary relational database for storing application data, user accounts, roles, permissions, and audit logs.
Configuration:
- **Image:** `postgres:16`
- **Ports:** `${DOCKER_POSTGRES_MAP_PORT:-5432}:5432` (defaults to 5432)
- **Container name:** `fba_postgres`
- **Environment:**
  - `POSTGRES_DB=fba` - Default database name
  - `POSTGRES_PASSWORD=123456` - Root password (should be changed in production)
  - `TZ=Asia/Shanghai` - Timezone

Volume Mount:
- `fba_postgres:/var/lib/postgresql/data` - Persistent database storage

Note: The compose file includes a commented MySQL 8.0.41 alternative configuration at docker-compose.yml67-86 for users who prefer MySQL over PostgreSQL.
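Assembled from the settings above, the service definition looks roughly like this; the network key name `fba_network` is an assumption, not taken from the file:

```yaml
# Sketch of the fba_postgres service (not the literal compose file)
fba_postgres:
  image: postgres:16
  container_name: fba_postgres
  ports:
    - ${DOCKER_POSTGRES_MAP_PORT:-5432}:5432
  environment:
    POSTGRES_DB: fba
    POSTGRES_PASSWORD: '123456'  # change in production
    TZ: Asia/Shanghai
  volumes:
    - fba_postgres:/var/lib/postgresql/data
  networks:
    - fba_network  # assumed network key name
```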
Sources: docker-compose.yml52-65
Redis provides caching, session storage, JWT token management, distributed locks, and plugin status tracking.
Configuration:
- **Image:** `redis:latest`
- **Ports:** `${DOCKER_REDIS_MAP_PORT:-6379}:6379` (defaults to 6379)
- **Container name:** `fba_redis`
- **Environment:** `TZ=Asia/Shanghai` - Timezone

Volume Mount:
- `fba_redis:/data` - Persistent cache storage

For detailed information on Redis usage patterns, see Caching Strategy.
Sources: docker-compose.yml88-99
Nginx serves as the reverse proxy, routing traffic to the FastAPI server and Flower monitoring interface. It also serves static files directly for performance.
Configuration:
- **Image:** `nginx:latest`
- **Ports:** `8000:80` - Main access point for the application
- **Container name:** `fba_nginx`
- **Depends on:** `fba_server`

Volume Mounts:
- `./deploy/backend/nginx.conf:/etc/nginx/conf.d/default.conf:ro` - Read-only Nginx configuration
- `fba_static:/www/fba_server/backend/static` - Static files
- `fba_static_upload:/www/fba_server/backend/static/upload` - Upload directory

Routing Configuration:
| Location | Upstream | Purpose |
|---|---|---|
| `/` | `http://fba_server:8001` | Main API endpoints |
| `/flower/` | `http://fba_celery_flower:8555` | Celery monitoring UI with WebSocket support |
| `/static` | `/var/www/fba_server/backend/static` | Static file serving |
| `/static/upload` | `/var/www/fba_server/backend/static/upload` | User uploads |
Request Settings:
- `client_max_body_size: 5M` - Maximum upload size
- `keepalive_timeout: 300` - Long-lived connections for WebSocket support

Sources: docker-compose.yml104-117 deploy/backend/nginx.conf1-58
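A condensed sketch of what the routing table above implies for the Nginx server block; this is an approximation, not the literal `nginx.conf`, and the static paths follow the table:

```nginx
# Illustrative server block matching the routing table above
server {
    listen 80;
    client_max_body_size 5M;
    keepalive_timeout 300;

    location / {
        proxy_pass http://fba_server:8001;
    }

    location /flower/ {
        proxy_pass http://fba_celery_flower:8555;
        # WebSocket upgrade headers for the Flower UI
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /static {
        alias /var/www/fba_server/backend/static;
    }
}
```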
RabbitMQ serves as the message broker for Celery distributed task processing, replacing Redis for production deployments.
Configuration:
- **Image:** `rabbitmq:3.13.7`
- **Hostname:** `fba_rabbitmq`
- **Ports:**
  - `5672:5672` - AMQP protocol
  - `15672:15672` - Management UI
- **Container name:** `fba_rabbitmq`
- **Environment:**
  - `RABBITMQ_DEFAULT_USER=guest`
  - `RABBITMQ_DEFAULT_PASS=guest`

Volume Mount:
- `fba_rabbitmq:/var/lib/rabbitmq` - Message queue persistence

Sources: docker-compose.yml153-167 backend/app/task/README.md21-27
Celery worker processes asynchronous tasks using a gevent pool for high I/O concurrency.
Configuration:
- **Image:** `fba_celery_worker:latest` (built with `SERVER_TYPE=fba_celery_worker`)
- **Container name:** `fba_celery_worker` (must be removed for distributed worker deployment)
- **Depends on:** `fba_rabbitmq`
- **Env file:** `deploy/backend/docker-compose/.env.server`

Worker Configuration: The supervisord configuration at deploy/backend/fba_celery_worker.conf3 runs:
Parameters:
- `-P gevent` - Use gevent pool for async I/O
- `-c 1000` - 1000 concurrent greenlets
- `--loglevel=INFO` - Logging verbosity

Startup: Waits for RabbitMQ to be ready on port 5672 before starting.
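Combining those parameters, the worker invocation is approximately the following; the Celery app module path (`app.task.celery`) is an assumption, not confirmed by this page:

```shell
# Approximate worker command (app module path is assumed)
celery -A app.task.celery worker -P gevent -c 1000 --loglevel=INFO
```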
Sources: docker-compose.yml169-191 deploy/backend/fba_celery_worker.conf1-12
Celery Beat scheduler manages periodic tasks based on crontab expressions.
Configuration:
- **Image:** `fba_celery_beat:latest` (built with `SERVER_TYPE=fba_celery_beat`)
- **Container name:** `fba_celery_beat`
- **Depends on:** `fba_rabbitmq`, `fba_celery_worker`
- **Env file:** `deploy/backend/docker-compose/.env.server`

Beat Configuration: The supervisord configuration at deploy/backend/fba_celery_beat.conf3 runs:
Beat schedules tasks defined in backend/app/task/tasks/beat.py and submits them to the worker queue. Redis distributed locks prevent duplicate task execution in multi-instance deployments.
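The beat invocation is approximately the following; the Celery app module path is an assumption:

```shell
# Approximate beat scheduler command (app module path is assumed)
celery -A app.task.celery beat --loglevel=INFO
```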
Sources: docker-compose.yml193-215 deploy/backend/fba_celery_beat.conf1-12 backend/app/task/README.md6-8
Flower provides real-time web-based monitoring of Celery workers and tasks.
Configuration:
- **Image:** `fba_celery_flower:latest` (built with `SERVER_TYPE=fba_celery_flower`)
- **Ports:** `8555:8555`
- **Container name:** `fba_celery_flower`
- **Depends on:** `fba_rabbitmq`, `fba_celery_worker`
- **Env file:** `deploy/backend/docker-compose/.env.server`

Flower Configuration: The supervisord configuration at deploy/backend/fba_celery_flower.conf3 runs:
Parameters:
- `--port=8555` - Web UI port
- `--url-prefix=flower` - URL prefix for reverse proxy compatibility
- `--basic-auth=admin:123456` - HTTP basic authentication (should be changed in production)

Access Flower at http://localhost:8000/flower/ through the Nginx reverse proxy.
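Put together, the Flower invocation is approximately the following; as above, the Celery app module path is an assumption:

```shell
# Approximate Flower command (app module path is assumed)
celery -A app.task.celery flower --port=8555 --url-prefix=flower --basic-auth=admin:123456
```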
Sources: docker-compose.yml217-241 deploy/backend/fba_celery_flower.conf1-12 deploy/backend/nginx.conf33-49
All services communicate through a custom Docker bridge network with a dedicated subnet.
Network Definition:
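A sketch of what the network block looks like, given the subnet stated below; the network key name `fba_network` and the IPAM layout are assumptions, not copied from the file:

```yaml
# Sketch of the top-level networks block (key name assumed)
networks:
  fba_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.10.10.0/24
```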
This isolated network:

- Provides name-based service discovery between containers (e.g., `fba_postgres`, `fba_redis`)
- Uses the `172.10.10.0/24` subnet for IP address allocation

Services reference each other by container name in connection strings:
- `postgresql+asyncpg://user:pass@fba_postgres:5432/fba`
- `redis://fba_redis:6379/0`
- `amqp://guest:guest@fba_rabbitmq:5672//`

Sources: docker-compose.yml1-8
Five named volumes provide persistent storage across container restarts and recreations.
| Volume Name | Purpose | Mount Point |
|---|---|---|
| `fba_postgres` | PostgreSQL database files | `/var/lib/postgresql/data` |
| `fba_redis` | Redis persistence | `/data` |
| `fba_static` | Application static files | `/fba/backend/app/static` |
| `fba_static_upload` | User-uploaded files | `/fba/backend/static/upload` |
| `fba_rabbitmq` | RabbitMQ message persistence | `/var/lib/rabbitmq` |
Volume Lifecycle:
- Created automatically on `docker-compose up`
- Preserved through `docker-compose down` (unless using the `-v` flag)

Database Switch Note:
The compose file comments at docker-compose.yml11 docker-compose.yml33 and docker-compose.yml43 indicate that MySQL users should rename fba_postgres to fba_mysql throughout the configuration.
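The five volumes from the table above correspond to a top-level compose block along these lines:

```yaml
# Named volumes declared at the top level of docker-compose.yml
volumes:
  fba_postgres:
  fba_redis:
  fba_static:
  fba_static_upload:
  fba_rabbitmq:
```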
Sources: docker-compose.yml10-21
The Dockerfile implements a multi-stage build strategy to optimize image size and reuse common layers across four service images.
Uses ghcr.io/astral-sh/uv:python3.10-bookworm-slim for fast dependency installation.
Process:
- Working directory `/fba` - Dockerfile13
- `UV_COMPILE_BYTECODE=1` - Pre-compile Python bytecode
- `UV_NO_CACHE=1` - Disable caching in image
- `UV_LINK_MODE=copy` - Copy packages instead of symlinking
- `UV_PROJECT_ENVIRONMENT=/usr/local` - Install to system location

Creates a common runtime base for all service images using `python:3.10-slim`.
Process:
- Working directory `/fba/backend` - Dockerfile41

Each service extends `base_server` with its specific supervisord configuration:
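Combining the two stages described above, a condensed sketch; the copy paths and the exact `uv` invocation are approximations, not the literal Dockerfile:

```dockerfile
# Stage 1: dependency build with uv
FROM ghcr.io/astral-sh/uv:python3.10-bookworm-slim AS builder
ENV UV_COMPILE_BYTECODE=1 UV_NO_CACHE=1 \
    UV_LINK_MODE=copy UV_PROJECT_ENVIRONMENT=/usr/local
WORKDIR /fba
COPY . .
RUN uv sync

# Stage 2: common runtime base shared by all four service images
FROM python:3.10-slim AS base_server
COPY --from=builder /usr/local /usr/local
COPY --from=builder /fba /fba
WORKDIR /fba/backend
```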
| Image | Supervisord Config | Exposed Port |
|---|---|---|
| `fba_server` | `fba_server.conf` | 8001 |
| `fba_celery_worker` | `fba_celery_worker.conf` | None |
| `fba_celery_beat` | `fba_celery_beat.conf` | None |
| `fba_celery_flower` | `fba_celery_flower.conf` | 8555 |
All service images:
- Create `/var/log/fba` for log storage - Dockerfile48 Dockerfile59 Dockerfile68 Dockerfile77
- Use `supervisord` as the entry point - Dockerfile52 Dockerfile61 Dockerfile70 Dockerfile81

The final stage at Dockerfile84 selects which service image to build:
The SERVER_TYPE build argument (default: fba_server) determines the output image. Docker Compose overrides this for Celery services using args: SERVER_TYPE=fba_celery_worker - see docker-compose.yml174 docker-compose.yml198 docker-compose.yml222
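The selection mechanism can be sketched as follows, using the stage names from the table above:

```dockerfile
# Final stage: the build argument picks one of the named service stages
ARG SERVER_TYPE=fba_server
FROM ${SERVER_TYPE}
```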
Sources: Dockerfile1-85
Each container uses supervisord to manage application processes, providing automatic restarts and log rotation.
Base Configuration: deploy/backend/supervisord.conf contains the supervisord daemon configuration (referenced at Dockerfile39).
Service Configurations: Each service has a dedicated program configuration:
Configuration at deploy/backend/fba_server.conf1-12:
Granian Parameters:
- `--interface asgi` - ASGI server mode
- `--host 0.0.0.0 --port 8001` - Listen on all interfaces
- `--workers 1` - Single worker (scale via container replication)
- `--backlog 1024` - Socket backlog size
- `--workers-kill-timeout 120` - Graceful shutdown timeout
- `--backpressure 2000` - Max concurrent connections
- `--log-level debug` - Verbose logging

Common Supervisord Settings:
- `autostart=true` - Start on supervisord startup
- `autorestart=true` - Restart on failure
- `startretries=5` - Retry up to 5 times
- `stdout_logfile=/var/log/fba/fba_server.log` - Log output
- `stdout_logfile_maxbytes=5MB` - Log rotation size
- `stdout_logfile_backups=5` - Keep 5 backup logs

Configuration at deploy/backend/fba_celery_worker.conf1-12:
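Combining the Granian parameters and the common supervisord settings listed above, `fba_server.conf` looks roughly like this; the ASGI target `main:app` and the `directory` value are assumptions, not confirmed by this page:

```ini
; Sketch of fba_server.conf assembled from the settings above
[program:fba_server]
directory=/fba/backend
command=granian main:app --interface asgi --host 0.0.0.0 --port 8001 --workers 1 --backlog 1024 --workers-kill-timeout 120 --backpressure 2000 --log-level debug
autostart=true
autorestart=true
startretries=5
stdout_logfile=/var/log/fba/fba_server.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=5
```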
Uses the same supervisord settings with logs written to /var/log/fba/fba_celery_worker.log.
Configuration at deploy/backend/fba_celery_beat.conf1-12:
Uses the same supervisord settings with logs written to /var/log/fba/fba_celery_beat.log.
Configuration at deploy/backend/fba_celery_flower.conf1-12:
Uses the same supervisord settings with logs written to /var/log/fba/fba_celery_flower.log.
Sources: deploy/backend/fba_server.conf1-12 deploy/backend/fba_celery_worker.conf1-12 deploy/backend/fba_celery_beat.conf1-12 deploy/backend/fba_celery_flower.conf1-12
Docker Compose manages service startup order and health checking through depends_on and wait-for-it scripts.
Services use the wait-for-it script to verify that dependent services are accepting connections before starting their processes.
fba_server Wait:
Celery Services Wait:
After successful connection checks, each service starts supervisord:
Infrastructure Layer (parallel):
- `fba_postgres` starts and initializes the database
- `fba_redis` starts and loads persistence
- `fba_rabbitmq` starts and restores queue state

Application Layer:
- `fba_server` waits for postgres+redis, then starts the FastAPI server
- `fba_celery_worker` waits for rabbitmq, then starts the worker pool

Scheduling Layer:
- `fba_celery_beat` waits for rabbitmq+worker, then starts the scheduler

Monitoring Layer:
- `fba_celery_flower` waits for rabbitmq+worker, then starts the UI
- `fba_nginx` waits for the server, then starts the reverse proxy

Sources: docker-compose.yml44-50 docker-compose.yml185-191 docker-compose.yml209-215 docker-compose.yml235-241
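The dependency edges described above correspond to `depends_on` entries along these lines (a sketch, not the literal file; runtime readiness is still enforced by wait-for-it):

```yaml
# Sketch of the dependency graph expressed as depends_on
fba_server:
  depends_on:
    - fba_postgres
    - fba_redis
fba_celery_worker:
  depends_on:
    - fba_rabbitmq
fba_celery_beat:
  depends_on:
    - fba_rabbitmq
    - fba_celery_worker
```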
Build and start all services:
The --build flag forces image rebuilding, and -d runs containers in detached mode.
View logs for all services:
View logs for a specific service:
Stop all services:
Stop and remove containers (preserves volumes):
Stop and remove containers and volumes (destroys data):
Restart a specific service:
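The day-to-day commands described above, summarized in one place (standard Docker Compose CLI; service names per this compose file):

```shell
docker-compose up -d --build        # build images and start in detached mode
docker-compose logs -f              # follow logs for all services
docker-compose logs -f fba_server   # follow logs for one service
docker-compose stop                 # stop all services
docker-compose down                 # remove containers, keep volumes
docker-compose down -v              # remove containers AND volumes (destroys data)
docker-compose restart fba_server   # restart a specific service
```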
Scale Celery workers for distributed processing:
- Remove `container_name: fba_celery_worker` from docker-compose.yml177

This starts three worker containers that share the RabbitMQ queue.
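With the fixed container name removed, scaling to the three workers mentioned above uses the standard `--scale` flag:

```shell
# Run three worker replicas sharing the same RabbitMQ queue
docker-compose up -d --scale fba_celery_worker=3
```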
Sources: docker-compose.yml176-177
Services load configuration from environment files mounted into containers.
All application services mount:
This file must contain all required environment variables documented in Configuration System.
Database Connection:
DB_HOST=fba_postgres
DB_PORT=5432
DB_USER=your_user
DB_PASSWORD=your_password
DB_DATABASE=fba
Redis Connection:
REDIS_HOST=fba_redis
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DATABASE=0
RabbitMQ Connection:
CELERY_BROKER=rabbitmq
RABBITMQ_HOST=fba_rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_USERNAME=guest
RABBITMQ_PASSWORD=guest
Security:
JWT_SECRET_KEY=your-secret-key-change-this
JWT_ALGORITHM=HS256
Note: Container names (fba_postgres, fba_redis, fba_rabbitmq) are used as hostnames due to Docker network service discovery.
Sources: docker-compose.yml38 docker-compose.yml182 docker-compose.yml206 docker-compose.yml232
The compose file includes a commented configuration for deploying the web UI alongside the backend at docker-compose.yml119-151
Notes:
- Requires the frontend source to be available at the build `context` path
- Overlaps with `fba_nginx` - choose one or the other

SSL Certificate Mounting: The configuration includes volume mounts for SSL certificates to enable HTTPS:
Replace placeholders with actual certificate paths on your server.
Sources: docker-compose.yml119-151
Change Default Passwords:
- `POSTGRES_PASSWORD` at docker-compose.yml60
- `--basic-auth` at deploy/backend/fba_celery_flower.conf3
- `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` at docker-compose.yml162-163

Use Secrets Management:
- Never commit `.env.server` to version control

Restrict Network Access:
Add resource constraints to services:
Add health checks for automatic restart:
Configure log drivers for centralized logging:
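The three hardening measures above can be combined on a service like this; all values are illustrative suggestions, not project defaults, and the healthcheck assumes `curl` is available in the image and that the root path responds:

```yaml
# Illustrative hardening additions for one service
fba_server:
  deploy:
    resources:
      limits:
        cpus: '1.0'
        memory: 1g
  healthcheck:
    test: ['CMD-SHELL', 'curl -f http://localhost:8001/ || exit 1']
    interval: 30s
    timeout: 5s
    retries: 3
  logging:
    driver: json-file
    options:
      max-size: 10m
      max-file: '3'
```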
Database Backups:
Volume Backups:
Automated Backups: Schedule backups using cron or add a backup service to docker-compose.yml
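Example backup commands for the two cases above; the database user and output paths are assumptions:

```shell
# Dump the fba database from the running postgres container
docker exec fba_postgres pg_dump -U postgres fba > fba_backup.sql

# Archive a named volume via a throwaway container
docker run --rm -v fba_postgres:/data -v "$PWD":/backup alpine \
  tar czf /backup/fba_postgres.tar.gz -C /data .
```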
Sources: Multiple files across deployment configuration