High-performance, production-ready toolkit to detect goroutine and memory leaks in any Go application!
- 🔍 Goroutine Leak Detection - Advanced pattern recognition
- 💾 Memory Leak Analysis - Heap and allocation tracking
- 🧠 GC Analysis - Distinguish real leaks from GC delays
- ⚡ Production Optimized - HTTP pooling, string interning, pre-compiled regexes
- 🎯 Focus Options - Analyze goroutines-only or memory-only
- 📊 Multiple Algorithms - Stack patterns, channel leaks (Uber LeakProf)
- 🚀 Real-time Monitoring - Continuous leak detection
- 📈 Trend Analysis - Datadog-inspired memory patterns
- 🔧 Highly Configurable - Custom patterns, thresholds, and rules
- 📄 Comprehensive Reports - Detailed analysis with recommendations
This toolkit has been extensively optimized for production use:
- 🔄 HTTP Connection Pooling: Reuses connections for better performance
- 🧠 Pre-compiled Regexes: Compile once, use many times (2x faster parsing)
- 💾 Memory Pre-allocation: Reduces garbage collection pressure
- 🔗 String Interning: Reuses duplicate strings to save memory
- 📊 Unified Measurements: Single call gets both goroutine and memory data
- 🚀 Efficient Parsing: Direct string processing instead of line-by-line scanning
- ⏱️ Execution Time: ~1.25 seconds for comprehensive analysis
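For context, here is a minimal sketch of what these optimizations look like in Go, assuming a shared pooled client, a package-level pre-compiled regex, and a simple interning map. The names are illustrative, not the toolkit's actual internals:

```go
package leakfinder

import (
	"net/http"
	"regexp"
	"time"
)

// Shared HTTP client: connections to the pprof endpoint are pooled and reused
// across measurements instead of being re-dialed on every request.
var httpClient = &http.Client{
	Timeout: 30 * time.Second,
	Transport: &http.Transport{
		MaxIdleConns:        10,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     90 * time.Second,
	},
}

// Compiled once at package init instead of on every parse call.
var goroutineHeaderRe = regexp.MustCompile(`^goroutine (\d+) \[([^\]]+)\]:`)

// Tiny string-interning table: repeated function names from large goroutine
// dumps share a single allocation instead of thousands of duplicates.
var interned = map[string]string{}

func intern(s string) string {
	if v, ok := interned[s]; ok {
		return v
	}
	interned[s] = s
	return s
}
```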
This comprehensive, optimized toolkit helps you detect both goroutine and memory leaks in your Go applications with advanced configuration options and production-ready algorithms. It's been streamlined to just 2 essential files:
- leakfinder.go - All-in-one optimized leak detection tool
- leakfinder.sh - Enhanced runner script with advanced options
# Download latest release for your platform
curl -L https://github.com/ashutoshmishraji/leak-detection-toolkit/releases/latest/download/leakfinder-linux-amd64 -o leakfinder
chmod +x leakfinder
./leakfinder help

go install github.com/ashutoshmishraji/leak-detection-toolkit@latest
leakfinder help

git clone https://github.com/ashutoshmishraji/leak-detection-toolkit.git
cd leak-detection-toolkit
go build -o leakfinder leakfinder.go
./leakfinder help

- Go application with pprof enabled:

import _ "net/http/pprof"

go func() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}()
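The same snippet as a self-contained program (the blank import and the localhost:6060 address are the standard net/http/pprof conventions; everything else is a placeholder):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose pprof on a localhost-only port, separate from application traffic.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... your application logic runs here ...
	select {} // placeholder to keep this example process alive
}
```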
# Quick health check (both goroutines and memory)
./leakfinder.sh check
# Focus on specific aspects
./leakfinder.sh check -goroutines-only # Only goroutines
./leakfinder.sh check -memory-only # Only memory
# Real-time monitoring with custom duration
./leakfinder.sh -monitor-duration=10m monitor
./leakfinder.sh -monitor-duration=5m monitor -goroutines-only
# Detailed analysis with verbose output
./leakfinder.sh -verbose analyze
./leakfinder.sh analyze -memory-only # Memory-focused analysis
# Before/after comparison
./leakfinder.sh compare
./leakfinder.sh compare -goroutines-only # Compare only goroutines
# Advanced leak detection algorithms
./leakfinder.sh patterns # Stack pattern analysis
./leakfinder.sh channels # Channel leak detection (Uber LeakProf)

Add -force-gc to any command to verify if detected leaks are real or just waiting for garbage collection:
# Add GC analysis to any command:
./leakfinder.sh check -force-gc # Quick check + GC verification
./leakfinder.sh analyze -force-gc # Detailed analysis + GC verification
./leakfinder.sh compare -force-gc # Before/after + GC verification
./leakfinder.sh monitor -force-gc # Real-time monitoring + GC verification
# Combine with focus options:
./leakfinder.sh check -force-gc -memory-only # Memory check + GC test
./leakfinder.sh analyze -force-gc -goroutines-only # Goroutine analysis + GC test
# Custom GC parameters:
./leakfinder.sh -gc-cycles=3 -gc-wait=60s check -force-gc # 3 GC cycles, wait 60s
./leakfinder.sh -gc-threshold=70 analyze -force-gc # 70% effectiveness threshold

When -force-gc is enabled, the analysis runs in four steps:

- 📊 Initial measurement - Takes baseline readings
- 🧹 Force GC cycles - Triggers immediate garbage collection (default: 2 cycles)
- ⏳ Wait for natural GC - Allows background cleanup to complete (default: 30s)
- 📊 Final measurement - Compares before/after to determine leak confidence
- 🟢 GC CLEANUP DELAY (≥80% freed) - Not a real leak, just waiting for GC
- 🟡 PARTIAL LEAK (≥50% freed) - Some cleanup occurred, investigate further
- 🟠 LIKELY LEAK (≥20% freed) - Minimal GC cleanup, probably a real leak
- 🔴 CONFIRMED LEAK (<20% freed) - GC had no effect, definitely a leak
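A rough sketch of that workflow in Go, assuming the target exposes the standard pprof heap endpoint (whose gc=1 parameter runs a collection before profiling) and using the percentage buckets above. Function names and structure are illustrative, not the toolkit's exact code:

```go
package leakfinder

import (
	"fmt"
	"net/http"
)

// forceGC asks the target to collect garbage. The standard net/http/pprof heap
// handler runs a GC before profiling when gc=1 is passed, which is one way a
// remote tool can trigger collection without code changes in the target.
func forceGC(baseURL string, cycles int) error {
	for i := 0; i < cycles; i++ {
		resp, err := http.Get(baseURL + "/debug/pprof/heap?gc=1")
		if err != nil {
			return err
		}
		resp.Body.Close()
	}
	return nil
}

// classify maps "percent of the suspicious growth freed by GC" to the buckets above.
func classify(beforeMB, afterMB float64) string {
	if beforeMB <= 0 {
		return "NO DATA"
	}
	freed := (beforeMB - afterMB) / beforeMB * 100
	switch {
	case freed >= 80:
		return fmt.Sprintf("🟢 GC CLEANUP DELAY (%.0f%% freed)", freed)
	case freed >= 50:
		return fmt.Sprintf("🟡 PARTIAL LEAK (%.0f%% freed)", freed)
	case freed >= 20:
		return fmt.Sprintf("🟠 LIKELY LEAK (%.0f%% freed)", freed)
	default:
		return fmt.Sprintf("🔴 CONFIRMED LEAK (%.0f%% freed)", freed)
	}
}
```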
# Build once, run multiple times (clean paths)
go build -o leakfinder leakfinder.go
# Then use the built binary with consistent flags
./leakfinder check
./leakfinder check -goroutines-only
./leakfinder -verbose analyze -memory-only
./leakfinder monitor -goroutines-only
./leakfinder compare -memory-only
./leakfinder patterns
# Add GC analysis to any command:
./leakfinder check -force-gc
./leakfinder analyze -force-gc -memory-only
./leakfinder compare -force-gc -goroutines-only
./leakfinder -gc-cycles=3 -gc-wait=60s check -force-gc
# Or run directly (shows temp paths in help)
go run leakfinder.go check
go run leakfinder.go -force-gc analyze -memory-only
go run leakfinder.go -goroutines-only monitor

All commands support consistent focus flags for optimal performance:
- Default: Analyzes both goroutines and memory (comprehensive)
- -goroutines-only: Focus on goroutines only (faster, less resource usage)
- -memory-only: Focus on memory only (heap analysis, allocations)
Performance Benefits:
- ⚡ 2x faster parsing with pre-compiled regexes
- 🔄 Connection reuse reduces network overhead
- 💾 Memory optimization with string interning and pre-allocation
- 📊 Single measurement calls eliminate duplicate HTTP requests
- ⏱️ ~1.25 second execution for comprehensive analysis
- Shows current goroutine count and memory usage
- Gives instant health assessment with configurable thresholds
- Perfect for quick status checks and CI/CD integration
- Focus options: -goroutines-only, -memory-only
- GC analysis: Add -force-gc to verify if issues are real leaks or GC delays
- Monitors goroutines and memory with configurable intervals
- Shows trends, alerts, and memory patterns (Datadog-inspired)
- Detects gradual growth and cliff-like memory drops
- Configurable duration and sampling intervals
- Focus options: -goroutines-only, -memory-only
- GC analysis: Add -force-gc for periodic GC verification during monitoring
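A bare-bones illustration of such a monitoring loop, polling the goroutine summary at a fixed interval for a fixed duration. The "goroutine profile: total N" first line is the standard debug=1 format; the rest is an illustrative sketch:

```go
package leakfinder

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// goroutineCount reads the first line of /debug/pprof/goroutine?debug=1,
// which has the form "goroutine profile: total N".
func goroutineCount(baseURL string) (int, error) {
	resp, err := http.Get(baseURL + "/debug/pprof/goroutine?debug=1")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}
	var n int
	firstLine := strings.SplitN(string(body), "\n", 2)[0]
	_, err = fmt.Sscanf(firstLine, "goroutine profile: total %d", &n)
	return n, err
}

// monitor samples the goroutine count until the duration elapses.
func monitor(baseURL string, interval, duration time.Duration) {
	deadline := time.Now().Add(duration)
	for time.Now().Before(deadline) {
		if n, err := goroutineCount(baseURL); err == nil {
			fmt.Printf("%s goroutines=%d\n", time.Now().Format(time.RFC3339), n)
		}
		time.Sleep(interval)
	}
}
```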
- Deep analysis of goroutine patterns and memory usage
- Identifies suspicious patterns with advanced filtering
- Saves detailed reports and memory profiles
- Best for investigating complex issues
- Focus options: -goroutines-only, -memory-only
- GC analysis: Add -force-gc to distinguish real leaks from GC delays
- Takes baseline measurements (goroutines + memory)
- Waits for you to run tests/operations
- Takes after measurements with precise differences
- Shows exact impact of code changes
- Focus options: -goroutines-only, -memory-only
- GC analysis: Add -force-gc to verify if detected changes are real leaks
- Filter by specific functions, packages, or patterns
- Use environment variables for flexible filtering
- Exclude known safe patterns from analysis
- Perfect for focusing on specific code areas
- Advanced stack signature analysis for persistent goroutine leaks
- Samples goroutines over time to identify patterns that don't resolve
- Scores potential leaks based on persistence, count, and behavior
- Best for: General leak hunting and development debugging
- Finds: All types of persistent goroutines (channels, mutexes, timers, etc.)
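As a rough sketch of the idea: sample the full goroutine dump several times, key each goroutine by a stack signature, and score signatures that persist across samples. The scoring weights below are invented for illustration only:

```go
package leakfinder

import "strings"

// signature reduces one goroutine block from /debug/pprof/goroutine?debug=2
// to a stable key: the top few function names on its stack.
func signature(block string) string {
	var funcs []string
	for _, l := range strings.Split(block, "\n") {
		if len(funcs) == 3 {
			break
		}
		// Function lines are the non-indented lines after the "goroutine N [...]" header.
		if l != "" && !strings.HasPrefix(l, "\t") && !strings.HasPrefix(l, "goroutine ") {
			funcs = append(funcs, l)
		}
	}
	return strings.Join(funcs, " <- ")
}

// score rates a signature by how persistently it appears across samples and how
// many goroutines share it. The weights are illustrative; 70+ would map to the
// "high likelihood" bucket described later in this README.
func score(samplesSeen, totalSamples, count int) int {
	s := samplesSeen * 50 / totalSamples // persistence: up to 50 points
	switch {
	case count >= 20:
		s += 40
	case count >= 5:
		s += 25
	case count > 1:
		s += 10
	}
	return s
}
```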
- Channel-focused leak detection using production-proven algorithms
- Only analyzes channel-blocked goroutines (chan receive/send/select)
- Aggregates by source location (file:line) for precise identification
- Filters out known system goroutines to minimize false positives
- Best for: Production monitoring and channel-specific issues
- Finds: Only critical channel operation leaks
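A condensed sketch of that approach: keep only goroutines whose header shows a channel wait state and count them per blocking source location. The parsing assumes the debug=2 text format; names are illustrative, and a real implementation would also exclude known system goroutines:

```go
package leakfinder

import (
	"regexp"
	"strings"
)

var (
	headerRe   = regexp.MustCompile(`^goroutine \d+ \[(chan receive|chan send|select)[^\]]*\]:`)
	fileLineRe = regexp.MustCompile(`^\t(\S+\.go:\d+)`)
)

// channelLeaks aggregates channel-blocked goroutines by the file:line that blocks.
// dump is the text from /debug/pprof/goroutine?debug=2.
func channelLeaks(dump string) map[string]int {
	counts := map[string]int{}
	for _, block := range strings.Split(dump, "\n\n") {
		lines := strings.Split(block, "\n")
		if len(lines) < 3 || !headerRe.MatchString(lines[0]) {
			continue
		}
		// The first file:line after the header is where the goroutine is blocked.
		for _, l := range lines[1:] {
			if m := fileLineRe.FindStringSubmatch(l); m != nil {
				counts[m[1]]++
				break
			}
		}
	}
	return counts
}
```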
- Isolates specific goroutines by function name or package
- Saves filtered reports containing only matching goroutines
- Perfect for debugging specific libraries or components
- Environment-driven: Uses FILTER_FUNC, FILTER_PKG, EXCLUDE_PATTERNS
- Best for: Component debugging, library-specific leak hunting
- Finds: Goroutines from specific functions, packages, or patterns
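The environment-driven filtering could look roughly like this; FILTER_FUNC, FILTER_PKG, and EXCLUDE_PATTERNS are the variables documented above, while the matching logic is a simplified sketch:

```go
package leakfinder

import (
	"os"
	"strings"
)

// keepGoroutine decides whether one goroutine block from the dump matches the
// active filters. Empty filters match everything; any exclude pattern wins.
func keepGoroutine(block string) bool {
	filterFunc := os.Getenv("FILTER_FUNC")
	filterPkg := os.Getenv("FILTER_PKG")
	excludes := os.Getenv("EXCLUDE_PATTERNS")

	if filterFunc != "" && !strings.Contains(block, filterFunc) {
		return false
	}
	if filterPkg != "" && !strings.Contains(block, filterPkg) {
		return false
	}
	for _, p := range strings.Split(excludes, ",") {
		if p = strings.TrimSpace(p); p != "" && strings.Contains(block, p) {
			return false
		}
	}
	return true
}
```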
All primary commands (check, monitor, analyze, compare) support consistent focus flags:
# Default behavior - analyze both goroutines and memory
./leak_detector.sh check # ✅ Both (comprehensive)
./leak_detector.sh monitor # ✅ Both (comprehensive)
./leak_detector.sh analyze # ✅ Both (comprehensive)
./leak_detector.sh compare # ✅ Both (comprehensive)
# Focus on goroutines only (faster, less resource usage)
./leak_detector.sh check -goroutines-only # 🔍 Goroutines only
./leak_detector.sh monitor -goroutines-only # 🔍 Goroutines only
./leak_detector.sh analyze -goroutines-only # 🔍 Goroutines only
./leak_detector.sh compare -goroutines-only # 🔍 Goroutines only
# Focus on memory only (heap analysis, allocations)
./leak_detector.sh check -memory-only # 💾 Memory only
./leak_detector.sh monitor -memory-only # 💾 Memory only
./leak_detector.sh analyze -memory-only # 💾 Memory only
./leak_detector.sh compare -memory-only # 💾 Memory only

Benefits:
- Faster execution when you only need one type of analysis
- Reduced resource usage on target application
- Focused output without irrelevant information
- Consistent interface across all commands
Add -force-gc to any command and customize GC behavior:
# GC Analysis flags
-force-gc # Enable GC leak analysis for any command
-gc-cycles=N # Number of GC cycles to force (default: 2)
-gc-wait=DURATION # Wait time for natural GC cleanup (default: 30s)
-gc-threshold=PERCENT # GC effectiveness threshold for leak classification (default: 50.0)

GC Analysis Examples:
# Basic GC analysis (uses defaults: 2 cycles, 30s wait, 50% threshold)
./leak_detector.sh check -force-gc
# Quick GC test (faster but less thorough)
./leak_detector.sh -gc-cycles=1 -gc-wait=10s analyze -force-gc
# Thorough GC test (slower but more accurate)
./leak_detector.sh -gc-cycles=5 -gc-wait=2m compare -force-gc
# Strict leak detection (higher threshold)
./leak_detector.sh -gc-threshold=80 analyze -force-gc -memory-only
# Production debugging (comprehensive)
./leak_detector.sh -gc-cycles=3 -gc-wait=60s -gc-threshold=70 -verbose check -force-gc

When to Adjust GC Parameters:

- Quick Development: -gc-cycles=1 -gc-wait=5s (faster feedback)
- CI/CD Pipelines: -gc-cycles=2 -gc-wait=15s (balanced speed/accuracy)
- Production Debugging: -gc-cycles=3 -gc-wait=60s (thorough analysis)
- Complex Applications: -gc-wait=2m (apps with finalizers/large object graphs)
- Strict Detection: -gc-threshold=80 (fewer false positives)
-url=http://localhost:6060 # Custom pprof endpoint
-timeout=30s # HTTP request timeout

-monitor-duration=5m # How long to monitor
-monitor-interval=3s # Sampling frequency
-memory-history=10 # Memory pattern history length

# Goroutine thresholds
-goroutine-normal=50 # Normal level
-goroutine-moderate=100 # Moderate warning
-goroutine-high=200 # High warning
-goroutine-critical=500 # Critical alert
# Memory thresholds (MB)
-memory-normal=50 # Normal level
-memory-moderate=100 # Moderate warning
-memory-high=200 # High warning
-memory-critical=500 # Critical alert

-leakprof-threshold=5 # Min goroutines for leak
-leakprof-duration=60s # Min duration for leak
-leakprof-interval=10s # Sampling interval
-leakprof-samples=6 # Number of samples
-leakprof-blocked=true # Focus on blocked goroutines

# App-specific suspicious patterns
-suspicious-patterns="pattern1,pattern2" # Custom suspicious patterns
-safe-patterns="pattern1,pattern2" # Custom safe patterns
-patterns-file="config.yaml" # Load patterns from file

-save-reports=true # Save analysis files
-output-dir=. # General output directory
-reports-dir=./reports # Specific reports directory
-verbose # Show detailed configuration

# Configure suspicious patterns for your application
./leak_detector.sh -suspicious-patterns="myProcessor.stuck,dataWorker.timeout,consumer.blocked" analyze
# Mark your background services as safe (won't be flagged)
./leak_detector.sh -safe-patterns="myApp.healthChecker,myApp.backgroundSync,myApp.metricsCollector" monitor
# Combine both for precise monitoring
./leak_detector.sh -suspicious-patterns="processor.deadlock" -safe-patterns="app.monitor" analyze
# Runtime pattern injection via environment variables
SUSPICIOUS_PATTERNS="worker.stuck,queue.blocked" ./leak_detector.sh monitor
SAFE_PATTERNS="myApp.healthCheck,myApp.backgroundTask" ./leak_detector.sh analyze
# Both command-line and environment patterns are merged
SUSPICIOUS_PATTERNS="env.pattern" ./leak_detector.sh -suspicious-patterns="flag.pattern" analyze

# Custom thresholds for production environment
./leak_detector.sh -goroutine-critical=1000 -memory-critical=2000 -verbose monitor
# Extended monitoring with custom intervals
./leak_detector.sh -monitor-duration=30m -monitor-interval=10s monitor
# Production monitoring with app-specific patterns
./leak_detector.sh -suspicious-patterns="worker.stuck,processor.timeout" -monitor-duration=1h monitor
# Save reports to custom directory
./leak_detector.sh -reports-dir=./production-reports analyze

# Quick development check with lower thresholds
./leak_detector.sh -goroutine-moderate=50 -memory-moderate=50 check
# Before/after testing with custom endpoint
./leak_detector.sh -url=http://localhost:8080 compare
# Detailed analysis with verbose output
./leak_detector.sh -verbose -timeout=60s analyze

# Filter by specific function names
FILTER_FUNC=worker ./leak_detector.sh filter # Worker goroutines only
FILTER_FUNC=processor ./leak_detector.sh filter # Processor goroutines
FILTER_FUNC=consumer ./leak_detector.sh filter # Consumer goroutines
# Filter by package names
FILTER_PKG=github.com/myapp ./leak_detector.sh filter # Application goroutines
FILTER_PKG=internal/poll ./leak_detector.sh filter # Network I/O goroutines
FILTER_PKG=database/sql ./leak_detector.sh filter # Database goroutines
# Complex filtering with exclusions
EXCLUDE_PATTERNS=time.Sleep,sync.Cond ./leak_detector.sh filter
EXCLUDE_PATTERNS=runtime.gopark,runtime.notetsleepg ./leak_detector.sh filter
# Combine multiple filters (function + package)
FILTER_FUNC=dispatch FILTER_PKG=worker ./leak_detector.sh filter

# Debug database connection leaks
FILTER_PKG=database/sql ./leak_detector.sh filter
FILTER_FUNC=Query ./leak_detector.sh filter
# Investigate HTTP client leaks
FILTER_PKG=net/http ./leak_detector.sh filter
FILTER_FUNC=Client ./leak_detector.sh filter
# Check application-specific components
FILTER_PKG=github.com/myapp/workers ./leak_detector.sh filter
FILTER_PKG=github.com/myapp/processors ./leak_detector.sh filter
# Monitor message queue goroutines
FILTER_FUNC=consumer ./leak_detector.sh filter
FILTER_FUNC=publisher ./leak_detector.sh filter
# Focus on custom application code (exclude system)
EXCLUDE_PATTERNS=runtime.,internal/poll,sync. ./leak_detector.sh filter

# Stack pattern analysis with custom thresholds
./leak_detector.sh -leakprof-threshold=10 -leakprof-duration=2m patterns
# Channel leak detection (production-grade)
./leak_detector.sh -leakprof-samples=10 channels
# High-frequency sampling for detailed pattern analysis
./leak_detector.sh -leakprof-interval=5s -leakprof-samples=12 patterns

The toolkit now uses configurable thresholds (defaults shown):

Goroutine count:
- < 50: ✅ Normal
- 50-100: ⚠️ Moderate
- 100-200: ⚠️ High
- > 200: 🚨 Critical

Memory usage:
- < 50MB: ✅ Normal
- 50-100MB: ⚠️ Moderate
- 100-200MB: ⚠️ High
- > 200MB: 🚨 Critical

Health score:
- 80-100: ✅ Healthy
- 60-79: ⚠️ Moderate issues
- < 60: 🚨 Critical issues
- Stable ➡️: Normal memory usage
- Growing 📈: Gradual increase (potential leak)
- Cliff 📉: Sudden drop (possible crash/restart)
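One simple way to classify a short memory history into those three patterns (the drop and growth thresholds here are illustrative, not the toolkit's exact values):

```go
package leakfinder

// classifyMemoryPattern looks at a short history of heap sizes (MB, oldest first)
// and labels it as Stable, Growing, or Cliff. Thresholds are illustrative.
func classifyMemoryPattern(history []float64) string {
	if len(history) < 2 {
		return "Stable ➡️"
	}
	last := history[len(history)-1]
	prev := history[len(history)-2]
	first := history[0]

	switch {
	case prev > 0 && last < prev*0.5:
		return "Cliff 📉" // sudden drop: possible crash/restart or massive GC
	case first > 0 && last > first*1.2:
		return "Growing 📈" // steady increase over the window: potential leak
	default:
		return "Stable ➡️"
	}
}
```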
- Score 70+: High likelihood of persistent goroutine leak
- Score 50-69: Moderate suspicion - worth investigating
- Score < 50: Low risk - likely normal behavior
- Total Goroutines: Shows count of matching goroutines (not total)
- Filtered By: Displays active filter criteria (function:name, package:name)
- Report Files: Contains only matching goroutines, not entire dump
- Perfect Isolation: Each filter creates focused analysis of specific components
- check - Quick health status
- monitor - Watch during operations
- channels - Production channel leak detection
- analyze - General detailed analysis
- memory - Memory-specific issues
- patterns - Pattern-based leak hunting and development debugging
- filter - Focus on specific code areas and test specific components
- compare - Before/after impact measurement
leak-detection-toolkit/
├── leak_detection_toolkit.go # Optimized main tool
├── leak_detector.sh # Enhanced runner script
├── go.mod # Go dependencies
├── README.md # This documentation
└── reports/ # Auto-created analysis directory
├── goroutine_analysis_TIMESTAMP.txt # Detailed goroutine dumps
├── memory_profile_TIMESTAMP.pprof # Memory profiles for pprof
├── filtered_analysis_TIMESTAMP.txt # Filtered analysis results
├── leakprof_analysis_TIMESTAMP.txt # Stack pattern analysis reports
└── uber_leakprof_TIMESTAMP.txt # Channel leak detection summaries
The shell script provides clean output and handles pprof connectivity checks:
# Basic commands
./leak_detector.sh check
./leak_detector.sh monitor
./leak_detector.sh analyze
# With advanced options
./leak_detector.sh -verbose -goroutine-critical=1000 check
./leak_detector.sh -monitor-duration=10m -reports-dir=./my-reports monitor
./leak_detector.sh -url=http://localhost:8080 -timeout=60s analyze
# Pattern analysis
./leak_detector.sh -leakprof-threshold=10 patterns
./leak_detector.sh channels
# Filtered analysis (with environment variables)
FILTER_FUNC=worker ./leak_detector.sh filter

Choose your preferred method:
# Build once
go build -o leak-detector leak_detection_toolkit.go
# Use built binary (clean help output)
./leak-detector check
./leak-detector -verbose analyze
./leak-detector patterns

# Run directly (shows temp paths in help messages)
go run leak_detection_toolkit.go check
go run leak_detection_toolkit.go -verbose -goroutine-critical=1000 check
go run leak_detection_toolkit.go -monitor-duration=10m monitor
# Pattern analysis
go run leak_detection_toolkit.go -leakprof-threshold=10 patterns
go run leak_detection_toolkit.go channels
# Filtered analysis (with environment variables)
FILTER_FUNC=myFunction go run leak_detection_toolkit.go filter

- Shell Script (./leak_detector.sh) - Best for most users
  - ✅ Clean output and user-friendly messages
  - ✅ Automatic pprof connectivity checks
  - ✅ Enhanced error handling
- Build First (go build then ./leak-detector) - Best for repeated use
  - ✅ Clean help messages without temp paths
  - ✅ Faster execution (no compilation each time)
  - ✅ Single binary deployment
- Direct Go Run (go run) - Best for quick testing
  - ✅ No build step required
  - ⚠️ Shows temp paths in help messages
  - ⚠️ Compiles each time (slower)
The toolkit saves memory profiles that you can analyze with Go's pprof tool:
# After running memory analysis
go tool pprof reports/memory_profile_TIMESTAMP.pprof
# Common pprof commands:
(pprof) top # Show top memory consumers
(pprof) top -cum # Show cumulative allocations
(pprof) list main # Show line-by-line allocations
(pprof) web # Generate web visualization
(pprof) png # Generate PNG graph
(pprof) traces # Show allocation traces

The toolkit identifies these suspicious patterns:

- Infinite loops: for {, for ;;, select {
- Blocking operations: chan, <-, sync.WaitGroup
- Network hangs: net.Dial, http.Client, io.Copy
- Retry loops: retry, Retry, backoff, attempt
- Database hangs: sql.DB, Query, Exec
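Conceptually, this can be as simple as substring matching against each goroutine's stack text or source locations; the pattern list below is taken from the list above, while the matching code is an illustrative sketch:

```go
package leakfinder

import "strings"

// Suspicious substrings from the list above; real configuration can extend this
// via -suspicious-patterns or SUSPICIOUS_PATTERNS.
var defaultSuspicious = []string{
	"for {", "for ;;", "select {",
	"chan", "<-", "sync.WaitGroup",
	"net.Dial", "http.Client", "io.Copy",
	"retry", "Retry", "backoff", "attempt",
	"sql.DB", "Query", "Exec",
}

// suspiciousHits returns the patterns found in one goroutine's stack text.
func suspiciousHits(stack string) []string {
	var hits []string
	for _, p := range defaultSuspicious {
		if strings.Contains(stack, p) {
			hits = append(hits, p)
		}
	}
	return hits
}
```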
- Gradual growth: Continuous memory increase
- Cliff drops: Sudden memory decreases (crashes)
- High object counts: Excessive heap objects
- Poor GC efficiency: High allocation rates
- Configurable baselines: Thresholds match your application
- Stable patterns: Memory and goroutines return to baseline
- Clean analysis: No suspicious patterns detected
- Good health scores: Consistently above 80
- Minimal differences: Before/after comparisons show small changes
- No LeakProf alerts: Production algorithms find no persistent channel leaks
- GC Analysis confirms: Detected issues are real leaks, not GC delays
# Quick development check
./leak_detector.sh check -force-gc -memory-only
# Investigate specific issue
./leak_detector.sh analyze -force-gc -verbose
# Test code changes
./leak_detector.sh compare -force-gc

# Fast CI check (15s total)
./leak_detector.sh -gc-cycles=1 -gc-wait=10s -timeout=30s check -force-gc
# Thorough nightly build (2m total)
./leak_detector.sh -gc-cycles=3 -gc-wait=60s -verbose analyze -force-gc

# Comprehensive production analysis
./leak_detector.sh -gc-cycles=3 -gc-wait=2m -gc-threshold=70 -verbose analyze -force-gc
# Monitor production service
./leak_detector.sh -monitor-duration=1h -monitor-interval=5m monitor -force-gc
# Before/after deployment comparison
./leak_detector.sh -gc-cycles=5 -gc-wait=90s compare -force-gc

# Check after WebRTC session
./leak_detector.sh compare -force-gc -memory-only
# Monitor during load test
./leak_detector.sh -monitor-duration=30m monitor -force-gc
# Analyze DataChannel leaks
FILTER_PKG=github.com/pion ./leak_detector.sh filter -force-gc

# Add to your CI pipeline
./leak_detector.sh -goroutine-critical=500 -timeout=30s check

# Continuous monitoring
./leak_detector.sh -monitor-duration=1h -monitor-interval=30s monitor

# Custom production thresholds
./leak_detector.sh \
-goroutine-critical=2000 \
-memory-critical=4000 \
-verbose \
check

- Connection refused: Ensure pprof is enabled and accessible
- High baselines: Adjust thresholds with -goroutine-critical flags
- No reports saved: Check -reports-dir permissions
- Timeout errors: Increase -timeout value
# Enable verbose output for debugging
./leak_detector.sh -verbose -timeout=60s check

This toolkit has been extensively optimized for production environments:
- ⚡ 2x faster parsing - Pre-compiled regexes eliminate compilation overhead
- 🔄 Connection pooling - HTTP client reuses connections for better throughput
- 💾 Memory efficiency - String interning and pre-allocation reduce GC pressure
- 📊 Unified calls - Single measurement function eliminates duplicate HTTP requests
- 🎯 Focused analysis - Optional flags (-goroutines-only, -memory-only) for targeted performance
- 🏢 Multi-app support - Same optimizations work across all Go applications
- ⏱️ Fast execution - ~1.25 seconds for comprehensive analysis
- 🔧 Production-ready - Optimized for continuous monitoring and CI/CD integration
- 📈 Resource efficient - Lower CPU and memory footprint
- 🎨 Beautiful output - Clear icons and visual hierarchy
- 🔧 Consistent interface - Same flags work across all commands
- 📝 Comprehensive docs - Complete usage examples and configuration options
- 🛠️ Easy deployment - Just 2 files for complete functionality
That's it! Production-optimized leak detection for any Go application with comprehensive configuration options, advanced algorithms, and organized reporting - all in 2 optimized files.
Note: WebRTC-specific examples and configurations are available in the separate ../webrtc-examples/ folder.