# Queue Workers Guide
Run background jobs reliably with Cbox images.
## Quick Start

```yaml
# docker-compose.yml
services:
  worker:
    image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
    command: php artisan queue:work redis --sleep=3 --tries=3
    volumes:
      - ./:/var/www/html
    environment:
      QUEUE_CONNECTION: redis
      REDIS_HOST: redis
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  redis_data:
```
## Worker Types

### Basic Queue Worker

A simple worker for processing jobs:

```yaml
worker:
  image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
  command: php artisan queue:work redis --sleep=3 --tries=3 --max-jobs=1000 --max-time=3600
  restart: unless-stopped
```

Options explained:

- `--sleep=3`: Wait 3 seconds when no jobs are available
- `--tries=3`: Retry failed jobs up to 3 times
- `--max-jobs=1000`: Restart after 1000 jobs (prevents memory leaks)
- `--max-time=3600`: Restart after 1 hour
### Laravel Horizon

Comprehensive queue management with a dashboard:

```yaml
horizon:
  image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
  command: php artisan horizon
  restart: unless-stopped
  volumes:
    - ./:/var/www/html
  environment:
    QUEUE_CONNECTION: redis
    REDIS_HOST: redis
```

Access the dashboard at `/horizon` after installing:

```bash
composer require laravel/horizon
php artisan horizon:install
php artisan migrate
```
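Outside the `local` environment, Horizon only shows the dashboard to users authorized by its `viewHorizon` gate. A minimal sketch of the gate in the published service provider (the email address is a placeholder):

```php
<?php
// app/Providers/HorizonServiceProvider.php
namespace App\Providers;

use Illuminate\Support\Facades\Gate;
use Laravel\Horizon\HorizonApplicationServiceProvider;

class HorizonServiceProvider extends HorizonApplicationServiceProvider
{
    // Restrict /horizon to specific users in non-local environments.
    protected function gate(): void
    {
        Gate::define('viewHorizon', function ($user) {
            return in_array($user->email, [
                'admin@example.com', // placeholder address
            ]);
        });
    }
}
```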
## Scheduler (Cron Jobs)

Run Laravel scheduled tasks:

```yaml
scheduler:
  image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
  command: >
    sh -c "while true; do
    php artisan schedule:run --verbose --no-interaction;
    sleep 60;
    done"
  restart: unless-stopped
```

Or use the built-in cron support:

```yaml
scheduler:
  image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
  environment:
    LARAVEL_SCHEDULER: "true"
```
## Architecture Patterns

### Simple Setup (1-5 workers)

```yaml
services:
  app:
    image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm

  worker:
    image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
    command: php artisan queue:work
    deploy:
      replicas: 3
```
### Queue Priority Setup

```yaml
services:
  worker-high:
    image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
    command: php artisan queue:work --queue=high,default

  worker-low:
    image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
    command: php artisan queue:work --queue=low
```

Queue order sets priority: a worker given `--queue=high,default` always drains `high` before pulling from `default`.
### Horizon with Auto-Scaling

```php
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
        ],
    ],
],
```
## Queue Drivers

### Redis (Recommended)

```yaml
services:
  worker:
    environment:
      QUEUE_CONNECTION: redis
      REDIS_HOST: redis

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  redis_data:
```
### Database

```yaml
services:
  worker:
    environment:
      QUEUE_CONNECTION: database
      DB_HOST: mysql
```

Run the migration first:

```bash
php artisan queue:table
php artisan migrate
```
### Amazon SQS

```yaml
services:
  worker:
    environment:
      QUEUE_CONNECTION: sqs
      AWS_ACCESS_KEY_ID: your-key
      AWS_SECRET_ACCESS_KEY: your-secret
      SQS_PREFIX: https://sqs.us-east-1.amazonaws.com/your-account
      SQS_QUEUE: your-queue
```
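These variables feed the `sqs` connection in `config/queue.php`. Laravel's stock configuration reads them roughly like this (key names follow the default skeleton and may vary slightly between Laravel versions):

```php
// config/queue.php — 'connections' array
'sqs' => [
    'driver' => 'sqs',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
    'queue' => env('SQS_QUEUE', 'default'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
],
```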
## Job Configuration

### Creating Jobs

```php
// app/Jobs/ProcessOrder.php
namespace App\Jobs;

use App\Models\Order;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Throwable;

class ProcessOrder implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 3;
    public $timeout = 120;
    public $maxExceptions = 2;

    public function __construct(
        public Order $order
    ) {}

    public function handle(): void
    {
        // Process the order
    }

    public function failed(Throwable $exception): void
    {
        // Handle failure (log, notify, roll back)
    }
}
```
### Dispatching Jobs

```php
// Immediate dispatch
ProcessOrder::dispatch($order);

// Delayed dispatch
ProcessOrder::dispatch($order)->delay(now()->addMinutes(5));

// Dispatch to a specific queue
ProcessOrder::dispatch($order)->onQueue('high');

// Chain jobs: each runs only after the previous one succeeds
Bus::chain([
    new ProcessOrder($order),
    new SendConfirmation($order),
    new NotifyAdmin($order),
])->dispatch();
```
### Batching Jobs

Job batching requires the `job_batches` table (`php artisan queue:batches-table && php artisan migrate`) and the `Batchable` trait on the batched jobs.

```php
$batch = Bus::batch([
    new ProcessOrder($order1),
    new ProcessOrder($order2),
    new ProcessOrder($order3),
])->then(function (Batch $batch) {
    // All jobs completed successfully
})->catch(function (Batch $batch, Throwable $e) {
    // First job failure detected
})->finally(function (Batch $batch) {
    // Batch has finished executing
})->dispatch();
```
## Monitoring

### Health Check Endpoint

```php
// routes/api.php
Route::get('/health/queue', function () {
    $pending = Queue::size();
    $failed = DB::table('failed_jobs')->count();

    return response()->json([
        'status' => $pending < 1000 ? 'healthy' : 'backlog',
        'pending_jobs' => $pending,
        'failed_jobs' => $failed,
    ]);
});
```
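You can point a Docker health check at this endpoint so the orchestrator restarts an unresponsive container. A sketch, assuming `curl` is available in the image, the app listens on port 80, and the route is served under the `/api` prefix:

```yaml
app:
  image: ghcr.io/cboxdk/php-baseimages/php-fpm-nginx:8.3-bookworm
  healthcheck:
    test: ["CMD", "curl", "-fsS", "http://localhost/api/health/queue"]
    interval: 30s
    timeout: 5s
    retries: 3
```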
### Horizon Metrics

```php
// Access via the Horizon dashboard or programmatically
$metrics = app('horizon.metrics');
$throughput = $metrics->throughput();
$runtime = $metrics->runtime();
```
### Failed Jobs

```bash
# List failed jobs
php artisan queue:failed

# Retry all failed jobs
php artisan queue:retry all

# Retry a specific job
php artisan queue:retry 5

# Clear all failed jobs
php artisan queue:flush
```
## Best Practices

### Memory Management

```php
// In the job class
public function handle(): void
{
    // Process records in chunks to keep memory usage flat
    Order::chunk(100, function ($orders) {
        foreach ($orders as $order) {
            $this->process($order);
        }
    });
}
```
### Graceful Shutdown

```yaml
worker:
  stop_grace_period: 30s
  command: php artisan queue:work --timeout=25
```

On SIGTERM the worker finishes its current job before exiting. Keep the job `--timeout` below `stop_grace_period` so jobs are not killed mid-flight.
### Job Timeouts

```php
class LongRunningJob implements ShouldQueue
{
    public $timeout = 3600; // 1 hour

    // Keep retrying this job for up to 24 hours
    public function retryUntil(): DateTime
    {
        return now()->addHours(24);
    }
}
```
### Unique Jobs

```php
class ProcessPodcast implements ShouldQueue, ShouldBeUnique
{
    public function __construct(
        public Podcast $podcast
    ) {}

    // Only one job per podcast may be queued at a time
    public function uniqueId(): string
    {
        return (string) $this->podcast->id;
    }

    // The uniqueness lock expires after 60 seconds
    public function uniqueFor(): int
    {
        return 60;
    }
}
```
## Scaling

### Horizontal Scaling

```bash
# Scale to 5 worker containers
docker compose up -d --scale worker=5
```
Auto-Scaling (Kubernetes)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: queue-worker
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: queue-worker
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metric:
name: redis_queue_size
target:
type: AverageValue
averageValue: 100
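Note that `redis_queue_size` is not a built-in metric; an external metrics adapter must expose it. A common alternative is KEDA, which can scale on Redis list length directly. A sketch, where the Redis address and the `queues:default` key (Laravel's default Redis queue list) are assumptions to adapt:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    name: queue-worker        # the worker Deployment
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: redis
      metadata:
        address: redis:6379        # assumed Redis service address
        listName: queues:default   # Laravel's default Redis queue key
        listLength: "100"          # target jobs per replica
```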
## Troubleshooting

### Jobs Not Processing

1. Check the worker is running: `docker compose ps`
2. Check the queue connection: `php artisan queue:monitor`
3. Check the Redis connection: `redis-cli ping`
4. Process a single job to surface errors: `php artisan queue:work --once`
### Memory Exhaustion

- Add `--max-jobs=1000` to restart workers periodically
- Process data in chunks
- Call `gc_collect_cycles()` in complex jobs
### Jobs Timing Out

- Increase the `--timeout` value
- Set a job-specific `$timeout` property
- Break the work into smaller jobs
- Use job chaining
### High Failure Rate

- Check the `failed_jobs` table for error details
- Implement proper error handling
- Use exponential backoff between retries
- Add a dead-letter queue for inspection
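Backoff can be declared on the job itself with a `backoff` method that returns increasing delays; a minimal sketch:

```php
class ProcessOrder implements ShouldQueue
{
    public $tries = 4;

    // Wait 10s, then 60s, then 5 minutes between retry attempts
    public function backoff(): array
    {
        return [10, 60, 300];
    }
}
```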