CDX-301e · Module 3
Concurrency Limits & Queue Management
3 min read
Concurrency limits control how many parallel tasks can execute simultaneously. Without limits, a script that fans out 50 tasks will attempt to provision 50 microVMs at once — potentially exhausting your quota, spiking costs, and overwhelming the merge process. Concurrency limits act as a throttle: submit all 50 tasks, but only N execute at a time. As each task completes, the next in the queue starts. This provides the organizational benefit of batch submission with the resource discipline of controlled parallelism.
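The throttle described above can be sketched with a counting semaphore: submit every task immediately, but let only N hold a slot at once. This is a minimal illustration, not the platform's actual API; `provision_and_run` and the 50-task fan-out are hypothetical stand-ins.

```python
import asyncio

MAX_PARALLEL = 5  # the "N" slots; matches max_parallel below

async def provision_and_run(task_id: int, sem: asyncio.Semaphore) -> int:
    # Hypothetical stand-in for provisioning a microVM and running one task.
    async with sem:               # blocks until one of the N slots is free
        await asyncio.sleep(0.01) # placeholder for the real work
        return task_id

async def main() -> list[int]:
    sem = asyncio.Semaphore(MAX_PARALLEL)
    # All 50 tasks are submitted at once, but at most 5 execute at a time;
    # as each finishes, the next queued task acquires the freed slot.
    return await asyncio.gather(*(provision_and_run(i, sem) for i in range(50)))

results = asyncio.run(main())
print(len(results))  # 50
```

The key property: submission is unbounded, execution is bounded, and no task is dropped, only delayed.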
Queue management adds priority, ordering, and failure handling to the concurrency system. Priority queues ensure critical tasks (production hotfixes, blocking test failures) execute before routine tasks (documentation updates, code style fixes). Ordering guarantees prevent dependent tasks from executing out of sequence — even in a parallel system, some tasks must complete before others can start. Failure handling determines whether a failed task is retried, skipped, or causes the entire batch to abort.
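One way to picture the ordering guarantee is an event that gates a dependent task: even when both tasks are submitted to the pool together, the dependent one cannot start its work until its prerequisite signals completion. The task names here (`build`, `deploy`) are illustrative, not part of any real pipeline.

```python
import asyncio

log: list[str] = []

async def build(done: asyncio.Event) -> None:
    await asyncio.sleep(0.01)  # placeholder for the real build step
    log.append("build")
    done.set()                 # release any tasks waiting on this prerequisite

async def deploy(build_done: asyncio.Event) -> None:
    await build_done.wait()    # ordering guarantee: block until build finishes
    log.append("deploy")

async def main() -> None:
    build_done = asyncio.Event()
    # Both tasks are submitted concurrently, yet deploy always runs second.
    await asyncio.gather(deploy(build_done), build(build_done))

asyncio.run(main())
print(log)  # ['build', 'deploy']
```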
```yaml
# Concurrency configuration
concurrency:
  max_parallel: 5         # At most 5 tasks running
  queue_strategy: fifo    # First in, first out
  retry_policy:
    max_retries: 2        # Retry failed tasks twice
    backoff: exponential  # 30s, 60s between retries
  failure_mode: continue  # Other tasks proceed on failure
```
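The retry policy above can be sketched as a loop with exponentially growing sleeps: up to 2 retries, with the delay doubling from a 30-second base. This is an assumed implementation, not the platform's own; `run_task` and `flaky` are hypothetical, and the delay is scaled down so the example runs instantly.

```python
import time

def run_with_retries(run_task, max_retries: int = 2, base_delay: float = 30.0):
    # max_retries=2 means up to 3 attempts total (1 initial + 2 retries).
    for attempt in range(max_retries + 1):
        try:
            return run_task()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the failure
            # Exponential backoff: base_delay, then 2x base_delay (30s, 60s).
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds; models a transient infrastructure error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky, base_delay=0.001))  # ok, on the 3rd attempt
```

With `failure_mode: continue`, a task that exhausts its retries is marked failed while the rest of the batch keeps draining the queue.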
```yaml
# Priority levels
priority:
  critical: 0  # Production fixes — execute immediately
  high: 1      # Feature work — next in queue
  normal: 2    # Routine maintenance — default
  low: 3       # Nice-to-have — execute when idle
```
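A standard way to realize these levels is a min-heap keyed by `(priority, sequence)`: lower numbers pop first, and the sequence number keeps FIFO order among tasks at the same level. The task names are made up for illustration.

```python
import heapq

PRIORITY = {"critical": 0, "high": 1, "normal": 2, "low": 3}

queue: list[tuple[int, int, str]] = []
submissions = [
    ("normal", "refactor"),
    ("critical", "prod-hotfix"),
    ("low", "docs-update"),
    ("high", "feature-x"),
]
for seq, (level, name) in enumerate(submissions):
    # seq is the tiebreaker: equal priorities dequeue in submission order.
    heapq.heappush(queue, (PRIORITY[level], seq, name))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['prod-hotfix', 'feature-x', 'refactor', 'docs-update']
```

The hotfix jumps the queue despite being submitted second, which is exactly the behavior the priority table specifies.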
# Queue metrics
- Queue depth: Tasks waiting for a slot
- Wait time: Average time from submit to start
- Throughput: Tasks completed per hour
- Failure rate: % of tasks that fail after all retries
- **Start conservative.** Begin with max_parallel=3. Monitor merge conflict rates, review throughput, and cost. Increase only when the merge process is not the bottleneck.
- **Implement failure modes.** Choose "continue" for independent tasks (each is valuable on its own) and "abort" for pipeline stages (downstream tasks depend on upstream success).
- **Track queue metrics.** If queue depth consistently exceeds 3x your concurrency limit, either increase the limit or reduce the task submission rate.