Async & Queues

Decouple your services, absorb traffic spikes, and stop making users wait for work that doesn't need to happen in real time.

Why Go Async?

Synchronous processing means the user waits while everything happens — API call, database write, email sent, PDF generated. Async processing moves expensive or non-critical work out of the request path. The user gets a fast response; the heavy lifting happens in the background.

Synchronous:  Request → save order → charge payment → send email → generate invoice → Response (2.3s)
Asynchronous: Request → save order → Response (120ms), with email, invoice, and analytics running in the background

Move non-critical work out of the request path. Users don't need to wait for emails to send.
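A minimal sketch of the idea in Python, using a thread pool as the "background" (the handler names and the dict response shape are illustrative, not any particular framework's API):

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def save_order(order_id):
    # Critical path: must complete before we respond.
    return order_id

def send_email(order_id):
    # Slow, non-critical work — simulated here.
    return f"email sent for order {order_id}"

def handle_order(order_id):
    save_order(order_id)
    # Fire-and-forget: the response does not wait for the email.
    executor.submit(send_email, order_id)
    return {"status": "accepted", "order_id": order_id}

result = handle_order(42)
```

In a real service the executor would be a queue plus separate worker processes, so background work survives a crash of the web process.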

Message Queues

A message queue is a buffer that sits between a producer (the thing generating work) and a consumer (the thing doing the work). The producer publishes a message and moves on. The consumer picks it up when ready.

Producer (API Server) → Queue (SQS / RabbitMQ / Kafka) → Consumer (Worker)

The queue decouples production speed from consumption speed.
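The producer/consumer split can be sketched in-process with Python's standard-library `queue.Queue` (a real broker adds durability and networking, but the flow is the same; the `None` shutdown sentinel is a common convention, not part of any broker's API):

```python
import queue
import threading

q = queue.Queue()
processed = []

def worker():
    # Consumer: pull messages until a None sentinel arrives.
    while True:
        msg = q.get()
        if msg is None:
            break
        processed.append(msg.upper())  # "do the work"
        q.task_done()

t = threading.Thread(target=worker)
t.start()

# Producer: publish and move on; the worker drains at its own pace.
for word in ["hello", "world"]:
    q.put(word)

q.put(None)  # shut the worker down
t.join()
```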

Key properties:

  • Durability: Messages persist even if consumers are down. The queue holds them until they're processed.
  • At-least-once delivery: Most queues guarantee that messages are delivered at least once. Your consumers need to be idempotent — processing the same message twice should be safe.
  • Ordering: Some queues (Kafka, SQS FIFO) guarantee message order. Standard queues don't — if order matters, you need to handle it.
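A sketch of what an idempotent consumer looks like: each message carries an ID, and the consumer records IDs it has already handled so a redelivery is a no-op. (In production the seen-set would live in a database or cache, not process memory; the message shape here is illustrative.)

```python
seen_ids = set()
balance = {"credits": 0}

def handle_message(msg):
    # At-least-once delivery means this may run twice for the same message.
    if msg["id"] in seen_ids:
        return "duplicate, skipped"
    balance["credits"] += msg["amount"]  # the side effect we must not repeat
    seen_ids.add(msg["id"])
    return "processed"

handle_message({"id": "m1", "amount": 10})
handle_message({"id": "m1", "amount": 10})  # redelivered: safe, no double-credit
```
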
Queue use cases: Email/notification delivery, image/video processing, data pipeline ingestion, order fulfillment, search index updates, analytics event collection. Basically any work that doesn't need to happen within the user's request-response cycle.

Task Queues

Task queues are a higher-level abstraction over message queues. Instead of raw messages, you enqueue function calls with arguments. A pool of workers picks up tasks and executes them.

Examples: Celery (Python), Sidekiq (Ruby), Bull (Node.js). These frameworks handle task serialization, retry logic, dead-letter queues, rate limiting, and monitoring.

Feature     | Message Queue                    | Task Queue
Abstraction | Raw messages (bytes/JSON)        | Function calls with arguments
Retries     | Manual implementation            | Built-in with backoff
Monitoring  | Varies by provider               | Dashboards, success/failure tracking
Scheduling  | Not built-in                     | Delayed/scheduled task support
Use case    | Service-to-service communication | Background job processing
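To make the abstraction concrete, here is a toy sketch of what a task-queue framework does under the hood: register a function, serialize a call to it, and have a worker execute it with retries and exponential backoff. All names here are illustrative — this is not Celery's or Sidekiq's real API.

```python
import json
import time

TASKS = {}

def task(fn):
    # Register the function under its name so workers can look it up.
    TASKS[fn.__name__] = fn
    return fn

def enqueue(fn_name, *args):
    # Serialize the call; a real framework pushes this to a broker.
    return json.dumps({"task": fn_name, "args": args})

def run(message, max_retries=3, base_delay=0.01):
    # Worker side: deserialize, execute, retry with exponential backoff.
    payload = json.loads(message)
    fn = TASKS[payload["task"]]
    for attempt in range(max_retries + 1):
        try:
            return fn(*payload["args"])
        except Exception:
            if attempt == max_retries:
                raise  # a real framework would move this to a dead-letter queue
            time.sleep(base_delay * 2 ** attempt)

@task
def add(a, b):
    return a + b

result = run(enqueue("add", 2, 3))  # → 5
```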

Back Pressure

Back pressure builds up when producers generate work faster than consumers can handle it. Without a strategy for relieving it, queues grow without bound, memory fills up, and your system falls over.

Back Pressure Strategies

  • Bounded queues: Set a maximum queue size. When it's full, producers are blocked or receive errors. This pushes pressure upstream where it belongs.
  • Rate limiting: Limit how fast producers can enqueue. Excess requests get rejected with a 429 status or queued for later.
  • Load shedding: When overwhelmed, intentionally drop low-priority work. Process critical tasks first, skip non-essential ones.
  • Auto-scaling consumers: Spin up more workers when queue depth increases. Scale down when it drains. This is reactive but effective for burst traffic.
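The bounded-queue strategy can be sketched with the standard library's `queue.Queue(maxsize=...)`: when the queue is full, the producer gets an immediate error to propagate upstream (e.g. as a 429) instead of the queue growing without limit. The `try_enqueue` helper is hypothetical.

```python
import queue

q = queue.Queue(maxsize=2)  # explicit bound with explicit overflow behavior

def try_enqueue(item):
    try:
        q.put_nowait(item)  # non-blocking put: raises queue.Full when at capacity
        return "accepted"
    except queue.Full:
        return "rejected"   # push back: e.g. respond 429 to the client

results = [try_enqueue(i) for i in range(4)]
# → ["accepted", "accepted", "rejected", "rejected"]
```

Using the blocking `q.put(item)` instead would implement the other overflow choice: stall the producer until a consumer makes room.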
High traffic burst → Queue (filling up) → Worker 1 (running), Worker 2 (running), Worker 3 (scaling up)

Auto-scale workers to match queue depth. Set upper bounds to prevent unbounded growth.
Infinite queues are a lie. Every queue has a limit — whether it's configured or whether it's "however much disk/memory your broker has." Explicit bounded queues with clear overflow behavior (reject, drop, circuit break) are always safer than hoping the queue never fills.

Event-Driven Architecture

In event-driven systems, services communicate by publishing events (facts about what happened) rather than sending commands (instructions to do something). This is a powerful pattern for decoupling:

  • The order service publishes OrderPlaced. It doesn't know or care who's listening.
  • The email service subscribes and sends a confirmation. The inventory service subscribes and decrements stock. The analytics service subscribes and logs the sale.
  • Adding a new subscriber (e.g., a loyalty points service) requires zero changes to the order service.
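The decoupling above can be sketched as a minimal in-process event bus: the publisher of `OrderPlaced` knows nothing about its subscribers, so wiring in a new one touches only the subscriber's code. (Event names and payload fields here are illustrative.)

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_name, handler):
    subscribers[event_name].append(handler)

def publish(event_name, payload):
    # The publisher just announces a fact; it doesn't know who's listening.
    for handler in subscribers[event_name]:
        handler(payload)

log = []
subscribe("OrderPlaced", lambda e: log.append(f"confirmation email for order {e['id']}"))
subscribe("OrderPlaced", lambda e: log.append(f"decrement stock for sku {e['sku']}"))

publish("OrderPlaced", {"id": 7, "sku": "ABC"})
```

Adding a loyalty-points subscriber is one more `subscribe(...)` call; `publish` and the order code are untouched.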

Tools: Kafka (event streaming at scale), AWS EventBridge (serverless event bus), RabbitMQ (traditional pub/sub), Redis Streams (lightweight event log).