
Event-driven architecture builds systems where components communicate by producing and consuming events rather than through direct calls. But this isn't just a technical pattern—it's a different way of thinking about how systems should work.

The Paradigm Shift

Traditional architectures use request-response. Service A calls Service B and waits. This creates tight coupling—A must know B exists, know how to call it, and depend on B being available right now.

Event-driven architecture inverts this relationship. Instead of telling other services what to do, a service announces what happened. Others react if they care.

A command says "do this." An event says "this happened." Commands can fail. Facts cannot.

This shift from orchestration to choreography enables fundamentally different system designs.

What Are Events?

Events are immutable records of things that have happened:

  • "Order placed at 14:32:05 for customer 12345 with total $99.99"
  • "Payment received for invoice 67890"
  • "Inventory level for item A123 fell below 10 units"

These aren't requests or commands. They're historical facts. You can't argue with them, you can't reject them, you can only react to them.

This immutability is the foundation everything else builds on. Events contain what happened, when, to whom, and enough context to understand the occurrence without asking follow-up questions.
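One way to make immutability concrete is to model events as frozen records. This is a minimal sketch, assuming an "order placed" event with illustrative field names; the point is that nothing can rewrite a fact after it is recorded.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A sketch of an immutable event record (field names are illustrative).
# frozen=True makes instances read-only: an event is a historical fact,
# so nothing should be able to mutate it after creation.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str
    total: float
    occurred_at: datetime

event = OrderPlaced("ord-1", "12345", 99.99, datetime.now(timezone.utc))

# Any attempt to change a field raises FrozenInstanceError.
print(event.total)  # 99.99
```

The event also carries the who, when, and how much, so consumers can react without a follow-up query.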

Producers and Consumers

In event-driven systems, components play one or both roles:

Producers detect occurrences and publish events. An order service publishes "order placed" when a customer completes checkout. It doesn't know or care who's listening.

Consumers subscribe to events and react. A notification service might send confirmation emails. A warehouse service might begin fulfillment. An analytics service might update dashboards.

The same event, consumed independently, for completely different purposes. The producer remains oblivious to all of them.

This is loose coupling made concrete. You can add a new consumer tomorrow without touching the producer. You can remove a consumer and the producer won't notice. Systems evolve independently.
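The producer/consumer split can be sketched with a toy in-memory broker (real systems use Kafka, Kinesis, and similar; names here are illustrative). Note that the publish call never mentions a consumer: adding or removing subscribers leaves it untouched.

```python
from collections import defaultdict

# A toy broker, just to make the loose-coupling argument concrete.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the same event to every subscriber independently.
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
log = []

# Independent consumers react to the same event for different purposes.
broker.subscribe("orders", lambda e: log.append(f"email sent for {e['order_id']}"))
broker.subscribe("orders", lambda e: log.append(f"fulfillment started for {e['order_id']}"))

# The producer only announces what happened; it references no consumer.
broker.publish("orders", {"order_id": "ord-1", "total": 99.99})
print(log)
```

Dropping the notification subscriber tomorrow would change nothing about the publish side.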

Event Brokers

Events flow through brokers—infrastructure that handles routing, persistence, and delivery:

  • Producers publish to channels (also called topics or streams)
  • Consumers subscribe to channels they care about
  • Brokers ensure events reach their destinations

Apache Kafka, Amazon Kinesis, Google Cloud Pub/Sub, and RabbitMQ are common choices. They provide guarantees that matter: events persist until consumed (temporary unavailability doesn't mean lost data), ordering within partitions, and delivery semantics ranging from at-least-once to exactly-once.
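The persistence guarantee can be sketched as an append-only log plus per-consumer offsets: a consumer that was offline simply resumes from where it left off. Kafka works roughly this way; this toy version is illustrative only.

```python
# An append-only log: events persist regardless of who has read them.
class Log:
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def read_from(self, offset):
        # Everything at or after the given offset, in order.
        return self.events[offset:]

log = Log()
log.append("order placed")
log.append("payment received")  # published while the consumer was down

# The consumer recovers and replays everything it missed.
consumer_offset = 0
caught_up = log.read_from(consumer_offset)
consumer_offset += len(caught_up)
print(caught_up)        # ['order placed', 'payment received']
print(consumer_offset)  # 2
```

Temporary unavailability cost the consumer nothing but time: no events were lost.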

Patterns That Emerge

Event notification is the simplest form—"this happened, react if you want." No payload beyond the fact itself. Consumers query for details if needed.

Event-carried state transfer includes the relevant data in the event. Consumers maintain local copies and update from events, never querying the source. This trades storage for independence.

Event sourcing takes immutability to its logical conclusion: events ARE the database. Current state is computed by replaying events. You never lose history because history is all you store.

CQRS separates writes (commands that generate events) from reads (queries against views built from events). Write models optimize for correctness; read models optimize for query patterns.
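Event sourcing is the easiest of these patterns to show in miniature: the event list is the store, and current state is a pure function of the history. The account events below are illustrative.

```python
# Event sourcing in miniature: events ARE the database; current state
# is computed by replaying them. Event shapes here are illustrative.
events = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 50},
]

def replay(events):
    balance = 0
    for e in events:
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

print(replay(events))  # 120
```

Because the history is all you store, questions like "what was the balance last Tuesday?" are answered by replaying a prefix of the same list.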

Why This Works

Loose coupling isn't just architectural preference—it's operational freedom. Deploy producers and consumers on different schedules. Scale them independently. Replace implementations without coordination.

Resilience emerges from persistence. If a consumer goes down, events wait. When it recovers, it catches up. No data lost, no complex retry logic in the producer.

Extensibility means new capabilities without touching existing code. Want fraud detection? Add a consumer. Want analytics? Add a consumer. The order service keeps doing exactly what it always did.

Real-time reaction comes from immediate propagation. No polling, no batching, no lag between occurrence and response.

The Hard Parts

Eventual consistency is the price of loose coupling. After an event publishes, consumers process it... eventually. The system passes through inconsistent states before converging. If your business logic requires "either all of this happens or none of it," event-driven architecture makes that harder.

Debugging across time and space becomes challenging. Why did this happen? Because of that event. Which came from that service. Which reacted to another event. Tracing causality requires tooling and discipline.

Schema evolution is a permanent tax. Events have structure. Structure changes. Old consumers must handle new event shapes. New consumers must handle old events still in the stream. Versioning strategies and schema registries become essential infrastructure.

Ordering matters more than you'd think. Events about the same customer should process in order. Inventory events must sequence correctly. Brokers provide ordering within partitions, so related events need consistent partition keys. Global ordering across everything is possible but expensive—most systems identify where order matters and partition strategically.
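Consistent partition keys are usually just a stable hash of the entity ID. This sketch assumes a fixed partition count for illustration; the broker's own partitioner would normally do this.

```python
import hashlib

NUM_PARTITIONS = 8  # illustrative; a real topic's partition count varies

def partition_for(key: str) -> int:
    # Hash the key and map it onto a partition. The same key always
    # lands on the same partition, so the broker's per-partition
    # ordering applies to all events for that entity.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# "order placed" and "order cancelled" for this customer stay in order,
# while different customers spread across partitions for parallelism.
print(partition_for("customer-12345"))
```

The trade-off named above falls out directly: one key per customer gives ordering where it matters, without paying for a global total order.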

Handling Failures

Error handling looks different without request-response:

Retries with backoff attempt failed processing again, waiting progressively longer between attempts to avoid overwhelming recovering systems.

Dead letter queues collect poison events—those that fail repeatedly—for human review. Something's wrong, and automated retry won't fix it.

Compensating events undo effects of earlier events when errors occur after partial processing. "Order cancelled" compensates for "order placed" when payment fails.

Circuit breakers stop attempting to process events when downstream dependencies are unavailable, preventing cascade failures.
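Two of these mechanisms compose naturally: retry with exponential backoff, then route the poison event to a dead letter queue once retries are exhausted. A minimal sketch, with handler and queue names invented for illustration:

```python
import time

dead_letter_queue = []  # illustrative stand-in for a real DLQ topic

def process_with_retries(event, handler, max_attempts=3, base_delay=0.01):
    for attempt in range(max_attempts):
        try:
            return handler(event)
        except Exception:
            if attempt < max_attempts - 1:
                # Wait progressively longer: 0.01s, 0.02s, 0.04s, ...
                time.sleep(base_delay * (2 ** attempt))
    # Retries exhausted: park the poison event for human review.
    dead_letter_queue.append(event)
    return None

def always_fails(event):
    raise ValueError("downstream rejected event")

process_with_retries({"order_id": "ord-1"}, always_fails)
print(dead_letter_queue)  # [{'order_id': 'ord-1'}]
```

The producer never sees any of this; failure handling lives entirely on the consuming side.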

Observability Requirements

Event-driven systems need specific monitoring:

Event lag shows how far behind consumers are. Growing lag means capacity problems before they become failures.

Flow visualization maps event paths through the system, revealing bottlenecks and unexpected dependencies.

Distributed tracing correlates events across services, reconstructing complete workflows from initial trigger to final outcome.
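Event lag is just the distance between the end of the log and a consumer's committed position. The offset samples below are made up for illustration; the shape of the trend is the signal.

```python
# Lag: how far a consumer's committed offset trails the end of the log.
def consumer_lag(latest_offset: int, committed_offset: int) -> int:
    return latest_offset - committed_offset

# Illustrative samples: (latest offset in log, consumer's committed offset).
samples = [(1000, 990), (2000, 1900), (3000, 2500)]
lags = [consumer_lag(latest, committed) for latest, committed in samples]
print(lags)  # [10, 100, 500] — growing lag: alert before it becomes an outage
```

A flat lag means the consumer is keeping up; a growing one means capacity problems are already underway.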

When Events Fit

Event-driven architecture excels for:

  • Complex workflows spanning services where loose coupling enables independent evolution
  • Real-time processing like fraud detection or monitoring where milliseconds matter
  • High scale where producers and consumers scale independently
  • Audit requirements where event sourcing provides complete history
  • System integration where events bridge disparate systems without tight coupling

It's overkill for simple CRUD applications where request-response is clearer and eventual consistency is a burden.

Hybrid Reality

Most production systems combine both patterns:

  • Synchronous requests for user-facing operations where immediate response matters
  • Asynchronous events for background processing, integration, and workflows

The user clicks "place order" and gets immediate confirmation (synchronous). Everything that follows—inventory updates, shipping, notifications, analytics—happens through events (asynchronous).
