1. Introduction: A Query Walks into a Database

You send off a SQL query.

In most systems, that means a single process parses it, plans it, runs it, and returns the result. Fast, contained, predictable.

Aurora DSQL doesn’t work like that.

Your query passes between services. One handles parsing and planning. Another coordinates the transaction. A third ensures durability. A fourth stores the data. Each runs on its own. Each has a focused role.

It’s less a program, more a set of services working out what to do.

This isn’t just about scaling out. It shifts the whole shape of the system. What used to be hidden behind function calls is now out in the open — visible, distributed, and bound by contracts.

This post walks through that shift.

Not to say Aurora DSQL is the future. Others like CockroachDB and Fauna have already blurred these lines in different ways. But Aurora takes the decomposition further — and that opens up new patterns, new trade-offs, and new ways of thinking about what a database even is.

It’s still SQL. But the execution model underneath has changed. That changes how it behaves, how it fails, and how we relate to it.


2. Why Decompose the Database?

Traditional databases bundled everything into one process. Planning, coordination, execution, storage — all in the same binary, sharing memory and fate. That worked when compute was local, machines were stable, and failure was rare.

Modern infrastructure doesn’t work that way. Machines disappear. Workloads shift regions midstream. Latency budgets leave no room for retries. What used to be failure modes are now the baseline.

You can wrap the monolith in layers — sharding, proxies, replicas — but the shape underneath stays brittle. It still assumes stability, and it punishes you when that stability isn't there.

Aurora DSQL starts somewhere else.

It splits the system apart. Planning, transaction ordering, durability, and storage each run as their own service. Each with clear responsibilities. Each restartable, reschedulable, and independently scalable.

You don’t tiptoe around distribution here — you build directly on its shifting ground.

The system flexes under pressure. Failures are contained, coordination surfaces naturally, and volatility isn’t a threat — it’s the backdrop the whole thing is designed for.

This is fault modelling flipped on its head. Aurora DSQL assumes loss: of processes, of memory, of locality. That assumption shapes everything else. By isolating concerns, it avoids entanglement — and keeps going even as the ground shifts.

Systems like CockroachDB and FoundationDB share this DNA. They also decompose core responsibilities — splitting planning from storage, or layering rich semantics over a transactional core. But Aurora pushes that boundary further. Each layer runs as an independent service, with its own deployment model, scaling boundaries, and failure domains. The separation is enforced in operation as much as in design.


3. Inside Aurora DSQL: The Database as a Fleet

Aurora DSQL behaves like a relational database — but internally, it’s a set of services that act more like a distributed fleet. Each one has a narrow job. Together, they provide familiar semantics across an unfamiliar shape.

To make sense of it, it helps to break things into three zones: execution, state, and coordination.

🗺️ High-Level Architecture Zones

    
flowchart LR
    EP["🧱 Execution Path"]
    SD["📦 State & Durability"]
    CO["🔧 Coordination & Health"]

    EP --> SD
    CO -.-> EP
    CO -.-> SD

    style EP fill:#f1faff,stroke:#1e88e5
    style SD fill:#e6ffe9,stroke:#43a047
    style CO fill:#f9f9f9,stroke:#bbb,stroke-dasharray: 5 5

  
Aurora DSQL — High-Level Architecture Zones

🧱 Execution Path

    
flowchart LR
    SR["Session Router"]
    QP["Query Processor"]
    AJ["Adjudicator"]

    SR --> QP --> AJ

    style SR fill:#f1faff
    style QP fill:#e9f5ff
    style AJ fill:#fff9e6

  
Aurora DSQL — Execution Path

These components handle the live path of a query — parsing, planning, coordinating, and preparing writes.

🧭 Session Router

The session router is the front door. It maintains client session state, tracks affinities, and ensures transactional boundaries hold even as requests bounce across ephemeral instances.

It doesn’t process queries itself — it guides them to the right Query Processor, and helps preserve the illusion of a sticky session in a stateless world.
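
To make the "sticky session over a stateless fleet" idea concrete, here's a minimal sketch in Python. It is not Aurora's implementation; the instance names and the `route`/`mark_unhealthy` methods are invented purely for illustration.

```python
import random

class SessionRouter:
    """Toy session-affinity router: keeps a session pinned to one Query
    Processor and quietly re-homes it if that instance disappears."""

    def __init__(self, query_processors):
        self.query_processors = set(query_processors)   # currently live QPs
        self.affinity = {}                               # session_id -> QP name

    def route(self, session_id):
        qp = self.affinity.get(session_id)
        if qp not in self.query_processors:              # never assigned, or QP gone
            qp = random.choice(sorted(self.query_processors))
            self.affinity[session_id] = qp               # restore stickiness
        return qp

    def mark_unhealthy(self, qp):
        self.query_processors.discard(qp)                # e.g. signalled by the control plane


router = SessionRouter(["qp-1", "qp-2", "qp-3"])
first = router.route("session-42")
router.mark_unhealthy(first)
print(first, "->", router.route("session-42"))           # the session survives the loss
```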

🧠 Query Processor

The Query Processor (QP) parses SQL, builds an execution plan, and optimises it. Based on PostgreSQL internals, it’s been refactored to run independently of any specific storage or durability backend.

Once a query’s plan is ready, the QP passes transactional intents to the Adjudicator via gRPC. It doesn’t persist changes or track conflicts itself — it delegates to specialised services that do.

QP is stateless by design, which means it can scale out horizontally. Incoming load is spread across QPs without needing a central coordinator.
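
A toy sketch of that hand-off shape, assuming (purely for illustration) that the QP's output is a read set of observed versions plus a buffered write set. The real QP is a refactored PostgreSQL parser, planner, and executor; nothing below mirrors its actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class TransactionIntent:
    """What a stateless QP hands off: the versions it read and the writes it
    wants applied. Nothing is persisted by the QP itself."""
    read_set: dict = field(default_factory=dict)    # key -> version observed
    write_set: dict = field(default_factory=dict)   # key -> new value

def plan_and_execute(statements, snapshot):
    """Stand-in for parse/plan/execute: reads come from a point-in-time
    snapshot, writes are buffered into the intent rather than applied."""
    intent = TransactionIntent()
    for op, key, value in statements:
        if op == "read":
            _value, version = snapshot[key]
            intent.read_set[key] = version
        elif op == "write":
            intent.write_set[key] = value
    return intent    # handed to the Adjudicator, e.g. over gRPC

snapshot = {"account:1": (100, 7)}                  # key -> (value, version)
intent = plan_and_execute(
    [("read", "account:1", None), ("write", "account:1", 90)], snapshot)
print(intent)
```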

🧮 Adjudicator

The Adjudicator handles transaction coordination using optimistic concurrency control. It doesn’t use locks. Instead, it evaluates whether a transaction can commit safely once all the work is done.

Each transaction receives a commit timestamp that preserves snapshot isolation. Conflicts are detected at the end of the path, not during execution. If the check passes, the Adjudicator finalises the result and logs it to the Journal.

Because the Adjudicator focuses purely on validation and ordering, it can scale horizontally and operate across multiple regions without shared memory or central coordination.
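
Here's a minimal sketch of commit-time validation in the standard optimistic-concurrency style. The per-key last-commit-timestamp map and the integer clock are simplifications for illustration, not DSQL's actual protocol.

```python
import itertools

class Adjudicator:
    """Toy commit-time validation: no locks, conflicts are only detected when
    a transaction asks to commit."""

    def __init__(self):
        self.last_commit_ts = {}                 # key -> commit timestamp
        self.clock = itertools.count(start=1)    # stand-in for a real timestamp source

    def try_commit(self, snapshot_ts, read_set, write_set):
        # Conflict: a key this transaction read (or is about to overwrite) was
        # committed by someone else after this transaction's snapshot was taken.
        for key in set(read_set) | set(write_set):
            if self.last_commit_ts.get(key, 0) > snapshot_ts:
                return None                      # reject; the client retries
        commit_ts = next(self.clock)             # position in the global commit order
        for key in write_set:
            self.last_commit_ts[key] = commit_ts
        # The real system would now append the commit to the Journal.
        return commit_ts

adj = Adjudicator()
print(adj.try_commit(snapshot_ts=0, read_set={"k"}, write_set={"k": 1}))  # commits -> 1
print(adj.try_commit(snapshot_ts=0, read_set={"k"}, write_set={"k": 2}))  # conflict -> None
```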


📦 State & Durability

    
flowchart LR
    JR["Journal"]
    CB["Crossbar"]
    ST["Storage Layer"]

    JR --> CB --> ST

    style JR fill:#fef2e6
    style CB fill:#f3e6ff
    style ST fill:#e6ffe9

  
Aurora DSQL — State and Durability Components

These services anchor the system’s truth. They record what happened — and make sure it sticks.

📓 Journal

The Journal is a distributed write-ahead log. Every committed transaction flows through it. It provides durability, crash recovery, and a global timeline of changes — not just for storage, but for replicas and readers too.

Aurora DSQL writes entire commits to a single journal, regardless of how many rows they touch. That simplifies atomicity and makes the write path easy to scale. But it introduces a problem on the read side.

If any journal might hold the latest update for a row, then storage has to monitor all of them. As more journals are added to boost throughput, storage starts to drown — too many streams, too much noise.
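
As a rough sketch of that write path, imagine several journals where each commit record lands whole on exactly one of them. The placement-by-hash policy below is a stand-in, not how DSQL actually assigns commits.

```python
import hashlib
import json

class JournalSet:
    """Toy write path: every commit lands whole on exactly one journal, so
    atomicity never has to span journals."""

    def __init__(self, n_journals):
        self.journals = [[] for _ in range(n_journals)]

    def append_commit(self, txn_id, commit_ts, write_set):
        record = json.dumps({"txn": txn_id, "ts": commit_ts, "writes": write_set})
        # Place the whole commit on one journal. Hashing the transaction id is
        # just a stand-in for whatever placement policy the real system uses.
        idx = int(hashlib.sha256(txn_id.encode()).hexdigest(), 16) % len(self.journals)
        self.journals[idx].append(record)
        return idx

journals = JournalSet(n_journals=4)
journals.append_commit("txn-1", 17, {"account:1": 90, "account:2": 60})
# The read-side problem: the latest update for "account:1" could be on any of
# the four journals, so a naive reader would have to tail all of them.
```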

🪝 Crossbar

Crossbar helps scale reads without tying them to write throughput. It tracks which storage nodes are interested in which keys, and routes updates accordingly.

Storage nodes don’t tail every journal. Instead, they subscribe to key ranges. Crossbar listens to the journals, assembles a global transaction timeline, and forwards updates to only the nodes that care.

This design also supports multi-region deployments. By merging the journals into a single logical sequence, Crossbar makes it possible to replicate and recover consistently — even when updates arrive from different parts of the world.

The journal records the sequence of events. Crossbar decides who needs to hear about each one.
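
A small sketch of that routing idea, assuming subscriptions are simple key ranges and each journal's stream is already ordered by commit timestamp. The class and method names are illustrative, not Crossbar's real interface.

```python
import heapq

class Crossbar:
    """Toy fan-out: merge per-journal streams into one timeline, then forward
    each write only to the storage nodes subscribed to its key range."""

    def __init__(self):
        self.subscriptions = []   # (low_key, high_key, node_name)

    def subscribe(self, low_key, high_key, node):
        self.subscriptions.append((low_key, high_key, node))

    def dispatch(self, journal_streams):
        # Merge streams of (commit_ts, key, value) into commit-timestamp order,
        # then route each update by key range.
        for commit_ts, key, value in heapq.merge(*journal_streams):
            targets = [n for lo, hi, n in self.subscriptions if lo <= key < hi]
            yield commit_ts, key, value, targets

cb = Crossbar()
cb.subscribe("a", "m", "storage-node-1")
cb.subscribe("m", "z", "storage-node-2")

journal_a = [(1, "apple", "v1"), (4, "pear", "v2")]
journal_b = [(2, "mango", "v1"), (3, "apple", "v2")]
for event in cb.dispatch([journal_a, journal_b]):
    print(event)    # one global timeline, each update routed to its subscribers
```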

💾 Storage Layer

The Storage Layer holds Multi-Version Concurrency Control (MVCC) snapshots, serves reads from point-in-time views, and applies writes as they stream in from Crossbar.

It avoids coordination and skips validation. Its role is to store data as instructed and serve it efficiently. This clean separation allows storage to scale on its own, without being pulled into transaction logic or planning overhead.
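
A minimal MVCC sketch along those lines: every version is kept, reads are answered as of a timestamp, and nothing is validated. This is the textbook mechanism, not DSQL's storage format.

```python
class MVCCStore:
    """Toy MVCC storage node: keeps every version it hears about from
    Crossbar and answers reads as of a requested timestamp."""

    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), in commit order

    def apply(self, key, commit_ts, value):
        # Writes arrive already ordered and already validated; storage keeps them.
        self.versions.setdefault(key, []).append((commit_ts, value))

    def read(self, key, as_of_ts):
        # Newest version committed at or before the requested timestamp.
        result = None
        for commit_ts, value in self.versions.get(key, []):
            if commit_ts <= as_of_ts:
                result = value
        return result

store = MVCCStore()
store.apply("account:1", 10, 100)
store.apply("account:1", 20, 90)
print(store.read("account:1", as_of_ts=15))   # 100: a point-in-time view
print(store.read("account:1", as_of_ts=25))   # 90: the latest committed value
```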


🔧 Coordination & Health

    
flowchart LR
    CP["Control Plane"]

    style CP fill:#f9f9f9,stroke:#bbb,stroke-dasharray: 5 5

  
Aurora DSQL — Control Plane

This isn’t part of the query path — but it’s what makes the rest of the system live and breathe.

⚙️ Control Plane

The Control Plane watches everything. It tracks service health, schedules components, assigns sessions, and helps recover from faults. It doesn’t handle queries, but it informs how and where they run.

Rather than assume components are always up, Aurora DSQL designs for churn. The Control Plane helps services discover each other, distribute work, and remain elastic under shifting load.
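
One way to picture that role is a heartbeat registry: services report in, stale entries age out, and placement decisions only ever consider live instances. A toy sketch, with invented names and timeouts:

```python
import time

class ControlPlane:
    """Toy health registry: services heartbeat in, stale entries age out, and
    placement only ever considers live instances."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}             # (role, instance) -> last heartbeat time

    def heartbeat(self, role, instance):
        self.last_seen[(role, instance)] = time.monotonic()

    def live_instances(self, role):
        now = time.monotonic()
        return [inst for (r, inst), seen in self.last_seen.items()
                if r == role and now - seen < self.timeout_s]

cp = ControlPlane()
cp.heartbeat("query-processor", "qp-1")
cp.heartbeat("adjudicator", "adj-1")
print(cp.live_instances("query-processor"))   # ['qp-1']
```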


📌 A Query’s Journey Through Aurora DSQL

    
flowchart TD
    Client["Client Request"]

    subgraph QueryFlow["Query Execution Flow"]
        SR["Session Router"]
        QP["Query Processor"]
        AJ["Adjudicator"]
        JR["Journal"]
        CB["Crossbar"]
        ST["Storage Layer"]
    end

    subgraph Control["Control Plane"]
        CP["Control Plane"]
    end

    %% Query path
    Client --> SR --> QP --> AJ --> JR --> CB --> ST

    %% Control links (dashed)
    CP -.-> SR
    CP -.-> QP
    CP -.-> AJ
    CP -.-> JR
    CP -.-> CB
    CP -.-> ST

    %% Styles
    style Client fill:#ffffff,stroke:#333
    style SR fill:#f1faff
    style QP fill:#e9f5ff
    style AJ fill:#fff9e6
    style JR fill:#fef2e6
    style CB fill:#f3e6ff
    style ST fill:#e6ffe9
    style CP fill:#f9f9f9,stroke:#bbb,stroke-dasharray: 5 5

  
A Query’s Journey Through Aurora DSQL

Each component talks over clean interfaces. They don’t share memory. They don’t assume locality. They operate in concert — like a fleet, not a monolith — each one focused, swappable, and resilient.


4. The Benefits and the Bill

Decomposition buys you flexibility, but it also reshapes the work. New edges appear. New failure modes too.

Aurora DSQL’s architecture brings real advantages. You can scale just the layer that’s under pressure. A Query Processor can disappear without disrupting storage. An Adjudicator can fail and restart without losing progress. Each service is stateless where possible, replaceable by design.

The trade-offs sit deeper in the system. The architecture favours a consistent global order of commits, even across regions. That means prioritising determinism over low-latency availability. Conflicts aren’t blocked — they’re discovered at commit time and resolved by rejecting the transaction. This works well at scale, but introduces extra complexity at the edges.
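
In application code, that policy usually surfaces as commit-time retries. Here's a hedged sketch of the pattern; the `CommitConflict` exception and the backoff numbers are placeholders, not DSQL's actual client API.

```python
import random
import time

class CommitConflict(Exception):
    """Placeholder for whatever error the driver raises when the Adjudicator
    rejects a transaction at commit time."""

def run_with_retries(transaction_fn, max_attempts=5):
    """Run an idempotent transaction body, retrying on commit-time conflicts
    with jittered exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return transaction_fn()
        except CommitConflict:
            if attempt == max_attempts - 1:
                raise
            # Brief, randomised pause so competing writers stop colliding.
            time.sleep(random.uniform(0, 0.05 * 2 ** attempt))

def transfer():
    # Read rows, compute new balances, write them back; the conflict, if any,
    # would surface at commit.
    return "committed"

print(run_with_retries(transfer))
```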

Retries are lightweight, but the control paths carry real weight. Crossbar handles key-range routing and enforces global ordering. Journals need to be merged into a consistent timeline — and when those systems lag, performance isn’t the only thing at risk. Correctness can slip too, especially under pressure.

That separation brings operational clarity. You can monitor just the Adjudicator. Or just storage write latency. You get a cleaner picture of where the pain is — and where it isn’t.

But it also spreads the burden.

Every cross-service hop adds a bit of latency, a bit of unpredictability. There’s no single log to tail, no server to bounce. Cold-starting a stateless service might mean reacquiring leases or rebuilding context from scratch.

Local development becomes tricky. You can’t docker-compose your way to realism. The contracts between components need to be well-defined and forward-compatible — otherwise, you’re left debugging distributed mismatches across a mesh of interfaces.

“When storage is a service, you don’t ‘restart the DB’ — you find the part that broke and fix that.”

This is operational work that looks more like distributed systems engineering than classic DBA tasks.

And it prompts harder questions.

Where does the database end now? Who owns cross-cutting performance regressions? How do you test something that behaves like a fleet but promises the guarantees of a single box?

Aurora DSQL puts the complexity on display — and expects you to own it.

Table: Aurora DSQL — Gains and Trade-offs

| ✅ Benefits | ⚠️ Costs |
| --- | --- |
| Scale individual components | Increased latency across service hops |
| Isolate faults cleanly | More complex deployment and monitoring |
| Elastic, stateless services | Cold starts introduce delays |
| Clearer operational visibility | Debugging spans multiple service boundaries |
| Flexible failure recovery | Local development becomes harder |
| Consistent global commit order | Reduced write availability under contention |
| Active-active region support | Requires journal merge and key-aware routing |

5. Rethinking the Database Shape

Aurora DSQL changes more than execution — it redefines the shape and boundaries of the database itself.

We’re used to binaries: self-contained, stateful, singular. Start a process, open a socket, store and retrieve. Even clustered systems echo that shape — tight coupling, shared assumptions.

Aurora breaks that.

No central daemon. No fixed address. The database becomes a set of services — each with a job, a contract, and a lifecycle of its own. Planning, ordering, storage, durability — split out, scaled separately.

That opens up real flexibility. You can scale just the layer that’s under pressure, restart a single part without taking down the rest, and trace behaviour with a level of precision that monoliths rarely allow.

But that flexibility comes at a cost. Interfaces harden in ways in-process boundaries never did. Latency becomes more visible. Failure paths multiply. And development starts to feel more like stitching together a system than writing against a single tool.

Other systems are moving this way too. FoundationDB splits storage from semantics. CockroachDB fuses SQL and Raft. Fauna and PlanetScale push modular, serverless shapes.

Aurora goes further — each piece deployed and run as its own service.

And that leaves us with questions:

  • Can this shape work outside AWS?
  • What happens when contracts drift?
  • How much do we gain — and what do we give up?

Aurora DSQL walks away from the monolith — toward something more flexible, but harder to hold in your head.