Demystifying AWS DSQL: Building Scalable, Multi-Region Distributed SQL Clusters
In today’s data-driven world, low-latency access, high availability, and global scalability are non-negotiable for modern applications. Amazon Web Services (AWS) answers this call with AWS DSQL — its emerging offering for Distributed SQL workloads. In this blog, we’ll explore what AWS DSQL is, its key benefits, and how you can set up a multi-region cluster to serve highly resilient and globally consistent applications.
What is AWS DSQL?
AWS DSQL refers to a Distributed SQL database service: a modern evolution of the traditional relational database, built for cloud-native, horizontally scalable, and geo-distributed architectures.
Unlike traditional SQL databases (like MySQL or PostgreSQL), which are vertically scalable and struggle with availability in distributed setups, Distributed SQL databases allow you to:
- Scale horizontally across nodes and regions
- Maintain strong consistency with ACID transactions
- Provide fault tolerance and automatic failover
- Support multi-region active-active architecture
Examples of Distributed SQL on AWS:
AWS now has a native Distributed SQL service in Amazon Aurora DSQL, and it also supports several other Distributed SQL databases, such as:
- Amazon Aurora Global Databases
- CockroachDB (via AWS Marketplace or self-managed)
- YugabyteDB (self-hosted or via partner solutions)
- Other Spanner-style engines self-hosted on AWS
- FoundationDB or Fauna in hybrid-cloud setups
Let’s walk through a multi-region cluster setup using Amazon Aurora DSQL, one of the most accessible and powerful Distributed SQL options on AWS.
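Before we start, a quick prerequisite check. The walkthrough below assumes the AWS CLI (v2) and the psql client are installed, and that your credentials are configured for an account with permissions to create and connect to DSQL clusters:
# Verify the AWS CLI is installed and credentials resolve to the expected account
aws --version
aws sts get-caller-identity
# Verify a PostgreSQL client is available for connecting to the cluster
psql --version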
Why Multi-Region?
Multi-region deployment is critical for:
- 🌐 Global low-latency access
- ⚠️ Disaster Recovery & High Availability
- 🔄 Active-active or active-passive replication
- ✅ Compliance with data residency laws
Setting Up an Aurora DSQL Multi-Region Cluster (Distributed SQL Model)
💡 Overview:
Aurora DSQL is a serverless, PostgreSQL-compatible Distributed SQL database designed for globally distributed applications, allowing a single logical database to span multiple AWS Regions with active-active reads and writes.
Here’s how you can configure it:
Let’s create a Multi-Region Cluster
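You can create the cluster from the console (Aurora DSQL, Create cluster, with multi-Region peering enabled) or from the CLI. The sketch below is only illustrative: the --multi-region-properties flag, its witnessRegion/clusters keys, and the witness Region shown are assumptions to verify against the current Aurora DSQL documentation, and the ARNs and identifiers are placeholders.
# Create the first cluster in us-east-1 (witness Region shown is an assumption; adjust as needed)
aws dsql create-cluster \
  --region us-east-1 \
  --multi-region-properties '{"witnessRegion": "us-west-2"}'
# Create the peer cluster in us-east-2, referencing the first cluster's ARN (placeholder)
aws dsql create-cluster \
  --region us-east-2 \
  --multi-region-properties '{"witnessRegion": "us-west-2", "clusters": ["<us-east-1-cluster-arn>"]}'
# Update the first cluster so the two clusters reference each other and peering can complete
aws dsql update-cluster \
  --region us-east-1 \
  --identifier <us-east-1-cluster-id> \
  --multi-region-properties '{"witnessRegion": "us-west-2", "clusters": ["<us-east-2-cluster-arn>"]}'
Once both clusters report an Active status, each Region exposes its own endpoint, and both endpoints accept reads and writes.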
Let’s connect to the cluster in us-east-1:
export CLUSTER_ENDPOINT=deabuc53ppqqsbkv6hfyvbwfii.dsql.us-east-1.on.aws
export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token --hostname $CLUSTER_ENDPOINT --region us-east-1 --expires-in 14400)
export PGSSLMODE=require
psql --quiet --username admin --dbname postgres --host $CLUSTER_ENDPOINT
If psql prompts for a password, paste the authentication token: either reuse the token generated above, or go to the console, open the cluster, click Connect, and choose Get token. Now let’s connect to the cluster in the second Region, us-east-2:
export CLUSTER_ENDPOINT=pqabuc537mhkb5y4gfvaagtvsq.dsql.us-east-2.on.aws
export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token --hostname $CLUSTER_ENDPOINT --region us-east-2 --expires-in 14400)
export PGSSLMODE=require
psql --quiet --username admin --dbname postgres --host $CLUSTER_ENDPOINT
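With PGSSLMODE and PGPASSWORD exported, psql can also be run non-interactively, which is handy for quick checks. For example, a sanity check against whichever endpoint CLUSTER_ENDPOINT currently points to (the query itself is just an illustration):
# Run a one-off statement without opening an interactive session
psql --username admin --dbname postgres --host $CLUSTER_ENDPOINT -c "SELECT current_user, version();"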
Now let’s run a query against our table and see what happens:
select * from ACD_BGLR_2025;
The table doesn’t exist yet, so let’s quickly create it:
CREATE TABLE ACD_BGLR_2025 (
    S_No INT PRIMARY KEY,
    First_Name VARCHAR(50),
    Last_Name VARCHAR(50),
    Topic VARCHAR(100)
);
Let’s try the select query again:
select * from ACD_BGLR_2025;
Now, let’s insert a row in us-east-1:
INSERT INTO ACD_BGLR_2025 (S_No, First_Name, Last_Name, Topic) VALUES (1, 'Rajani', 'Ekunde', 'Aurora DSQL');
I have inserted the row in us-east-1; now let’s check whether it shows up in us-east-2.
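Run the same select from the us-east-2 session, or non-interactively against the us-east-2 endpoint as sketched below (this assumes PGPASSWORD currently holds a valid token generated for the us-east-2 endpoint):
# Query the table through the us-east-2 endpoint to confirm the row written in us-east-1 is visible
psql --username admin --dbname postgres --host pqabuc537mhkb5y4gfvaagtvsq.dsql.us-east-2.on.aws -c "SELECT * FROM ACD_BGLR_2025;"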
Next, let’s insert a row from the us-east-2 session and check whether it is captured in us-east-1:
INSERT INTO ACD_BGLR_2025 (S_No, First_Name, Last_Name, Topic) VALUES (2, 'Rajaram', 'Erraguntla', 'Aurora DSQL');
Finally, let’s update the first row and confirm the change is visible in both Regions:
UPDATE ACD_BGLR_2025 SET Topic = 'AWS AURORA DSQL' WHERE S_No = 1;
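In the opposite direction, querying through the us-east-1 endpoint should now show both rows, including the updated Topic value (again assuming PGPASSWORD holds a valid token for that endpoint):
# Query through the us-east-1 endpoint to confirm the us-east-2 insert and the update have replicated
psql --username admin --dbname postgres --host deabuc53ppqqsbkv6hfyvbwfii.dsql.us-east-1.on.aws -c "SELECT * FROM ACD_BGLR_2025;"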