
SynxDB Cloud · now in early access

The cloud-native data warehouse for builders on AWS.

Columnar MPP performance, Postgres compatibility, S3-native storage. Provision in under a minute.

$ psql "postgres://synxdb.cluster-abc.synxdata.com:5432/prod"
CREATE TABLE events (
  ts          timestamptz,
  user_id     bigint,
  event_type  text,
  payload     jsonb
) USING columnar;

Performance

Query a year of data in seconds.

SynxDB Cloud's columnar storage scans only the columns you touch and skips the rows you don't. An MPP engine fans the work out across every node in the cluster, so scaling up is scaling out, not a bigger box.

  • Columnar layout — read one column without reading the other forty.
  • Distributed execution — every node scans its own slice in parallel.
  • Postgres planner — no new query dialect to learn.
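As a sketch of what those three points buy you, consider a daily rollup over the events table defined above. The scan touches only ts and event_type; user_id and the wide jsonb payload are never read, and each node aggregates its own slice before a final merge. (The query is illustrative, not a published benchmark.)

```sql
-- Runs against the events table from the snippet above.
-- Columnar pruning: only ts and event_type are read from storage.
-- MPP: each node scans and pre-aggregates its own slice in parallel.
SELECT date_trunc('day', ts) AS day,
       event_type,
       count(*)              AS n
FROM events
WHERE ts > now() - interval '90 days'
GROUP BY 1, 2
ORDER BY 1, 2;
```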

1TB scan, relative time

lower is better

Row-store scan              100s
Columnar scan                32s
Columnar + MPP (8 nodes)      8s

Illustrative — benchmarks against your workload will vary. Published numbers coming at GA.

Built for the cloud

A warehouse that feels like a database.

Three things SynxDB Cloud gets right out of the box — so you spend your time on queries, not on plumbing.

Compatibility

Your Postgres, at warehouse scale.

Same SQL, same drivers, same tooling. Point psql, dbt, or any Postgres client at SynxDB Cloud and the query just runs.

connection string

postgres://user@synxdb.cluster.synxdata.com:5432/prod

Storage

S3 is your data lake. So is ours.

Tables live on S3 by default. Bring your existing Parquet in place, query it alongside native tables, pay S3 prices for cold data.

ingest in one line

COPY events FROM 's3://my-bucket/events/*.parquet';
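Once the Parquet data is in (or registered in place, per the storage model above), it joins like any other table and the join runs distributed across the cluster. A minimal sketch, assuming a native users table exists alongside the events table; both names are illustrative:

```sql
-- Assumes events was populated by the COPY above and users is a
-- native table with (id bigint, plan text). Join and aggregation
-- are planned and executed across the whole cluster.
SELECT u.plan, count(*) AS signups
FROM events e
JOIN users  u ON u.id = e.user_id
WHERE e.event_type = 'signup'
GROUP BY u.plan;
```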

Elasticity

Scale the cluster, not your ops team.

Compute and storage scale independently. Add nodes for a noisy Monday, shrink them back Friday night. No rebalancing windows, no downtime.

resize, live

ALTER CLUSTER prod SET SIZE = 16;

How it works

From nothing to your first query, in three steps.

  1. Provision

    Pick a region, pick a size, hit go. A new cluster is live in under a minute — no VPC peering, no capacity planning up front.

    $ synx cluster create --region us-east-1 --size starter
  2. Connect

    Point any Postgres client at the cluster. psql, dbt, Metabase, your own service — same wire protocol, same credentials, same drivers.

    $ psql "$(synx cluster dsn prod)"
  3. Query

    Load data from S3, join it to your native tables, run it across the whole cluster. The planner handles parallelism; you write SQL.

    SELECT count(*) FROM events WHERE ts > now() - interval '30 days';

By the numbers

The shape of the product, in three numbers.

Directional today, audited at GA. We'd rather publish one honest metric than a grid of marketing ones.

Cluster live
< 60s

From signup to first query, in any AWS region we support.

Less data scanned
32×

Columnar layout reads the columns your query touches — nothing else.

Postgres compatible
1 wire protocol

Any driver that speaks Postgres, speaks SynxDB Cloud. No new dialect.

Ecosystem

Plays well with the stack you already run.

SynxDB Cloud is an AWS-native warehouse built on the Apache Cloudberry (Incubating) open-source foundation, and it speaks the Postgres wire protocol end-to-end.

  • AWS Partner
  • Apache Cloudberry
  • PostgreSQL
  • Parquet
  • S3
  • Iceberg

Spin up a warehouse. Run a query. See for yourself.

Early access is open. A starter cluster is free while we're in preview — no credit card, no sales call.