
Sequin performance
Single‑slot peak throughput & latency
Sequin streams from a single Postgres replication slot to Kafka at:
- 50k ops/s or 40 MB/s (whichever comes first), sustained
- 55ms average latency
- 253ms 99th‑percentile latency
Sequin on different instance sizes
Here’s a sampling of the throughput Sequin can handle on different instance types:

AWS Graviton EC2 | vCPU / GiB | Avg row size | Sustained throughput | Bandwidth |
---|---|---|---|---|
c8g.4xlarge | 16 / 32 | 100 B | 60k ops/s | 5.53 MB/s |
c8g.4xlarge | 16 / 32 | 200 B | 58k ops/s | 10.64 MB/s |
c8g.4xlarge | 16 / 32 | 400 B | 52k ops/s | 19.92 MB/s |
c8g.4xlarge | 16 / 32 | 1.6 kB | 23k ops/s | 36.06 MB/s |
c8g.2xlarge | 8 / 16 | 200 B | 34k ops/s | 6.32 MB/s |
c8g.xlarge | 4 / 8 | 200 B | 20k ops/s | 3.81 MB/s |
Debezium on MSK Connect (AWS)
Best if: you already run Amazon MSK and want a managed Kafka Connect runtime. The highest throughput we were able to consistently achieve with Debezium deployed on AWS MSK Connect was 6k ops/s.

Metric | 8 MCUs (max available on AWS MSK Connect) |
---|---|
Sustained throughput | 6k ops/s |
Average latency | ≈ 258ms |
99th‑percentile latency | ≈ 499ms |

Scaling limits
- 8 MCUs (8 vCPU / 32 GiB RAM) is the hard ceiling MSK Connect exposes for a single connector.
- The connector is single‑threaded for snapshotting and heavily synchronized during streaming, so adding more MCUs has diminishing returns past ~4 MCUs.
Debezium Server (stand‑alone)
Best if: you need a minimal footprint and do not want to run full Kafka Connect.

Hardware | Row size | Debezium Server | Sequin |
---|---|---|---|
16 vCPU / 32 GiB | 200 B | 20k ops/s (4 MB/s) | 58k ops/s (10.64 MB/s) |
16 vCPU / 32 GiB | 400 B | 20.3k ops/s (7.73 MB/s) | 52k ops/s (19.92 MB/s) |
8 vCPU / 16 GiB | 100 B | 20k ops/s (2 MB/s) | 30k ops/s (3 MB/s) |
4 vCPU / 8 GiB | 100 B | 20k ops/s (2 MB/s) | 29k ops/s (2.9 MB/s) |
Debezium on Confluent Cloud
Comparative benchmarks are coming soon. Upvote the issue if you want to see them.
Debezium on self‑hosted Kafka Connect
Comparative benchmarks are coming soon. Upvote the issue if you want to see them.
Benchmark methodology
All of our benchmarks are open source and available on GitHub.
The workload is generated by workload_generator.py, deployed to a dedicated EC2 instance.
Throughput and end-to-end latency are measured with a Kafka consumer deployed to a separate EC2 instance. The stats are calculated as:
- Throughput: the number of records delivered to Kafka per second.
- Latency: the time between a change occurring in Postgres (the updated_at timestamp) and its delivery to AWS MSK Kafka (the Kafka record creation timestamp).
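To make that latency definition concrete, here is a minimal sketch of the per-record calculation, assuming each Kafka message carries the row's updated_at as an ISO-8601 string in a JSON payload (the record["updated_at"] field path is hypothetical; the actual payload shape depends on the connector):

```python
import json
from datetime import datetime, timezone

def record_latency_ms(kafka_timestamp_ms: int, value: bytes) -> float:
    """End-to-end latency: Kafka record creation time minus the row's updated_at."""
    payload = json.loads(value)
    # Hypothetical field path; Sequin and Debezium wrap the changed row differently.
    raw = payload["record"]["updated_at"].replace("Z", "+00:00")  # tolerate a trailing Z
    updated_at = datetime.fromisoformat(raw)
    if updated_at.tzinfo is None:
        updated_at = updated_at.replace(tzinfo=timezone.utc)  # assume UTC if untagged
    produced_at = datetime.fromtimestamp(kafka_timestamp_ms / 1000, tz=timezone.utc)
    return (produced_at - updated_at).total_seconds() * 1000
```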
Workload
workload_generator.py applies a mixed workload of INSERT, UPDATE, and DELETE operations to the benchmark_records table.

The benchmark_records Postgres table has the following schema:
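The exact DDL and generator logic live in the open-source benchmark repo linked above; the sketch below is only a rough illustration, assuming a hypothetical minimal table (an id key, a payload column padded to the target row size, and the updated_at column used for latency measurement), hypothetical operation weights, and psycopg2 as the driver:

```python
import random
import string

import psycopg2  # assumed driver; the actual generator may use a different client

# Illustrative schema, not the repo's actual DDL.
DDL = """
CREATE TABLE IF NOT EXISTS benchmark_records (
    id         BIGSERIAL PRIMARY KEY,
    payload    TEXT NOT NULL,                     -- padded to the target row size
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now() -- used to measure end-to-end latency
);
"""

def random_payload(size: int) -> str:
    return "".join(random.choices(string.ascii_letters, k=size))

def run_mixed_workload(dsn: str, ops: int = 10_000, row_size: int = 200) -> None:
    """Apply a mix of INSERT, UPDATE, and DELETE operations (weights are illustrative)."""
    conn = psycopg2.connect(dsn)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(DDL)
        for _ in range(ops):
            op = random.choices(["insert", "update", "delete"], weights=[6, 3, 1])[0]
            if op == "insert":
                cur.execute(
                    "INSERT INTO benchmark_records (payload) VALUES (%s)",
                    (random_payload(row_size),),
                )
            elif op == "update":
                # ORDER BY random() is fine for a sketch; a real generator would pick ids more cheaply.
                cur.execute(
                    "UPDATE benchmark_records SET payload = %s, updated_at = now() "
                    "WHERE id = (SELECT id FROM benchmark_records ORDER BY random() LIMIT 1)",
                    (random_payload(row_size),),
                )
            else:
                cur.execute(
                    "DELETE FROM benchmark_records "
                    "WHERE id = (SELECT id FROM benchmark_records ORDER BY random() LIMIT 1)"
                )
    conn.close()
```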
Stats collection
Similarly, the cdc_stats.py script is deployed to a separate EC2 instance and reads from AWS MSK Kafka. Stats are bucketed and saved to a CSV file for analysis.
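A minimal sketch of what that collection loop could look like, assuming the kafka-python client, JSON payloads carrying updated_at, and per-second buckets (the topic name, field path, and percentile math are illustrative, not the actual cdc_stats.py):

```python
import csv
import json
from collections import defaultdict
from datetime import datetime, timezone

from kafka import KafkaConsumer  # kafka-python; assumed client for this sketch

def collect_stats(bootstrap_servers: str, topic: str, out_path: str = "cdc_stats.csv") -> None:
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap_servers,
        auto_offset_reset="earliest",
        consumer_timeout_ms=30_000,  # stop once the topic goes idle
    )
    buckets = defaultdict(list)  # epoch second -> list of per-record latencies (ms)
    for msg in consumer:
        payload = json.loads(msg.value)
        # Same latency definition as above; the field path is hypothetical.
        raw = payload["record"]["updated_at"].replace("Z", "+00:00")
        updated_at = datetime.fromisoformat(raw)
        if updated_at.tzinfo is None:
            updated_at = updated_at.replace(tzinfo=timezone.utc)
        latency_ms = msg.timestamp - updated_at.timestamp() * 1000  # Kafka timestamps are in ms
        buckets[msg.timestamp // 1000].append(latency_ms)
    consumer.close()

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["second", "throughput_ops", "avg_latency_ms", "p99_latency_ms"])
        for second in sorted(buckets):
            lat = sorted(buckets[second])
            p99 = lat[min(len(lat) - 1, int(len(lat) * 0.99))]
            writer.writerow([second, len(lat), round(sum(lat) / len(lat), 1), round(p99, 1)])
```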

Infrastructure
Sequin, Debezium, and the rest of the infrastructure are deployed to AWS in the following configuration:
- AWS RDS Postgres db.m5.8xlarge instance (32 vCPUs, 128 GB RAM)
- AWS MSK Kafka provisioned with 4 brokers
- AWS EC2 instances with Sequin running in Docker
Summary
Tool / Deployment | Sustained throughput | Avg latency | 99p latency |
---|---|---|---|
Sequin | >50k ops/s | 55 ms | 253 ms |
Debezium · MSK | 6k ops/s | 258 ms | 499 ms |
Debezium · Server | 23k ops/s | 210 ms | 440 ms |
Fivetran | - | 5+ minutes | - |
Airbyte | - | 1+ hours | - |