A sink in Sequin is a destination for your Postgres data.

Source tables

Sequin sinks can be configured to stream changes from one or many tables in your database. You can choose to sink:

  • All: Capture changes from every schema and table exposed by the publication. When new schemas or tables are added, they are automatically included.
  • Include only selected schemas or tables: Choose the schemas and tables you care about; changes from only those schemas and tables will be processed in your sink.
  • Exclude specific schemas or tables: Start with everything in the publication, then list the schemas or tables you want to omit. Future schemas and tables will be included in your sink unless they appear on your exclusion list.

Sequin will only stream changes for tables that are exposed by the publication. If a table doesn’t appear in the sink configuration, it’s not exposed by the publication.
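
To make the three modes concrete, here’s a minimal sketch of the selection semantics in Python (the function and mode names are hypothetical illustrations, not Sequin’s API):

```python
# Hypothetical sketch of the three selection modes; this is not
# Sequin's API, it only illustrates the filtering semantics.

def table_is_sinked(schema: str, table: str, mode: str,
                    selected: set[str], excluded: set[str]) -> bool:
    """Decide whether changes for schema.table flow to the sink."""
    qualified = f"{schema}.{table}"
    if mode == "all":
        return True  # everything in the publication, including future tables
    if mode == "include":
        return schema in selected or qualified in selected
    if mode == "exclude":
        # Future tables are included unless they match the exclusion list.
        return schema not in excluded and qualified not in excluded
    raise ValueError(f"unknown mode: {mode}")

# A table added later is picked up by "all" and "exclude" automatically:
assert table_is_sinked("public", "orders", "exclude", set(), {"audit"})
```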

Delivery guarantees

Sequin provides exactly-once delivery under most circumstances. Because exactly-once delivery is not always possible, Sequin implements at-least-once delivery with idempotency keys, which let consumers detect and discard redeliveries.
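
Because a message can occasionally be delivered more than once, consumers should deduplicate on the idempotency key. A minimal consumer-side sketch, assuming each message carries an idempotency key (shown here as a top-level field for brevity) and using an in-memory store; neither the message shape nor the store is prescribed by Sequin:

```python
# Illustrative consumer-side deduplication on an idempotency key.
# The message shape and the processed-keys store are assumptions;
# in practice you'd use a durable store (e.g. Redis or a table).

processed_keys: set[str] = set()

def handle(message: dict) -> None:
    key = message["idempotency_key"]  # assumed location of the key
    if key in processed_keys:
        return  # redelivery of an already-processed message; drop it
    apply_change(message)        # your business logic
    processed_keys.add(key)      # record only after a successful apply

def apply_change(message: dict) -> None:
    print("applying", message["action"])
```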

If a message is not delivered successfully, Sequin will automatically retry with exponential backoff, increasing the retry interval up to a maximum of 3 minutes between attempts.

By default, Sequin retries messages indefinitely. You can configure a maximum number of retries for a sink by setting the max_retry_count parameter. If a message fails delivery after the specified number of retries, it will be discarded.
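
As a rough illustration of this retry schedule (the 1-second base delay and doubling factor are assumptions; only the 3-minute cap comes from the behavior described above):

```python
# Sketch of exponential backoff capped at 3 minutes between attempts.
# The 1-second base delay and doubling factor are assumptions.

MAX_INTERVAL_S = 3 * 60  # the documented cap: 3 minutes between attempts

def retry_interval(attempt: int, base_s: float = 1.0) -> float:
    """Delay before retry number `attempt` (starting at 1)."""
    return min(base_s * 2 ** (attempt - 1), MAX_INTERVAL_S)

# 1s, 2s, 4s, ... capped at 180s from the 9th retry onward.
print([retry_interval(n) for n in range(1, 12)])
```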

Message grouping and ordering

Sequin can group messages by one or more fields. Changes for rows in the same group are guaranteed to arrive at your sink in the order they occurred (i.e. the order they were committed in Postgres).

By default, Sequin groups messages by the primary key value(s) of the source row. This means that if a row is updated multiple times, each update event will arrive at your sink in order.

You can specify message grouping behavior when configuring a sink, grouping rows by a single field or by multiple fields.

For example, instead of grouping by primary key, you could group orders by account_id. All changes for a given account_id will then be sent to your sink in the order they were committed in Postgres.
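
A sketch of the resulting per-group ordering: messages that share a group key are kept in commit order, while distinct groups are independent (the message shape here is illustrative):

```python
# Illustrative per-group ordering: messages that share a group key are
# delivered in commit order; distinct groups are independent.
from collections import defaultdict

def group_key(message: dict, fields: list[str]) -> tuple:
    # Default behavior groups by primary key; here we group by account_id.
    return tuple(message["record"][f] for f in fields)

messages = [  # already in Postgres commit order
    {"record": {"id": 1, "account_id": "a"}, "action": "insert"},
    {"record": {"id": 2, "account_id": "b"}, "action": "insert"},
    {"record": {"id": 1, "account_id": "a"}, "action": "update"},
]

queues: dict[tuple, list] = defaultdict(list)
for msg in messages:
    queues[group_key(msg, ["account_id"])].append(msg)

# Each queue preserves commit order for its group:
for key, queue in queues.items():
    print(key, [m["action"] for m in queue])
```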

All of this applies to how Sequin delivers messages to your sink. Whether the destination supports ordered/FIFO processing depends on the sink; see the reference page for your sink for details.

The only sink that doesn’t support ordering out of the box is Redis streams. SQS supports ordering, but only when configured as a FIFO queue.

Backfill support

Sequin supports backfilling to sinks. When you configure a sink with multiple tables, you can choose to backfill all tables or specific tables. See the backfills reference for more details.

Monitoring and debugging

You can view the status of sinks in the Sequin web console.

All sinks have a “Messages” tab that shows the status of messages currently in flight to the sink, recently delivered messages, and messages that have failed delivery.

You can click into any failed message to view the delivery logs and error details.

Advanced configuration

Load shedding policy

If a sink’s in-memory buffer fills up, Sequin applies the sink’s load_shedding_policy. There are two available policies:

  • pause_on_full (default): pauses replication for all sinks on the slot until the buffer has room again. Postgres guarantees no messages are lost, but to do so it must retain the unconsumed WAL on disk, so the replication slot can grow without bound. Left unmanaged, this can cause your database to run out of disk space.
  • discard_on_full: Sequin continues reading messages from the replication slot, but messages intended for the overloaded sink(s) are dropped. Other sinks are not impacted. This limits the risk that your database will run out of disk space, but it leaves the affected sink inconsistent with the state of the database.

There is no way to recover dropped messages. You can run a backfill to replay rows that may have been missed, but those backfill messages will not contain the discrete change data that was lost.
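
A minimal sketch of how the two policies behave when a sink’s buffer is full (the function, buffer, and message shapes are illustrative, not Sequin internals):

```python
# Illustrative behavior of the two load shedding policies. The names
# and buffer representation are assumptions, not Sequin internals.

def offer_to_sink(message: dict, buffer: list, capacity: int, policy: str) -> str:
    """Try to enqueue a message for a sink with a bounded buffer."""
    if len(buffer) < capacity:
        buffer.append(message)
        return "buffered"
    if policy == "pause_on_full":
        # Replication pauses for every sink on the slot; Postgres retains
        # the WAL on disk, so nothing is lost, but the slot can grow.
        return "paused"
    if policy == "discard_on_full":
        # Only this sink's message is dropped; other sinks keep flowing,
        # but this sink may now be inconsistent with the database.
        return "dropped"
    raise ValueError(f"unknown policy: {policy}")

buf: list = []
print(offer_to_sink({"id": 1}, buf, capacity=1, policy="discard_on_full"))  # buffered
print(offer_to_sink({"id": 2}, buf, capacity=1, policy="discard_on_full"))  # dropped
```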