How to stream Postgres to RabbitMQ
Create message queues from Postgres events with RabbitMQ. Stream database changes to RabbitMQ to trigger workflows, keep services in sync, and more.
This guide shows you how to set up Postgres change data capture (CDC) and stream changes to RabbitMQ using Sequin.
With Postgres data streaming to RabbitMQ, you can trigger workflows, keep services in sync, build audit logs, maintain caches, and more.
By the end of this how-to, you’ll have database changes flowing to a RabbitMQ exchange.
Prerequisites
If you’re self-hosting Sequin, you’ll need:
- Sequin installed
- A database connected
- A RabbitMQ server ready to go
If you’re using Sequin Cloud, you’ll need:
- A Sequin Cloud account
- A database connected
- A RabbitMQ server ready to go
Basic setup
Prepare RabbitMQ
You’ll need a RabbitMQ server and exchange ready for Sequin to stream changes to. You can use either a local server for development or a cloud-hosted RabbitMQ service in production.
Local development with Docker
For local development, you can quickly spin up RabbitMQ using Docker:
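A typical invocation (the container name is illustrative) looks like:

```shell
# Run RabbitMQ with the management plugin in the background.
# 5672 is the AMQP port; 15672 serves the management UI.
docker run -d --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management
```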
This starts RabbitMQ with the management plugin enabled at http://localhost:15672 (login with guest/guest).
Configure RabbitMQ
Before creating your sink, set up RabbitMQ to receive messages:
- Create an exchange:
  - Name: Choose a name for your exchange (e.g., “sequin”)
  - Type: topic
- Create a queue:
  - Name: Choose a name for your queue (e.g., “my-table-changes”)
- Create a binding:
  - From exchange: The exchange you created
  - To queue: The queue you created
  - Routing key: `sequin.changes.<database>.<schema>.<table>.*`
This routing key pattern matches all changes to your table. The full format is `sequin.changes.<database>.<schema>.<table>.<action>`. For example, an insert into the `orders` table in the `public` schema of a database named `mydb` would arrive with the routing key `sequin.changes.mydb.public.orders.insert`.
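If you prefer the command line to the management UI, the three steps above can be scripted with `rabbitmqadmin` (ships with the management plugin), using the example names from this guide:

```shell
# Replace <database>, <schema>, and <table> with your own identifiers.
rabbitmqadmin declare exchange name=sequin type=topic
rabbitmqadmin declare queue name=my-table-changes
rabbitmqadmin declare binding source=sequin destination=my-table-changes \
  routing_key="sequin.changes.<database>.<schema>.<table>.*"
```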
Create RabbitMQ sink
Navigate to the “Sinks” tab, click “Create Sink”, and select “RabbitMQ Sink”.
Configure the source
Select source tables
Under “Source”, select the schemas and tables you want to stream data from.
Add filters (optional)
Add filters to the sink to control which database changes are sent to your RabbitMQ queue.
Add transform (optional)
Add a transform to the sink to modify the payload before it’s sent to RabbitMQ.
Specify backfill
You can optionally indicate if you want to backfill all or a portion of the table’s existing data into RabbitMQ. Backfills are useful if you want to use RabbitMQ to process historical data.
You can backfill at any time. If you don’t want to backfill, toggle “Backfill” off.
Configure RabbitMQ
Enter RabbitMQ connection details
Fill in your RabbitMQ connection details:
- Host (required): The hostname of your RabbitMQ server
- Port (required): The port number (default: 5672)
- Exchange (required): The name of the exchange where messages will be published (max 255 characters)
- Virtual Host: The RabbitMQ virtual host to use (default: “/”, max 255 characters)
- Username: The username for authentication
- Password: The password for authentication
- TLS: Enable TLS/SSL for secure connections
Create the sink
Give your sink a name, then click “Create RabbitMQ Sink”.
Verify & debug
To verify that your RabbitMQ sink is working:
- Make some changes in your source table
- Verify that the count of messages for your sink increases in the Sequin web console
- Using the RabbitMQ management UI or CLI, check your queue:
You should see the messages from Sequin appear in the queue.
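For example, assuming `rabbitmqadmin` is available and your queue is named “my-table-changes”, you can check the queue like this:

```shell
# Show queue depths across the virtual host.
rabbitmqadmin list queues name messages

# Pop one message off the queue for inspection, requeueing it afterward.
rabbitmqadmin get queue=my-table-changes ackmode=ack_requeue_true
```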
If messages don’t seem to be flowing:
- Click the “Messages” tab to view the state of messages for your sink
- Click any failed message
- Check the delivery logs for error details, including any RabbitMQ connection errors
Next steps
- Set up a consumer

  Now that your Postgres data is flowing into RabbitMQ, you can set up a consumer to read from the queue and process the data.

  Refer to the RabbitMQ sink reference for the shape of messages that Sequin will publish.
- Deploy your implementation

  When you’re ready to deploy your implementation, see “How to deploy to production”.
- Advanced configuration

  For more about how RabbitMQ sinks work, see the RabbitMQ sink reference.
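To sketch the first of those next steps: a minimal consumer using the pika Python client (`pip install pika`) might look like the following. The host, queue name, and default credentials are assumptions taken from the local setup in this guide; adjust them to match your deployment, and consult the RabbitMQ sink reference for the exact message shape.

```python
# Minimal RabbitMQ consumer sketch (assumes a local broker and the
# example queue name from this guide). Requires a running RabbitMQ server.
import json

import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672)
)
channel = connection.channel()


def handle_change(ch, method, properties, body):
    # Each message body is a JSON payload describing one database change.
    change = json.loads(body)
    print(change)
    # Ack only after successful processing so failed messages are redelivered.
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="my-table-changes", on_message_callback=handle_change)
channel.start_consuming()
```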