How to maintain caches with Sequin
Keep your caches warm and up-to-date using Sequin
This guide shows you how to keep your caches synchronized with your Postgres database using Sequin for change data capture.
By the end, you’ll have a pipeline that streams database changes to your caching layer while handling common challenges like cache invalidation, race conditions, and recovery from failures.
When to use this guide
This approach works well when you need to:
- Keep Redis, Memcached, or other caches in sync with your database
- Maintain search indexes that reflect your current data
- Update materialized views or derived data stores
- Ensure cache consistency across multiple services
Prerequisites
If you’re self-hosting Sequin, you’ll need:
- Sequin installed
- A database connected
- A caching system (Redis, Memcached, etc.) ready to receive updates
- A sink destination (like SQS, Kafka, Redis, or HTTP)
If you’re using Sequin Cloud, you’ll need:
- A Sequin Cloud account
- A database connected
- A caching system (Redis, Memcached, etc.) ready to receive updates
- A sink destination (like SQS, Kafka, Redis, or HTTP)
If using SQS, be sure to use a FIFO queue so that changes to the same row are delivered in order.
Architecture overview
Your cache maintenance pipeline will have these components:
- Source table(s): The table(s) in Postgres that contain the data you want to cache
- Destination sink: The message queue or webhook endpoint that delivers changes to your processing system (e.g. SQS, Kafka, or HTTP endpoint)
- Cache update processor: An application or service you write that receives changes and updates your cache
Create a sink
First, create a sink to the queue, stream, or webhook endpoint that you want to use to process changes:
Select the source
Select the table containing the data you want to cache.
Optionally add SQL filters to sync a subset of your source table to the destination.
Select the message type
Leave the default “Changes” message type selected.
Leave message grouping default
If your sink supports message grouping, leave the default option selected for “Message grouping”.
This will ensure that messages are grouped by primary key, helping eliminate race conditions as you write rows to your cache.
Specify backfill
You can optionally backfill your cache with data currently in your source table.
Backfill messages are change messages where the action is `read`.
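For example, a backfill message for a single row might look like the following (field values are illustrative, and the exact metadata fields may vary; see Sequin's messages reference for the full shape):

```json
{
  "record": {
    "id": 1,
    "name": "Paul Atreides",
    "email": "paul@arrakis.dev"
  },
  "changes": null,
  "action": "read",
  "metadata": {
    "table_schema": "public",
    "table_name": "users",
    "commit_timestamp": "2024-10-28T21:37:53Z"
  }
}
```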
Configure sink-specific settings
Configure sink-specific settings and click “Create Sink”.
Implement cache updates
Here’s an example of how you might process changes to update a cache:
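Below is a minimal sketch in Python. It assumes a Redis-backed cache and a `user:{id}` key layout (both are assumptions for this example); the `action` and `record` fields follow the shape of Sequin change messages:

```python
import json


def process_change(cache, message: dict) -> None:
    """Apply one Sequin change message to the cache.

    `cache` is any client exposing set/delete (e.g. a redis.Redis
    instance). The `user:{id}` key layout is illustrative.
    """
    record = message["record"]
    key = f"user:{record['id']}"
    action = message["action"]

    if action in ("insert", "update", "read"):
        # Inserts, updates, and backfill reads all carry the full row,
        # so each can be treated as an upsert of the latest state.
        cache.set(key, json.dumps(record))
    elif action == "delete":
        # Deletes invalidate the cached entry.
        cache.delete(key)
```

With a real Redis client, you'd wire this up as `process_change(redis.Redis(decode_responses=True), message)` inside your consumer loop, acknowledging each message only after the cache write succeeds.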
Sink-specific considerations
Redis Streams
If you’re using Redis Streams, you’ll need to be mindful of race conditions as you process messages.
While Sequin will write messages pertaining to the same row to your stream in order, there’s a risk that two separate workers will process messages simultaneously for the same row.
You can use the Redis stream entry ID for a message to determine the order of messages for a given row. See the Redis sink reference for more details.
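One way to guard against this, sketched below with an in-memory dict standing in for shared state (the helper names are hypothetical; in production you'd keep the last-applied IDs in Redis and make the check-and-set atomic, e.g. with a Lua script):

```python
def parse_entry_id(entry_id: str) -> tuple[int, int]:
    # Redis stream entry IDs look like "1526919030474-0":
    # a millisecond timestamp plus a sequence number.
    ms, seq = entry_id.split("-")
    return (int(ms), int(seq))


def apply_if_newer(last_ids: dict, row_key: str, entry_id: str, apply_fn) -> bool:
    """Apply a change only if its stream entry ID is newer than the
    last one applied for the same row; otherwise skip it as stale."""
    last = last_ids.get(row_key)
    if last is not None and parse_entry_id(entry_id) <= parse_entry_id(last):
        return False  # a newer change for this row was already applied
    apply_fn()
    last_ids[row_key] = entry_id
    return True
```

Because entry IDs are monotonically increasing within a stream, comparing parsed `(timestamp, sequence)` pairs gives you a total order over messages for a given row.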
Verify your pipeline is working
If you specified a backfill, there should be messages in your stream ready for your system to process:
- On the sink overview page, click the “Messages” tab. You should see messages flowing to your sink.
- Check the logs of your cache update processor to ensure it’s processing messages as expected.
Maintenance
Re-warming your cache
You may need to re-warm your cache in these scenarios:
- After infrastructure changes: Cache server upgrades/migrations
- Data model updates: Schema or business logic changes
- Cache eviction: Unexpected cache flushes
You can use a backfill to play `read` operations from your table into your stream.
Next steps
See “Deploy to production” for guidance on copying your local sink configuration to your production environment.