This guide shows you how to keep your caches synchronized with your Postgres database using Sequin for change data capture.

By the end, you’ll have a pipeline that streams database changes to your caching layer while handling common challenges like cache invalidation, race conditions, and recovery from failures.

When to use this guide

This approach works well when you need to:

  • Keep Redis, Memcached, or other caches in sync with your database
  • Maintain search indexes that reflect your current data
  • Update materialized views or derived data stores
  • Ensure cache consistency across multiple services

Prerequisites

If you’re self-hosting Sequin, you’ll need:

  1. Sequin installed
  2. A database connected
  3. A caching system (Redis, Memcached, etc.) ready to receive updates
  4. A sink destination (like SQS, Kafka, Redis, or HTTP)

If you’re using Sequin Cloud, you’ll need:

  1. A Sequin Cloud account
  2. A database connected
  3. A caching system (Redis, Memcached, etc.) ready to receive updates
  4. A sink destination (like SQS, Kafka, Redis, or HTTP)

If using SQS, be sure to use a FIFO queue so that changes to the same row are delivered in order.

Architecture overview

Your cache maintenance pipeline will have these components:

  1. Source table(s): The table(s) in Postgres that contain the data you want to cache
  2. Destination sink: The message queue or webhook endpoint that delivers changes to your processing system (e.g. SQS, Kafka, or HTTP endpoint)
  3. Cache update processor: An application or service you write that receives changes and updates your cache

Create a sink

First, create a sink to the queue, stream, or webhook endpoint that you want to use to process changes:

Step 1: Select the source

Select the table containing the data you want to cache.

Optionally, add SQL filters to sync only a subset of your source table to the destination (e.g. status = 'active').

Step 2: Select the message type

Leave the default “Changes” message type selected.
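
For reference, each change message includes the row, the action taken, and metadata about its source. The shape is roughly as follows (field values here are illustrative; see Sequin’s messages reference for the exact schema):

{
    "record": {"id": 1, "name": "Paper Clips", "price": 2.11},
    "changes": None,     # for updates: the prior values of changed columns
    "action": "insert",  # one of: insert, update, delete, read
    "metadata": {
        "table_schema": "public",
        "table_name": "products",
        "commit_timestamp": "2024-10-28T21:37:53Z"
    }
}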

Step 3: Leave the default message grouping

If your sink supports message grouping, leave the default option selected for “Message grouping”.

This groups messages by the row’s primary key, so changes to the same row arrive in order, which helps eliminate race conditions as you write rows to your cache.

Step 4: Specify backfill

You can optionally backfill your cache with data currently in your source table.

Backfill messages are change messages whose action is “read”.

Step 5: Configure sink-specific settings

Configure sink-specific settings and click “Create Sink”.

Implement cache updates

Here’s an example of how you might process changes to update a cache:

processor.py
import json

import redis  # assumes a Redis cache; swap in your client of choice

cache = redis.Redis(host="localhost", port=6379, db=0)

def process_change(change):
    # `change` is the decoded JSON message, with `action`, `record`,
    # and `metadata` fields
    record = change["record"]
    metadata = change["metadata"]

    # Build a stable cache key from the table name and primary key
    cache_key = f"{metadata['table_name']}:{record['id']}"

    if change["action"] in ("insert", "update", "read"):
        # Optional: add a version/timestamp so readers can detect staleness
        record["last_updated"] = metadata["commit_timestamp"]

        # Upsert the record into the cache
        cache.set(
            cache_key,
            json.dumps(record),
            ex=86400,  # 24-hour TTL; consider your expiry needs
        )
    elif change["action"] == "delete":
        # The row was deleted upstream, so evict it from the cache
        cache.delete(cache_key)
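
How you receive these messages depends on your sink. As one illustration, here’s a minimal polling loop for an SQS sink; the queue URL is a placeholder, and each message body is assumed to be a single JSON-encoded change:

consumer.py
import json

import boto3

from processor import process_change  # the handler defined above

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-sink.fifo"  # placeholder

sqs = boto3.client("sqs")

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for message in resp.get("Messages", []):
        change = json.loads(message["Body"])
        process_change(change)
        # Delete only after the cache update succeeds, so failed
        # messages are redelivered and retried
        sqs.delete_message(
            QueueUrl=QUEUE_URL,
            ReceiptHandle=message["ReceiptHandle"],
        )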

Sink-specific considerations

Redis Streams

If you’re using Redis Streams, you’ll need to be mindful of race conditions as you process messages.

While Sequin will write messages pertaining to the same row to your stream in order, there’s a risk that two separate workers will process messages simultaneously for the same row.

You can use the Redis stream entry ID for a message to determine the order of messages for a given row. See the Redis sink reference for more details.
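
One way to guard against this is to record the newest entry ID applied for each row and discard anything older. Below is a minimal sketch along those lines; the stream field name ("data") and the guard-key naming are assumptions, and process_change() is the handler from above:

stream_worker.py
import json

import redis

from processor import process_change  # the handler defined above

r = redis.Redis(host="localhost", port=6379, db=0)

# Atomic compare-and-set: record ARGV[1] (a normalized entry ID) only
# if it's newer than the last ID applied for this row
APPLY_IF_NEWER = r.register_script("""
local last = redis.call('GET', KEYS[1])
if last and ARGV[1] <= last then
  return 0
end
redis.call('SET', KEYS[1], ARGV[1])
return 1
""")

def normalize(entry_id: str) -> str:
    # Zero-pad "ms-seq" so string comparison matches numeric order
    ms, seq = entry_id.split("-")
    return f"{int(ms):020d}-{int(seq):020d}"

def handle_entry(entry_id: bytes, fields: dict):
    # Called for each entry your XREADGROUP consumer loop pulls in
    change = json.loads(fields[b"data"])
    row_key = f"{change['metadata']['table_name']}:{change['record']['id']}"
    # Skip stale messages: a newer change for this row was already applied
    if APPLY_IF_NEWER(keys=[f"applied:{row_key}"], args=[normalize(entry_id.decode())]):
        process_change(change)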

Verify your pipeline is working

If you specified a backfill, there should be messages in your sink ready for your system to process:

  1. On the sink overview page, click the “Messages” tab. You should see messages flowing to your sink.
  2. Check the logs of your cache update processor to ensure it’s processing messages as expected.
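
You can also spot-check the cache directly. For example, with Redis (the key follows the table:id shape used by the processor above; “products” and the ID are placeholders):

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)

# A backfilled or recently changed row should be readable by table:id
print(cache.get("products:1"))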

Maintenance

Re-warming your cache

You may need to re-warm your cache in these scenarios:

  1. After infrastructure changes: Cache server upgrades/migrations
  2. Data model updates: Schema or business logic changes
  3. Cache eviction: Unexpected cache flushes

You can use a backfill to replay the current rows in your table into your sink as “read” messages.

Next steps

See “Deploy to production” for guidance on copying your local sink configuration to your production environment.