This guide shows you how to keep your search indexes in sync with your Postgres database in real time using Sequin.

By the end of this guide, you’ll have a working sink that automatically creates, updates, and deletes search index entries as your data changes in your database. Your users will get accurate, fresh search results from a pipeline that is simple to set up and maintain.

When to use this approach

This approach works well when:

  • Row-to-document mapping is straightforward: each search document maps to one table row.
  • Freshness matters: you want inserts, updates, and deletes reflected in search results within milliseconds, not minutes or hours.
  • Transform logic is stateless: everything you need to index already lives on the row or can be computed in a simple transform function.

This approach is not a good fit for:

  • Complex document assembly: a single search document needs data from many tables or requires heavy joins/aggregations. (We’re building a join feature, but today you’d need an external pipeline.)

Prerequisites

  1. Sequin installed or a Sequin Cloud account
  2. A database connected to Sequin
  3. A supported search engine (Elasticsearch, Typesense, etc.) up and running

Create a sink to your search engine

To get started, navigate to the “Sinks” page in the Sequin UI and click “Create Sink”. Select your search engine as the sink type. Configure the sink as follows:

Step 1: Select the source

  1. Select your database from the connected databases list.
  2. Choose the tables or schemas containing the data you want to index.
Step 2: Apply filters

Keep all operations enabled (insert, update, delete) to ensure every change in your database is accurately reflected in your search engine.

If there are rows in the source table you don’t need to index (for instance, perhaps you only want to index active products), you can apply a filter to exclude them.
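
For example, a filter function can return true only for the rows you want to index. Here’s a minimal sketch, assuming filter functions take the same four arguments as the transform functions below and that the table has a boolean active column (both assumptions for illustration):

def filter(_action, record, _changes, _metadata) do
  # Only index rows whose (hypothetical) active column is true
  record.active == true
end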

Step 3: Transform the data

Configure a transform function to store only the data you need in your search index:

def transform(_action, record, _changes, _metadata) do
  record  # Remove unneeded metadata
end

You can be more selective and only store certain fields in your search index:

def transform(_action, record, _changes, _metadata) do
  %{
    id: record.id,
    name: record.name,
    description: record.description
  }
end
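
Because transforms are stateless, you can also compute derived fields from the row at index time. A sketch, assuming your rows have name and description columns and your index expects a combined search_text field (hypothetical):

def transform(_action, record, _changes, _metadata) do
  %{
    id: record.id,
    # Hypothetical derived field: combine columns into one searchable string
    search_text: "#{record.name} #{record.description}"
  }
end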

Step 4: Backfill your search index

To hydrate your search index with the existing data in your database, you can start the sink with an initial backfill.

If you only want a subset of your source tables in your search index, skip this step for now and run an incremental backfill after the sink is running.

Step 5: Group by primary key

By default, Sequin groups messages by primary key. This is a good default for most use cases, as it ensures every change to a given row is applied to your search engine in order.

If you need more specific message grouping, you can configure the sink to group messages by a different field.

Step 6: Configure routing

By default, every change in your database will be routed to one index. This is a good default for sinks that include a single table.

For sinks that include multiple tables, you can enable “Dynamic routing” and add a routing function:

def route(_action, _record, _changes, metadata) do
  %{index: metadata.table_name}
end

This function routes each message to the index named after its source table. You could also build more complex routing that sends messages to different indexes based on the data in the message.
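
For instance, here’s a sketch of content-based routing, assuming a hypothetical boolean archived column on a products table:

def route(_action, record, _changes, metadata) do
  # Hypothetical: send archived products to a separate cold index
  if metadata.table_name == "products" and record.archived do
    %{index: "products_archive"}
  else
    %{index: metadata.table_name}
  end
end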

Step 7: Enter the connection details for your search engine

Enter the configuration parameters for your search engine, such as the host, API key, and the index or collection to write to.

Step 8: Create the sink

Give your sink a name and click “Create Sink”.

Verify your search index is being updated

If you specified a backfill, there should be messages in your search index:

  1. On the sink overview page, click the “Messages” tab. You should see messages flowing to your sink.

  2. Query your search engine to verify that the data is being updated (Typesense shown here):

    curl -X GET "http://localhost:8108/collections/{{YOUR_COLLECTION_NAME}}/documents/search?q=*" \
    -H "X-TYPESENSE-API-KEY: my-api-key"
    

Re-backfilling your search index

You may need to re-backfill your search index in these scenarios:

  1. After infrastructure changes: when you stand up a new search cluster or index, backfill to hydrate it with the existing data in your database
  2. Data model updates: when your schema or document mapping changes, backfill so existing documents reflect the new shape
  3. Accidental deletion: if bugs or misconfigurations cause your search index to be deleted or dropped, re-backfill to restore it

To do so, run a backfill. You can trigger a backfill manually in the Sequin Console or using the Sequin management API.

Next steps

You’re search engine is now in sync with your database. Changes to your Postgres data will appear in your indexes within milliseconds, without any manual intervention or risk of stale data.

Here are some helpful resources to bring this approach to production: