Creates a new sink consumer.
Request fields
The name of the sink consumer
The initial status of the sink consumer (active, disabled, paused)
The source configuration for the sink consumer
List of schemas to include, or null to include all
List of schemas to exclude, or null to exclude none
List of tables to include, or null to include all. A table name can be specified with schema (e.g. my-schema.my-table) or without (e.g. my-table), which defaults to schema public.
List of tables to exclude, or null to exclude none. A table name can be specified with schema (e.g. my-schema.my-table) or without (e.g. my-table), which defaults to schema public.
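To illustrate, here is a hedged sketch of a source object combining these filters; the field names match the request example further below, while the schema and table names (audit, orders, public.users) are placeholders:

{
  "source": {
    "include_schemas": null,
    "exclude_schemas": ["audit"],
    "include_tables": ["public.users", "orders"],
    "exclude_tables": null
  }
}

Here orders has no schema prefix, so it resolves to public.orders.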
Additional configuration for individual tables. To configure which tables are consumed, see the source field.
The name of the table. A table name can be specified with schema (e.g. my-schema.my-table) or without (e.g. my-table), which defaults to schema public.
The column names to group by, or null to disable grouping
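As a hedged sketch of per-table configuration, the fragment below follows the shape of the tables array shown in the response example (a name plus group_column_names); public.products and category are placeholder names:

{
  "tables": [
    {
      "name": "public.products",
      "group_column_names": ["category"]
    }
  ]
}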
The database actions to include in the sink (insert, update, delete)
The destination configuration for the sink consumer. The shape varies by destination type.
Comma-separated list of Kafka hosts
Optional username for authentication
Optional password for authentication
Optional SASL mechanism (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, AWS_MSK_IAM)
AWS region (required for AWS_MSK_IAM)
AWS access key ID (required for AWS_MSK_IAM)
AWS secret access key (required for AWS_MSK_IAM)
Whether the queue is a FIFO queue
Username for authentication
Password for authentication
Virtual host name (default: "/")
Redis database number (default: 0)
Optional username for authentication
Optional password for authentication
Redis database number (default: 0)
Optional username for authentication
Optional password for authentication
Optional TTL in milliseconds for the keys
Mode of operation (static or dynamic, default: static)
Must be "azure_event_hub"
Optional username for authentication
Optional password for authentication
Optional JWT for authentication
Optional NKey seed for authentication
Base64-encoded credentials
Whether to use the emulator
Emulator base URL if using emulator
Elasticsearch endpoint URL
Authentication type (api_key, basic, bearer)
Authentication value, either an API key or a base64-encoded string of a username and password
Number of documents to batch together (default: 100)
Primary key field name (default: "id")
Number of documents to batch together (default: 100)
Request timeout in seconds (default: 5)
The name of the S2 stream
The access token for the S2 account associated with the basin and stream
Whether the topic is a FIFO topic
Typesense collection name
Number of documents to batch together (default: 100)
Request timeout in seconds (default: 5)
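For example, here is a hedged destination fragment for a Kafka sink with SASL authentication, using the field names shown in the Kafka response example below; the hosts, topic, and credentials are placeholders:

{
  "destination": {
    "type": "kafka",
    "hosts": "broker-1:9092,broker-2:9092",
    "tls": true,
    "topic": "records",
    "username": "kafka_user",
    "password": "kafka_password",
    "sasl_mechanism": "SCRAM-SHA-256"
  }
}

Other destination types take different fields, as described above.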
The source database for the sink consumer
Whether message grouping is enabled for ordering purposes. When true (default), messages are grouped by primary key. When false, grouping is disabled for higher-throughput batching.
Number of records to batch together (1-1000)
The maximum number of times a message will be retried if delivery fails. Once this limit is reached, the message will be discarded. Defaults to null, meaning messages are retried indefinitely.
Determines how Sequin handles overload when sink consumers can't keep up with incoming messages. Options are:
pause_on_full (default): pauses replication until the buffer has room again
discard_on_full: drops messages for overloaded consumers to avoid pausing replication
For more details, see load shedding policy.
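As a hedged illustration of how these delivery settings combine, the fragment below trades strict ordering for throughput; the field names match the full request example further down, and the values are placeholders:

{
  "message_grouping": false,
  "batch_size": 500,
  "max_retry_count": 10,
  "load_shedding_policy": "discard_on_full"
}

With grouping disabled and discard_on_full set, overloaded consumers drop messages rather than pausing replication.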
The format of the timestamp in the source data. Possible values include iso8601 and unix_microsecond.
Additional metadata you can attach to this sink consumer. Annotations can be any JSON object.
Response fields
The unique identifier of the sink consumer
The name of the sink consumer
The current status of the sink consumer (active, disabled, paused)
The source database for the sink consumer
The source configuration for the sink consumer. This determines which tables are consumed.
List of schemas to include, or null to include all
List of schemas to exclude, or null to exclude none
List of tables to include, or null to include all. A table name can be specified with schema (e.g. my-schema.my-table) or without (e.g. my-table), which defaults to schema public.
List of tables to exclude, or null to exclude none. A table name can be specified with schema (e.g. my-schema.my-table) or without (e.g. my-table), which defaults to schema public.
Additional configuration for individual tables. To configure which tables are consumed, see the source field.
The name of the table. A table name can be specified with schema (e.g. my-schema.my-table) or without (e.g. my-table), which defaults to schema public.
The column names to group by, or null to disable grouping
The database actions to include in the sink (insert, update, delete)
The destination configuration for the sink consumer. The shape varies by destination type.
Azure Event Hub properties:
Must be "azure_event_hub"
Elasticsearch properties:
Elasticsearch endpoint URL
Authentication type (api_key, basic, bearer)
Number of documents to batch together (default: 100)
Show GCP PubSub properties
Base64-encoded credentials
Whether to use the emulator
Emulator base URL if using emulator
Comma-separated list of Kafka hosts
Optional username for authentication
Optional password for authentication
Optional SASL mechanism (plain, scram_sha_256, scram_sha_512, aws_msk_iam)
AWS region (required for aws_msk_iam)
AWS access key ID (required for aws_msk_iam)
AWS secret access key (required for aws_msk_iam)
Meilisearch properties:
Primary key field name (default: "id")
Number of documents to batch together (default: 100)
Request timeout in seconds (default: 5)
Optional username for authentication
Optional password for authentication
Optional JWT for authentication
Optional NKey seed for authentication
Username for authentication
Password for authentication
Virtual host name (default: "/")
Redis Stream properties:
Redis database number (default: 0)
Optional username for authentication
Optional password for authentication
Redis String properties:
Redis database number (default: 0)
Optional username for authentication
Optional password for authentication
Optional TTL in milliseconds for the keys
Mode of operation (static or dynamic, default: static)
The name of the S2 stream
The access token for the S2 account associated with the basin and stream
Sequin Stream properties:
Whether the topic is a FIFO topic
Whether the queue is a FIFO queue
Typesense properties:
Typesense collection name
Number of documents to batch together (default: 100)
Request timeout in seconds (default: 5)
The maximum number of times a message will be retried if delivery fails
User-defined annotations for the sink consumer
Array of active backfill IDs
Number of records to batch together (1-1000)
Determines how Sequin handles overload when sink consumers can’t keep up with incoming messages
The format of the timestamp in the source data
Health status information for the sink consumer
Array of individual health checks
curl -X POST "https://api.sequinstream.com/api/sinks" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "kafka-ids",
"status": "active",
"source": {
"include_schemas": null,
"exclude_schemas": null,
"include_tables": null,
"exclude_tables": null
},
"actions": ["insert", "update", "delete"],
"destination": {
"type": "kafka",
"hosts": "localhost:9092",
"tls": false,
"topic": "records"
},
"database": "dune",
"filter": "",
"transform": "id-transform",
"routing": "",
"message_grouping": true,
"batch_size": 100,
"max_retry_count": null,
"load_shedding_policy": "pause_on_full",
"timestamp_format": "iso8601",
"annotations": {}
}'
Example responses are provided for each destination type (Kafka, SQS, RabbitMQ, Redis, Azure Event Hub, NATS, GCP PubSub, S2, Sequin Stream, Webhook, Elasticsearch, Kinesis, Meilisearch, Redis String, SNS, Typesense); the Kafka example is shown below.
{
  "id": "4ed2a8e5-47a7-4b51-9270-d2f4fdcb94fb",
  "name": "my-kafka-sink",
  "status": "active",
  "database": "my-database",
  "source": {
    "include_schemas": null,
    "exclude_schemas": null,
    "include_tables": null,
    "exclude_tables": null
  },
  "tables": [
    {
      "name": "public.products",
      "group_column_names": ["category"]
    }
  ],
  "actions": ["insert", "update", "delete"],
  "destination": {
    "type": "kafka",
    "hosts": "localhost:9092",
    "tls": false,
    "topic": "records",
    "username": "kafka_user",
    "password": "kafka_password",
    "sasl_mechanism": "PLAIN"
  },
  "filter": "none",
  "transform": "id-transform",
  "routing": "none",
  "message_grouping": true,
  "max_retry_count": null,
  "annotations": {},
  "active_backfills": [],
  "batch_size": 50,
  "load_shedding_policy": "pause_on_full",
  "timestamp_format": "iso8601",
  "health": {
    "name": "Consumer health",
    "status": "healthy",
    "checks": [
      {
        "name": "Sink configuration",
        "status": "healthy"
      },
      ...
    ]
  }
}