public class ExtractNewRecordState&lt;R extends org.apache.kafka.connect.connector.ConnectRecord&lt;R&gt;&gt; extends Object implements org.apache.kafka.connect.transforms.Transformation&lt;R&gt;

Type Parameters: R - the subtype of ConnectRecord on which this transformation will operate
Debezium generates change data capture (Envelope) records that are structs containing the values before and after the change. Sink connectors are usually not able to work with such a complex structure, so a user can apply this SMT to extract the after value and send it downstream unwrapped from the Envelope.
The functionality is similar to the ExtractField SMT, but with special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages, a delete message and a tombstone message that serves as a signal to the Kafka log compaction process. By default the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can, if required, be dropped as well.
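The delete-handling behavior described above can be sketched with plain Java maps standing in for Kafka Connect structs (the real transformation operates on org.apache.kafka.connect.data.Struct envelopes; the class and method names below are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DeleteHandlingSketch {

    // Illustrative stand-in for the SMT's handling of a Debezium envelope:
    // a delete event has a null "after" field, and the sketch turns it into
    // a tombstone (null value), mirroring the default delete handling.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> unwrap(Map<String, Object> envelope) {
        if (envelope == null) {
            // Debezium's own tombstone message: dropped by default.
            return null;
        }
        Object after = envelope.get("after");
        if (after == null) {
            // Delete event: convert to a tombstone so log compaction can
            // eventually remove the key.
            return null;
        }
        // Create/update event: forward only the new row state, unwrapped.
        return new LinkedHashMap<>((Map<String, Object>) after);
    }
}
```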
The SMT also has the option to insert fields from the original record's source struct into the new unwrapped record, prefixed with "__" (for example __lsn in Postgres, or __file in MySQL).
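The source-field insertion can likewise be sketched with plain maps (assumption: the real addSourceFields operates on Connect Structs and rebuilds the record schema; only the "__" prefix logic is shown here in isolation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SourceFieldsSketch {

    // Illustrative version of copying selected fields from the envelope's
    // "source" struct into the unwrapped record, each prefixed with "__".
    public static Map<String, Object> addSourceFields(String[] fields,
                                                      Map<String, Object> source,
                                                      Map<String, Object> unwrapped) {
        Map<String, Object> result = new LinkedHashMap<>(unwrapped);
        for (String field : fields) {
            result.put("__" + field, source.get(field));
        }
        return result;
    }
}
```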
| Modifier and Type | Field and Description |
|---|---|
| private boolean | addOperationHeader |
| private String[] | addSourceFields |
| private org.apache.kafka.connect.transforms.ExtractField&lt;R&gt; | afterDelegate |
| private org.apache.kafka.connect.transforms.ExtractField&lt;R&gt; | beforeDelegate |
| private boolean | dropTombstones |
| private ExtractNewRecordStateConfigDefinition.DeleteHandling | handleDeletes |
| private static org.slf4j.Logger | LOGGER |
| private static String | PURPOSE |
| private org.apache.kafka.connect.transforms.InsertField&lt;R&gt; | removedDelegate |
| private static int | SCHEMA_CACHE_SIZE |
| private BoundedConcurrentHashMap&lt;org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema&gt; | schemaUpdateCache |
| private SmtManager&lt;R&gt; | smtManager |
| private org.apache.kafka.connect.transforms.InsertField&lt;R&gt; | updatedDelegate |
| Constructor and Description |
|---|
| ExtractNewRecordState() |
| Modifier and Type | Method and Description |
|---|---|
| private R | addSourceFields(String[] addSourceFields, R originalRecord, R unwrappedRecord) |
| R | apply(R record) |
| void | close() |
| org.apache.kafka.common.config.ConfigDef | config() |
| void | configure(Map&lt;String,?&gt; configs) |
| private org.apache.kafka.connect.data.Schema | makeUpdatedSchema(org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Schema sourceSchema, String[] addSourceFields) |
private static final String PURPOSE
private static final int SCHEMA_CACHE_SIZE
private static final org.slf4j.Logger LOGGER
private boolean dropTombstones
private ExtractNewRecordStateConfigDefinition.DeleteHandling handleDeletes
private boolean addOperationHeader
private String[] addSourceFields
private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> afterDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> beforeDelegate
private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> removedDelegate
private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> updatedDelegate
private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> schemaUpdateCache
private SmtManager<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> smtManager
public void configure(Map<String,?> configs)
configure in interface org.apache.kafka.common.Configurable

private R addSourceFields(String[] addSourceFields, R originalRecord, R unwrappedRecord)
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(org.apache.kafka.connect.data.Schema schema,
org.apache.kafka.connect.data.Schema sourceSchema,
String[] addSourceFields)
public org.apache.kafka.common.config.ConfigDef config()
Copyright © 2020 JBoss by Red Hat. All rights reserved.