Type Parameters: R - the subtype of ConnectRecord on which this transformation will operate

public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> extends Object implements org.apache.kafka.connect.transforms.Transformation<R>
Debezium generates change data capture (Envelope) records that are structs containing the values before and after the change. Sink connectors are usually unable to work with such a complex structure, so a user can apply this SMT to extract the after value and send it downstream unwrapped from the Envelope.
The functionality is similar to the ExtractField SMT but has special semantics for handling delete events: when a delete event is emitted by the database, Debezium emits two messages: a delete message and a tombstone message that serves as a signal to the Kafka log compaction process. By default, the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can also be dropped, if required. The SMT also has the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record, or to add them as header attributes.
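For context, a typical connector configuration using this SMT might look like the following sketch (option names such as drop.tombstones, delete.handling.mode, add.fields, and add.headers follow the Debezium documentation for this transformation; verify them against your Debezium version):

```
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# keep Debezium's own tombstones instead of dropping them (default is true)
transforms.unwrap.drop.tombstones=false
# rewrite deletes as records carrying a __deleted marker field
transforms.unwrap.delete.handling.mode=rewrite
# copy fields from the original Envelope into the unwrapped record
transforms.unwrap.add.fields=op,source.ts_ms
# add fields from the original Envelope as header attributes
transforms.unwrap.add.headers=db
```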
| Modifier and Type | Class and Description |
|---|---|
| private static class | ExtractNewRecordState.FieldReference — Represents a field that should be added to the outgoing record as a header attribute or struct field. |
| Modifier and Type | Field and Description |
|---|---|
| private List<ExtractNewRecordState.FieldReference> | additionalFields |
| private List<ExtractNewRecordState.FieldReference> | additionalHeaders |
| private org.apache.kafka.connect.transforms.ExtractField<R> | afterDelegate |
| private org.apache.kafka.connect.transforms.ExtractField<R> | beforeDelegate |
| private boolean | dropTombstones |
| private static Pattern | FIELD_SEPARATOR |
| private ExtractNewRecordStateConfigDefinition.DeleteHandling | handleDeletes |
| private static org.slf4j.Logger | LOGGER |
| private static String | PURPOSE |
| private org.apache.kafka.connect.transforms.InsertField<R> | removedDelegate |
| private String | routeByField |
| private static int | SCHEMA_CACHE_SIZE |
| private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> | schemaUpdateCache |
| private SmtManager<R> | smtManager |
| private org.apache.kafka.connect.transforms.InsertField<R> | updatedDelegate |
| Constructor and Description |
|---|
| ExtractNewRecordState() |
| Modifier and Type | Method and Description |
|---|---|
| private R | addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord) |
| R | apply(R record) |
| void | close() |
| org.apache.kafka.common.config.ConfigDef | config() |
| void | configure(Map<String,?> configs) |
| private org.apache.kafka.connect.header.Headers | makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue) — Creates a Headers object containing the headers to be added. |
| private org.apache.kafka.connect.data.Schema | makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue) |
| private R | setTopic(String updatedTopicValue, R record) |
| private org.apache.kafka.connect.data.SchemaBuilder | updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema originalRecordSchema) |
| private org.apache.kafka.connect.data.Struct | updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct) |
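The core of apply() is the unwrap decision: pass tombstones through (or drop them), turn deletes into tombstones or rewritten records, and extract the after value otherwise. A minimal plain-Java sketch of that decision logic, using Maps in place of Connect Structs (UnwrapSketch and its unwrap method are hypothetical illustrations, not Debezium API):

```java
import java.util.HashMap;
import java.util.Map;

public class UnwrapSketch {

    // Models a Debezium Envelope as a plain Map with "before" and "after" keys.
    // rewriteDeletes corresponds roughly to delete.handling.mode=rewrite.
    @SuppressWarnings("unchecked")
    static Map<String, Object> unwrap(Map<String, Object> envelope, boolean rewriteDeletes) {
        if (envelope == null) {
            // Tombstone created by Debezium: passed through here
            // (the real SMT drops it when drop.tombstones=true).
            return null;
        }
        Object after = envelope.get("after");
        if (after == null) {
            // Delete event: "after" is null.
            if (rewriteDeletes) {
                // Keep the old row values and mark the record as deleted.
                Map<String, Object> rewritten =
                        new HashMap<>((Map<String, Object>) envelope.get("before"));
                rewritten.put("__deleted", "true");
                return rewritten;
            }
            // Otherwise convert the delete into a tombstone.
            return null;
        }
        // Regular create/update event: extract the new row state.
        return (Map<String, Object>) after;
    }
}
```

The real transformation implements the same branches by delegating to Kafka Connect's ExtractField and InsertField SMTs (the afterDelegate, beforeDelegate, removedDelegate, and updatedDelegate fields above).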
private static final org.slf4j.Logger LOGGER
private static final String PURPOSE
private static final int SCHEMA_CACHE_SIZE
private static final Pattern FIELD_SEPARATOR
private boolean dropTombstones
private ExtractNewRecordStateConfigDefinition.DeleteHandling handleDeletes
private List<ExtractNewRecordState.FieldReference> additionalHeaders
private List<ExtractNewRecordState.FieldReference> additionalFields
private String routeByField
private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> afterDelegate
private final org.apache.kafka.connect.transforms.ExtractField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> beforeDelegate
private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> removedDelegate
private final org.apache.kafka.connect.transforms.InsertField<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> updatedDelegate
private BoundedConcurrentHashMap<org.apache.kafka.connect.data.Schema,org.apache.kafka.connect.data.Schema> schemaUpdateCache
private SmtManager<R extends org.apache.kafka.connect.connector.ConnectRecord<R>> smtManager
public void configure(Map<String,?> configs)
Specified by: configure in interface org.apache.kafka.common.Configurable

private org.apache.kafka.connect.header.Headers makeHeaders(List<ExtractNewRecordState.FieldReference> additionalHeaders, org.apache.kafka.connect.data.Struct originalRecordValue)
private R addFields(List<ExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<ExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema originalRecordSchema)
private org.apache.kafka.connect.data.Struct updateValue(ExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
public org.apache.kafka.common.config.ConfigDef config()
Copyright © 2020 JBoss by Red Hat. All rights reserved.