Package io.debezium.transforms
Class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
java.lang.Object
io.debezium.transforms.AbstractExtractNewRecordState<R>
io.debezium.transforms.ExtractNewRecordState<R>
- Type Parameters:
R - the subtype of ConnectRecord on which this transformation will operate
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.kafka.common.Configurable, org.apache.kafka.connect.transforms.Transformation<R>
public class ExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
extends AbstractExtractNewRecordState<R>
Debezium generates CDC (Envelope) records that are structs containing the values
before and after a change. Sink connectors are usually unable to work
with such a complex structure, so this SMT can be used to extract the after value and propagate it
downstream, unwrapped from the Envelope.
The functionality is similar to the ExtractField SMT, but it has special semantics for handling
delete events: when a delete event occurs in the database, Debezium emits two messages, a delete
message and a tombstone message that serves as a signal to the Kafka log compaction process.
By default, the SMT drops the tombstone message created by Debezium and converts the delete message into a tombstone message that can, if required, be dropped as well.
The SMT also has the option to insert fields from the original record (e.g. 'op' or 'source.ts_ms') into the unwrapped record, or to add them as header attributes.
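A typical connector configuration applying this SMT might look like the following sketch. The transform alias `unwrap` is arbitrary, and option names such as `drop.tombstones`, `add.fields`, and `add.headers` should be verified against the Debezium documentation for the version in use:

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# keep Debezium's tombstone messages instead of dropping them (the default is to drop them)
transforms.unwrap.drop.tombstones=false
# insert fields from the original change event into the flattened record ...
transforms.unwrap.add.fields=op,source.ts_ms
# ... or expose source metadata as header attributes
transforms.unwrap.add.headers=db,table
```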
- Author:
- Jiri Pechanec
-
Nested Class Summary
Nested classes/interfaces inherited from class io.debezium.transforms.AbstractExtractNewRecordState:
AbstractExtractNewRecordState.FieldReference, AbstractExtractNewRecordState.NewRecordValueMetadata
-
Field Summary
FieldsModifier and TypeFieldDescriptionprivate final Field.Setprivate static final Fieldprivate static final Fieldprivate static final Fieldprivate booleanprivate Stringprivate booleanprivate static final Stringprivate static final org.slf4j.Loggerprivate static final intprivate BoundedConcurrentHashMap<AbstractExtractNewRecordState.NewRecordValueMetadata,org.apache.kafka.connect.data.Schema> Fields inherited from class io.debezium.transforms.AbstractExtractNewRecordState
additionalFields, additionalHeaders, config, extractRecordStrategy, PURPOSE, routeByField, smtManager -
Constructor Summary
Constructors:
ExtractNewRecordState()
-
Method Summary
Methods:
private R addFields(List<AbstractExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
private AbstractExtractNewRecordState.NewRecordValueMetadata buildCacheKey(org.apache.kafka.connect.data.Struct value, R originalRecord)
public org.apache.kafka.common.config.ConfigDef config()
public void configure(Map<String,?> configs)
private R dropFields(R record)
private R dropKeyFields(R record, List<String> fieldNames)
private R dropValueFields(R record, List<String> fieldNames)
getFieldsToDropFromSchema(org.apache.kafka.connect.data.Schema schema, List<String> fieldNames)
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<AbstractExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema fieldSchema)
private org.apache.kafka.connect.data.Struct updateValue(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
Methods inherited from class io.debezium.transforms.AbstractExtractNewRecordState:
apply, close, getHeaderByName, makeHeaders, setTopic
-
Field Details
-
LOGGER
private static final org.slf4j.Logger LOGGER
-
EXCLUDE
private static final String EXCLUDE
-
SCHEMA_CACHE_SIZE
private static final int SCHEMA_CACHE_SIZE
-
DROP_FIELDS_HEADER
private static final Field DROP_FIELDS_HEADER
-
DROP_FIELDS_FROM_KEY
private static final Field DROP_FIELDS_FROM_KEY
-
DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
private static final Field DROP_FIELDS_KEEP_SCHEMA_COMPATIBLE
-
dropFieldsHeaderName
private String dropFieldsHeaderName
-
dropFieldsFromKey
private boolean dropFieldsFromKey
-
dropFieldsKeepSchemaCompatible
private boolean dropFieldsKeepSchemaCompatible
-
schemaUpdateCache
private BoundedConcurrentHashMap<AbstractExtractNewRecordState.NewRecordValueMetadata,org.apache.kafka.connect.data.Schema> schemaUpdateCache
-
configFields
private final Field.Set configFields
-
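The dropFields* members above back the SMT's field-dropping behavior, which reads the list of fields to remove from a header on each record. A configuration sketch follows; the option names `drop.fields.header.name`, `drop.fields.from.key`, and `drop.fields.keep.schema.compatible` are taken from the Debezium documentation and should be verified against the version in use:

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# header on each record that lists the fields to drop
transforms.unwrap.drop.fields.header.name=FieldsToDrop
# also drop the listed fields from the record key
transforms.unwrap.drop.fields.from.key=true
# only drop fields when the resulting schema remains compatible
transforms.unwrap.drop.fields.keep.schema.compatible=true
```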
Constructor Details
-
ExtractNewRecordState
public ExtractNewRecordState()
-
-
Method Details
-
configure
- Specified by:
configure in interface org.apache.kafka.common.Configurable
- Overrides:
configure in class AbstractExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
-
doApply
- Specified by:
doApply in class AbstractExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
-
validateConfigFields
- Specified by:
validateConfigFields in class AbstractExtractNewRecordState<R extends org.apache.kafka.connect.connector.ConnectRecord<R>>
-
config
public org.apache.kafka.common.config.ConfigDef config()
-
addFields
private R addFields(List<AbstractExtractNewRecordState.FieldReference> additionalFields, R originalRecord, R unwrappedRecord)
-
buildCacheKey
private AbstractExtractNewRecordState.NewRecordValueMetadata buildCacheKey(org.apache.kafka.connect.data.Struct value, R originalRecord)
-
dropFields
private R dropFields(R record)
-
dropKeyFields
private R dropKeyFields(R record, List<String> fieldNames)
-
dropValueFields
private R dropValueFields(R record, List<String> fieldNames)
-
getFieldsToDropFromSchema
getFieldsToDropFromSchema(org.apache.kafka.connect.data.Schema schema, List<String> fieldNames)
-
makeUpdatedSchema
private org.apache.kafka.connect.data.Schema makeUpdatedSchema(List<AbstractExtractNewRecordState.FieldReference> additionalFields, org.apache.kafka.connect.data.Schema schema, org.apache.kafka.connect.data.Struct originalRecordValue)
-
updateSchema
private org.apache.kafka.connect.data.SchemaBuilder updateSchema(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.SchemaBuilder builder, org.apache.kafka.connect.data.Schema fieldSchema)
-
updateValue
private org.apache.kafka.connect.data.Struct updateValue(AbstractExtractNewRecordState.FieldReference fieldReference, org.apache.kafka.connect.data.Struct updatedValue, org.apache.kafka.connect.data.Struct struct)
-