Class RecordMakers
java.lang.Object
io.debezium.connector.mysql.legacy.RecordMakers
A component that makes SourceRecords for tables.

Author:
- Randall Hauch

Nested Class Summary
- protected static interface RecordMakers.Converter
- final class RecordMakers.RecordsForTable: A SourceRecord factory for a specific table and consumer.
Field Summary
- private final org.slf4j.Logger logger
- private final MySqlSchema schema
- private final SourceInfo source
- private final TopicSelector<TableId> topicSelector
- private final boolean emitTombstoneOnDelete
- private final Map<Long,RecordMakers.Converter> convertersByTableNumber
- tableNumbersByTableId
- tableIdsByTableNumber
- private final org.apache.kafka.connect.data.Schema schemaChangeKeySchema
- private final org.apache.kafka.connect.data.Schema schemaChangeValueSchema
- private final SchemaNameAdjuster schemaNameAdjuster
- restartOffset
Constructor Summary
- RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector<TableId> topicSelector, boolean emitTombstoneOnDelete, Map<String, ?> restartOffset)
  Create the record makers using the supplied components.
Method Summary
- boolean assign(long tableNumber, TableId id)
  Assign the given table number to the table with the specified table ID.
- void clear()
  Clear all of the cached record makers.
- RecordMakers.RecordsForTable forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
  Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
- RecordMakers.RecordsForTable forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
  Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
- getSourceRecordOffset(Map<String, Object> sourceOffset)
- TableId getTableIdFromTableNumber(long tableNumber)
  Converts a table number back to a table ID.
- boolean hasTable(TableId tableId)
  Determine if there is a record maker for the given table.
- void regenerate()
  Clear all of the cached record makers and generate new ones.
- protected org.apache.kafka.connect.data.Struct schemaChangeRecordKey(String databaseName)
- protected org.apache.kafka.connect.data.Struct schemaChangeRecordValue(String databaseName, Set<TableId> tables, String ddlStatements)
- int schemaChanges(String databaseName, Set<TableId> tables, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
  Produce a schema change record for the given DDL statements.
-
Field Details
logger
private final org.slf4j.Logger logger

schema
private final MySqlSchema schema

source
private final SourceInfo source

topicSelector
private final TopicSelector<TableId> topicSelector

emitTombstoneOnDelete
private final boolean emitTombstoneOnDelete

convertersByTableNumber
private final Map<Long,RecordMakers.Converter> convertersByTableNumber

tableNumbersByTableId

tableIdsByTableNumber

schemaChangeKeySchema
private final org.apache.kafka.connect.data.Schema schemaChangeKeySchema

schemaChangeValueSchema
private final org.apache.kafka.connect.data.Schema schemaChangeValueSchema

schemaNameAdjuster
private final SchemaNameAdjuster schemaNameAdjuster

restartOffset
Constructor Details
-
RecordMakers
public RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector<TableId> topicSelector, boolean emitTombstoneOnDelete, Map<String, ?> restartOffset)
Create the record makers using the supplied components.
Parameters:
- schema: the schema information about the MySQL server databases; may not be null
- source: the connector's source information; may not be null
- topicSelector: the selector for topic names; may not be null
- emitTombstoneOnDelete: whether or not to emit a tombstone message upon DELETE events
- restartOffset: the offset to publish with the SourceInfo.RESTART_PREFIX prefix as additional information in the offset. If the connector attempts to restart from an offset containing information with this prefix, it will create an offset from the prefixed information rather than restarting from the base offset.
-
-
Method Details
-
forTable
public RecordMakers.RecordsForTable forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
Parameters:
- tableId: the identifier of the table for which records are to be produced
- includedColumns: the set of columns that will be included in each row; may be null if all columns are included
- consumer: the consumer for all produced records; may not be null
Returns:
- the table-specific record maker; may be null if the table is not included in the connector
-
hasTable
public boolean hasTable(TableId tableId)
Determine if there is a record maker for the given table.
Parameters:
- tableId: the identifier of the table
Returns:
- true if there is a record maker, or false if there is none
-
forTable
public RecordMakers.RecordsForTable forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
Parameters:
- tableNumber: the assigned table number for which records are to be produced
- includedColumns: the set of columns that will be included in each row; may be null if all columns are included
- consumer: the consumer for all produced records; may not be null
Returns:
- the table-specific record maker; may be null if the table is not included in the connector
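Both forTable variants resolve a table to a converter cached per table number (the Map<Long,RecordMakers.Converter> field) and pair it with the caller's consumer. The following is a minimal, hypothetical stand-in sketch of that cache-then-wrap pattern; the Function converter, String rows, and Runnable "record maker" are simplifications and not Debezium's real Converter, RecordsForTable, or BlockingConsumer types:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical, simplified model of the forTable lookup: a converter is
// cached per table number, and each call pairs it with the caller's consumer.
class ConverterCacheSketch {
    private final Map<Long, Function<String, String>> convertersByTableNumber = new HashMap<>();

    void register(long tableNumber, Function<String, String> converter) {
        convertersByTableNumber.put(tableNumber, converter);
    }

    /** Returns a per-consumer "record maker", or null if the table is not cached. */
    Runnable forTable(long tableNumber, String row, Consumer<String> consumer) {
        Function<String, String> converter = convertersByTableNumber.get(tableNumber);
        if (converter == null) {
            return null; // table not included in the connector
        }
        return () -> consumer.accept(converter.apply(row));
    }
}
```

The null return mirrors the documented contract above: callers must handle a null record maker for tables excluded from the connector.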
-
schemaChanges
public int schemaChanges(String databaseName, Set<TableId> tables, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
Produce a schema change record for the given DDL statements.
Parameters:
- databaseName: the name of the database that is affected by the DDL statements; may not be null
- tables: the list of tables affected by the DDL statements
- ddlStatements: the DDL statements; may not be null
- consumer: the consumer for all produced records; may not be null
Returns:
- the number of records produced; will be 0 or more
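schemaChanges builds a record key and value (see the protected schemaChangeRecordKey and schemaChangeRecordValue helpers below), hands the record to the consumer, and returns how many records were produced. A rough stdlib-only sketch of that flow, in which plain Maps stand in for the Kafka Connect Structs and the field layout is a hypothetical simplification:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical sketch: key/value Maps stand in for Kafka Connect Structs,
// and the exact field names are illustrative, not Debezium's real schema.
class SchemaChangeSketch {
    Map<String, Object> schemaChangeRecordKey(String databaseName) {
        Map<String, Object> key = new LinkedHashMap<>();
        key.put("databaseName", databaseName);
        return key;
    }

    Map<String, Object> schemaChangeRecordValue(String databaseName, Set<String> tables, String ddlStatements) {
        Map<String, Object> value = new LinkedHashMap<>();
        value.put("databaseName", databaseName);
        value.put("tables", tables);
        value.put("ddl", ddlStatements);
        return value;
    }

    /** Produce a schema-change record; returns the number of records produced. */
    int schemaChanges(String databaseName, Set<String> tables, String ddlStatements,
                      Consumer<Map.Entry<Map<String, Object>, Map<String, Object>>> consumer) {
        consumer.accept(Map.entry(schemaChangeRecordKey(databaseName),
                                  schemaChangeRecordValue(databaseName, tables, ddlStatements)));
        return 1;
    }
}
```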
-
clear
public void clear()
Clear all of the cached record makers. This should be done when the logs are rotated, since in that case a different table numbering scheme will be used by all subsequent TABLE_MAP binlog events.
regenerate
public void regenerate()
Clear all of the cached record makers and generate new ones. This should be done when the schema changes for reasons other than reading DDL from the binlog.
getSourceRecordOffset
-
assign
public boolean assign(long tableNumber, TableId id)
Assign the given table number to the table with the specified table ID.
Parameters:
- tableNumber: the table number found in binlog events
- id: the identifier for the corresponding table
Returns:
- true if the assignment was successful, or false if the table is currently excluded in the connector's configuration
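assign, getTableIdFromTableNumber, and clear maintain a bidirectional mapping between binlog table numbers and table IDs (the tableNumbersByTableId and tableIdsByTableNumber fields). A minimal stand-in sketch of that bookkeeping, using hypothetical simplified types (String table IDs and an explicit included flag in place of the connector's filter configuration):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of the table-number bookkeeping in
// RecordMakers: two maps kept in sync, cleared together on log rotation.
class TableNumberSketch {
    private final Map<String, Long> tableNumbersByTableId = new HashMap<>();
    private final Map<Long, String> tableIdsByTableNumber = new HashMap<>();

    /** Assign a binlog table number to a table ID; false if the table is excluded. */
    boolean assign(long tableNumber, String tableId, boolean included) {
        if (!included) {
            return false; // table excluded by the connector's configuration
        }
        tableNumbersByTableId.put(tableId, tableNumber);
        tableIdsByTableNumber.put(tableNumber, tableId);
        return true;
    }

    /** Converts a table number back to a table ID; null for unknown tables. */
    String getTableIdFromTableNumber(long tableNumber) {
        return tableIdsByTableNumber.get(tableNumber);
    }

    /** Drop all assignments, e.g. when the logs rotate and numbering restarts. */
    void clear() {
        tableNumbersByTableId.clear();
        tableIdsByTableNumber.clear();
    }
}
```

Clearing both maps together matches the documented contract of clear(): after a log rotation, subsequent TABLE_MAP events use a fresh numbering scheme, so stale number-to-table associations must not survive.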
-
schemaChangeRecordKey
protected org.apache.kafka.connect.data.Struct schemaChangeRecordKey(String databaseName)

schemaChangeRecordValue
protected org.apache.kafka.connect.data.Struct schemaChangeRecordValue(String databaseName, Set<TableId> tables, String ddlStatements)
getTableIdFromTableNumber
Converts a table number back to a table ID.
Parameters:
- tableNumber
Returns:
- the table ID, or null for unknown tables
-