Class RecordMakers
- java.lang.Object
-
- io.debezium.connector.mysql.legacy.RecordMakers
-
public class RecordMakers extends Object
A component that makes SourceRecords for tables.
- Author:
- Randall Hauch
-
-
Nested Class Summary
Nested Classes
- protected static interface RecordMakers.Converter
- class RecordMakers.RecordsForTable: A SourceRecord factory for a specific table and consumer.
-
Field Summary
Fields
- private Map<Long,RecordMakers.Converter> convertersByTableNumber
- private boolean emitTombstoneOnDelete
- private org.slf4j.Logger logger
- private Map<String,?> restartOffset
- private MySqlSchema schema
- private org.apache.kafka.connect.data.Schema schemaChangeKeySchema
- private org.apache.kafka.connect.data.Schema schemaChangeValueSchema
- private SchemaNameAdjuster schemaNameAdjuster
- private SourceInfo source
- private Map<Long,TableId> tableIdsByTableNumber
- private Map<TableId,Long> tableNumbersByTableId
- private TopicSelector<TableId> topicSelector
-
Constructor Summary
Constructors
- RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector<TableId> topicSelector, boolean emitTombstoneOnDelete, Map<String,?> restartOffset): Create the record makers using the supplied components.
-
Method Summary
All Methods / Instance Methods / Concrete Methods
- boolean assign(long tableNumber, TableId id): Assign the given table number to the table with the specified table ID.
- void clear(): Clear all of the cached record makers.
- RecordMakers.RecordsForTable forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer): Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
- RecordMakers.RecordsForTable forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer): Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
- private Map<String,?> getSourceRecordOffset(Map<String,Object> sourceOffset)
- TableId getTableIdFromTableNumber(long tableNumber): Converts a table number back to its table ID.
- boolean hasTable(TableId tableId): Determine if there is a record maker for the given table.
- void regenerate(): Clear all of the cached record makers and generate new ones.
- protected org.apache.kafka.connect.data.Struct schemaChangeRecordKey(String databaseName)
- protected org.apache.kafka.connect.data.Struct schemaChangeRecordValue(String databaseName, Set<TableId> tables, String ddlStatements)
- int schemaChanges(String databaseName, Set<TableId> tables, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer): Produce a schema change record for the given DDL statements.
-
-
-
Field Detail
-
logger
private final org.slf4j.Logger logger
-
schema
private final MySqlSchema schema
-
source
private final SourceInfo source
-
topicSelector
private final TopicSelector<TableId> topicSelector
-
emitTombstoneOnDelete
private final boolean emitTombstoneOnDelete
-
convertersByTableNumber
private final Map<Long,RecordMakers.Converter> convertersByTableNumber
-
schemaChangeKeySchema
private final org.apache.kafka.connect.data.Schema schemaChangeKeySchema
-
schemaChangeValueSchema
private final org.apache.kafka.connect.data.Schema schemaChangeValueSchema
-
schemaNameAdjuster
private final SchemaNameAdjuster schemaNameAdjuster
-
-
Constructor Detail
-
RecordMakers
public RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector<TableId> topicSelector, boolean emitTombstoneOnDelete, Map<String,?> restartOffset)
Create the record makers using the supplied components.
- Parameters:
schema - the schema information about the MySQL server databases; may not be null
source - the connector's source information; may not be null
topicSelector - the selector for topic names; may not be null
emitTombstoneOnDelete - whether to emit a tombstone message upon DELETE events
restartOffset - the offset to publish with the SourceInfo.RESTART_PREFIX prefix as additional information in the offset. If the connector attempts to restart from an offset containing information with this prefix, it will create an offset from the prefixed information rather than restarting from the base offset.
- See Also:
MySqlConnectorTask.getRestartOffset(Map)
-
-
Method Detail
-
forTable
public RecordMakers.RecordsForTable forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
- Parameters:
tableId - the identifier of the table for which records are to be produced
includedColumns - the set of columns that will be included in each row; may be null if all columns are included
consumer - the consumer for all produced records; may not be null
- Returns:
- the table-specific record maker; may be null if the table is not included in the connector
-
hasTable
public boolean hasTable(TableId tableId)
Determine if there is a record maker for the given table.
- Parameters:
tableId - the identifier of the table
- Returns:
true if there is a record maker, or false if there is none
-
forTable
public RecordMakers.RecordsForTable forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
- Parameters:
tableNumber - the assigned table number for which records are to be produced
includedColumns - the set of columns that will be included in each row; may be null if all columns are included
consumer - the consumer for all produced records; may not be null
- Returns:
- the table-specific record maker; may be null if the table is not included in the connector
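Per the field summary above, the class keeps a convertersByTableNumber cache so the per-table maker is built once and reused across binlog events. A minimal self-contained sketch of that lookup-and-cache pattern, with plain Strings standing in for TableId, rows, and SourceRecords (hypothetical stand-ins, not the Debezium API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

// Sketch only: String stands in for TableId, rows, and emitted records.
class RecordMakerCache {
    private final Map<Long, String> tableIdsByTableNumber = new HashMap<>();
    private final Map<Long, Function<String, String>> convertersByTableNumber = new HashMap<>();

    void assign(long tableNumber, String tableId) {
        tableIdsByTableNumber.put(tableNumber, tableId);
    }

    // Mirrors forTable(long, ..., consumer): null for table numbers that were
    // never assigned, otherwise a maker that converts rows and sends them on.
    Consumer<String> forTable(long tableNumber, Consumer<String> consumer) {
        Function<String, String> converter = convertersByTableNumber.computeIfAbsent(
                tableNumber, n -> {
                    String tableId = tableIdsByTableNumber.get(n);
                    // Returning null from computeIfAbsent caches nothing.
                    return tableId == null ? null : row -> tableId + ": " + row;
                });
        return converter == null ? null : row -> consumer.accept(converter.apply(row));
    }
}
```

In the real class the cached value is a RecordMakers.Converter and the returned object is a RecordMakers.RecordsForTable; the sketch only shows why an unassigned (excluded) table yields null.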
-
schemaChanges
public int schemaChanges(String databaseName, Set<TableId> tables, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
Produce a schema change record for the given DDL statements.
- Parameters:
databaseName - the name of the database that is affected by the DDL statements; may not be null
tables - the set of tables affected by the DDL statements
ddlStatements - the DDL statements; may not be null
consumer - the consumer for all produced records; may not be null
- Returns:
- the number of records produced; will be 0 or more
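The real implementation builds Kafka Connect Structs via schemaChangeRecordKey and schemaChangeRecordValue. As a rough self-contained sketch of the record shape and the returned count, with plain maps standing in for those Structs (field names here are illustrative assumptions, not the connector's wire format):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

// Sketch only: plain maps stand in for the key/value Structs.
class SchemaChangeSketch {
    static int schemaChanges(String databaseName, Set<String> tables,
                             String ddlStatements, Consumer<Map<String, Object>> consumer) {
        Map<String, Object> record = new LinkedHashMap<>();
        // cf. schemaChangeRecordKey(databaseName)
        record.put("key", Map.of("databaseName", databaseName));
        // cf. schemaChangeRecordValue(databaseName, tables, ddlStatements)
        record.put("value", Map.of("databaseName", databaseName,
                                   "ddl", ddlStatements,
                                   "tables", List.copyOf(tables)));
        consumer.accept(record);
        return 1; // this sketch always emits one record; the real method may emit 0 or more
    }
}
```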
-
clear
public void clear()
Clear all of the cached record makers. This should be done when the logs are rotated, since after rotation a different table numbering scheme will be used by all subsequent TABLE_MAP binlog events.
-
regenerate
public void regenerate()
Clear all of the cached record makers and generate new ones. This should be done when the schema changes for reasons other than reading DDL from the binlog.
-
getSourceRecordOffset
private Map<String,?> getSourceRecordOffset(Map<String,Object> sourceOffset)
-
assign
public boolean assign(long tableNumber, TableId id)
Assign the given table number to the table with the specified table ID.
- Parameters:
tableNumber - the table number found in binlog events
id - the identifier for the corresponding table
- Returns:
true if the assignment was successful, or false if the table is currently excluded in the connector's configuration
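Together, assign, getTableIdFromTableNumber, and clear amount to a bidirectional number-to-ID registry (the tableIdsByTableNumber and tableNumbersByTableId fields in the field summary). A self-contained sketch of that registry, with String standing in for TableId and a fixed exclude set standing in for the connector's filter configuration (both are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch only: TableId is simplified to a String, and filtering is a fixed
// exclude set rather than the connector's configuration.
class TableNumberRegistry {
    private final Map<Long, String> tableIdsByTableNumber = new HashMap<>();
    private final Map<String, Long> tableNumbersByTableId = new HashMap<>();
    private final Set<String> excludedTables;

    TableNumberRegistry(Set<String> excludedTables) {
        this.excludedTables = excludedTables;
    }

    // Mirrors assign(long, TableId): false when the table is excluded.
    boolean assign(long tableNumber, String tableId) {
        if (excludedTables.contains(tableId)) {
            return false;
        }
        tableIdsByTableNumber.put(tableNumber, tableId);
        tableNumbersByTableId.put(tableId, tableNumber);
        return true;
    }

    // Mirrors getTableIdFromTableNumber: null for unknown table numbers.
    String tableIdFor(long tableNumber) {
        return tableIdsByTableNumber.get(tableNumber);
    }

    // Mirrors clear(): on binlog rotation the numbering scheme resets.
    void clear() {
        tableIdsByTableNumber.clear();
        tableNumbersByTableId.clear();
    }
}
```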
-
schemaChangeRecordKey
protected org.apache.kafka.connect.data.Struct schemaChangeRecordKey(String databaseName)
-
schemaChangeRecordValue
protected org.apache.kafka.connect.data.Struct schemaChangeRecordValue(String databaseName, Set<TableId> tables, String ddlStatements)
-
getTableIdFromTableNumber
public TableId getTableIdFromTableNumber(long tableNumber)
Converts a table number back to its table ID.
- Parameters:
tableNumber - the table number found in binlog events
- Returns:
- the table ID, or null for unknown tables
-
-