Class RecordMakers

java.lang.Object
io.debezium.connector.mysql.legacy.RecordMakers

public class RecordMakers extends Object
A component that makes SourceRecords for tables.
Author:
Randall Hauch
  • Field Details

    • logger

      private final org.slf4j.Logger logger
    • schema

      private final MySqlSchema schema
    • source

      private final SourceInfo source
    • topicSelector

      private final TopicSelector<TableId> topicSelector
    • emitTombstoneOnDelete

      private final boolean emitTombstoneOnDelete
    • convertersByTableNumber

      private final Map<Long,RecordMakers.Converter> convertersByTableNumber
    • tableNumbersByTableId

      private final Map<TableId,Long> tableNumbersByTableId
    • tableIdsByTableNumber

      private final Map<Long,TableId> tableIdsByTableNumber
    • schemaChangeKeySchema

      private final org.apache.kafka.connect.data.Schema schemaChangeKeySchema
    • schemaChangeValueSchema

      private final org.apache.kafka.connect.data.Schema schemaChangeValueSchema
    • schemaNameAdjuster

      private final SchemaNameAdjuster schemaNameAdjuster
    • restartOffset

      private final Map<String,?> restartOffset
  • Constructor Details

    • RecordMakers

      public RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector<TableId> topicSelector, boolean emitTombstoneOnDelete, Map<String,?> restartOffset)
      Create the record makers using the supplied components.
      Parameters:
      schema - the schema information about the MySQL server databases; may not be null
      source - the connector's source information; may not be null
      topicSelector - the selector for topic names; may not be null
      emitTombstoneOnDelete - whether to emit a tombstone message upon DELETE events or not
      restartOffset - the offset to publish with the SourceInfo.RESTART_PREFIX prefix as additional information in the offset. If the connector attempts to restart from an offset with information with this prefix it will create an offset from the prefixed information rather than restarting from the base offset.
  • Method Details

    • forTable

      public RecordMakers.RecordsForTable forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
      Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
      Parameters:
      tableId - the identifier of the table for which records are to be produced
      includedColumns - the set of columns that will be included in each row; may be null if all columns are included
      consumer - the consumer for all produced records; may not be null
      Returns:
      the table-specific record maker; may be null if the table is not included in the connector
    • hasTable

      public boolean hasTable(TableId tableId)
      Determine if there is a record maker for the given table.
      Parameters:
      tableId - the identifier of the table
      Returns:
      true if there is a record maker, or false if there is none
    • forTable

      public RecordMakers.RecordsForTable forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
      Obtain the record maker for the given table, using the specified columns and sending records to the given consumer.
      Parameters:
      tableNumber - the assigned table number for which records are to be produced
      includedColumns - the set of columns that will be included in each row; may be null if all columns are included
      consumer - the consumer for all produced records; may not be null
      Returns:
      the table-specific record maker; may be null if the table is not included in the connector
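The two `forTable` overloads and `hasTable` share one contract: a non-null, table-scoped maker for captured tables, `null` otherwise. The sketch below illustrates that caller-side pattern with plain `String` table ids, a small `Maker` interface, and an in-memory sink standing in for Debezium's `TableId`, `RecordMakers.RecordsForTable`, and the `BlockingConsumer` of Kafka Connect `SourceRecord`s; all of those substitutions are assumptions of the sketch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Caller-side sketch of the forTable/hasTable contract. String table ids
// and the in-memory sink are simplifications; the real class returns
// RecordMakers.RecordsForTable and hands SourceRecords to a consumer.
class ForTableSketch {
    /** Stand-in for the table-scoped record maker returned by forTable. */
    interface Maker {
        void create(String row);
    }

    private final Map<String, Boolean> included = new HashMap<>();
    private final List<String> produced = new ArrayList<>();

    ForTableSketch include(String tableId) {
        included.put(tableId, Boolean.TRUE);
        return this;
    }

    /** Mirrors hasTable: true only when a maker exists for the table. */
    boolean hasTable(String tableId) {
        return included.containsKey(tableId);
    }

    /** Mirrors forTable: null when the table is not captured by the connector. */
    Maker forTable(String tableId) {
        if (!hasTable(tableId)) {
            return null;
        }
        return row -> produced.add(tableId + ":" + row);
    }

    List<String> produced() {
        return produced;
    }
}
```

Checking `hasTable` (or the return value of `forTable`) before producing rows is what lets a caller skip binlog events for tables the connector does not capture.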
    • schemaChanges

      public int schemaChanges(String databaseName, Set<TableId> tables, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)
      Produce a schema change record for the given DDL statements.
      Parameters:
      databaseName - the name of the database that is affected by the DDL statements; may not be null
      tables - the set of tables affected by the DDL statements
      ddlStatements - the DDL statements; may not be null
      consumer - the consumer for all produced records; may not be null
      Returns:
      the number of records produced; will be 0 or more
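The rough shape of a schema-change record can be approximated with plain maps in place of the Kafka Connect `Struct`s built by `schemaChangeRecordKey` and `schemaChangeRecordValue`. The field names (`databaseName`, `ddl`) follow the legacy connector's schema-change topic, but the map representation and the single-record return value below are assumptions of this sketch.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Simplified analog of schemaChanges: build one record describing a DDL
// batch and hand it to the consumer, returning how many were produced.
class SchemaChangeSketch {
    static int schemaChanges(String databaseName, String ddlStatements,
                             Consumer<Map<String, Object>> consumer) {
        Map<String, Object> value = new LinkedHashMap<>();
        value.put("databaseName", databaseName); // also serves as the record key
        value.put("ddl", ddlStatements);
        consumer.accept(value);
        return 1; // number of records produced for this batch
    }
}
```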
    • clear

      public void clear()
      Clear all of the cached record makers. This should be done when the logs are rotated, since in that case a different table numbering scheme will be used by all subsequent TABLE_MAP binlog events.
    • regenerate

      public void regenerate()
      Clear all of the cached record makers and generate new ones. This should be done when the schema changes for reasons other than reading DDL from the binlog.
    • getSourceRecordOffset

      private Map<String,?> getSourceRecordOffset(Map<String,Object> sourceOffset)
    • assign

      public boolean assign(long tableNumber, TableId id)
      Assign the given table number to the table with the specified table ID.
      Parameters:
      tableNumber - the table number found in binlog events
      id - the identifier for the corresponding table
      Returns:
      true if the assignment was successful, or false if the table is currently excluded by the connector's configuration
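Together with clear() and getTableIdFromTableNumber(long), assign implies a small piece of bookkeeping: bidirectional maps between binlog table numbers and table ids, reset wholesale when the logs rotate. A minimal sketch of that state, assuming `String` table ids and a `Predicate`-based filter in place of Debezium's `TableId` and connector configuration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of the table-number bookkeeping behind assign, clear, and
// getTableIdFromTableNumber. The String ids and Predicate filter are
// stand-ins for Debezium's TableId and table include/exclude settings.
class TableNumberRegistry {
    private final Map<Long, String> tableIdsByTableNumber = new HashMap<>();
    private final Map<String, Long> tableNumbersByTableId = new HashMap<>();
    private final Predicate<String> tableFilter;

    TableNumberRegistry(Predicate<String> tableFilter) {
        this.tableFilter = tableFilter;
    }

    /** Mirrors assign: false when the table is excluded from capture. */
    boolean assign(long tableNumber, String tableId) {
        if (!tableFilter.test(tableId)) {
            return false;
        }
        tableIdsByTableNumber.put(tableNumber, tableId);
        tableNumbersByTableId.put(tableId, tableNumber);
        return true;
    }

    /** Mirrors getTableIdFromTableNumber: null for unknown table numbers. */
    String tableIdFor(long tableNumber) {
        return tableIdsByTableNumber.get(tableNumber);
    }

    /** Mirrors clear: log rotation invalidates every assigned table number. */
    void clear() {
        tableIdsByTableNumber.clear();
        tableNumbersByTableId.clear();
    }
}
```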
    • schemaChangeRecordKey

      protected org.apache.kafka.connect.data.Struct schemaChangeRecordKey(String databaseName)
    • schemaChangeRecordValue

      protected org.apache.kafka.connect.data.Struct schemaChangeRecordValue(String databaseName, Set<TableId> tables, String ddlStatements)
    • getTableIdFromTableNumber

      public TableId getTableIdFromTableNumber(long tableNumber)
      Converts a table number back to its table id.
      Parameters:
      tableNumber - the table number found in binlog events
      Returns:
      the table id or null for unknown tables