public class RecordMakers extends Object

Produces SourceRecords for tables.

| Modifier and Type | Class and Description |
|---|---|
| protected static interface | RecordMakers.Converter |
| class | RecordMakers.RecordsForTable A SourceRecord factory for a specific table and consumer. |
| Modifier and Type | Field and Description |
|---|---|
| private Map<Long,RecordMakers.Converter> | convertersByTableNumber |
| private boolean | emitTombstoneOnDelete |
| private org.slf4j.Logger | logger |
| private MySqlSchema | schema |
| private org.apache.kafka.connect.data.Schema | schemaChangeKeySchema |
| private org.apache.kafka.connect.data.Schema | schemaChangeValueSchema |
| private SchemaNameAdjuster | schemaNameAdjuster |
| private SourceInfo | source |
| private Map<Long,TableId> | tableIdsByTableNumber |
| private Map<TableId,Long> | tableNumbersByTableId |
| private TopicSelector | topicSelector |
| Constructor and Description |
|---|
| RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector topicSelector, boolean emitTombstoneOnDelete) Create the record makers using the supplied components. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | assign(long tableNumber, TableId id) Assign the given table number to the table with the specified table ID. |
| void | clear() Clear all of the cached record makers. |
| RecordMakers.RecordsForTable | forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer) Obtain the record maker for the given table, using the specified columns and sending records to the given consumer. |
| RecordMakers.RecordsForTable | forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer) Obtain the record maker for the given table, using the specified columns and sending records to the given consumer. |
| TableId | getTableIdFromTableNumber(long tableNumber) Converts a table number back to its table id. |
| boolean | hasTable(TableId tableId) Determine if there is a record maker for the given table. |
| void | regenerate() Clear all of the cached record makers and generate new ones. |
| protected org.apache.kafka.connect.data.Struct | schemaChangeRecordKey(String databaseName) |
| protected org.apache.kafka.connect.data.Struct | schemaChangeRecordValue(String databaseName, String ddlStatements) |
| int | schemaChanges(String databaseName, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer) Produce a schema change record for the given DDL statements. |
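The caching behavior described for forTable(...) and clear() above can be sketched as follows. This is a simplified, self-contained illustration, not Debezium's actual code: MiniRecordMakers, the register method, and the String-based "record" type are all invented stand-ins, with a plain java.util.function.Consumer in place of BlockingConsumer<SourceRecord>.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative stand-in for RecordMakers' converter cache; not Debezium API.
public class MiniRecordMakers {

    // Stand-in for RecordMakers.Converter: turns a row into a record string.
    interface Converter {
        String convert(String row);
    }

    private final Map<Long, Converter> convertersByTableNumber = new HashMap<>();

    // Register a converter under a binlog table number (the real class builds
    // its converters from the MySqlSchema when a table number is assigned).
    void register(long tableNumber, String tableId) {
        convertersByTableNumber.put(tableNumber, row -> tableId + ":" + row);
    }

    // Analogous to forTable(long, ..., consumer): look up the cached converter
    // and bind it to the supplied consumer; null when the table is unknown.
    Consumer<String> forTable(long tableNumber, Consumer<String> consumer) {
        Converter converter = convertersByTableNumber.get(tableNumber);
        if (converter == null) {
            return null;
        }
        return row -> consumer.accept(converter.convert(row));
    }

    // Analogous to clear(): drop every cached record maker.
    void clear() {
        convertersByTableNumber.clear();
    }

    public static void main(String[] args) {
        MiniRecordMakers makers = new MiniRecordMakers();
        makers.register(1L, "inventory.customers");
        Consumer<String> sink = makers.forTable(1L, System.out::println);
        sink.accept("id=1001"); // prints "inventory.customers:id=1001"
        makers.clear();
        System.out.println(makers.forTable(1L, System.out::println) == null); // true
    }
}
```

The point of the pattern is that the per-table conversion work is resolved once and cached, while each caller binds its own consumer to the cached converter.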
private final org.slf4j.Logger logger
private final MySqlSchema schema
private final SourceInfo source
private final TopicSelector topicSelector
private final boolean emitTombstoneOnDelete
private final Map<Long,RecordMakers.Converter> convertersByTableNumber
private final org.apache.kafka.connect.data.Schema schemaChangeKeySchema
private final org.apache.kafka.connect.data.Schema schemaChangeValueSchema
private final SchemaNameAdjuster schemaNameAdjuster
public RecordMakers(MySqlSchema schema, SourceInfo source, TopicSelector topicSelector, boolean emitTombstoneOnDelete)

Parameters:
schema - the schema information about the MySQL server databases; may not be null
source - the connector's source information; may not be null
topicSelector - the selector for topic names; may not be null

public RecordMakers.RecordsForTable forTable(TableId tableId, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)

Parameters:
tableId - the identifier of the table for which records are to be produced
includedColumns - the set of columns that will be included in each row; may be null if all columns are included
consumer - the consumer for all produced records; may not be null

public boolean hasTable(TableId tableId)

Parameters:
tableId - the identifier of the table
Returns:
true if there is a record maker, or false if there is none

public RecordMakers.RecordsForTable forTable(long tableNumber, BitSet includedColumns, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)

Parameters:
tableNumber - the assigned table number for which records are to be produced
includedColumns - the set of columns that will be included in each row; may be null if all columns are included
consumer - the consumer for all produced records; may not be null

public int schemaChanges(String databaseName, String ddlStatements, BlockingConsumer<org.apache.kafka.connect.source.SourceRecord> consumer)

Parameters:
databaseName - the name of the database that is affected by the DDL statements; may not be null
ddlStatements - the DDL statements; may not be null
consumer - the consumer for all produced records; may not be null

public void clear()

public void regenerate()
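The contract of schemaChanges(...) and the protected schemaChangeRecordKey/schemaChangeRecordValue helpers can be sketched as follows. This is an assumption-laden illustration: plain Maps stand in for the Kafka Connect Structs the real methods build, and SchemaChangeSketch is an invented class, not Debezium code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative stand-in: Maps play the role of Kafka Connect Structs.
public class SchemaChangeSketch {

    // Analogous to schemaChangeRecordKey(String): the key carries the database name.
    static Map<String, Object> schemaChangeRecordKey(String databaseName) {
        Map<String, Object> key = new LinkedHashMap<>();
        key.put("databaseName", databaseName);
        return key;
    }

    // Analogous to schemaChangeRecordValue(String, String): the value carries
    // the database name and the DDL statements.
    static Map<String, Object> schemaChangeRecordValue(String databaseName, String ddlStatements) {
        Map<String, Object> value = new LinkedHashMap<>();
        value.put("databaseName", databaseName);
        value.put("ddl", ddlStatements);
        return value;
    }

    // Analogous to schemaChanges(...): build one key/value record, deliver it
    // to the consumer, and return the number of records produced.
    static int schemaChanges(String databaseName, String ddlStatements,
                             Consumer<Map.Entry<Map<String, Object>, Map<String, Object>>> consumer) {
        consumer.accept(Map.entry(schemaChangeRecordKey(databaseName),
                                  schemaChangeRecordValue(databaseName, ddlStatements)));
        return 1;
    }

    public static void main(String[] args) {
        List<Map.Entry<Map<String, Object>, Map<String, Object>>> produced = new ArrayList<>();
        int count = schemaChanges("inventory",
                "ALTER TABLE customers ADD COLUMN email VARCHAR(255)", produced::add);
        System.out.println(count + " record(s): " + produced);
    }
}
```

The int return value lets callers track how many schema change records were actually emitted for a batch of DDL.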
public boolean assign(long tableNumber, TableId id)

Assign the given table number to the table with the specified table ID.

Parameters:
tableNumber - the table number found in binlog events
id - the identifier for the corresponding table
Returns:
true if the assignment was successful, or false if the table is currently excluded in the connector's configuration

protected org.apache.kafka.connect.data.Struct schemaChangeRecordKey(String databaseName)

protected org.apache.kafka.connect.data.Struct schemaChangeRecordValue(String databaseName, String ddlStatements)

public TableId getTableIdFromTableNumber(long tableNumber)

Parameters:
tableNumber -

Copyright © 2018 JBoss by Red Hat. All rights reserved.
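The bidirectional bookkeeping behind assign(...) and getTableIdFromTableNumber(...) (the tableIdsByTableNumber / tableNumbersByTableId fields listed above) can be sketched as follows. TableNumberRegistry, the String table ids, and the excluded-table set are simplified stand-ins for the real class and its connector filter configuration, not Debezium API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative stand-in for RecordMakers' table-number bookkeeping.
public class TableNumberRegistry {
    private final Map<Long, String> tableIdsByTableNumber = new HashMap<>();
    private final Map<String, Long> tableNumbersByTableId = new HashMap<>();
    private final Set<String> excludedTables; // stands in for the connector's filter config

    TableNumberRegistry(Set<String> excludedTables) {
        this.excludedTables = excludedTables;
    }

    // Analogous to assign(long, TableId): record both directions of the
    // mapping, but return false for tables the configuration excludes.
    boolean assign(long tableNumber, String tableId) {
        if (excludedTables.contains(tableId)) {
            return false;
        }
        tableIdsByTableNumber.put(tableNumber, tableId);
        tableNumbersByTableId.put(tableId, tableNumber);
        return true;
    }

    // Analogous to getTableIdFromTableNumber(long): reverse lookup from the
    // binlog's table number back to the table identifier.
    String getTableIdFromTableNumber(long tableNumber) {
        return tableIdsByTableNumber.get(tableNumber);
    }

    public static void main(String[] args) {
        TableNumberRegistry registry = new TableNumberRegistry(Set.of("inventory.secrets"));
        System.out.println(registry.assign(42L, "inventory.customers")); // true
        System.out.println(registry.assign(43L, "inventory.secrets"));   // false: excluded
        System.out.println(registry.getTableIdFromTableNumber(42L));     // inventory.customers
    }
}
```

Two maps are kept because binlog events carry only the compact table number, while schema operations are keyed by the full table identifier, so lookups must be cheap in both directions.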