Package io.debezium.relational.history
Class KafkaDatabaseHistory
- java.lang.Object
-
- io.debezium.relational.history.AbstractDatabaseHistory
-
- io.debezium.relational.history.KafkaDatabaseHistory
-
- All Implemented Interfaces:
DatabaseHistory
@NotThreadSafe
public class KafkaDatabaseHistory extends AbstractDatabaseHistory
A DatabaseHistory implementation that records schema changes as normal SourceRecords on the specified topic, and that recovers the history by establishing a Kafka consumer that re-processes all messages on that topic.
- Author:
- Randall Hauch
-
-
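As a sketch of how such a history store is typically wired up: the connector supplies the topic name and bootstrap servers through its configuration. The property names below follow Debezium's `database.history.` prefix convention, but they are assumptions for illustration, not values copied from this page.

```java
import java.util.Properties;

public class HistoryConfigSketch {
    public static Properties historyProps() {
        Properties props = new Properties();
        // Assumed property names following the "database.history." prefix convention
        props.setProperty("database.history.kafka.bootstrap.servers", "broker1:9092");
        props.setProperty("database.history.kafka.topic", "dbhistory.inventory");
        // Recovery tuning; compare the RECOVERY_POLL_INTERVAL_MS / RECOVERY_POLL_ATTEMPTS fields
        props.setProperty("database.history.kafka.recovery.poll.interval.ms", "100");
        props.setProperty("database.history.kafka.recovery.attempts", "4");
        return props;
    }

    public static void main(String[] args) {
        Properties p = historyProps();
        System.out.println(p.getProperty("database.history.kafka.topic"));
    }
}
```

On recovery, the class re-reads every record from this topic, so the topic is expected to be single-partition and non-compacting in a way that preserves the full change history.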
Field Summary
Fields
- static Field.Set ALL_FIELDS
- static Field BOOTSTRAP_SERVERS
- private ExecutorService checkTopicSettingsExecutor
- private static String CLEANUP_POLICY_NAME
- private static String CLEANUP_POLICY_VALUE
- private static String CONSUMER_PREFIX
- private Configuration consumerConfig
- private static short DEFAULT_TOPIC_REPLICATION_FACTOR (the default replication factor for the history topic, used in case the value couldn't be retrieved from the broker)
- private static String DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME (the name of the broker property defining the default replication factor for topics without an explicit setting)
- static Field INTERNAL_CONNECTOR_CLASS
- static Field INTERNAL_CONNECTOR_ID
- private static Duration KAFKA_QUERY_TIMEOUT
- private static org.slf4j.Logger LOGGER
- private int maxRecoveryAttempts
- private static Integer PARTITION (the one and only partition of the history topic)
- private static short PARTITION_COUNT
- private Duration pollInterval
- private org.apache.kafka.clients.producer.KafkaProducer<String,String> producer
- private static String PRODUCER_PREFIX
- private Configuration producerConfig
- private DocumentReader reader
- static Field RECOVERY_POLL_ATTEMPTS
- static Field RECOVERY_POLL_INTERVAL_MS
- private static String RETENTION_BYTES_NAME
- private static long RETENTION_MS_MAX
- private static long RETENTION_MS_MIN
- private static String RETENTION_MS_NAME
- static Field TOPIC
- private String topicName
- private static int UNLIMITED_VALUE
-
Fields inherited from class io.debezium.relational.history.AbstractDatabaseHistory
config, INTERNAL_PREFER_DDL, logger
-
Fields inherited from interface io.debezium.relational.history.DatabaseHistory
CONFIGURATION_FIELD_PREFIX_STRING, DDL_FILTER, NAME, SKIP_UNPARSEABLE_DDL_STATEMENTS, STORE_ONLY_CAPTURED_TABLES_DDL, STORE_ONLY_MONITORED_TABLES_DDL
-
-
Constructor Summary
Constructors
- KafkaDatabaseHistory()
-
Method Summary
Methods
- private void checkTopicSettings(String topicName)
- void configure(Configuration config, HistoryRecordComparator comparator, DatabaseHistoryListener listener, boolean useCatalogBeforeSchema): Configure this instance.
- protected static String consumerConfigPropertyName(String kafkaConsumerPropertyName)
- boolean exists(): Determines if the database history entity exists; i.e. the storage must have been initialized and the history must have been populated.
- private static Field.Validator forKafka(Field.Validator validator)
- private short getDefaultTopicReplicationFactor(org.apache.kafka.clients.admin.AdminClient admin)
- private Long getEndOffsetOfDbHistoryTopic(Long previousEndOffset, org.apache.kafka.clients.consumer.KafkaConsumer<String,String> historyConsumer)
- private org.apache.kafka.clients.admin.Config getKafkaBrokerConfig(org.apache.kafka.clients.admin.AdminClient admin)
- void initializeStorage(): Called to initialize permanent storage of the history.
- protected void recoverRecords(Consumer<HistoryRecord> records)
- void start(): Start the history.
- void stop(): Stop recording history and release any resources acquired since configure(Configuration, HistoryRecordComparator, DatabaseHistoryListener).
- private void stopCheckTopicSettingsExecutor()
- boolean storageExists(): Determines if the underlying storage exists (e.g. a Kafka topic, file, or similar).
- protected void storeRecord(HistoryRecord record)
- String toString()
-
Methods inherited from class io.debezium.relational.history.AbstractDatabaseHistory
record, record, recover, skipUnparseableDdlStatements, storeOnlyCapturedTables
-
-
-
-
Field Detail
-
LOGGER
private static final org.slf4j.Logger LOGGER
-
CLEANUP_POLICY_NAME
private static final String CLEANUP_POLICY_NAME
- See Also:
- Constant Field Values
-
CLEANUP_POLICY_VALUE
private static final String CLEANUP_POLICY_VALUE
- See Also:
- Constant Field Values
-
RETENTION_MS_NAME
private static final String RETENTION_MS_NAME
- See Also:
- Constant Field Values
-
RETENTION_MS_MAX
private static final long RETENTION_MS_MAX
- See Also:
- Constant Field Values
-
RETENTION_MS_MIN
private static final long RETENTION_MS_MIN
-
RETENTION_BYTES_NAME
private static final String RETENTION_BYTES_NAME
- See Also:
- Constant Field Values
-
UNLIMITED_VALUE
private static final int UNLIMITED_VALUE
- See Also:
- Constant Field Values
-
PARTITION_COUNT
private static final short PARTITION_COUNT
- See Also:
- Constant Field Values
-
DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
private static final String DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
The name of the broker property defining the default replication factor for topics without an explicit setting.
- See Also:
kafka.server.KafkaConfig.DefaultReplicationFactorProp, Constant Field Values
-
DEFAULT_TOPIC_REPLICATION_FACTOR
private static final short DEFAULT_TOPIC_REPLICATION_FACTOR
The default replication factor for the history topic, used in case the value couldn't be retrieved from the broker.
- See Also:
- Constant Field Values
-
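The two fields above describe a fallback: when creating the history topic, the class tries to read the broker's default replication factor and falls back to a constant when the value can't be retrieved. A minimal sketch of that decision; the property name and the fallback value of 1 are assumptions, since the real constants are private.

```java
import java.util.Map;

public class ReplicationFactorSketch {
    // Assumed fallback; the actual DEFAULT_TOPIC_REPLICATION_FACTOR is private
    private static final short FALLBACK_REPLICATION_FACTOR = 1;

    // brokerConfig stands in for the Config returned by the Kafka AdminClient
    static short replicationFactor(Map<String, String> brokerConfig) {
        String value = brokerConfig.get("default.replication.factor");
        return value != null ? Short.parseShort(value) : FALLBACK_REPLICATION_FACTOR;
    }

    public static void main(String[] args) {
        System.out.println(replicationFactor(Map.of("default.replication.factor", "3"))); // 3
        System.out.println(replicationFactor(Map.of())); // 1
    }
}
```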
TOPIC
public static final Field TOPIC
-
BOOTSTRAP_SERVERS
public static final Field BOOTSTRAP_SERVERS
-
RECOVERY_POLL_INTERVAL_MS
public static final Field RECOVERY_POLL_INTERVAL_MS
-
RECOVERY_POLL_ATTEMPTS
public static final Field RECOVERY_POLL_ATTEMPTS
-
INTERNAL_CONNECTOR_CLASS
public static final Field INTERNAL_CONNECTOR_CLASS
-
INTERNAL_CONNECTOR_ID
public static final Field INTERNAL_CONNECTOR_ID
-
ALL_FIELDS
public static Field.Set ALL_FIELDS
-
CONSUMER_PREFIX
private static final String CONSUMER_PREFIX
- See Also:
- Constant Field Values
-
PRODUCER_PREFIX
private static final String PRODUCER_PREFIX
- See Also:
- Constant Field Values
-
KAFKA_QUERY_TIMEOUT
private static final Duration KAFKA_QUERY_TIMEOUT
-
PARTITION
private static final Integer PARTITION
The one and only partition of the history topic.
-
reader
private final DocumentReader reader
-
topicName
private String topicName
-
consumerConfig
private Configuration consumerConfig
-
producerConfig
private Configuration producerConfig
-
maxRecoveryAttempts
private int maxRecoveryAttempts
-
pollInterval
private Duration pollInterval
-
checkTopicSettingsExecutor
private ExecutorService checkTopicSettingsExecutor
-
-
Method Detail
-
configure
public void configure(Configuration config, HistoryRecordComparator comparator, DatabaseHistoryListener listener, boolean useCatalogBeforeSchema)
Description copied from interface: DatabaseHistory
Configure this instance.
- Specified by:
- configure in interface DatabaseHistory
- Overrides:
- configure in class AbstractDatabaseHistory
- Parameters:
- config - the configuration for this history store
- comparator - the function that should be used to compare history records during recovery; may be null if the default comparator is to be used
- listener - TODO
- useCatalogBeforeSchema - true if the parsed string for a table contains only 2 items and the first should be used as the catalog and the second as the table name, or false if the first should be used as the schema and the second as the table name
-
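The useCatalogBeforeSchema flag only matters for two-part table identifiers: it decides whether the first part names a catalog or a schema. A hypothetical illustration of that rule; the ParsedId type here is invented for the example, Debezium's real TableId has more structure.

```java
public class TableIdSketch {
    // Hypothetical holder for the example; not Debezium's TableId
    record ParsedId(String catalog, String schema, String table) {}

    static ParsedId parse(String id, boolean useCatalogBeforeSchema) {
        String[] parts = id.split("\\.");
        if (parts.length == 2) {
            return useCatalogBeforeSchema
                    ? new ParsedId(parts[0], null, parts[1])   // first part is the catalog
                    : new ParsedId(null, parts[0], parts[1]);  // first part is the schema
        }
        return new ParsedId(null, null, id); // single-part identifier: just a table name
    }

    public static void main(String[] args) {
        System.out.println(parse("mydb.orders", true).catalog());   // mydb
        System.out.println(parse("public.orders", false).schema()); // public
    }
}
```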
start
public void start()
Description copied from interface: DatabaseHistory
Start the history.
- Specified by:
- start in interface DatabaseHistory
- Overrides:
- start in class AbstractDatabaseHistory
-
storeRecord
protected void storeRecord(HistoryRecord record) throws DatabaseHistoryException
- Specified by:
- storeRecord in class AbstractDatabaseHistory
- Throws:
- DatabaseHistoryException
-
recoverRecords
protected void recoverRecords(Consumer<HistoryRecord> records)
- Specified by:
- recoverRecords in class AbstractDatabaseHistory
-
getEndOffsetOfDbHistoryTopic
private Long getEndOffsetOfDbHistoryTopic(Long previousEndOffset, org.apache.kafka.clients.consumer.KafkaConsumer<String,String> historyConsumer)
-
storageExists
public boolean storageExists()
Description copied from interface: DatabaseHistory
Determines if the underlying storage exists (e.g. a Kafka topic, file, or similar). Note: storage may exist while no history entities have yet been written; see DatabaseHistory.exists().
-
exists
public boolean exists()
Description copied from interface: DatabaseHistory
Determines if the database history entity exists; i.e. the storage must have been initialized and the history must have been populated.
-
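storageExists() and exists() answer different questions: the history topic can exist while still holding no records. A sketch of how a caller might combine the pair at startup; the method names mirror this page, but the decision logic and return labels are an illustration, not Debezium's actual control flow.

```java
public class StartupSketch {
    // Minimal stand-in for the two checks documented above
    interface HistoryStore {
        boolean storageExists();  // does the Kafka topic exist?
        boolean exists();         // does it already contain history records?
        void initializeStorage();
    }

    static String decide(HistoryStore history) {
        if (!history.storageExists()) {
            history.initializeStorage();   // create the topic first
            return "initialized-empty";
        }
        // Topic exists: recover from it if populated, otherwise capture schema afresh
        return history.exists() ? "recover" : "snapshot-schema";
    }

    record FixedStore(boolean storage, boolean records) implements HistoryStore {
        public boolean storageExists() { return storage; }
        public boolean exists() { return records; }
        public void initializeStorage() { }
    }

    public static void main(String[] args) {
        System.out.println(decide(new FixedStore(true, true)));   // recover
        System.out.println(decide(new FixedStore(true, false)));  // snapshot-schema
    }
}
```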
checkTopicSettings
private void checkTopicSettings(String topicName)
-
stop
public void stop()
Description copied from interface: DatabaseHistory
Stop recording history and release any resources acquired since configure(Configuration, HistoryRecordComparator, DatabaseHistoryListener).
- Specified by:
- stop in interface DatabaseHistory
- Overrides:
- stop in class AbstractDatabaseHistory
-
stopCheckTopicSettingsExecutor
private void stopCheckTopicSettingsExecutor()
-
consumerConfigPropertyName
protected static String consumerConfigPropertyName(String kafkaConsumerPropertyName)
-
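consumerConfigPropertyName maps a plain Kafka consumer property name into the history store's prefixed configuration namespace, so pass-through consumer settings can live alongside the other history options. A sketch of that mapping; the exact prefix string is an assumption, since the real CONSUMER_PREFIX constant is private.

```java
public class PrefixSketch {
    // Assumed value; presumably CONFIGURATION_FIELD_PREFIX_STRING + "consumer."
    static final String CONSUMER_PREFIX = "database.history.consumer.";

    static String consumerConfigPropertyName(String kafkaConsumerPropertyName) {
        return CONSUMER_PREFIX + kafkaConsumerPropertyName;
    }

    public static void main(String[] args) {
        System.out.println(consumerConfigPropertyName("session.timeout.ms"));
        // database.history.consumer.session.timeout.ms
    }
}
```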
initializeStorage
public void initializeStorage()
Description copied from interface: DatabaseHistory
Called to initialize permanent storage of the history.
- Specified by:
- initializeStorage in interface DatabaseHistory
- Overrides:
- initializeStorage in class AbstractDatabaseHistory
-
getDefaultTopicReplicationFactor
private short getDefaultTopicReplicationFactor(org.apache.kafka.clients.admin.AdminClient admin) throws Exception
- Throws:
- Exception
-
getKafkaBrokerConfig
private org.apache.kafka.clients.admin.Config getKafkaBrokerConfig(org.apache.kafka.clients.admin.AdminClient admin) throws Exception
- Throws:
- Exception
-
forKafka
private static Field.Validator forKafka(Field.Validator validator)
-
-