Package io.debezium.relational.history

Class KafkaDatabaseHistory

java.lang.Object
  io.debezium.relational.history.AbstractDatabaseHistory
    io.debezium.relational.history.KafkaDatabaseHistory

- All Implemented Interfaces:
  DatabaseHistory

A DatabaseHistory implementation that records schema changes as normal SourceRecords on the specified topic, and that recovers the history by establishing a Kafka consumer that re-processes all messages on that topic.

- Author:
  Randall Hauch
-
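For context, this implementation is selected and pointed at its topic through the connector's `database.history.*` properties. A minimal sketch (broker address and topic name below are placeholder example values):

```properties
# Use the Kafka-backed schema history implementation
database.history=io.debezium.relational.history.KafkaDatabaseHistory
# Brokers used by both the history producer and the recovery consumer
database.history.kafka.bootstrap.servers=broker1:9092
# Single-partition topic where schema change records are stored
database.history.kafka.topic=dbhistory.inventory
```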
Field Summary
Fields

Modifier and Type / Field / Description:

static Field.Set ALL_FIELDS
static final Field BOOTSTRAP_SERVERS
private ExecutorService checkTopicSettingsExecutor
private static final String CLEANUP_POLICY_NAME
private static final String CLEANUP_POLICY_VALUE
private static final String CONSUMER_PREFIX
private Configuration consumerConfig
private static final short DEFAULT_TOPIC_REPLICATION_FACTOR
    The default replication factor for the history topic, used in case the value couldn't be retrieved from the broker.
private static final String DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
    The name of the broker property defining the default replication factor for topics without an explicit setting.
static final Field INTERNAL_CONNECTOR_CLASS
static final Field INTERNAL_CONNECTOR_ID
static final Field KAFKA_QUERY_TIMEOUT_MS
private Duration kafkaQueryTimeout
private static final org.slf4j.Logger LOGGER
private int maxRecoveryAttempts
private static final Integer PARTITION
    The one and only partition of the history topic.
private static final int PARTITION_COUNT
private Duration pollInterval
private KafkaProducer<String, String> producer
private static final String PRODUCER_PREFIX
private Configuration producerConfig
private final DocumentReader reader
static final Field RECOVERY_POLL_ATTEMPTS
static final Field RECOVERY_POLL_INTERVAL_MS
private static final String RETENTION_BYTES_NAME
private static final long RETENTION_MS_MAX
private static final long RETENTION_MS_MIN
private static final String RETENTION_MS_NAME
static final Field TOPIC
private String topicName
private static final int UNLIMITED_VALUE
private static final boolean USE_KAFKA_24_NEW_TOPIC_CONSTRUCTOR

Fields inherited from class io.debezium.relational.history.AbstractDatabaseHistory:
config, INTERNAL_PREFER_DDL, logger

Fields inherited from interface io.debezium.relational.history.DatabaseHistory:
CONFIGURATION_FIELD_PREFIX_STRING, DDL_FILTER, NAME, SKIP_UNPARSEABLE_DDL_STATEMENTS, STORE_ONLY_CAPTURED_TABLES_DDL, STORE_ONLY_MONITORED_TABLES_DDL -
Constructor Summary
Constructors

KafkaDatabaseHistory() -
Method Summary
Modifier and Type / Method / Description:

private void checkTopicSettings(String topicName)
void configure(Configuration config, HistoryRecordComparator comparator, DatabaseHistoryListener listener, boolean useCatalogBeforeSchema)
    Configure this instance.
protected static String consumerConfigPropertyName(String kafkaConsumerPropertyName)
boolean exists()
    Determines if the database history entity exists; i.e. the storage must have been initialized and the history must have been populated.
private static Field.Validator forKafka(Field.Validator validator)
private short getDefaultTopicReplicationFactor(org.apache.kafka.clients.admin.AdminClient admin)
private Long getEndOffsetOfDbHistoryTopic(Long previousEndOffset, org.apache.kafka.clients.consumer.KafkaConsumer<String, String> historyConsumer)
private org.apache.kafka.clients.admin.Config getKafkaBrokerConfig(org.apache.kafka.clients.admin.AdminClient admin)
private static boolean hasNewTopicConstructorWithOptionals()
void initializeStorage()
    Called to initialize permanent storage of the history.
protected void recoverRecords(Consumer<HistoryRecord> records)
void start()
    Start the history.
void stop()
    Stop recording history and release any resources acquired since configure(Configuration, HistoryRecordComparator, DatabaseHistoryListener).
private void stopCheckTopicSettingsExecutor()
boolean storageExists()
    Determines if the underlying storage exists (e.g. a Kafka topic, file or similar).
protected void storeRecord(HistoryRecord record)
String toString()

Methods inherited from class io.debezium.relational.history.AbstractDatabaseHistory:
record, record, recover, skipUnparseableDdlStatements, storeOnlyCapturedTables

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface io.debezium.relational.history.DatabaseHistory:
recover, recover
-
Field Details
-
LOGGER
private static final org.slf4j.Logger LOGGER -
CLEANUP_POLICY_NAME
private static final String CLEANUP_POLICY_NAME
- See Also:
Constant Field Values
-
CLEANUP_POLICY_VALUE
private static final String CLEANUP_POLICY_VALUE
- See Also:
Constant Field Values
-
RETENTION_MS_NAME
private static final String RETENTION_MS_NAME
- See Also:
Constant Field Values
-
RETENTION_MS_MAX
private static final long RETENTION_MS_MAX
- See Also:
Constant Field Values
-
RETENTION_MS_MIN
private static final long RETENTION_MS_MIN
-
RETENTION_BYTES_NAME
private static final String RETENTION_BYTES_NAME
- See Also:
Constant Field Values
-
UNLIMITED_VALUE
private static final int UNLIMITED_VALUE
- See Also:
Constant Field Values
-
PARTITION_COUNT
private static final int PARTITION_COUNT
- See Also:
Constant Field Values
-
DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
private static final String DEFAULT_TOPIC_REPLICATION_FACTOR_PROP_NAME
The name of the broker property defining the default replication factor for topics without an explicit setting.
- See Also:
kafka.server.KafkaConfig.DefaultReplicationFactorProp
Constant Field Values
-
DEFAULT_TOPIC_REPLICATION_FACTOR
private static final short DEFAULT_TOPIC_REPLICATION_FACTOR
The default replication factor for the history topic, used in case the value couldn't be retrieved from the broker.
- See Also:
Constant Field Values
-
TOPIC
public static final Field TOPIC
-
BOOTSTRAP_SERVERS
public static final Field BOOTSTRAP_SERVERS
-
RECOVERY_POLL_INTERVAL_MS
public static final Field RECOVERY_POLL_INTERVAL_MS
-
RECOVERY_POLL_ATTEMPTS
public static final Field RECOVERY_POLL_ATTEMPTS
-
INTERNAL_CONNECTOR_CLASS
public static final Field INTERNAL_CONNECTOR_CLASS
-
INTERNAL_CONNECTOR_ID
public static final Field INTERNAL_CONNECTOR_ID
-
KAFKA_QUERY_TIMEOUT_MS
public static final Field KAFKA_QUERY_TIMEOUT_MS
-
ALL_FIELDS
public static Field.Set ALL_FIELDS
-
CONSUMER_PREFIX
private static final String CONSUMER_PREFIX
- See Also:
Constant Field Values
-
PRODUCER_PREFIX
private static final String PRODUCER_PREFIX
- See Also:
Constant Field Values
-
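CONSUMER_PREFIX and PRODUCER_PREFIX support Debezium's pass-through configuration: properties carrying these prefixes are forwarded, with the prefix stripped, to the underlying Kafka consumer and producer. A sketch with illustrative example values:

```properties
# Forwarded to the recovery consumer as security.protocol=SSL
database.history.consumer.security.protocol=SSL
# Forwarded to the history producer as max.block.ms=10000
database.history.producer.max.block.ms=10000
```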
PARTITION
private static final Integer PARTITION
The one and only partition of the history topic.
-
reader
private final DocumentReader reader
-
topicName
private String topicName
-
consumerConfig
private Configuration consumerConfig
-
producerConfig
private Configuration producerConfig
-
producer
private KafkaProducer<String, String> producer
-
maxRecoveryAttempts
private int maxRecoveryAttempts
-
pollInterval
private Duration pollInterval
-
checkTopicSettingsExecutor
private ExecutorService checkTopicSettingsExecutor
-
kafkaQueryTimeout
private Duration kafkaQueryTimeout
-
USE_KAFKA_24_NEW_TOPIC_CONSTRUCTOR
private static final boolean USE_KAFKA_24_NEW_TOPIC_CONSTRUCTOR
-
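Taken together, PARTITION_COUNT, CLEANUP_POLICY_VALUE, and UNLIMITED_VALUE imply the shape the history topic must have: a single partition, delete cleanup policy, and unlimited retention. A sketch of creating such a topic with the standard Kafka CLI (broker address, topic name, and replication factor are placeholder example values):

```shell
kafka-topics.sh --create \
  --bootstrap-server broker1:9092 \
  --topic dbhistory.inventory \
  --partitions 1 \
  --replication-factor 1 \
  --config cleanup.policy=delete \
  --config retention.ms=-1 \
  --config retention.bytes=-1
```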
-
Constructor Details
-
KafkaDatabaseHistory
public KafkaDatabaseHistory()
-
-
Method Details
-
hasNewTopicConstructorWithOptionals
private static boolean hasNewTopicConstructorWithOptionals() -
configure
public void configure(Configuration config, HistoryRecordComparator comparator, DatabaseHistoryListener listener, boolean useCatalogBeforeSchema)
Description copied from interface: DatabaseHistory
Configure this instance.
- Specified by:
configure in interface DatabaseHistory
- Overrides:
configure in class AbstractDatabaseHistory
- Parameters:
config - the configuration for this history store
comparator - the function that should be used to compare history records during recovery; may be null if the default comparator is to be used
listener - TODO
useCatalogBeforeSchema - true if the parsed string for a table contains only 2 items and the first should be used as the catalog and the second as the table name, or false if the first should be used as the schema and the second as the table name
-
start
public void start()
Description copied from interface: DatabaseHistory
Start the history.
- Specified by:
start in interface DatabaseHistory
- Overrides:
start in class AbstractDatabaseHistory
-
storeRecord
protected void storeRecord(HistoryRecord record) throws DatabaseHistoryException
- Specified by:
storeRecord in class AbstractDatabaseHistory
- Throws:
DatabaseHistoryException
-
recoverRecords
protected void recoverRecords(Consumer<HistoryRecord> records)
- Specified by:
recoverRecords in class AbstractDatabaseHistory
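Stripped of Kafka specifics, recovery reads the single history-topic partition from offset 0 up to an end offset captured beforehand and replays each record through the supplied consumer. A simplified, Kafka-free sketch (class and method bodies here are illustrative, not Debezium API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Kafka-free sketch of the recovery loop: the real implementation polls a
// KafkaConsumer on the one history-topic partition until it reaches the end
// offset determined at the start of recovery.
public class RecoverySketch {

    // topicRecords stands in for the messages stored on the history topic.
    static void recoverRecords(List<String> topicRecords, Consumer<String> handler) {
        long endOffset = topicRecords.size(); // end offset captured up front
        long offset = 0;
        while (offset < endOffset) {          // stop once the captured end is reached
            handler.accept(topicRecords.get((int) offset));
            offset++;
        }
    }

    public static void main(String[] args) {
        List<String> replayed = new ArrayList<>();
        recoverRecords(List.of("CREATE TABLE t (id INT)", "ALTER TABLE t ADD c INT"),
                replayed::add);
        System.out.println(replayed.size()); // prints 2
    }
}
```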
-
getEndOffsetOfDbHistoryTopic
private Long getEndOffsetOfDbHistoryTopic(Long previousEndOffset, org.apache.kafka.clients.consumer.KafkaConsumer<String, String> historyConsumer)
-
storageExists
public boolean storageExists()
Description copied from interface: DatabaseHistory
Determines if the underlying storage exists (e.g. a Kafka topic, file or similar). Note: storage may exist while history entities are not yet written; see DatabaseHistory.exists() -
exists
public boolean exists()
Description copied from interface: DatabaseHistory
Determines if the database history entity exists; i.e. the storage must have been initialized and the history must have been populated. -
checkTopicSettings
private void checkTopicSettings(String topicName)
-
stop
public void stop()
Description copied from interface: DatabaseHistory
Stop recording history and release any resources acquired since configure(Configuration, HistoryRecordComparator, DatabaseHistoryListener).
- Specified by:
stop in interface DatabaseHistory
- Overrides:
stop in class AbstractDatabaseHistory
-
stopCheckTopicSettingsExecutor
private void stopCheckTopicSettingsExecutor() -
toString
public String toString()
- Overrides:
toString in class Object
-
consumerConfigPropertyName
protected static String consumerConfigPropertyName(String kafkaConsumerPropertyName)
-
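The mapping this method performs can be pictured as simple prefixing: a plain Kafka consumer property name becomes the prefixed form used in the connector configuration. A minimal standalone sketch (the class is illustrative; the prefix string matches the documented `database.history.consumer.` pass-through namespace):

```java
// Illustrative sketch: map a plain Kafka consumer property name to the
// prefixed form used in the connector configuration.
public class HistoryPropNames {

    // Corresponds to CONSUMER_PREFIX: "database.history." + "consumer."
    static final String CONSUMER_PREFIX = "database.history.consumer.";

    static String consumerConfigPropertyName(String kafkaConsumerPropertyName) {
        return CONSUMER_PREFIX + kafkaConsumerPropertyName;
    }

    public static void main(String[] args) {
        // prints database.history.consumer.fetch.min.bytes
        System.out.println(consumerConfigPropertyName("fetch.min.bytes"));
    }
}
```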
initializeStorage
public void initializeStorage()
Description copied from interface: DatabaseHistory
Called to initialize permanent storage of the history.
- Specified by:
initializeStorage in interface DatabaseHistory
- Overrides:
initializeStorage in class AbstractDatabaseHistory
-
getDefaultTopicReplicationFactor
private short getDefaultTopicReplicationFactor(org.apache.kafka.clients.admin.AdminClient admin) throws Exception
- Throws:
Exception
-
getKafkaBrokerConfig
private org.apache.kafka.clients.admin.Config getKafkaBrokerConfig(org.apache.kafka.clients.admin.AdminClient admin) throws Exception
- Throws:
Exception
-
forKafka
private static Field.Validator forKafka(Field.Validator validator)
-