Class AbstractIncrementalSnapshotTest<T extends org.apache.kafka.connect.source.SourceConnector>
- java.lang.Object
  - io.debezium.embedded.AbstractConnectorTest
    - io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotTest<T>
- All Implemented Interfaces:
Testing
- Direct Known Subclasses:
AbstractIncrementalSnapshotWithSchemaChangesSupportTest
public abstract class AbstractIncrementalSnapshotTest<T extends org.apache.kafka.connect.source.SourceConnector> extends AbstractConnectorTest
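This base class exercises Debezium's incremental snapshots, which read a table in primary-key-ordered chunks so that change streaming can be interleaved between chunk reads. As a rough illustration of the chunking idea only (not Debezium's actual implementation, which additionally watermarks each chunk against the change stream), a minimal pure-Java sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: incremental snapshots read a table in
// primary-key-ordered chunks so streaming can run between chunk reads.
// This helper just splits a key range into chunk bounds; it is NOT
// Debezium's implementation.
public class ChunkBounds {

    /** Returns inclusive [low, high] bounds for each chunk of the key range [minPk, maxPk]. */
    public static List<long[]> split(long minPk, long maxPk, long chunkSize) {
        List<long[]> chunks = new ArrayList<>();
        for (long low = minPk; low <= maxPk; low += chunkSize) {
            long high = Math.min(low + chunkSize - 1, maxPk);
            chunks.add(new long[]{ low, high });
        }
        return chunks;
    }

    public static void main(String[] args) {
        // A 10-row table snapshotted in chunks of 4 keys: [1,4], [5,8], [9,10]
        for (long[] c : split(1, 10, 4)) {
            System.out.println(c[0] + ".." + c[1]);
        }
    }
}
```

Tests such as updatesLargeChunk() vary this chunk size to verify that snapshot and streaming records still merge correctly.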
-
Nested Class Summary
-
Nested classes/interfaces inherited from class io.debezium.embedded.AbstractConnectorTest
AbstractConnectorTest.SourceRecords
-
Nested classes/interfaces inherited from interface io.debezium.util.Testing
Testing.Debug, Testing.Files, Testing.InterruptableFunction, Testing.Network, Testing.Print, Testing.Timer
-
-
Field Summary
- protected static Path DB_HISTORY_PATH
- private static int MAXIMUM_NO_RECORDS_CONSUMES
- protected static int ROW_COUNT
Fields inherited from class io.debezium.embedded.AbstractConnectorTest
engine, logger, logTestName, OFFSET_STORE_PATH, pollTimeoutInMs, skipTestRule
-
-
Constructor Summary
- AbstractIncrementalSnapshotTest()
-
Method Summary
- protected String alterTableAddColumnStatement(String tableName)
- protected String alterTableDropColumnStatement(String tableName)
- protected abstract Configuration.Builder config()
- protected abstract Class<T> connectorClass()
- protected Map<Integer,Integer> consumeMixedWithIncrementalSnapshot(int recordCount)
- protected <V> Map<Integer,V> consumeMixedWithIncrementalSnapshot(int recordCount, Function<org.apache.kafka.connect.source.SourceRecord,V> valueConverter, Predicate<Map.Entry<Integer,V>> dataCompleted, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer)
- protected Map<Integer,Integer> consumeMixedWithIncrementalSnapshot(int recordCount, Predicate<Map.Entry<Integer,Integer>> dataCompleted, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer)
- protected <V> Map<Integer,V> consumeMixedWithIncrementalSnapshot(int recordCount, Predicate<Map.Entry<Integer,V>> dataCompleted, Function<org.apache.kafka.connect.data.Struct,Integer> idCalculator, Function<org.apache.kafka.connect.source.SourceRecord,V> valueConverter, String topicName, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer)
- protected Map<Integer,org.apache.kafka.connect.source.SourceRecord> consumeRecordsMixedWithIncrementalSnapshot(int recordCount)
- protected Map<Integer,org.apache.kafka.connect.source.SourceRecord> consumeRecordsMixedWithIncrementalSnapshot(int recordCount, Predicate<Map.Entry<Integer,org.apache.kafka.connect.source.SourceRecord>> dataCompleted, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer)
- protected abstract JdbcConnection databaseConnection()
- protected int getMaximumEnqueuedRecordCount(): Get the maximum number of messages that can be obtained from the connector and held in memory before they are consumed by test methods using AbstractConnectorTest.consumeRecord(), AbstractConnectorTest.consumeRecords(int), or AbstractConnectorTest.consumeRecords(int, Consumer).
- void inserts()
- void invalidTablesInTheList()
- protected String pkFieldName()
- protected void populate4PkTable(JdbcConnection connection, String tableName)
- protected void populateTable()
- protected void populateTable(JdbcConnection connection)
- protected void populateTable(JdbcConnection connection, String tableName)
- protected void sendAdHocSnapshotSignal()
- protected void sendAdHocSnapshotSignal(String... dataCollectionIds)
- protected abstract String signalTableName()
- void snapshotOnly()
- void snapshotOnlyWithRestart()
- void snapshotPreceededBySchemaChange()
- protected void startConnector()
- protected void startConnector(io.debezium.engine.DebeziumEngine.CompletionCallback callback)
- protected void startConnector(Function<Configuration.Builder,Configuration.Builder> custConfig)
- protected void startConnector(Function<Configuration.Builder,Configuration.Builder> custConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback)
- protected String tableDataCollectionId()
- protected abstract String tableName()
- protected abstract String topicName()
- void updates()
- void updatesLargeChunk()
- void updatesWithRestart()
- protected String valueFieldName()
- protected void waitForConnectorToStart()
Methods inherited from class io.debezium.embedded.AbstractConnectorTest
assertBeginTransaction, assertConfigurationErrors, assertConfigurationErrors, assertConfigurationErrors, assertConnectorIsRunning, assertConnectorNotRunning, assertDelete, assertEndTransaction, assertEngineIsRunning, assertHasNoSourceQuery, assertInsert, assertKey, assertNoConfigurationErrors, assertNoRecordsToConsume, assertOffset, assertOffset, assertOnlyTransactionRecordsToConsume, assertRecordTransactionMetadata, assertSchemaMatchesStruct, assertSchemaMatchesStruct, assertSourceQuery, assertTombstone, assertTombstone, assertUpdate, assertValueField, configValue, consumeAvailableRecords, consumeDmlRecordsByTopic, consumeDmlRecordsByTopic, consumeDmlRecordsByTopic, consumeRecord, consumeRecords, consumeRecords, consumeRecords, consumeRecordsByTopic, consumeRecordsByTopic, consumeRecordsByTopic, debug, getSnapshotMetricsObjectName, getStreamingMetricsObjectName, getStreamingMetricsObjectName, getStreamingNamespace, initializeConnectorTestFramework, isStreamingRunning, isStreamingRunning, isTransactionRecord, loggingCompletion, print, readLastCommittedOffset, readLastCommittedOffsets, setConsumeTimeout, skipAvroValidation, start, start, start, start, start, startAndConsumeTillEnd, startAndConsumeTillEnd, stopConnector, stopConnector, validate, waitForAvailableRecords, waitForConnectorShutdown, waitForSnapshotToBeCompleted, waitForStreamingRunning, waitForStreamingRunning, waitTimeForRecords, waitTimeForRecordsAfterNulls
-
Field Detail
-
ROW_COUNT
protected static final int ROW_COUNT
- See Also:
- Constant Field Values
-
MAXIMUM_NO_RECORDS_CONSUMES
private static final int MAXIMUM_NO_RECORDS_CONSUMES
- See Also:
- Constant Field Values
-
DB_HISTORY_PATH
protected static final Path DB_HISTORY_PATH
-
Method Detail
-
databaseConnection
protected abstract JdbcConnection databaseConnection()
-
topicName
protected abstract String topicName()
-
tableName
protected abstract String tableName()
-
signalTableName
protected abstract String signalTableName()
-
config
protected abstract Configuration.Builder config()
-
tableDataCollectionId
protected String tableDataCollectionId()
-
populateTable
protected void populateTable(JdbcConnection connection, String tableName) throws SQLException
- Throws:
SQLException
-
populateTable
protected void populateTable(JdbcConnection connection) throws SQLException
- Throws:
SQLException
-
populateTable
protected void populateTable() throws SQLException
- Throws:
SQLException
-
populate4PkTable
protected void populate4PkTable(JdbcConnection connection, String tableName) throws SQLException
- Throws:
SQLException
-
consumeMixedWithIncrementalSnapshot
protected Map<Integer,Integer> consumeMixedWithIncrementalSnapshot(int recordCount) throws InterruptedException
- Throws:
InterruptedException
-
consumeMixedWithIncrementalSnapshot
protected <V> Map<Integer,V> consumeMixedWithIncrementalSnapshot(int recordCount, Function<org.apache.kafka.connect.source.SourceRecord,V> valueConverter, Predicate<Map.Entry<Integer,V>> dataCompleted, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer) throws InterruptedException
- Throws:
InterruptedException
-
consumeMixedWithIncrementalSnapshot
protected <V> Map<Integer,V> consumeMixedWithIncrementalSnapshot(int recordCount, Predicate<Map.Entry<Integer,V>> dataCompleted, Function<org.apache.kafka.connect.data.Struct,Integer> idCalculator, Function<org.apache.kafka.connect.source.SourceRecord,V> valueConverter, String topicName, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer) throws InterruptedException
- Throws:
InterruptedException
-
consumeRecordsMixedWithIncrementalSnapshot
protected Map<Integer,org.apache.kafka.connect.source.SourceRecord> consumeRecordsMixedWithIncrementalSnapshot(int recordCount) throws InterruptedException
- Throws:
InterruptedException
-
consumeMixedWithIncrementalSnapshot
protected Map<Integer,Integer> consumeMixedWithIncrementalSnapshot(int recordCount, Predicate<Map.Entry<Integer,Integer>> dataCompleted, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer) throws InterruptedException
- Throws:
InterruptedException
-
consumeRecordsMixedWithIncrementalSnapshot
protected Map<Integer,org.apache.kafka.connect.source.SourceRecord> consumeRecordsMixedWithIncrementalSnapshot(int recordCount, Predicate<Map.Entry<Integer,org.apache.kafka.connect.source.SourceRecord>> dataCompleted, Consumer<List<org.apache.kafka.connect.source.SourceRecord>> recordConsumer) throws InterruptedException
- Throws:
InterruptedException
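The consumeMixedWithIncrementalSnapshot and consumeRecordsMixedWithIncrementalSnapshot variants key consumed records by id, so a row delivered both from a snapshot chunk and from the change stream counts once, and consumption stops once the expected number of distinct keys satisfies dataCompleted. A pure-Java sketch of that dedup loop (the Record type and simulated input are stand-ins for real SourceRecords, not the actual harness):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of the dedup loop behind consumeMixedWithIncrementalSnapshot:
// records are keyed by id, so a row seen both in a snapshot chunk and
// in the change stream counts once. Record stands in for SourceRecord.
public class DedupConsume {

    record Record(int id, String value) {}

    static <V> Map<Integer, V> consume(List<Record> incoming,
                                       int expectedCount,
                                       Function<Record, V> valueConverter,
                                       Predicate<Map.Entry<Integer, V>> dataCompleted) {
        Map<Integer, V> byId = new HashMap<>();
        for (Record r : incoming) {
            byId.put(r.id(), valueConverter.apply(r)); // last write wins per key
            if (byId.size() >= expectedCount
                    && byId.entrySet().stream().allMatch(dataCompleted)) {
                break; // all expected keys seen and every entry passes the check
            }
        }
        return byId;
    }

    public static void main(String[] args) {
        // id 1 arrives twice (snapshot + stream); only one entry survives
        List<Record> records = List.of(
                new Record(1, "a"), new Record(2, "b"), new Record(1, "a'"));
        Map<Integer, String> result =
                consume(records, 2, Record::value, e -> e.getValue() != null);
        System.out.println(result.size()); // 2
    }
}
```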
-
valueFieldName
protected String valueFieldName()
-
pkFieldName
protected String pkFieldName()
-
sendAdHocSnapshotSignal
protected void sendAdHocSnapshotSignal(String... dataCollectionIds) throws SQLException
- Throws:
SQLException
-
sendAdHocSnapshotSignal
protected void sendAdHocSnapshotSignal() throws SQLException
- Throws:
SQLException
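sendAdHocSnapshotSignal presumably triggers the snapshot by inserting a row into the table named by signalTableName(). A sketch of the kind of INSERT such a helper issues, based on Debezium's documented signal-table format (id, type, data columns, signal type execute-snapshot); the exact SQL the test harness emits may differ:

```java
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

// Builds the kind of INSERT an ad-hoc snapshot signal helper issues.
// Debezium's signal table has (id, type, data) columns; the payload for
// an incremental snapshot is type 'execute-snapshot' with a JSON list
// of data collections. Names here follow the documented defaults and
// are illustrative, not copied from the test harness.
public class AdHocSignal {

    static String buildSignalInsert(String signalTable, List<String> dataCollections) {
        String collections = dataCollections.stream()
                .map(c -> "\"" + c + "\"")
                .collect(Collectors.joining(", "));
        String data = "{\"data-collections\": [" + collections + "]}";
        return "INSERT INTO " + signalTable + " (id, type, data) VALUES ('"
                + UUID.randomUUID() + "', 'execute-snapshot', '" + data + "')";
    }

    public static void main(String[] args) {
        System.out.println(buildSignalInsert("debezium_signal", List.of("s1.a")));
    }
}
```

The no-arg overload would snapshot the default data collection (tableDataCollectionId()), while the varargs overload passes an explicit list.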
-
startConnector
protected void startConnector(io.debezium.engine.DebeziumEngine.CompletionCallback callback)
-
startConnector
protected void startConnector(Function<Configuration.Builder,Configuration.Builder> custConfig)
-
startConnector
protected void startConnector(Function<Configuration.Builder,Configuration.Builder> custConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback)
-
startConnector
protected void startConnector()
-
waitForConnectorToStart
protected void waitForConnectorToStart()
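waitForConnectorToStart() blocks until the engine reports that streaming is running; the underlying pattern is poll-until-true with a timeout. A generic pure-Java sketch of that pattern (the BooleanSupplier is a stand-in for the real streaming-metrics check):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

// Generic poll-until-true helper illustrating the wait pattern behind
// waitForConnectorToStart(): re-check a condition until it holds or a
// timeout elapses. The condition stands in for "is streaming running".
public class Await {

    static boolean until(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(pollMs);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger polls = new AtomicInteger();
        // condition "starts" succeeding on the third poll
        boolean started = until(() -> polls.incrementAndGet() >= 3, 1_000, 10);
        System.out.println(started); // true
    }
}
```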
-
snapshotPreceededBySchemaChange
@FixFor("DBZ-4272")
public void snapshotPreceededBySchemaChange() throws Exception
- Throws:
Exception
-
getMaximumEnqueuedRecordCount
protected int getMaximumEnqueuedRecordCount()
Description copied from class: AbstractConnectorTest
Get the maximum number of messages that can be obtained from the connector and held in memory before they are consumed by test methods using AbstractConnectorTest.consumeRecord(), AbstractConnectorTest.consumeRecords(int), or AbstractConnectorTest.consumeRecords(int, Consumer). By default this method returns 100.
- Overrides:
getMaximumEnqueuedRecordCount in class AbstractConnectorTest
- Returns:
- the maximum number of records that can be enqueued
-