Package io.debezium.embedded
Class AbstractConnectorTest
java.lang.Object
io.debezium.embedded.AbstractConnectorTest
- All Implemented Interfaces:
Testing
- Direct Known Subclasses:
AbstractCloudEventsConverterTest, AbstractEventRouterTest, AbstractNotificationsIT, AbstractReselectProcessorTest, AbstractSchemaHistoryTest, AbstractSnapshotTest, EmbeddedEngineTest
An abstract base class for unit testing
SourceConnector implementations using the Debezium EmbeddedEngine
with local file storage.
To use this abstract class, create a test class that extends it and add one or more test methods that
start the connector using your connector's custom configuration.
Your test methods can then call consumeRecords(int, Consumer) to consume the specified number
of records (the supplied function gives you a chance to do something with each record).
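The start/consume/assert workflow the description outlines can be mirrored in a self-contained sketch. Everything below is a stand-in, not Debezium's implementation: String replaces org.apache.kafka.connect.source.SourceRecord, and a plain BlockingQueue plays the role of the queue that the embedded engine fills.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Stand-in for the test framework's record queue; String stands in for SourceRecord.
class ConnectorTestSketch {
    private final BlockingQueue<String> consumedLines = new ArrayBlockingQueue<>(100);
    private final long pollTimeoutInMs = 50;

    // Called by the (simulated) engine thread as records arrive.
    void recordArrived(String record) {
        consumedLines.add(record);
    }

    // Mirrors consumeRecords(int): poll until the requested number of records is
    // seen, returning early once a poll times out with nothing available.
    List<String> consumeRecords(int numberOfRecords) {
        List<String> consumed = new ArrayList<>();
        try {
            while (consumed.size() < numberOfRecords) {
                String record = consumedLines.poll(pollTimeoutInMs, TimeUnit.MILLISECONDS);
                if (record == null) {
                    break; // timed out; the connector produced fewer records than expected
                }
                consumed.add(record);
            }
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        return consumed;
    }
}
```

A real test would call start(connectorClass, connectorConfig) instead of recordArrived, then consume and assert on the captured records before calling stopConnector().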
- Author:
- Randall Hauch
-
Nested Class Summary
Nested classes/interfaces inherited from interface io.debezium.util.Testing:
Testing.Debug, Testing.Files, Testing.InterruptableFunction, Testing.Network, Testing.Print, Testing.Timer -
Field Summary
Fields:
- protected BlockingQueue<org.apache.kafka.connect.source.SourceRecord> consumedLines
- protected TestingDebeziumEngine engine
- private ExecutorService executor
- protected final AtomicBoolean isEngineRunning
- private org.apache.kafka.connect.json.JsonConverter keyJsonConverter
- private org.apache.kafka.connect.json.JsonDeserializer keyJsonDeserializer
- private CountDownLatch latch
- protected final org.slf4j.Logger logger
- public org.junit.rules.TestRule logTestName
- protected static final Path OFFSET_STORE_PATH
- protected long pollTimeoutInMs
- private boolean skipAvroValidation
- public org.junit.rules.TestRule skipTestRule
- private static final String TEST_PROPERTY_PREFIX
- private org.apache.kafka.connect.json.JsonConverter valueJsonConverter
- private org.apache.kafka.connect.json.JsonDeserializer valueJsonDeserializer -
Constructor Summary
Constructors -
Method Summary
Methods:
- protected String assertBeginTransaction(org.apache.kafka.connect.source.SourceRecord record)
- protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field)
- protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int numErrors)
- protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int minErrorsInclusive, int maxErrorsInclusive)
- protected void assertConnectorIsRunning() : Assert that the connector is currently running.
- protected void assertConnectorNotRunning() : Assert that the connector is NOT currently running.
- protected void assertDelete(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertEndTransaction(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedEventCount, Map<String, Number> expectedPerTableCount)
- protected void … : Assert that there was no exception in the engine that would cause its termination.
- protected void assertHasNoSourceQuery(org.apache.kafka.connect.source.SourceRecord record)
- protected void assertInsert(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertKey(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertNoConfigurationErrors(org.apache.kafka.common.config.Config config, Field... fields)
- protected void assertNoRecordsToConsume() : Assert that there are no records to consume.
- protected void assertOffset(org.apache.kafka.connect.source.SourceRecord record, String offsetField, Object expectedValue)
- protected void assertOffset(org.apache.kafka.connect.source.SourceRecord record, Map<String, ?> expectedOffset)
- protected void assertOnlyTransactionRecordsToConsume() : Assert that there are only transaction topic records to be consumed.
- protected void assertRecordTransactionMetadata(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedTotalOrder, long expectedCollectionOrder)
- private void assertSameValue(Object actual, Object expected)
- protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.SchemaAndValue value) : Assert that the supplied Struct is valid and its schema matches the supplied schema.
- protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.Struct struct, org.apache.kafka.connect.data.Schema schema) : Assert that the supplied Struct is valid and its schema matches the supplied schema.
- protected void assertSourceQuery(org.apache.kafka.connect.source.SourceRecord record, String query)
- protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record)
- protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertUpdate(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertValueField(org.apache.kafka.connect.source.SourceRecord record, String fieldPath, Object expectedValue)
- protected org.apache.kafka.common.config.ConfigValue configValue(org.apache.kafka.common.config.Config config, String fieldName)
- protected int consumeAvailableRecords(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) : Try to consume all of the messages that have already been returned by the connector.
- protected AbstractConnectorTest.SourceRecords consumeAvailableRecordsByTopic() : Try to consume and capture all available records from the connector.
- protected AbstractConnectorTest.SourceRecords consumeDmlRecordsByTopic(int numDmlRecords) : Try to consume and capture exactly the specified number of DML records from the connector.
- protected int consumeDmlRecordsByTopic(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) : Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed.
- protected int consumeDmlRecordsByTopic(int numberDmlRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) : Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed.
- protected org.apache.kafka.connect.source.SourceRecord consumeRecord() : Consume a single record from the connector.
- protected int consumeRecords(int numberOfRecords) : Try to consume the specified number of records from the connector, and return the actual number of records that were consumed.
- protected int consumeRecords(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) : Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed.
- protected int consumeRecords(int numberOfRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) : Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed.
- protected AbstractConnectorTest.SourceRecords consumeRecordsButSkipUntil(int recordsToRead, BiPredicate<org.apache.kafka.connect.data.Struct, org.apache.kafka.connect.data.Struct> tripCondition) : Try to consume and capture exactly the specified number of records from the connector, skipping initial records until the condition is satisfied.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords) : Try to consume and capture exactly the specified number of records from the connector.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, boolean assertRecords) : Try to consume and capture exactly the specified number of records from the connector.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, int breakAfterNulls) : Try to consume and capture exactly the specified number of records from the connector.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopicUntil(BiPredicate<Integer, org.apache.kafka.connect.source.SourceRecord> condition) : Try to consume and capture records until a condition is satisfied.
- protected int consumeRecordsUntil(BiPredicate<Integer, org.apache.kafka.connect.source.SourceRecord> condition, BiFunction<Integer, org.apache.kafka.connect.source.SourceRecord, String> logMessage, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) : Try to consume records from the connector until a condition is satisfied.
- protected void debug(org.apache.kafka.connect.source.SourceRecord record)
- protected Consumer<org.apache.kafka.connect.source.SourceRecord> getConsumer(Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop)
- protected int getMaximumEnqueuedRecordCount() : Get the maximum number of messages that can be obtained from the connector and held in memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer).
- static ObjectName getSnapshotMetricsObjectName(String connector, String server)
- static ObjectName getSnapshotMetricsObjectName(String connector, String server, String task, String database)
- static ObjectName …
- static ObjectName getStreamingMetricsObjectName(String connector, String server)
- static ObjectName getStreamingMetricsObjectName(String connector, String server, String context)
- static ObjectName getStreamingMetricsObjectName(String connector, String server, String context, String task)
- static ObjectName …
- protected static String …
- public final void initializeConnectorTestFramework()
- static boolean isStreamingRunning(String connector, String server)
- static boolean isStreamingRunning(String connector, String server, String contextName)
- static boolean isStreamingRunning(String connector, String server, String contextName, String task)
- static boolean …
- protected boolean isTransactionRecord(org.apache.kafka.connect.source.SourceRecord record)
- protected io.debezium.engine.DebeziumEngine.CompletionCallback loggingCompletion() : Create a DebeziumEngine.CompletionCallback that logs when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error.
- protected void print(org.apache.kafka.connect.source.SourceRecord record)
- readLastCommittedOffset(Configuration config, Map<String, T> partition) : Utility to read the last committed offset for the specified partition.
- readLastCommittedOffsets(Configuration config, Collection<Map<String, T>> partitions) : Utility to read the last committed offsets for the specified partitions.
- protected void setConsumeTimeout(long timeout, TimeUnit unit) : Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null.
- protected void skipAvroValidation() : Disable record validation using the Avro converter.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig) : Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.ChangeConsumer changeConsumer) : Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback) : Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord) : Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop) : Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop, io.debezium.engine.DebeziumEngine.ChangeConsumer changeConsumer) : Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord) : Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig) : Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord) : Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- public final void stopConnector() : Stop the connector and block until the connector has completely stopped.
- void stopConnector(BooleanConsumer callback) : Stop the connector, and return whether the connector was successfully stopped.
- protected void validate(org.apache.kafka.connect.source.SourceRecord record) : Validate that a SourceRecord's key and value can each be converted to a byte[] and then back to an equivalent SourceRecord.
- protected boolean waitForAvailableRecords(long timeout, TimeUnit unit) : Wait for a maximum amount of time until the first record is available.
- static void waitForConnectorShutdown(String connector, String server)
- private static void …
- static void waitForSnapshotToBeCompleted(String connector, String server)
- static void waitForSnapshotToBeCompleted(String connector, String server, String task, String database)
- static void waitForSnapshotWithCustomMetricsToBeCompleted(String connector, String server, Map<String, String> props)
- static void waitForStreamingRunning(String connector, String server)
- static void waitForStreamingRunning(String connector, String server, String contextName)
- static void waitForStreamingRunning(String connector, String server, String contextName, String task)
- static void …
- static int …
- static int …
-
Field Details
-
skipTestRule
public org.junit.rules.TestRule skipTestRule -
OFFSET_STORE_PATH
protected static final Path OFFSET_STORE_PATH -
TEST_PROPERTY_PREFIX
private static final String TEST_PROPERTY_PREFIX
-
executor
private ExecutorService executor -
engine
protected TestingDebeziumEngine engine -
consumedLines
protected BlockingQueue<org.apache.kafka.connect.source.SourceRecord> consumedLines -
pollTimeoutInMs
protected long pollTimeoutInMs -
logger
protected final org.slf4j.Logger logger -
isEngineRunning
protected final AtomicBoolean isEngineRunning -
latch
private CountDownLatch latch -
keyJsonConverter
private org.apache.kafka.connect.json.JsonConverter keyJsonConverter -
valueJsonConverter
private org.apache.kafka.connect.json.JsonConverter valueJsonConverter -
keyJsonDeserializer
private org.apache.kafka.connect.json.JsonDeserializer keyJsonDeserializer -
valueJsonDeserializer
private org.apache.kafka.connect.json.JsonDeserializer valueJsonDeserializer -
skipAvroValidation
private boolean skipAvroValidation -
logTestName
public org.junit.rules.TestRule logTestName
-
-
Constructor Details
-
AbstractConnectorTest
public AbstractConnectorTest()
-
-
Method Details
-
initializeConnectorTestFramework
public final void initializeConnectorTestFramework() -
stopConnector
public final void stopConnector()
Stop the connector and block until the connector has completely stopped. -
stopConnector
void stopConnector(BooleanConsumer callback)
Stop the connector, and return whether the connector was successfully stopped.- Parameters:
callback - the function that should be called with whether the connector was successfully stopped; may be null
-
getMaximumEnqueuedRecordCount
protected int getMaximumEnqueuedRecordCount()
Get the maximum number of messages that can be obtained from the connector and held in memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer). By default this method returns 100.- Returns:
- the maximum number of records that can be enqueued
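In practice this cap behaves like the capacity of a bounded in-memory queue: once it is reached, the engine cannot hand over further records until the test consumes some. A stand-in sketch (the real queue type is an implementation detail; a java.util.concurrent bounded queue is assumed here):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the enqueued-record cap: after `cap` records are waiting,
// no further record fits until the test consumes from the queue.
class EnqueueCapSketch {
    static boolean canEnqueueBeyondCap(int cap) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(cap);
        for (int i = 0; i < cap; i++) {
            queue.offer("record-" + i);
        }
        return queue.offer("one-too-many"); // false once the cap is reached
    }
}
```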
-
loggingCompletion
protected io.debezium.engine.DebeziumEngine.CompletionCallback loggingCompletion()
Create a DebeziumEngine.CompletionCallback that logs when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error.- Returns:
- the logging
DebeziumEngine.CompletionCallback
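A completion callback of this shape can be sketched with a stand-in interface; the handle(boolean, String, Throwable) signature below is an assumption about DebeziumEngine.CompletionCallback, and the message text is illustrative:

```java
// Stand-in for DebeziumEngine.CompletionCallback; the handle(success, message,
// error) shape is assumed, not copied from Debezium.
class CompletionSketch {
    interface CompletionCallback {
        void handle(boolean success, String message, Throwable error);
    }

    static String lastOutcome;

    // A logging-style implementation: record the outcome when the engine finishes.
    static CompletionCallback loggingCompletion() {
        return (success, message, error) -> lastOutcome =
                (success ? "Connector completed: " : "Connector failed: ") + message;
    }
}
```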
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig) Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
-
startAndConsumeTillEnd
protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig) Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. Records arriving after the connector stops are not ignored.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord) Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. The connector will stop immediately when the supplied predicate returns true.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
-
startAndConsumeTillEnd
protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord) Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. Records arriving after the connector stops are not ignored.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback) Start the connector using the supplied connector configuration.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord) Start the connector using the supplied connector configuration.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.ChangeConsumer changeConsumer) Start the connector using the supplied connector configuration.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
changeConsumer - DebeziumEngine.ChangeConsumer invoked when a record arrives and is stored in the queue
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop) Start the connector using the supplied connector configuration.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
recordArrivedListener - function invoked when a record arrives and is stored in the queue
ignoreRecordsAfterStop - true if records arriving after stop should be ignored
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop, io.debezium.engine.DebeziumEngine.ChangeConsumer changeConsumer) Start the connector using the supplied connector configuration.- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
recordArrivedListener - function invoked when a record arrives and is stored in the queue
ignoreRecordsAfterStop - true if records arriving after stop should be ignored
changeConsumer - DebeziumEngine.ChangeConsumer invoked when a record arrives and is stored in the queue
-
getConsumer
-
setConsumeTimeout
protected void setConsumeTimeout(long timeout, TimeUnit unit)
Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null.- Parameters:
timeout - the timeout; must be positive
unit - the time unit; may not be null
-
consumeRecord
protected org.apache.kafka.connect.source.SourceRecord consumeRecord() throws InterruptedException
Consume a single record from the connector.- Returns:
- the next record that was returned from the connector, or null if no such record has been produced by the connector
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
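The null-on-timeout contract of consumeRecord(), together with the timeout set via setConsumeTimeout, can be reproduced with a plain queue. A self-contained stand-in (String replaces SourceRecord; the interrupt handling is a simplification for the sketch):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Stand-in for the consumeRecord() contract: block up to the consume timeout,
// then return null if nothing arrived.
class ConsumeRecordSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
    private final long timeoutMs = 20; // adjustable, as with setConsumeTimeout(...)

    void offer(String record) {
        queue.add(record);
    }

    String consumeRecord() {
        try {
            return queue.poll(timeoutMs, TimeUnit.MILLISECONDS); // null when the timeout elapses
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status
            return null;
        }
    }
}
```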
-
consumeRecords
protected int consumeRecords(int numberOfRecords) throws InterruptedException
Try to consume the specified number of records from the connector, and return the actual number of records that were consumed. Use this method when your test does not care what the records might contain.- Parameters:
numberOfRecords - the number of records that should be consumed- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecords
protected int consumeRecords(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. For slower connectors it is possible to receive no records multiple times in a row before the waiting is terminated.- Parameters:
numberOfRecords - the number of records that should be consumed
breakAfterNulls - the number of allowed runs when no records are received
recordConsumer - the function that should be called with each consumed record
assertRecords - true if record serialization should be verified- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
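The breakAfterNulls behaviour can be sketched with a self-contained loop; here a list of poll results stands in for successive queue polls (null meaning nothing arrived in time), and resetting the null counter when a record arrives is an assumption of this sketch:

```java
import java.util.Iterator;
import java.util.List;

// Sketch of the breakAfterNulls contract: each empty poll counts as one "null
// run"; consumption gives up after breakAfterNulls empty runs in a row.
class BreakAfterNullsSketch {
    static int consume(List<String> polls, int numberOfRecords, int breakAfterNulls) {
        int consumed = 0;
        int nullRuns = 0;
        Iterator<String> it = polls.iterator();
        while (consumed < numberOfRecords && it.hasNext()) {
            String record = it.next();
            if (record == null) {
                if (++nullRuns >= breakAfterNulls) {
                    break; // give up: too many empty polls in a row
                }
            }
            else {
                nullRuns = 0; // a record arrived; reset the counter (sketch assumption)
                consumed++;
            }
        }
        return consumed;
    }
}
```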
-
consumeRecordsUntil
protected int consumeRecordsUntil(BiPredicate<Integer, org.apache.kafka.connect.source.SourceRecord> condition, BiFunction<Integer, org.apache.kafka.connect.source.SourceRecord, String> logMessage, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) throws InterruptedException
Try to consume records from the connector until a condition is satisfied. For slower connectors it is possible to receive no records multiple times in a row before the waiting is terminated.- Parameters:
condition - the condition that decides that consuming has finished
logMessage - the diagnostic message printed for each consumed record
breakAfterNulls - the number of allowed runs when no records are received
recordConsumer - the function that should be called with each consumed record
assertRecords - true if record serialization should be verified- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
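The condition and logMessage arguments are plain BiPredicate/BiFunction values over the running record count and the record. An example of their shape, with the record type simplified to String and the "DONE" marker purely illustrative:

```java
import java.util.function.BiFunction;
import java.util.function.BiPredicate;

// Example condition and log-message functions for a consumeRecordsUntil-style
// call: stop once three records were seen or a record mentions "DONE".
class UntilConditionSketch {
    static final BiPredicate<Integer, String> CONDITION =
            (recordsConsumed, record) -> recordsConsumed >= 3 || record.contains("DONE");

    static final BiFunction<Integer, String, String> LOG_MESSAGE =
            (recordsConsumed, record) -> "Consumed record " + recordsConsumed + ": " + record;
}
```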
-
consumeRecords
protected int consumeRecords(int numberOfRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) throws InterruptedException Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. For slower connectors it is possible to receive no records at most 3 times in a row before the waiting is terminated.- Parameters:
numberOfRecords - the number of records that should be consumed
recordConsumer - the function that should be called with each consumed record- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, int breakAfterNulls) throws InterruptedException Try to consume and capture exactly the specified number of records from the connector.- Parameters:
numRecords - the number of records that should be consumed
breakAfterNulls - how many times to wait when no records arrive from the connector- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeAvailableRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeAvailableRecordsByTopic() throws InterruptedException
Try to consume and capture all available records from the connector.- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords) throws InterruptedException Try to consume and capture exactly the specified number of records from the connector.- Parameters:
numRecords - the number of records that should be consumed- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsButSkipUntil
protected AbstractConnectorTest.SourceRecords consumeRecordsButSkipUntil(int recordsToRead, BiPredicate<org.apache.kafka.connect.data.Struct, org.apache.kafka.connect.data.Struct> tripCondition) throws InterruptedException
Try to consume and capture exactly the specified number of records from the connector. The initial records are skipped until the condition is satisfied. This is most useful in corner cases when there can be duplicate records around the switch from snapshot to streaming.- Parameters:
recordsToRead - the number of records that should be consumed
tripCondition - the condition to satisfy to stop skipping records- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
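The skip-until behaviour can be sketched self-contained. The real method passes the record's key and value Structs to a BiPredicate; the stand-in below uses a Predicate over a String record instead, and the "snap"/"stream" prefixes are purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of consumeRecordsButSkipUntil: drop leading records until the trip
// condition fires, then capture the next recordsToRead records.
class SkipUntilSketch {
    static List<String> readSkippingUntil(List<String> records, int recordsToRead,
                                          Predicate<String> tripCondition) {
        List<String> captured = new ArrayList<>();
        boolean tripped = false;
        for (String record : records) {
            if (!tripped && !tripCondition.test(record)) {
                continue; // still before the snapshot-to-streaming switch: skip
            }
            tripped = true; // condition fired; keep everything from here on
            if (captured.size() < recordsToRead) {
                captured.add(record);
            }
        }
        return captured;
    }
}
```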
-
consumeRecordsByTopicUntil
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopicUntil(BiPredicate<Integer, org.apache.kafka.connect.source.SourceRecord> condition) throws InterruptedException
Try to consume and capture records until a condition is satisfied.- Parameters:
condition - the condition that must be satisfied to terminate reading- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, boolean assertRecords) throws InterruptedException Try to consume and capture exactly the specified number of records from the connector.- Parameters:
numRecords - the number of records that should be consumed
assertRecords - true if record serialization should be verified- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeDmlRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeDmlRecordsByTopic(int numDmlRecords) throws InterruptedException
Try to consume and capture exactly the specified number of DML records from the connector. While transaction metadata topic records are captured by this method, numDmlRecords should not include the expected number of records emitted to the transaction topic.- Parameters:
numDmlRecords - the number of DML records that should be consumed- Returns:
- the collector to which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
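The counting rule (transaction-topic records are captured but excluded from numDmlRecords) amounts to filtering by topic name. A stand-in sketch; the topic names below, including "server1.transaction", are hypothetical examples, not Debezium defaults:

```java
import java.util.List;

// Sketch of the DML-counting rule: only records on non-transaction topics
// count toward the requested numDmlRecords.
class DmlCountSketch {
    static long countDml(List<String> recordTopics, String transactionTopic) {
        return recordTopics.stream()
                .filter(topic -> !topic.equals(transactionTopic)) // skip transaction metadata
                .count();
    }
}
```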
-
consumeDmlRecordsByTopic
protected int consumeDmlRecordsByTopic(int numberDmlRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed. For slower connectors it is possible to receive no records at most 3 times in a row before the waiting is terminated.- Parameters:
numberDmlRecords - the number of DML records that should be consumed
recordConsumer - the function that should be called for each consumed record- Returns:
- the actual number of Dml records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeDmlRecordsByTopic
protected int consumeDmlRecordsByTopic(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) throws InterruptedException Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of Dml records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the waiting is terminated. Additionally, while this method will consume and append transaction metadata topic records to the consumer, the returned value only considers Dml records.- Parameters:
numberOfRecords- the number of Dml records that should be consumed
breakAfterNulls- the number of allowed runs in which no records are consumed
recordConsumer- the function that should be called for each consumed record
assertRecords- true if record serialization should be verified- Returns:
- the actual number of Dml records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
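The back-off behavior described above amounts to a poll loop with a budget of consecutive empty polls. A simplified stand-in using a plain BlockingQueue (the 100 ms per-attempt wait is an illustrative value, not this class's actual timeout):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class NullBudgetConsumer {
    /**
     * Poll a queue until the requested number of records has been consumed,
     * giving up after breakAfterNulls consecutive empty polls -- the same
     * back-off contract described for consumeDmlRecordsByTopic.
     */
    public static <T> int consume(BlockingQueue<T> queue, int numberOfRecords,
                                  int breakAfterNulls, Consumer<T> recordConsumer)
            throws InterruptedException {
        int consumed = 0;
        int consecutiveNulls = 0;
        while (consumed < numberOfRecords && consecutiveNulls < breakAfterNulls) {
            T record = queue.poll(100, TimeUnit.MILLISECONDS); // short wait per attempt
            if (record == null) {
                consecutiveNulls++;      // one more empty run
            } else {
                consecutiveNulls = 0;    // reset the budget on success
                recordConsumer.accept(record);
                consumed++;
            }
        }
        return consumed; // may be less than numberOfRecords if the budget ran out
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("r1");
        queue.add("r2");
        int n = consume(queue, 5, 3, r -> { });
        System.out.println(n); // prints 2: only two records were available
    }
}
```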
-
isTransactionRecord
protected boolean isTransactionRecord(org.apache.kafka.connect.source.SourceRecord record) -
consumeAvailableRecords
protected int consumeAvailableRecords(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) Try to consume all of the messages that have already been returned by the connector.- Parameters:
recordConsumer- the function that should be called with each consumed record- Returns:
- the number of records that were consumed
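The consumeAvailableRecords contract boils down to draining whatever is already buffered, without blocking for more. A minimal stand-in using a plain queue in place of the connector's record buffer:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

public class DrainAvailable {
    /**
     * Consume whatever is already buffered without blocking: take records
     * until the queue is empty, invoking the consumer for each.
     */
    public static <T> int consumeAvailable(Queue<T> consumedRecords, Consumer<T> recordConsumer) {
        int count = 0;
        T record;
        while ((record = consumedRecords.poll()) != null) { // non-blocking poll
            if (recordConsumer != null) {
                recordConsumer.accept(record);
            }
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        Queue<String> q = new ConcurrentLinkedQueue<>();
        q.add("a");
        q.add("b");
        System.out.println(consumeAvailable(q, r -> { })); // prints 2
    }
}
```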
-
waitForAvailableRecords
Wait for a maximum amount of time until the first record is available.- Parameters:
timeout- the maximum amount of time to wait; must not be negative
unit- the time unit for timeout- Returns:
true if records are available, or false if the timeout occurred and no records are available
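The timeout semantics can be sketched as a simple deadline loop that checks for a visible record without removing it (the 10 ms polling interval is an illustrative choice, not this class's implementation):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WaitForRecords {
    /**
     * Wait up to the given timeout for the queue to become non-empty,
     * returning true as soon as a record is visible (the record itself
     * is not removed), or false if the timeout elapses first.
     */
    public static boolean waitForAvailableRecords(BlockingQueue<?> queue, long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (queue.isEmpty()) {
            if (System.nanoTime() >= deadline) {
                return false; // timed out with no records
            }
            Thread.sleep(10); // simple polling back-off
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("record");
        System.out.println(waitForAvailableRecords(queue, 1, TimeUnit.SECONDS));        // prints true
        queue.clear();
        System.out.println(waitForAvailableRecords(queue, 50, TimeUnit.MILLISECONDS)); // prints false
    }
}
```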
-
skipAvroValidation
protected void skipAvroValidation() Disable record validation using the Avro converter. -
assertConnectorIsRunning
protected void assertConnectorIsRunning() Assert that the connector is currently running. -
assertConnectorNotRunning
protected void assertConnectorNotRunning() Assert that the connector is NOT currently running. -
assertNoRecordsToConsume
protected void assertNoRecordsToConsume() Assert that there are no records to consume. -
assertOnlyTransactionRecordsToConsume
protected void assertOnlyTransactionRecordsToConsume() Assert that there are only transaction topic records to be consumed. -
assertKey
protected void assertKey(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk) -
assertInsert
protected void assertInsert(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk) -
assertUpdate
protected void assertUpdate(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk) -
assertDelete
protected void assertDelete(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk) -
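The assert helpers above check a record's key and its change images. A rough sketch of their intent, using plain Maps as stand-ins for Kafka Connect key/value Structs (the "before"/"after" field names follow Debezium's change-event envelope; the ChangeRecord type is a hypothetical stand-in, not this class's API):

```java
import java.util.Map;

public class RecordAsserts {
    // Minimal stand-in for a change record: a key struct and a value struct.
    public record ChangeRecord(Map<String, Object> key, Map<String, Object> value) {}

    /** Check that the record key carries the expected primary-key value. */
    public static void assertKey(ChangeRecord record, String pkField, int pk) {
        Object actual = record.key().get(pkField);
        if (!Integer.valueOf(pk).equals(actual)) {
            throw new AssertionError("expected " + pkField + "=" + pk + " but was " + actual);
        }
    }

    /** An insert carries an "after" image and the matching key. */
    public static void assertInsert(ChangeRecord record, String pkField, int pk) {
        assertKey(record, pkField, pk);
        if (record.value().get("after") == null) {
            throw new AssertionError("insert must carry an 'after' image");
        }
    }

    /** A delete carries the matching key and no "after" image. */
    public static void assertDelete(ChangeRecord record, String pkField, int pk) {
        assertKey(record, pkField, pk);
        if (record.value().get("after") != null) {
            throw new AssertionError("delete must not carry an 'after' image");
        }
    }

    public static void main(String[] args) {
        ChangeRecord insert = new ChangeRecord(Map.of("id", 1), Map.of("after", Map.of("id", 1)));
        assertInsert(insert, "id", 1); // passes silently
        System.out.println("ok");
    }
}
```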
assertSourceQuery
-
assertHasNoSourceQuery
protected void assertHasNoSourceQuery(org.apache.kafka.connect.source.SourceRecord record) -
assertTombstone
protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk) -
assertTombstone
protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record) -
assertOffset
-
assertOffset
-
assertValueField
-
assertSameValue
-
assertSchemaMatchesStruct
protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.SchemaAndValue value) Assert that the supplied Struct is valid and its schema matches that of the supplied schema.- Parameters:
value- the value with a schema; may not be null
-
assertSchemaMatchesStruct
protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.Struct struct, org.apache.kafka.connect.data.Schema schema) Assert that the supplied Struct is valid and its schema matches that of the supplied schema.- Parameters:
struct- the Struct to validate; may not be null
schema- the expected schema of the Struct; may not be null
-
assertEngineIsRunning
protected void assertEngineIsRunning() Assert that there was no exception in the engine that would cause its termination. -
validate
protected void validate(org.apache.kafka.connect.source.SourceRecord record) Validate that a SourceRecord's key and value can each be converted to a byte[] and then back to an equivalent SourceRecord.- Parameters:
record- the record to validate; may not be null
-
print
protected void print(org.apache.kafka.connect.source.SourceRecord record) -
debug
protected void debug(org.apache.kafka.connect.source.SourceRecord record) -
assertConfigurationErrors
protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int numErrors) -
assertConfigurationErrors
protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int minErrorsInclusive, int maxErrorsInclusive) -
assertConfigurationErrors
-
assertNoConfigurationErrors
protected void assertNoConfigurationErrors(org.apache.kafka.common.config.Config config, Field... fields) -
configValue
protected org.apache.kafka.common.config.ConfigValue configValue(org.apache.kafka.common.config.Config config, String fieldName) -
readLastCommittedOffset
protected <T> Map<String,Object> readLastCommittedOffset(Configuration config, Map<String, T> partition) Utility to read the last committed offset for the specified partition.- Parameters:
config- the configuration of the engine used to persist the offsets
partition- the partition- Returns:
- the offsets for the given partition; never null but possibly empty
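This utility deserializes the engine's persisted offset store. As a rough stand-in, the same shape can be shown with a Properties-backed store; the key=value format here is illustrative, not the engine's actual serialization:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class OffsetReader {
    /**
     * Read a persisted offset map from a simple key=value store.
     * The real helper reads the engine's file offset store; a Properties
     * source here just stands in for that storage format.
     */
    public static Map<String, Object> readLastCommittedOffset(Reader store) throws IOException {
        Properties props = new Properties();
        props.load(store);
        Map<String, Object> offsets = new HashMap<>();
        for (String name : props.stringPropertyNames()) {
            offsets.put(name, props.getProperty(name));
        }
        return offsets; // never null, possibly empty
    }

    public static void main(String[] args) throws IOException {
        Map<String, Object> offsets =
                readLastCommittedOffset(new StringReader("lsn=12345\nts_ms=1700000000"));
        System.out.println(offsets.get("lsn")); // prints 12345
    }
}
```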
-
readLastCommittedOffsets
protected <T> Map<Map<String, T>, Map<String, Object>> readLastCommittedOffsets(Configuration config, Collection<Map<String, T>> partitions) Utility to read the last committed offsets for the specified partitions.- Parameters:
config- the configuration of the engine used to persist the offsets
partitions- the partitions- Returns:
- the map of partitions to offsets; never null but possibly empty
-
assertBeginTransaction
protected String assertBeginTransaction(org.apache.kafka.connect.source.SourceRecord record) -
assertEndTransaction
protected void assertEndTransaction(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedEventCount, Map<String, Number> expectedPerTableCount) -
assertRecordTransactionMetadata
protected void assertRecordTransactionMetadata(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedTotalOrder, long expectedCollectionOrder) -
waitTimeForRecords
public static int waitTimeForRecords() -
waitTimeForRecordsAfterNulls
public static int waitTimeForRecordsAfterNulls() -
waitForSnapshotToBeCompleted
public static void waitForSnapshotToBeCompleted(String connector, String server) throws InterruptedException - Throws:
InterruptedException
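waitForSnapshotToBeCompleted polls the connector's snapshot metrics MBean. A self-contained sketch that registers a stand-in MBean and polls its SnapshotCompleted attribute; the object name and attribute name mirror Debezium's connector-metrics naming, but the "debezium.postgres" domain and server name are illustrative values:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class SnapshotWait {
    // Stand-in metrics MBean exposing the attribute the real helper polls.
    public interface SnapshotMetricsMBean {
        boolean getSnapshotCompleted();
    }

    public static class SnapshotMetrics implements SnapshotMetricsMBean {
        private volatile boolean completed;
        public boolean getSnapshotCompleted() { return completed; }
        public void complete() { completed = true; }
    }

    /** Poll the SnapshotCompleted attribute until true or the deadline passes. */
    public static boolean waitForSnapshotToBeCompleted(ObjectName name, long timeoutMs) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if ((Boolean) server.getAttribute(name, "SnapshotCompleted")) {
                return true;
            }
            Thread.sleep(25);
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = new ObjectName(
                "debezium.postgres:type=connector-metrics,context=snapshot,server=testserver");
        SnapshotMetrics metrics = new SnapshotMetrics();
        ManagementFactory.getPlatformMBeanServer()
                .registerMBean(new StandardMBean(metrics, SnapshotMetricsMBean.class), name);
        metrics.complete();
        System.out.println(waitForSnapshotToBeCompleted(name, 1000)); // prints true
    }
}
```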
-
waitForSnapshotToBeCompleted
public static void waitForSnapshotToBeCompleted(String connector, String server, String task, String database) throws InterruptedException - Throws:
InterruptedException
-
waitForSnapshotWithCustomMetricsToBeCompleted
public static void waitForSnapshotWithCustomMetricsToBeCompleted(String connector, String server, Map<String, String> props) throws InterruptedException - Throws:
InterruptedException
-
waitForSnapshotEvent
private static void waitForSnapshotEvent(String connector, String server, String event, String task, String database) throws InterruptedException - Throws:
InterruptedException
-
waitForStreamingRunning
public static void waitForStreamingRunning(String connector, String server) throws InterruptedException - Throws:
InterruptedException
-
waitForStreamingRunning
-
waitForStreamingRunning
-
waitForStreamingWithCustomMetricsToStart
-
waitForConnectorShutdown
-
isStreamingRunning
-
isStreamingRunning
-
isStreamingRunning
-
isStreamingRunning
-
getSnapshotMetricsObjectName
public static ObjectName getSnapshotMetricsObjectName(String connector, String server) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
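The metrics object names follow the debezium.&lt;connector&gt;:type=connector-metrics,... pattern. A sketch of how such a name can be built and inspected; treat the exact key set as illustrative rather than the method's guaranteed output:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class MetricsNames {
    /**
     * Build a JMX object name in the style used for Debezium connector
     * metrics: domain "debezium.<connector>" plus type/context/server keys.
     */
    public static ObjectName snapshotMetricsObjectName(String connector, String server)
            throws MalformedObjectNameException {
        return new ObjectName(
                "debezium." + connector + ":type=connector-metrics,context=snapshot,server=" + server);
    }

    public static void main(String[] args) throws MalformedObjectNameException {
        ObjectName name = snapshotMetricsObjectName("postgres", "dbserver1");
        System.out.println(name.getDomain());               // prints debezium.postgres
        System.out.println(name.getKeyProperty("context")); // prints snapshot
    }
}
```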
-
getSnapshotMetricsObjectName
public static ObjectName getSnapshotMetricsObjectName(String connector, String server, String task, String database) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
-
getSnapshotMetricsObjectName
public static ObjectName getSnapshotMetricsObjectName(String connector, String server, Map<String, String> props) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
-
getStreamingMetricsObjectName
public static ObjectName getStreamingMetricsObjectName(String connector, String server) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
-
getStreamingMetricsObjectName
public static ObjectName getStreamingMetricsObjectName(String connector, String server, String context) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
-
getStreamingMetricsObjectName
public static ObjectName getStreamingMetricsObjectName(String connector, String server, String context, String task) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
-
getStreamingMetricsObjectName
public static ObjectName getStreamingMetricsObjectName(String connector, String server, Map<String, String> props) throws MalformedObjectNameException - Throws:
MalformedObjectNameException
-
getStreamingNamespace
-