Package io.debezium.embedded
Class AbstractConnectorTest
- java.lang.Object
-
- io.debezium.embedded.AbstractConnectorTest
-
- All Implemented Interfaces:
Testing
- Direct Known Subclasses:
AbstractIncrementalSnapshotTest, EmbeddedEngineTest
public abstract class AbstractConnectorTest extends Object implements Testing
An abstract base class for unit testing SourceConnector implementations using the Debezium EmbeddedEngine with local file storage. To use this abstract class, simply create a test class that extends it, and add one or more test methods that start the connector using your connector's custom configuration. Then, your test methods can call consumeRecords(int, Consumer) to consume the specified number of records (the supplied function gives you a chance to do something with each record).
- Author:
- Randall Hauch
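As a sketch of the usage pattern described above: the connector class, configuration keys, and values below are hypothetical placeholders, not part of this API.

```java
// A minimal sketch of a test class extending AbstractConnectorTest.
// MyConnector and the "topic.prefix" setting are illustrative examples only.
public class MyConnectorIT extends AbstractConnectorTest {

    @Test
    public void shouldStreamChanges() throws InterruptedException {
        Configuration config = Configuration.create()
                .with("topic.prefix", "test_server") // hypothetical connector setting
                .build();

        start(MyConnector.class, config);            // start the embedded engine
        assertConnectorIsRunning();

        // Consume ten records, grouped by topic, and validate serialization.
        SourceRecords records = consumeRecordsByTopic(10);
        records.forEach(this::validate);

        stopConnector();
    }
}
```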
-
-
Nested Class Summary
Nested Classes
- protected class AbstractConnectorTest.SourceRecords
Nested classes/interfaces inherited from interface io.debezium.util.Testing
Testing.Debug, Testing.Files, Testing.InterruptableFunction, Testing.Network, Testing.Print, Testing.Timer
-
-
Field Summary
Fields
- private BlockingQueue<org.apache.kafka.connect.source.SourceRecord> consumedLines
- protected io.debezium.embedded.EmbeddedEngine engine
- private ExecutorService executor
- private org.apache.kafka.connect.json.JsonConverter keyJsonConverter
- private org.apache.kafka.connect.json.JsonDeserializer keyJsonDeserializer
- private CountDownLatch latch
- protected org.slf4j.Logger logger
- org.junit.rules.TestRule logTestName
- protected static Path OFFSET_STORE_PATH
- protected long pollTimeoutInMs
- private boolean skipAvroValidation
- org.junit.rules.TestRule skipTestRule
- private static String TEST_PROPERTY_PREFIX
- private org.apache.kafka.connect.json.JsonConverter valueJsonConverter
- private org.apache.kafka.connect.json.JsonDeserializer valueJsonDeserializer
-
Constructor Summary
Constructors
- AbstractConnectorTest()
-
Method Summary
Methods
- protected String assertBeginTransaction(org.apache.kafka.connect.source.SourceRecord record)
- protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field)
- protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int numErrors)
- protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int minErrorsInclusive, int maxErrorsInclusive)
- protected void assertConnectorIsRunning(): Assert that the connector is currently running.
- protected void assertConnectorNotRunning(): Assert that the connector is NOT currently running.
- protected void assertDelete(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertEndTransaction(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedEventCount, Map<String,Number> expectedPerTableCount)
- protected void assertEngineIsRunning(): Assert that there was no exception in the engine that would cause its termination.
- protected void assertHasNoSourceQuery(org.apache.kafka.connect.source.SourceRecord record)
- protected void assertInsert(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertKey(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertNoConfigurationErrors(org.apache.kafka.common.config.Config config, Field... fields)
- protected void assertNoRecordsToConsume(): Assert that there are no records to consume.
- protected void assertOffset(org.apache.kafka.connect.source.SourceRecord record, String offsetField, Object expectedValue)
- protected void assertOffset(org.apache.kafka.connect.source.SourceRecord record, Map<String,?> expectedOffset)
- protected void assertOnlyTransactionRecordsToConsume(): Assert that there are only transaction topic records to be consumed.
- protected void assertRecordTransactionMetadata(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedTotalOrder, long expectedCollectionOrder)
- private void assertSameValue(Object actual, Object expected)
- protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.SchemaAndValue value): Assert that the supplied Struct is valid and its schema matches the supplied schema.
- protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.Struct struct, org.apache.kafka.connect.data.Schema schema): Assert that the supplied Struct is valid and its schema matches the supplied schema.
- protected void assertSourceQuery(org.apache.kafka.connect.source.SourceRecord record, String query)
- protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record)
- protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertUpdate(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
- protected void assertValueField(org.apache.kafka.connect.source.SourceRecord record, String fieldPath, Object expectedValue)
- protected org.apache.kafka.common.config.ConfigValue configValue(org.apache.kafka.common.config.Config config, String fieldName)
- protected int consumeAvailableRecords(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer): Try to consume all of the messages that have already been returned by the connector.
- protected AbstractConnectorTest.SourceRecords consumeDmlRecordsByTopic(int numDmlRecords): Try to consume and capture exactly the specified number of DML records from the connector.
- protected int consumeDmlRecordsByTopic(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords): Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed.
- protected int consumeDmlRecordsByTopic(int numberDmlRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer): Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed.
- protected org.apache.kafka.connect.source.SourceRecord consumeRecord(): Consume a single record from the connector.
- protected int consumeRecords(int numberOfRecords): Try to consume the specified number of records from the connector, and return the actual number of records that were consumed.
- protected int consumeRecords(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords): Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed.
- protected int consumeRecords(int numberOfRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer): Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords): Try to consume and capture exactly the specified number of records from the connector.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, boolean assertRecords): Try to consume and capture exactly the specified number of records from the connector.
- protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, int breakAfterNulls): Try to consume and capture exactly the specified number of records from the connector.
- protected void debug(org.apache.kafka.connect.source.SourceRecord record)
- protected int getMaximumEnqueuedRecordCount(): Get the maximum number of messages that can be obtained from the connector and held in memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer).
- static ObjectName getSnapshotMetricsObjectName(String connector, String server)
- static ObjectName getStreamingMetricsObjectName(String connector, String server)
- static ObjectName getStreamingMetricsObjectName(String connector, String server, String context)
- protected static String getStreamingNamespace()
- void initializeConnectorTestFramework()
- static boolean isStreamingRunning(String connector, String server)
- static boolean isStreamingRunning(String connector, String server, String contextName)
- protected boolean isTransactionRecord(org.apache.kafka.connect.source.SourceRecord record)
- protected io.debezium.embedded.EmbeddedEngine.CompletionCallback loggingCompletion(): Create an EmbeddedEngine.CompletionCallback that logs when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error.
- protected void print(org.apache.kafka.connect.source.SourceRecord record)
- protected <T> Map<String,Object> readLastCommittedOffset(Configuration config, Map<String,T> partition): Utility to read the last committed offset for the specified partition.
- protected <T> Map<Map<String,T>,Map<String,Object>> readLastCommittedOffsets(Configuration config, Collection<Map<String,T>> partitions): Utility to read the last committed offsets for the specified partitions.
- protected void setConsumeTimeout(long timeout, TimeUnit unit): Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null.
- protected void skipAvroValidation(): Disable record validation using the Avro converter.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig): Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback): Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord): Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop): Start the connector using the supplied connector configuration.
- protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord): Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig): Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord): Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- void stopConnector(): Stop the connector and block until the connector has completely stopped.
- void stopConnector(BooleanConsumer callback): Stop the connector, and return whether the connector was successfully stopped.
- protected void validate(org.apache.kafka.connect.source.SourceRecord record): Validate that a SourceRecord's key and value can each be converted to a byte[] and then back to an equivalent SourceRecord.
- protected boolean waitForAvailableRecords(long timeout, TimeUnit unit): Wait for a maximum amount of time until the first record is available.
- static void waitForConnectorShutdown(String connector, String server)
- static void waitForSnapshotToBeCompleted(String connector, String server)
- static void waitForStreamingRunning(String connector, String server)
- static void waitForStreamingRunning(String connector, String server, String contextName)
- static int waitTimeForRecords()
- static int waitTimeForRecordsAfterNulls()
-
-
-
Field Detail
-
skipTestRule
public org.junit.rules.TestRule skipTestRule
-
OFFSET_STORE_PATH
protected static final Path OFFSET_STORE_PATH
-
TEST_PROPERTY_PREFIX
private static final String TEST_PROPERTY_PREFIX
- See Also:
- Constant Field Values
-
executor
private ExecutorService executor
-
engine
protected io.debezium.embedded.EmbeddedEngine engine
-
consumedLines
private BlockingQueue<org.apache.kafka.connect.source.SourceRecord> consumedLines
-
pollTimeoutInMs
protected long pollTimeoutInMs
-
logger
protected final org.slf4j.Logger logger
-
latch
private CountDownLatch latch
-
keyJsonConverter
private org.apache.kafka.connect.json.JsonConverter keyJsonConverter
-
valueJsonConverter
private org.apache.kafka.connect.json.JsonConverter valueJsonConverter
-
keyJsonDeserializer
private org.apache.kafka.connect.json.JsonDeserializer keyJsonDeserializer
-
valueJsonDeserializer
private org.apache.kafka.connect.json.JsonDeserializer valueJsonDeserializer
-
skipAvroValidation
private boolean skipAvroValidation
-
logTestName
public org.junit.rules.TestRule logTestName
-
-
Method Detail
-
initializeConnectorTestFramework
public final void initializeConnectorTestFramework()
-
stopConnector
public final void stopConnector()
Stop the connector and block until the connector has completely stopped.
-
stopConnector
public void stopConnector(BooleanConsumer callback)
Stop the connector, and return whether the connector was successfully stopped.
- Parameters:
callback - the function that should be called with whether the connector was successfully stopped; may be null
-
getMaximumEnqueuedRecordCount
protected int getMaximumEnqueuedRecordCount()
Get the maximum number of messages that can be obtained from the connector and held in memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer). By default this method returns 100.
- Returns:
- the maximum number of records that can be enqueued
-
loggingCompletion
protected io.debezium.embedded.EmbeddedEngine.CompletionCallback loggingCompletion()
Create an EmbeddedEngine.CompletionCallback that logs when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error.
- Returns:
- the logging EmbeddedEngine.CompletionCallback
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig)
Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
-
startAndConsumeTillEnd
protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig)
Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. Records arriving after connector stop must not be ignored.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord)
Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. The connector will stop immediately when the supplied predicate returns true.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
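A sketch of using the stop-record variant; MyConnector and the topic name are illustrative placeholders, not defined by this class:

```java
// Stop the connector as soon as a record for a hypothetical topic arrives.
// MyConnector, config, and "test_server.shutdown" are example placeholders.
Predicate<SourceRecord> isStopRecord = record -> "test_server.shutdown".equals(record.topic());
start(MyConnector.class, config, isStopRecord);
```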
-
startAndConsumeTillEnd
protected void startAndConsumeTillEnd(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord)
Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. Records arriving after connector stop must not be ignored.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback)
Start the connector using the supplied connector configuration.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord)
Start the connector using the supplied connector configuration.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
-
start
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, Configuration connectorConfig, io.debezium.engine.DebeziumEngine.CompletionCallback callback, Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord, Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener, boolean ignoreRecordsAfterStop)
Start the connector using the supplied connector configuration.
- Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
recordArrivedListener - function invoked when a record arrives and is stored in the queue
ignoreRecordsAfterStop - true if records arriving after stop should be ignored
-
setConsumeTimeout
protected void setConsumeTimeout(long timeout, TimeUnit unit)
Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null.
- Parameters:
timeout - the timeout; must be positive
unit - the time unit; may not be null
-
consumeRecord
protected org.apache.kafka.connect.source.SourceRecord consumeRecord() throws InterruptedException
Consume a single record from the connector.
- Returns:
- the next record that was returned from the connector, or null if no such record has been produced by the connector
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecords
protected int consumeRecords(int numberOfRecords) throws InterruptedException
Try to consume the specified number of records from the connector, and return the actual number of records that were consumed. Use this method when your test does not care what the records might contain.
- Parameters:
numberOfRecords - the number of records that should be consumed
- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecords
protected int consumeRecords(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. For slower connectors it is possible to receive no records from the connector multiple times in a row until the waiting is terminated.
- Parameters:
numberOfRecords - the number of records that should be consumed
breakAfterNulls - the number of allowed runs when no records are received
recordConsumer - the function that should be called with each consumed record
assertRecords - true if record serialization should be verified
- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecords
protected int consumeRecords(int numberOfRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the waiting is terminated.
- Parameters:
numberOfRecords - the number of records that should be consumed
recordConsumer - the function that should be called with each consumed record
- Returns:
- the actual number of records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, int breakAfterNulls) throws InterruptedException
Try to consume and capture exactly the specified number of records from the connector.
- Parameters:
numRecords - the number of records that should be consumed
breakAfterNulls - how many times to wait when no records arrive from the connector
- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords) throws InterruptedException
Try to consume and capture exactly the specified number of records from the connector.
- Parameters:
numRecords - the number of records that should be consumed
- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic(int numRecords, boolean assertRecords) throws InterruptedException
Try to consume and capture exactly the specified number of records from the connector.
- Parameters:
numRecords - the number of records that should be consumed
assertRecords - true if record serialization should be verified
- Returns:
- the collector into which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeDmlRecordsByTopic
protected AbstractConnectorTest.SourceRecords consumeDmlRecordsByTopic(int numDmlRecords) throws InterruptedException
Try to consume and capture exactly the specified number of DML records from the connector. While transaction metadata topic records are captured by this method, numDmlRecords should not include the expected number of records emitted to the transaction topic.
- Parameters:
numDmlRecords - the number of DML records that should be consumed
- Returns:
- the collector to which the records were captured; never null
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeDmlRecordsByTopic
protected int consumeDmlRecordsByTopic(int numberDmlRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the waiting is terminated.
- Parameters:
numberDmlRecords - the number of DML records that should be consumed
recordConsumer - the function that should be called for each consumed record
- Returns:
- the actual number of Dml records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
consumeDmlRecordsByTopic
protected int consumeDmlRecordsByTopic(int numberOfRecords, int breakAfterNulls, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer, boolean assertRecords) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the waiting is terminated. Additionally, while this method will consume and append transaction metadata topic records to the consumer, the returned value only considers DML records.
- Parameters:
numberOfRecords - the number of DML records that should be consumed
breakAfterNulls - the number of allowed runs when no records are consumed
recordConsumer - the function that should be called for each consumed record
assertRecords - true if record serialization should be verified
- Returns:
- the actual number of Dml records that were consumed
- Throws:
InterruptedException- if the thread was interrupted while waiting for a record to be returned
-
isTransactionRecord
protected boolean isTransactionRecord(org.apache.kafka.connect.source.SourceRecord record)
-
consumeAvailableRecords
protected int consumeAvailableRecords(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)
Try to consume all of the messages that have already been returned by the connector.
- Parameters:
recordConsumer - the function that should be called with each consumed record
- Returns:
- the number of records that were consumed
-
waitForAvailableRecords
protected boolean waitForAvailableRecords(long timeout, TimeUnit unit)
Wait for a maximum amount of time until the first record is available.
- Parameters:
timeout - the maximum amount of time to wait; must not be negative
unit - the time unit for timeout
- Returns:
- true if records are available, or false if the timeout occurred and no records are available
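As an illustrative wait-then-drain pattern (not prescribed by this class; the time unit choice is an assumption):

```java
// Wait up to the framework's standard record wait time for the first record,
// then drain and print whatever has already arrived.
if (waitForAvailableRecords(waitTimeForRecords(), TimeUnit.SECONDS)) {
    int consumed = consumeAvailableRecords(record -> print(record));
    logger.info("Drained {} records", consumed);
}
```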
-
skipAvroValidation
protected void skipAvroValidation()
Disable record validation using the Avro converter; introduced to work around https://github.com/confluentinc/schema-registry/issues/1693
-
assertConnectorIsRunning
protected void assertConnectorIsRunning()
Assert that the connector is currently running.
-
assertConnectorNotRunning
protected void assertConnectorNotRunning()
Assert that the connector is NOT currently running.
-
assertNoRecordsToConsume
protected void assertNoRecordsToConsume()
Assert that there are no records to consume.
-
assertOnlyTransactionRecordsToConsume
protected void assertOnlyTransactionRecordsToConsume()
Assert that there are only transaction topic records to be consumed.
-
assertKey
protected void assertKey(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
-
assertInsert
protected void assertInsert(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
-
assertUpdate
protected void assertUpdate(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
-
assertDelete
protected void assertDelete(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
-
assertSourceQuery
protected void assertSourceQuery(org.apache.kafka.connect.source.SourceRecord record, String query)
-
assertHasNoSourceQuery
protected void assertHasNoSourceQuery(org.apache.kafka.connect.source.SourceRecord record)
-
assertTombstone
protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record, String pkField, int pk)
-
assertTombstone
protected void assertTombstone(org.apache.kafka.connect.source.SourceRecord record)
-
assertOffset
protected void assertOffset(org.apache.kafka.connect.source.SourceRecord record, Map<String,?> expectedOffset)
-
assertOffset
protected void assertOffset(org.apache.kafka.connect.source.SourceRecord record, String offsetField, Object expectedValue)
-
assertValueField
protected void assertValueField(org.apache.kafka.connect.source.SourceRecord record, String fieldPath, Object expectedValue)
-
assertSchemaMatchesStruct
protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.SchemaAndValue value)
Assert that the supplied Struct is valid and its schema matches the supplied schema.
- Parameters:
value - the value with a schema; may not be null
-
assertSchemaMatchesStruct
protected void assertSchemaMatchesStruct(org.apache.kafka.connect.data.Struct struct, org.apache.kafka.connect.data.Schema schema)
Assert that the supplied Struct is valid and its schema matches the supplied schema.
- Parameters:
struct - the Struct to validate; may not be null
schema - the expected schema of the Struct; may not be null
-
assertEngineIsRunning
protected void assertEngineIsRunning()
Assert that there was no exception in the engine that would cause its termination.
-
validate
protected void validate(org.apache.kafka.connect.source.SourceRecord record)
Validate that a SourceRecord's key and value can each be converted to a byte[] and then back to an equivalent SourceRecord.
- Parameters:
record - the record to validate; may not be null
-
print
protected void print(org.apache.kafka.connect.source.SourceRecord record)
-
debug
protected void debug(org.apache.kafka.connect.source.SourceRecord record)
-
assertConfigurationErrors
protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int numErrors)
-
assertConfigurationErrors
protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field, int minErrorsInclusive, int maxErrorsInclusive)
-
assertConfigurationErrors
protected void assertConfigurationErrors(org.apache.kafka.common.config.Config config, Field field)
-
assertNoConfigurationErrors
protected void assertNoConfigurationErrors(org.apache.kafka.common.config.Config config, Field... fields)
-
configValue
protected org.apache.kafka.common.config.ConfigValue configValue(org.apache.kafka.common.config.Config config, String fieldName)
-
readLastCommittedOffset
protected <T> Map<String,Object> readLastCommittedOffset(Configuration config, Map<String,T> partition)
Utility to read the last committed offset for the specified partition.
- Parameters:
config - the configuration of the engine used to persist the offsets
partition - the partition
- Returns:
- the last committed offset for the given partition; never null but possibly empty
-
readLastCommittedOffsets
protected <T> Map<Map<String,T>,Map<String,Object>> readLastCommittedOffsets(Configuration config, Collection<Map<String,T>> partitions)
Utility to read the last committed offsets for the specified partitions.
- Parameters:
config - the configuration of the engine used to persist the offsets
partitions - the partitions
- Returns:
- the map of partitions to offsets; never null but possibly empty
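For instance, a sketch of inspecting a committed offset after the connector has stopped; the partition map shape and key names are connector-specific, hypothetical placeholders:

```java
// After stopConnector(), check what the engine committed to the offset store.
// The "server" key and "test_server" value are illustrative examples only.
Map<String, String> partition = Collections.singletonMap("server", "test_server");
Map<String, Object> offset = readLastCommittedOffset(config, partition);
logger.info("Last committed offset: {}", offset);
```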
-
assertBeginTransaction
protected String assertBeginTransaction(org.apache.kafka.connect.source.SourceRecord record)
-
assertEndTransaction
protected void assertEndTransaction(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedEventCount, Map<String,Number> expectedPerTableCount)
-
assertRecordTransactionMetadata
protected void assertRecordTransactionMetadata(org.apache.kafka.connect.source.SourceRecord record, String expectedTxId, long expectedTotalOrder, long expectedCollectionOrder)
-
waitTimeForRecords
public static int waitTimeForRecords()
-
waitTimeForRecordsAfterNulls
public static int waitTimeForRecordsAfterNulls()
-
waitForSnapshotToBeCompleted
public static void waitForSnapshotToBeCompleted(String connector, String server) throws InterruptedException
- Throws:
InterruptedException
-
waitForStreamingRunning
public static void waitForStreamingRunning(String connector, String server) throws InterruptedException
- Throws:
InterruptedException
-
waitForStreamingRunning
public static void waitForStreamingRunning(String connector, String server, String contextName)
-
waitForConnectorShutdown
public static void waitForConnectorShutdown(String connector, String server)
-
isStreamingRunning
public static boolean isStreamingRunning(String connector, String server, String contextName)
-
getSnapshotMetricsObjectName
public static ObjectName getSnapshotMetricsObjectName(String connector, String server) throws MalformedObjectNameException
- Throws:
MalformedObjectNameException
-
getStreamingMetricsObjectName
public static ObjectName getStreamingMetricsObjectName(String connector, String server) throws MalformedObjectNameException
- Throws:
MalformedObjectNameException
-
getStreamingMetricsObjectName
public static ObjectName getStreamingMetricsObjectName(String connector, String server, String context) throws MalformedObjectNameException
- Throws:
MalformedObjectNameException
-
getStreamingNamespace
protected static String getStreamingNamespace()
-
-