Class AbstractConnectorTest

  • All Implemented Interfaces:
    Testing
  • Direct Known Subclasses:
    AbstractIncrementalSnapshotTest, EmbeddedEngineTest

    public abstract class AbstractConnectorTest
    extends Object
    implements Testing
    An abstract base class for unit testing SourceConnector implementations using the Debezium EmbeddedEngine with local file storage.

    To use this abstract class, create a test class that extends it and add one or more test methods that start the connector using your connector's custom configuration. Your test methods can then call consumeRecords(int, Consumer) to consume the specified number of records (the supplied function gives you a chance to do something with each record).
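A minimal skeleton of such a test class might look like the following. This is a hedged sketch, not code from the project: MySourceConnector, the topic prefix, and the configuration keys are hypothetical, and it assumes the Debezium embedded-engine test artifact and JUnit are on the classpath.

```java
import org.junit.Test;

import io.debezium.config.Configuration;

// Sketch only: MySourceConnector and its configuration keys are hypothetical.
public class MyConnectorIT extends AbstractConnectorTest {

    @Test
    public void shouldStreamChanges() throws InterruptedException {
        Configuration config = Configuration.create()
                .with("name", "my-connector")       // hypothetical settings
                .with("topic.prefix", "server1")
                .build();

        start(MySourceConnector.class, config);     // start the embedded engine
        assertConnectorIsRunning();

        // Consume two records and inspect each one as it arrives.
        consumeRecords(2, record -> logger.info("Got record: {}", record));

        stopConnector();
        assertConnectorNotRunning();
    }
}
```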

    Author:
    Randall Hauch
    • Field Detail

      • skipTestRule

        public org.junit.rules.TestRule skipTestRule
      • OFFSET_STORE_PATH

        protected static final Path OFFSET_STORE_PATH
      • engine

        protected io.debezium.embedded.EmbeddedEngine engine
      • consumedLines

        private BlockingQueue<org.apache.kafka.connect.source.SourceRecord> consumedLines
      • pollTimeoutInMs

        protected long pollTimeoutInMs
      • logger

        protected final org.slf4j.Logger logger
      • keyJsonConverter

        private org.apache.kafka.connect.json.JsonConverter keyJsonConverter
      • valueJsonConverter

        private org.apache.kafka.connect.json.JsonConverter valueJsonConverter
      • keyJsonDeserializer

        private org.apache.kafka.connect.json.JsonDeserializer keyJsonDeserializer
      • valueJsonDeserializer

        private org.apache.kafka.connect.json.JsonDeserializer valueJsonDeserializer
      • skipAvroValidation

        private boolean skipAvroValidation
      • logTestName

        public org.junit.rules.TestRule logTestName
    • Constructor Detail

      • AbstractConnectorTest

        public AbstractConnectorTest()
    • Method Detail

      • initializeConnectorTestFramework

        public final void initializeConnectorTestFramework()
      • stopConnector

        public final void stopConnector()
        Stop the connector and block until the connector has completely stopped.
      • stopConnector

        public void stopConnector​(BooleanConsumer callback)
        Stop the connector, reporting via the supplied callback whether the connector was successfully stopped.
        Parameters:
        callback - the function that should be called with whether the connector was successfully stopped; may be null
      • getMaximumEnqueuedRecordCount

        protected int getMaximumEnqueuedRecordCount()
        Get the maximum number of messages that can be obtained from the connector and held in-memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer).

        By default, this method returns 100.

        Returns:
        the maximum number of records that can be enqueued
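The cap can be pictured as the capacity of the in-memory queue holding records until test methods consume them. A minimal sketch with java.util.concurrent only (the class's real queue field is private, and this is not the actual implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EnqueueCapDemo {
    public static void main(String[] args) {
        // Default cap of 100: once full, offer() fails until a test consumes.
        BlockingQueue<Integer> consumedLines = new ArrayBlockingQueue<>(100);
        for (int i = 0; i < 100; i++) {
            consumedLines.offer(i);
        }
        System.out.println(consumedLines.offer(100)); // false: queue is full
        consumedLines.poll();                         // a test consumes one record
        System.out.println(consumedLines.offer(100)); // true: capacity freed
    }
}
```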
      • loggingCompletion

        protected io.debezium.embedded.EmbeddedEngine.CompletionCallback loggingCompletion()
        Create an EmbeddedEngine.CompletionCallback that logs when the engine fails to start the connector, or when the connector stops running after completing successfully or due to an error.
        Returns:
        the logging EmbeddedEngine.CompletionCallback
      • start

        protected void start​(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass,
                             Configuration connectorConfig)
        Start the connector using the supplied connector configuration, logging the status of the connector upon completion.
        Parameters:
        connectorClass - the connector class; may not be null
        connectorConfig - the configuration for the connector; may not be null
      • startAndConsumeTillEnd

        protected void startAndConsumeTillEnd​(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass,
                                              Configuration connectorConfig)
        Start the connector using the supplied connector configuration, logging the status of the connector upon completion. Records arriving after the connector stops are not ignored.
        Parameters:
        connectorClass - the connector class; may not be null
        connectorConfig - the configuration for the connector; may not be null
      • start

        protected void start​(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass,
                             Configuration connectorConfig,
                             Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord)
        Start the connector using the supplied connector configuration, logging the status of the connector upon completion. The connector will stop immediately when the supplied predicate returns true.
        Parameters:
        connectorClass - the connector class; may not be null
        connectorConfig - the configuration for the connector; may not be null
        isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
      • start

        protected void start​(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass,
                             Configuration connectorConfig,
                             io.debezium.engine.DebeziumEngine.CompletionCallback callback)
        Start the connector using the supplied connector configuration.
        Parameters:
        connectorClass - the connector class; may not be null
        connectorConfig - the configuration for the connector; may not be null
        callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
      • start

        protected void start​(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass,
                             Configuration connectorConfig,
                             io.debezium.engine.DebeziumEngine.CompletionCallback callback,
                             Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord)
        Start the connector using the supplied connector configuration.
        Parameters:
        connectorClass - the connector class; may not be null
        connectorConfig - the configuration for the connector; may not be null
        isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
        callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
      • start

        protected void start​(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass,
                             Configuration connectorConfig,
                             io.debezium.engine.DebeziumEngine.CompletionCallback callback,
                             Predicate<org.apache.kafka.connect.source.SourceRecord> isStopRecord,
                             Consumer<org.apache.kafka.connect.source.SourceRecord> recordArrivedListener,
                             boolean ignoreRecordsAfterStop)
        Start the connector using the supplied connector configuration.
        Parameters:
        connectorClass - the connector class; may not be null
        connectorConfig - the configuration for the connector; may not be null
        isStopRecord - the function that will be called to determine if the connector should be stopped before processing this record; may be null if not needed
        callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null
        recordArrivedListener - function invoked when a record arrives and is stored in the queue
        ignoreRecordsAfterStop - true if records arriving after stop should be ignored
      • setConsumeTimeout

        protected void setConsumeTimeout​(long timeout,
                                         TimeUnit unit)
        Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null.
        Parameters:
        timeout - the timeout; must be positive
        unit - the time unit; may not be null
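The timeout semantics mirror BlockingQueue.poll(timeout, unit): the consume methods block up to the configured time for each record and return null if nothing arrives. A self-contained sketch of that behavior (plain strings stand in for SourceRecords; this is not the Debezium implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ConsumeTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> records = new ArrayBlockingQueue<>(10);

        // With nothing enqueued, poll(timeout, unit) blocks for the configured
        // time and then returns null, mirroring consumeRecord().
        String record = records.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(record);                                   // null

        records.add("r1");
        System.out.println(records.poll(100, TimeUnit.MILLISECONDS)); // r1
    }
}
```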
      • consumeRecord

        protected org.apache.kafka.connect.source.SourceRecord consumeRecord()
                                                                      throws InterruptedException
        Consume a single record from the connector.
        Returns:
        the next record that was returned from the connector, or null if no such record has been produced by the connector
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeRecords

        protected int consumeRecords​(int numberOfRecords)
                              throws InterruptedException
        Try to consume the specified number of records from the connector, and return the actual number of records that were consumed. Use this method when your test does not care what the records might contain.
        Parameters:
        numberOfRecords - the number of records that should be consumed
        Returns:
        the actual number of records that were consumed
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeRecords

        protected int consumeRecords​(int numberOfRecords,
                                     int breakAfterNulls,
                                     Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer,
                                     boolean assertRecords)
                              throws InterruptedException
        Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. For slower connectors it is possible to receive no records from the connector multiple times in a row before the wait is terminated.
        Parameters:
        numberOfRecords - the number of records that should be consumed
        breakAfterNulls - the number of consecutive empty polls allowed before waiting is terminated
        recordConsumer - the function that should be called with each consumed record
        assertRecords - true if record serialization should be verified
        Returns:
        the actual number of records that were consumed
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
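The empty-poll budget described above can be sketched in plain Java (names and timeouts are illustrative; this is not the Debezium implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class BreakAfterNullsDemo {
    // Poll until numberOfRecords are consumed or breakAfterNulls consecutive
    // empty polls occur; return the number actually consumed.
    static int consumeRecords(BlockingQueue<String> queue, int numberOfRecords,
                              int breakAfterNulls, Consumer<String> recordConsumer)
            throws InterruptedException {
        int consumed = 0;
        int nulls = 0;
        while (consumed < numberOfRecords && nulls < breakAfterNulls) {
            String record = queue.poll(10, TimeUnit.MILLISECONDS);
            if (record == null) {
                nulls++;            // one more empty poll against the budget
            } else {
                nulls = 0;          // reset the budget after any success
                recordConsumer.accept(record);
                consumed++;
            }
        }
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        queue.add("r1");
        queue.add("r2");
        // Ask for 5 records when only 2 are available: the loop stops after
        // 3 consecutive empty polls and reports 2 consumed.
        int consumed = consumeRecords(queue, 5, 3, record -> {});
        System.out.println(consumed); // 2
    }
}
```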
      • consumeRecords

        protected int consumeRecords​(int numberOfRecords,
                                     Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)
                              throws InterruptedException
        Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the wait is terminated.
        Parameters:
        numberOfRecords - the number of records that should be consumed
        recordConsumer - the function that should be called with each consumed record
        Returns:
        the actual number of records that were consumed
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeRecordsByTopic

        protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic​(int numRecords,
                                                                            int breakAfterNulls)
                                                                     throws InterruptedException
        Try to consume and capture exactly the specified number of records from the connector.
        Parameters:
        numRecords - the number of records that should be consumed
        breakAfterNulls - the number of consecutive empty polls allowed before waiting is terminated
        Returns:
        the collector into which the records were captured; never null
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
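The returned collector groups the captured records by topic name. A minimal sketch of that grouping, with plain strings standing in for SourceRecords (topic names are hypothetical; this is not the actual SourceRecords class):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RecordsByTopicDemo {
    public static void main(String[] args) {
        // topic -> records consumed from that topic, in arrival order
        Map<String, List<String>> byTopic = new LinkedHashMap<>();
        String[][] consumed = {
                {"server1.db.customers", "c1"},
                {"server1.db.orders", "o1"},
                {"server1.db.customers", "c2"},
        };
        for (String[] record : consumed) {
            byTopic.computeIfAbsent(record[0], topic -> new ArrayList<>())
                   .add(record[1]);
        }
        System.out.println(byTopic.get("server1.db.customers")); // [c1, c2]
        System.out.println(byTopic.get("server1.db.orders"));    // [o1]
    }
}
```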
      • consumeRecordsByTopic

        protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic​(int numRecords)
                                                                     throws InterruptedException
        Try to consume and capture exactly the specified number of records from the connector.
        Parameters:
        numRecords - the number of records that should be consumed
        Returns:
        the collector into which the records were captured; never null
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeRecordsByTopic

        protected AbstractConnectorTest.SourceRecords consumeRecordsByTopic​(int numRecords,
                                                                            boolean assertRecords)
                                                                     throws InterruptedException
        Try to consume and capture exactly the specified number of records from the connector.
        Parameters:
        numRecords - the number of records that should be consumed
        Returns:
        the collector into which the records were captured; never null
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeDmlRecordsByTopic

        protected AbstractConnectorTest.SourceRecords consumeDmlRecordsByTopic​(int numDmlRecords)
                                                                        throws InterruptedException
        Try to consume and capture exactly the specified number of DML records from the connector. While transaction metadata topic records are also captured by this method, numDmlRecords should not include the expected number of records emitted to the transaction topic.
        Parameters:
        numDmlRecords - the number of DML records that should be consumed
        Returns:
        the collector into which the records were captured; never null
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeDmlRecordsByTopic

        protected int consumeDmlRecordsByTopic​(int numberDmlRecords,
                                               Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)
                                        throws InterruptedException
        Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the wait is terminated.
        Parameters:
        numberDmlRecords - the number of DML records that should be consumed
        recordConsumer - the function that should be called for each consumed record
        Returns:
        the actual number of DML records that were consumed
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
      • consumeDmlRecordsByTopic

        protected int consumeDmlRecordsByTopic​(int numberOfRecords,
                                               int breakAfterNulls,
                                               Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer,
                                               boolean assertRecords)
                                        throws InterruptedException
        Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of DML records that were consumed. For slower connectors it is possible to receive no records from the connector at most 3 times in a row until the wait is terminated. While this method also consumes transaction metadata topic records and passes them to the consumer, the returned value counts only DML records.
        Parameters:
        numberOfRecords - the number of DML records that should be consumed
        breakAfterNulls - the number of consecutive empty polls allowed before waiting is terminated
        recordConsumer - the function that should be called for each consumed record
        assertRecords - true if record serialization should be verified
        Returns:
        the actual number of DML records that were consumed
        Throws:
        InterruptedException - if the thread was interrupted while waiting for a record to be returned
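The split between records passed to the consumer and records counted can be sketched in plain Java: every record reaches the consumer, but only non-transaction records increment the returned count. The topic-naming convention below is illustrative only; the real check is isTransactionRecord(SourceRecord).

```java
import java.util.List;
import java.util.function.Consumer;

public class DmlCountDemo {
    // Illustrative stand-in for isTransactionRecord(SourceRecord).
    static boolean isTransactionRecord(String topic) {
        return topic.endsWith(".transaction"); // hypothetical naming convention
    }

    // Pass every record to the consumer, but count only non-transaction ones.
    static int consumeDmlRecords(List<String[]> records, Consumer<String[]> consumer) {
        int dmlCount = 0;
        for (String[] record : records) {       // record = {topic, payload}
            consumer.accept(record);            // consumer sees all records
            if (!isTransactionRecord(record[0])) {
                dmlCount++;                     // but only DML records count
            }
        }
        return dmlCount;
    }

    public static void main(String[] args) {
        List<String[]> records = List.of(
                new String[]{"server1.transaction", "BEGIN"},
                new String[]{"server1.db.orders", "insert"},
                new String[]{"server1.transaction", "END"});
        System.out.println(consumeDmlRecords(records, record -> {})); // 1
    }
}
```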
      • isTransactionRecord

        protected boolean isTransactionRecord​(org.apache.kafka.connect.source.SourceRecord record)
      • consumeAvailableRecords

        protected int consumeAvailableRecords​(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)
        Try to consume all of the messages that have already been returned by the connector.
        Parameters:
        recordConsumer - the function that should be called with each consumed record
        Returns:
        the number of records that were consumed
      • waitForAvailableRecords

        protected boolean waitForAvailableRecords​(long timeout,
                                                  TimeUnit unit)
        Wait for a maximum amount of time until the first record is available.
        Parameters:
        timeout - the maximum amount of time to wait; must not be negative
        unit - the time unit for timeout
        Returns:
        true if records are available, or false if the timeout occurred and no records are available
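The wait-or-timeout behavior can be sketched as a deadline loop over the record queue (poll interval and names are illustrative; this is not the Debezium implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class WaitForRecordsDemo {
    // Return true as soon as the queue is non-empty, or false once the
    // timeout elapses with no record having arrived.
    static boolean waitForAvailableRecords(BlockingQueue<?> queue,
                                           long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (queue.isEmpty() && System.nanoTime() < deadline) {
            Thread.sleep(5); // small poll interval while waiting
        }
        return !queue.isEmpty();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        System.out.println(waitForAvailableRecords(queue, 50, TimeUnit.MILLISECONDS)); // false
        queue.add("r1");
        System.out.println(waitForAvailableRecords(queue, 50, TimeUnit.MILLISECONDS)); // true
    }
}
```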
      • skipAvroValidation

        protected void skipAvroValidation()
        Disable record validation using the Avro converter. Introduced to work around https://github.com/confluentinc/schema-registry/issues/1693.
      • assertConnectorIsRunning

        protected void assertConnectorIsRunning()
        Assert that the connector is currently running.
      • assertConnectorNotRunning

        protected void assertConnectorNotRunning()
        Assert that the connector is NOT currently running.
      • assertNoRecordsToConsume

        protected void assertNoRecordsToConsume()
        Assert that there are no records to consume.
      • assertOnlyTransactionRecordsToConsume

        protected void assertOnlyTransactionRecordsToConsume()
        Assert that there are only transaction topic records to be consumed.
      • assertKey

        protected void assertKey​(org.apache.kafka.connect.source.SourceRecord record,
                                 String pkField,
                                 int pk)
      • assertInsert

        protected void assertInsert​(org.apache.kafka.connect.source.SourceRecord record,
                                    String pkField,
                                    int pk)
      • assertUpdate

        protected void assertUpdate​(org.apache.kafka.connect.source.SourceRecord record,
                                    String pkField,
                                    int pk)
      • assertDelete

        protected void assertDelete​(org.apache.kafka.connect.source.SourceRecord record,
                                    String pkField,
                                    int pk)
      • assertSourceQuery

        protected void assertSourceQuery​(org.apache.kafka.connect.source.SourceRecord record,
                                         String query)
      • assertHasNoSourceQuery

        protected void assertHasNoSourceQuery​(org.apache.kafka.connect.source.SourceRecord record)
      • assertTombstone

        protected void assertTombstone​(org.apache.kafka.connect.source.SourceRecord record,
                                       String pkField,
                                       int pk)
      • assertTombstone

        protected void assertTombstone​(org.apache.kafka.connect.source.SourceRecord record)
      • assertOffset

        protected void assertOffset​(org.apache.kafka.connect.source.SourceRecord record,
                                    Map<String,​?> expectedOffset)
      • assertOffset

        protected void assertOffset​(org.apache.kafka.connect.source.SourceRecord record,
                                    String offsetField,
                                    Object expectedValue)
      • assertValueField

        protected void assertValueField​(org.apache.kafka.connect.source.SourceRecord record,
                                        String fieldPath,
                                        Object expectedValue)
      • assertSameValue

        private void assertSameValue​(Object actual,
                                     Object expected)
      • assertSchemaMatchesStruct

        protected void assertSchemaMatchesStruct​(org.apache.kafka.connect.data.SchemaAndValue value)
        Assert that the supplied Struct is valid and its schema matches that of the supplied schema.
        Parameters:
        value - the value with a schema; may not be null
      • assertSchemaMatchesStruct

        protected void assertSchemaMatchesStruct​(org.apache.kafka.connect.data.Struct struct,
                                                 org.apache.kafka.connect.data.Schema schema)
        Assert that the supplied Struct is valid and its schema matches that of the supplied schema.
        Parameters:
        struct - the Struct to validate; may not be null
        schema - the expected schema of the Struct; may not be null
      • assertEngineIsRunning

        protected void assertEngineIsRunning()
        Assert that there was no exception in engine that would cause its termination.
      • validate

        protected void validate​(org.apache.kafka.connect.source.SourceRecord record)
        Validate that a SourceRecord's key and value can each be converted to a byte[] and then back to an equivalent SourceRecord.
        Parameters:
        record - the record to validate; may not be null
      • print

        protected void print​(org.apache.kafka.connect.source.SourceRecord record)
      • debug

        protected void debug​(org.apache.kafka.connect.source.SourceRecord record)
      • assertConfigurationErrors

        protected void assertConfigurationErrors​(org.apache.kafka.common.config.Config config,
                                                 Field field,
                                                 int numErrors)
      • assertConfigurationErrors

        protected void assertConfigurationErrors​(org.apache.kafka.common.config.Config config,
                                                 Field field,
                                                 int minErrorsInclusive,
                                                 int maxErrorsInclusive)
      • assertConfigurationErrors

        protected void assertConfigurationErrors​(org.apache.kafka.common.config.Config config,
                                                 Field field)
      • assertNoConfigurationErrors

        protected void assertNoConfigurationErrors​(org.apache.kafka.common.config.Config config,
                                                   Field... fields)
      • configValue

        protected org.apache.kafka.common.config.ConfigValue configValue​(org.apache.kafka.common.config.Config config,
                                                                         String fieldName)
      • readLastCommittedOffset

        protected <T> Map<String,​Object> readLastCommittedOffset​(Configuration config,
                                                                       Map<String,​T> partition)
        Utility to read the last committed offset for the specified partition.
        Parameters:
        config - the configuration of the engine used to persist the offsets
        partition - the partition
        Returns:
        the offset key/value pairs committed for the partition; never null but possibly empty
      • readLastCommittedOffsets

        protected <T> Map<Map<String,​T>,​Map<String,​Object>> readLastCommittedOffsets​(Configuration config,
                                                                                                       Collection<Map<String,​T>> partitions)
        Utility to read the last committed offsets for the specified partitions.
        Parameters:
        config - the configuration of the engine used to persist the offsets
        partitions - the partitions
        Returns:
        the map of partitions to offsets; never null but possibly empty
      • assertBeginTransaction

        protected String assertBeginTransaction​(org.apache.kafka.connect.source.SourceRecord record)
      • assertEndTransaction

        protected void assertEndTransaction​(org.apache.kafka.connect.source.SourceRecord record,
                                            String expectedTxId,
                                            long expectedEventCount,
                                            Map<String,​Number> expectedPerTableCount)
      • assertRecordTransactionMetadata

        protected void assertRecordTransactionMetadata​(org.apache.kafka.connect.source.SourceRecord record,
                                                       String expectedTxId,
                                                       long expectedTotalOrder,
                                                       long expectedCollectionOrder)
      • waitTimeForRecords

        public static int waitTimeForRecords()
      • waitTimeForRecordsAfterNulls

        public static int waitTimeForRecordsAfterNulls()
      • waitForStreamingRunning

        public static void waitForStreamingRunning​(String connector,
                                                   String server,
                                                   String contextName)
      • waitForConnectorShutdown

        public static void waitForConnectorShutdown​(String connector,
                                                    String server)
      • isStreamingRunning

        public static boolean isStreamingRunning​(String connector,
                                                 String server)
      • isStreamingRunning

        public static boolean isStreamingRunning​(String connector,
                                                 String server,
                                                 String contextName)
      • getStreamingNamespace

        protected static String getStreamingNamespace()