public abstract class AbstractConnectorTest
extends Object
implements Testing

An abstract base class for testing SourceConnector implementations using the Debezium EmbeddedEngine
with local file storage.
To use this abstract class, create a test class that extends it, and add one or more test methods that
start the connector using your connector's custom configuration.
Then, your test methods can call consumeRecords(int, Consumer) to consume the specified number
of records (the supplied function gives you a chance to do something with the record).
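The consume loop described above can be sketched with plain java.util.concurrent types. The queue and timeout below stand in for the class's consumedLines and pollTimeoutInMs fields, and the producer thread is a hypothetical stand-in for the embedded engine; this is an illustrative sketch, not the actual implementation.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class ConsumeLoopSketch {
    // Records produced by a background "engine" thread land in this queue,
    // mirroring the consumedLines field of AbstractConnectorTest.
    private final BlockingQueue<String> consumedLines = new LinkedBlockingQueue<>();
    private long pollTimeoutInMs = 2000;

    // Analogue of consumeRecords(int, Consumer): poll up to numberOfRecords times,
    // invoke the consumer for each record, and return the actual count consumed.
    int consumeRecords(int numberOfRecords, Consumer<String> recordConsumer) throws InterruptedException {
        int consumed = 0;
        while (consumed < numberOfRecords) {
            String record = consumedLines.poll(pollTimeoutInMs, TimeUnit.MILLISECONDS);
            if (record == null) break; // timed out waiting for the next record
            recordConsumer.accept(record);
            ++consumed;
        }
        return consumed;
    }

    public static void main(String[] args) throws Exception {
        ConsumeLoopSketch sketch = new ConsumeLoopSketch();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> { // hypothetical engine producing three records
            for (int i = 1; i <= 3; i++) {
                sketch.consumedLines.add("record-" + i);
            }
        });
        int n = sketch.consumeRecords(5, r -> System.out.println("consumed " + r));
        System.out.println("total=" + n); // fewer than requested: the queue ran dry
        executor.shutdown();
    }
}
```

Note that the return value reports the actual number consumed, which can be less than requested if the timeout elapses before enough records arrive.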
Nested classes/interfaces inherited from interface Testing:
Testing.Debug, Testing.Files, Testing.InterruptableFunction, Testing.Network, Testing.Print, Testing.Timer

Field Summary

| Modifier and Type | Field and Description |
|---|---|
| private BlockingQueue<org.apache.kafka.connect.source.SourceRecord> | consumedLines |
| private io.debezium.embedded.EmbeddedEngine | engine |
| private ExecutorService | executor |
| private CountDownLatch | latch |
| protected org.slf4j.Logger | logger |
| protected static Path | OFFSET_STORE_PATH |
| protected long | pollTimeoutInMs |
Constructor Summary

| Constructor and Description |
|---|
| AbstractConnectorTest() |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| protected void | append(Object obj, StringBuilder sb) |
| protected void | assertConnectorIsRunning() Assert that the connector is currently running. |
| protected void | assertConnectorNotRunning() Assert that the connector is NOT currently running. |
| protected void | assertNoRecordsToConsume() Assert that there are no records to consume. |
| protected int | consumeAvailableRecords(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) Try to consume all of the messages that have already been returned by the connector. |
| protected org.apache.kafka.connect.source.SourceRecord | consumeRecord() Consume a single record from the connector. |
| protected int | consumeRecords(int numberOfRecords) Try to consume the specified number of records from the connector, and return the actual number of records that were consumed. |
| protected int | consumeRecords(int numberOfRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed. |
| protected int | getMaximumEnqueuedRecordCount() Get the maximum number of messages that can be obtained from the connector and held in-memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer). |
| void | initializeConnectorTestFramework() |
| protected void | print(org.apache.kafka.connect.source.SourceRecord record) |
| protected void | setConsumeTimeout(long timeout, TimeUnit unit) Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null. |
| protected void | start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, io.debezium.config.Configuration connectorConfig) Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged. |
| protected void | start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, io.debezium.config.Configuration connectorConfig, io.debezium.embedded.EmbeddedEngine.CompletionCallback callback) Start the connector using the supplied connector configuration. |
| void | stopConnector() Stop the connector and block until the connector has completely stopped. |
| void | stopConnector(BooleanConsumer callback) Stop the connector, and return whether the connector was successfully stopped. |
| protected boolean | waitForAvailableRecords(long timeout, TimeUnit unit) Wait for a maximum amount of time until the first record is available. |
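The wait-then-check behavior of waitForAvailableRecords can be sketched with a plain BlockingQueue. The polling interval and queue below are illustrative assumptions, not the class's actual implementation:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WaitForRecordsSketch {
    // Stand-in for the consumedLines queue fed by the embedded engine.
    private final BlockingQueue<String> consumedLines = new LinkedBlockingQueue<>();

    // Analogue of waitForAvailableRecords(long, TimeUnit): wait until the first
    // record shows up or the deadline passes, without removing it from the queue.
    boolean waitForAvailableRecords(long timeout, TimeUnit unit) throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (consumedLines.isEmpty()) {
            if (System.nanoTime() >= deadline) return false; // timed out, nothing arrived
            Thread.sleep(10); // hypothetical polling interval
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        WaitForRecordsSketch s = new WaitForRecordsSketch();
        System.out.println("empty: " + s.waitForAvailableRecords(100, TimeUnit.MILLISECONDS));
        s.consumedLines.add("record-1");
        System.out.println("ready: " + s.waitForAvailableRecords(100, TimeUnit.MILLISECONDS));
    }
}
```

Because the record is left in the queue, a subsequent consumeRecord() or consumeRecords(int) call can still retrieve it.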
Methods inherited from class Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface Testing:
debug, once, once, print, print, printError, printError, printError, resetBeforeEachTest, time, time

Field Detail

protected static final Path OFFSET_STORE_PATH
private ExecutorService executor
private io.debezium.embedded.EmbeddedEngine engine
private BlockingQueue<org.apache.kafka.connect.source.SourceRecord> consumedLines
protected long pollTimeoutInMs
protected final org.slf4j.Logger logger
private CountDownLatch latch
Method Detail

public final void initializeConnectorTestFramework() throws Exception
Throws:
Exception

public final void stopConnector()
Stop the connector and block until the connector has completely stopped.

public void stopConnector(BooleanConsumer callback)
Stop the connector, and return whether the connector was successfully stopped.
Parameters:
callback - the function that should be called with whether the connector was successfully stopped; may be null

protected int getMaximumEnqueuedRecordCount()
Get the maximum number of messages that can be obtained from the connector and held in-memory before they are consumed by test methods using consumeRecord(), consumeRecords(int), or consumeRecords(int, Consumer). By default this method returns 100.
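The in-memory cap described by getMaximumEnqueuedRecordCount can be illustrated with a bounded queue: once 100 unconsumed records (the stated default) are enqueued, a producer using a non-blocking offer sees the queue refuse further records until a test method consumes one. The queue and offer semantics below are an illustrative stand-in; how the real engine behaves at the cap is not specified here.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EnqueueCapSketch {
    public static void main(String[] args) {
        // Bounded to 100 entries, matching the documented default cap.
        BlockingQueue<String> consumedLines = new ArrayBlockingQueue<>(100);
        int accepted = 0;
        for (int i = 0; i < 150; i++) {
            if (consumedLines.offer("record-" + i)) accepted++; // offer() fails once full
        }
        System.out.println("accepted=" + accepted); // only the first 100 fit
        consumedLines.poll(); // a test method consumes one record...
        System.out.println("roomAgain=" + consumedLines.offer("record-extra")); // ...freeing a slot
    }
}
```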
protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, io.debezium.config.Configuration connectorConfig)
Start the connector using the supplied connector configuration, where upon completion the status of the connector is logged.
Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null

protected void start(Class<? extends org.apache.kafka.connect.source.SourceConnector> connectorClass, io.debezium.config.Configuration connectorConfig, io.debezium.embedded.EmbeddedEngine.CompletionCallback callback)
Start the connector using the supplied connector configuration.
Parameters:
connectorClass - the connector class; may not be null
connectorConfig - the configuration for the connector; may not be null
callback - the function that will be called when the engine fails to start the connector or when the connector stops running after completing successfully or due to an error; may be null

protected void setConsumeTimeout(long timeout, TimeUnit unit)
Set the maximum amount of time that the consumeRecord(), consumeRecords(int), and consumeRecords(int, Consumer) methods block while waiting for each record before returning null.
Parameters:
timeout - the timeout; must be positive
unit - the time unit; may not be null

protected org.apache.kafka.connect.source.SourceRecord consumeRecord() throws InterruptedException
Consume a single record from the connector.
Throws:
InterruptedException - if the thread was interrupted while waiting for a record to be returned

protected int consumeRecords(int numberOfRecords) throws InterruptedException
Try to consume the specified number of records from the connector, and return the actual number of records that were consumed.
Parameters:
numberOfRecords - the number of records that should be consumed
Throws:
InterruptedException - if the thread was interrupted while waiting for a record to be returned

protected int consumeRecords(int numberOfRecords, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) throws InterruptedException
Try to consume the specified number of records from the connector, calling the given function for each, and return the actual number of records that were consumed.
Parameters:
numberOfRecords - the number of records that should be consumed
recordConsumer - the function that should be called with each consumed record
Throws:
InterruptedException - if the thread was interrupted while waiting for a record to be returned

protected int consumeAvailableRecords(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)
Try to consume all of the messages that have already been returned by the connector.
Parameters:
recordConsumer - the function that should be called with each consumed record

protected boolean waitForAvailableRecords(long timeout, TimeUnit unit)
Wait for a maximum amount of time until the first record is available.
Parameters:
timeout - the maximum amount of time to wait; must not be negative
unit - the time unit for timeout
Returns:
true if records are available, or false if the timeout occurred and no records are available

protected void assertConnectorIsRunning()
Assert that the connector is currently running.
protected void assertConnectorNotRunning()
Assert that the connector is NOT currently running.

protected void assertNoRecordsToConsume()
Assert that there are no records to consume.

protected void print(org.apache.kafka.connect.source.SourceRecord record)

protected void append(Object obj, StringBuilder sb)
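The blocking behavior of stopConnector(), which does not return until the connector has completely stopped, can be sketched with a CountDownLatch, mirroring the latch field listed in the field summary. The running flag and busy loop below are hypothetical stand-ins for the engine's poll loop:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StopConnectorSketch {
    // Counted down by the engine thread when it has fully shut down,
    // mirroring the latch field of AbstractConnectorTest.
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile boolean running = true;

    // Analogue of stopConnector(): signal the engine to stop, then block
    // until the engine thread has completely finished.
    void stopConnector() throws InterruptedException {
        running = false;   // signal the engine loop to exit
        latch.await();     // block until the engine thread signals completion
        System.out.println("connector stopped");
    }

    public static void main(String[] args) throws Exception {
        StopConnectorSketch s = new StopConnectorSketch();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            while (s.running) { /* hypothetical poll loop */ }
            s.latch.countDown(); // engine reports that it has stopped
        });
        s.stopConnector();
        executor.shutdown();
    }
}
```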
Copyright © 2016 JBoss by Red Hat. All rights reserved.