Package io.debezium.connector.binlog
Class BinlogConnectorIT<C extends org.apache.kafka.connect.source.SourceConnector,P extends io.debezium.connector.binlog.BinlogPartition,O extends io.debezium.connector.binlog.BinlogOffsetContext<?>>
java.lang.Object
io.debezium.embedded.AbstractConnectorTest
io.debezium.embedded.async.AbstractAsyncEngineConnectorTest
io.debezium.connector.binlog.AbstractBinlogConnectorIT<C>
io.debezium.connector.binlog.BinlogConnectorIT<C,P,O>
- All Implemented Interfaces:
BinlogConnectorTest<C>, Testing
public abstract class BinlogConnectorIT<C extends org.apache.kafka.connect.source.SourceConnector,P extends io.debezium.connector.binlog.BinlogPartition,O extends io.debezium.connector.binlog.BinlogOffsetContext<?>>
extends AbstractBinlogConnectorIT<C>
- Author:
- Randall Hauch
-
Nested Class Summary
Nested Classes
protected static class
private static class
Nested classes/interfaces inherited from class io.debezium.embedded.AbstractConnectorTest
AbstractConnectorTest.SourceRecords
Nested classes/interfaces inherited from interface io.debezium.util.Testing
Testing.Debug, Testing.Files, Testing.InterruptableFunction, Testing.Network, Testing.Print, Testing.Timer -
Field Summary
Fields
private Configuration config
private final UniqueDatabase DATABASE
private final UniqueDatabase DATABASE_CUSTOM_SNAPSHOT
private static final int INITIAL_EVENT_COUNT
private static final int ORDERS_TABLE_EVENT_COUNT
private static final int PRODUCTS_TABLE_EVENT_COUNT
private final UniqueDatabase RO_DATABASE
private static final Path SCHEMA_HISTORY_PATH
Fields inherited from class io.debezium.embedded.AbstractConnectorTest
consumedLines, engine, isEngineRunning, logger, logTestName, OFFSET_STORE_PATH, pollTimeoutInMs, skipTestRule -
Constructor Summary
Constructors
BinlogConnectorIT() -
Method Summary
void afterEach()
protected abstract void assertBinlogPosition(long offsetPosition, long beforeInsertsPosition)
protected void assertInvalidConfiguration(org.apache.kafka.common.config.Config result)
protected abstract void assertSnapshotLockingModeIsNone
protected void assertValidConfiguration(org.apache.kafka.common.config.Config result)
void beforeEach()
protected abstract P createPartition(String serverName, String databaseName)
private void dropDatabases
private org.apache.kafka.connect.data.Struct getAfter(org.apache.kafka.connect.source.SourceRecord record)
protected UniqueDatabase getDatabase
protected String getExpectedQuery(String statement)
private Optional<org.apache.kafka.connect.header.Header> getHeaderField(org.apache.kafka.connect.source.SourceRecord record, String fieldName)
private Optional<org.apache.kafka.connect.header.Header> getPKUpdateNewKeyHeader(org.apache.kafka.connect.source.SourceRecord record)
private Optional<org.apache.kafka.connect.header.Header> getPKUpdateOldKeyHeader(org.apache.kafka.connect.source.SourceRecord record)
protected abstract Field getSnapshotLockingModeField
protected abstract String getSnapshotLockingModeNone
protected abstract O loadOffsets(Configuration configuration, Map<String, ?> offsets)
void parseDeleteQuery() - Validates that with the query-log server option enabled, the original SQL for a single-row DELETE is parsed into the resulting event.
void parseMultipleInsertStatements() - Validates that with the query-log server option enabled, issuing multiple INSERT statements parses the appropriate SQL into each resulting event.
void parseMultipleRowInsertStatement() - Validates that with the query-log server option enabled, a single multi-row INSERT parses the appropriate SQL into each resulting event.
void parseMultiRowDeleteQuery() - Validates that with the query-log server option enabled, a multi-row DELETE yields events carrying the original SQL statement.
void parseMultiRowUpdateQuery() - Validates that with the query-log server option enabled, a multi-row UPDATE yields events carrying the original SQL statement.
void parseUpdateQuery() - Validates that with the query-log server option enabled, the original SQL for a single-row UPDATE is parsed into the resulting event.
private List<org.apache.kafka.connect.source.SourceRecord> recordsForTopicForRoProductsTable(AbstractConnectorTest.SourceRecords records)
void shouldConsumeAllEventsFromDatabaseUsingSnapshot()
private void shouldConsumeAllEventsFromDatabaseUsingSnapshotByField(Field dbIncludeListField, int serverId)
void shouldConsumeAllEventsFromDatabaseUsingSnapshotOld()
void shouldConsumeEventsWithIncludedColumns()
void shouldConsumeEventsWithIncludedColumnsForKeywordNamedTable()
void shouldConsumeEventsWithMaskedAndBlacklistedColumns()
void shouldConsumeEventsWithMaskedHashedColumns()
void shouldConsumeEventsWithNonGracefulDisconnect()
void shouldConsumeEventsWithNoSnapshot()
void shouldConsumeEventsWithTruncatedColumns()
void shouldEmitHeadersOnPrimaryKeyUpdate()
void shouldEmitNoEventsForSkippedCreateOperations()
void shouldEmitNoEventsForSkippedUpdateAndDeleteOperations()
void shouldEmitNoSavepoints()
void shouldEmitNoTombstoneOnDelete()
void shouldEmitTombstoneOnDeleteByDefault()
void shouldEmitTruncateOperation()
void shouldFailToValidateAdaptivePrecisionMode() - Specifying the adaptive time.precision.mode is no longer valid and should be reported as a configuration validation problem.
void shouldFailToValidateInvalidConfiguration()
void shouldHandleIncludedTables()
void shouldHandleIncludeListTables()
void shouldIgnoreAlterTableForNonCapturedTablesNotStoredInHistory()
void shouldIgnoreAlterTableForNonCapturedTablesStoredInHistory()
void shouldIgnoreCreateIndexForNonCapturedTablesNotStoredInHistory()
void shouldNotParseQueryIfConnectorNotConfiguredTo() - Validates that with the query-log server option enabled but the connector configured NOT to include the query, the query is not included in the event.
void shouldNotParseQueryIfServerOptionDisabled() - Validates that with the query-log server option disabled, the original SQL for an INSERT is NOT parsed into the resulting event.
void shouldNotSendTombstonesWhenNotSupportedByHandler()
void shouldNotStartWithInvalidConfiguration() - Verifies that the connector doesn't run with an invalid configuration.
void shouldNotUseOffsetWhenSnapshotIsAlways()
void shouldOutputRecordsInCloudEventsFormat()
void shouldParseQueryIfAvailableAndConnectorOptionEnabled() - Validates that with the query-log server option enabled, the original SQL for an INSERT is parsed into the resulting event.
void shouldProcessCreateUniqueIndex()
void shouldReceiveSchemaForNonWhitelistedTablesAndDatabases()
void shouldRewriteIdentityKey()
void shouldRewriteIdentityKeyWithMsgKeyColumnsFieldRegexValidation()
void shouldRewriteIdentityKeyWithWhitespace()
void shouldSaveSetCharacterSetWhenStoringOnlyCapturededTables()
void shouldUseMultipleOverriddenSelectStatementsDuringSnapshotting()
void shouldUseOverriddenSelectStatementDuringSnapshotting()
void shouldValidateAcceptableConfiguration()
void shouldValidateLockingModeNoneWithValidSnapshotModeConfiguration() - Validates that SNAPSHOT_LOCKING_MODE 'none' is valid with all snapshot modes.
void testDmlInChangeEvents()
void testEmptySchemaLogWarningWithDatabaseWhitelist()
void testEmptySchemaWarningWithTableWhitelist()
void testNoEmptySchemaLogWarningWithDatabaseWhitelist()
void testNoEmptySchemaLogWarningWithSnapshotNever()
void testNoEmptySchemaWarningWithTableWhitelist()
protected <T> void validateConfigField(org.apache.kafka.common.config.Config config, Field field, T expectedValue)
protected abstract org.apache.kafka.common.config.Config validateConfiguration(Configuration configuration)
private void waitForStreamingRunning(String serverName)
Methods inherited from class io.debezium.connector.binlog.AbstractBinlogConnectorIT
isMariaDb, isMySQL5, isPerconaServer
Methods inherited from class io.debezium.embedded.async.AbstractAsyncEngineConnectorTest
createEngine, createEngineBuilder
Methods inherited from class io.debezium.embedded.AbstractConnectorTest
assertBeginTransaction, assertConfigurationErrors, assertConfigurationErrors, assertConfigurationErrors, assertConnectorIsRunning, assertConnectorNotRunning, assertDelete, assertEndTransaction, assertEngineIsRunning, assertHasNoSourceQuery, assertInsert, assertKey, assertNoConfigurationErrors, assertNoRecordsToConsume, assertOffset, assertOffset, assertOnlyTransactionRecordsToConsume, assertRecordTransactionMetadata, assertSchemaMatchesStruct, assertSchemaMatchesStruct, assertSourceQuery, assertTombstone, assertTombstone, assertUpdate, assertValueField, configValue, consumeAvailableRecords, consumeAvailableRecordsByTopic, consumeDmlRecordsByTopic, consumeDmlRecordsByTopic, consumeDmlRecordsByTopic, consumeRecord, consumeRecords, consumeRecords, consumeRecords, consumeRecordsButSkipUntil, consumeRecordsByTopic, consumeRecordsByTopic, consumeRecordsByTopic, consumeRecordsByTopicUntil, consumeRecordsUntil, debug, getConsumer, getMaximumEnqueuedRecordCount, getSnapshotMetricsObjectName, getSnapshotMetricsObjectName, getSnapshotMetricsObjectName, getStreamingMetricsObjectName, getStreamingMetricsObjectName, getStreamingMetricsObjectName, getStreamingMetricsObjectName, getStreamingNamespace, initializeConnectorTestFramework, isStreamingRunning, isStreamingRunning, isStreamingRunning, isStreamingRunning, isTransactionRecord, loggingCompletion, print, readLastCommittedOffset, readLastCommittedOffsets, setConsumeTimeout, skipAvroValidation, start, start, start, start, start, start, start, startAndConsumeTillEnd, startAndConsumeTillEnd, stopConnector, stopConnector, storeOffsets, validate, waitForAvailableRecords, waitForAvailableRecords, waitForConnectorShutdown, waitForEngineShutdown, waitForSnapshotToBeCompleted, waitForSnapshotToBeCompleted, waitForSnapshotWithCustomMetricsToBeCompleted, waitForStreamingRunning, waitForStreamingRunning, waitForStreamingRunning, waitForStreamingWithCustomMetricsToStart, waitTimeForEngine, waitTimeForRecords, 
waitTimeForRecordsAfterNulls
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface io.debezium.connector.binlog.BinlogConnectorTest
getConnectorClass, getConnectorName, getTestDatabaseConnection, getTestDatabaseConnection, getTestReplicaDatabaseConnection
-
Field Details
-
SCHEMA_HISTORY_PATH
-
DATABASE
-
DATABASE_CUSTOM_SNAPSHOT
-
RO_DATABASE
-
PRODUCTS_TABLE_EVENT_COUNT
private static final int PRODUCTS_TABLE_EVENT_COUNT
-
ORDERS_TABLE_EVENT_COUNT
private static final int ORDERS_TABLE_EVENT_COUNT
-
INITIAL_EVENT_COUNT
private static final int INITIAL_EVENT_COUNT
-
config
-
-
Constructor Details
-
BinlogConnectorIT
public BinlogConnectorIT()
-
-
Method Details
-
beforeEach
public void beforeEach() -
afterEach
public void afterEach() -
getDatabase
-
shouldNotStartWithInvalidConfiguration
public void shouldNotStartWithInvalidConfiguration()
Verifies that the connector doesn't run with an invalid configuration. This does not actually connect to the MySQL server. -
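For context, a binlog connector configuration is invalid when required options are missing or malformed. The sketch below is a hypothetical minimal MySQL configuration (property names per the Debezium MySQL connector documentation; all values are placeholders, not this test's fixture):

```properties
# Hypothetical minimal configuration; host and credentials are placeholders.
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql.example.com
database.port=3306
database.user=debezium
database.password=dbz
# Must be unique among all clients of the MySQL/MariaDB cluster.
database.server.id=184054
topic.prefix=dbserver1
# Omitting required options such as topic.prefix or database.hostname is the
# kind of invalid configuration this test guards against.
```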
shouldFailToValidateInvalidConfiguration
public void shouldFailToValidateInvalidConfiguration() -
shouldValidateAcceptableConfiguration
public void shouldValidateAcceptableConfiguration() -
validateConfiguration
protected abstract org.apache.kafka.common.config.Config validateConfiguration(Configuration configuration) -
assertInvalidConfiguration
protected void assertInvalidConfiguration(org.apache.kafka.common.config.Config result) -
assertValidConfiguration
protected void assertValidConfiguration(org.apache.kafka.common.config.Config result) -
validateConfigField
protected <T> void validateConfigField(org.apache.kafka.common.config.Config config, Field field, T expectedValue) -
shouldValidateLockingModeNoneWithValidSnapshotModeConfiguration
Validates that SNAPSHOT_LOCKING_MODE 'none' is valid with all snapshot modes. -
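The combination being validated looks roughly like this in connector configuration (property names per Debezium's documented `snapshot.locking.mode` and `snapshot.mode` options; a sketch, not the test's literal fixture):

```properties
# 'none' skips table locks during snapshotting; this test asserts it is
# accepted together with any snapshot.mode value, e.g. initial, when_needed,
# never (exact mode names vary by Debezium version).
snapshot.locking.mode=none
snapshot.mode=initial
```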
getSnapshotLockingModeField
-
getSnapshotLockingModeNone
-
assertSnapshotLockingModeIsNone
-
getPKUpdateNewKeyHeader
private Optional<org.apache.kafka.connect.header.Header> getPKUpdateNewKeyHeader(org.apache.kafka.connect.source.SourceRecord record) -
getPKUpdateOldKeyHeader
private Optional<org.apache.kafka.connect.header.Header> getPKUpdateOldKeyHeader(org.apache.kafka.connect.source.SourceRecord record) -
getHeaderField
-
shouldConsumeAllEventsFromDatabaseUsingSnapshot
public void shouldConsumeAllEventsFromDatabaseUsingSnapshot() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldConsumeAllEventsFromDatabaseUsingSnapshotOld
public void shouldConsumeAllEventsFromDatabaseUsingSnapshotOld() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldConsumeAllEventsFromDatabaseUsingSnapshotByField
private void shouldConsumeAllEventsFromDatabaseUsingSnapshotByField(Field dbIncludeListField, int serverId) throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
createPartition
-
loadOffsets
-
assertBinlogPosition
protected abstract void assertBinlogPosition(long offsetPosition, long beforeInsertsPosition) -
shouldUseOverriddenSelectStatementDuringSnapshotting
public void shouldUseOverriddenSelectStatementDuringSnapshotting() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldUseMultipleOverriddenSelectStatementsDuringSnapshotting
public void shouldUseMultipleOverriddenSelectStatementsDuringSnapshotting() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldIgnoreAlterTableForNonCapturedTablesNotStoredInHistory
@FixFor("DBZ-977") public void shouldIgnoreAlterTableForNonCapturedTablesNotStoredInHistory() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldSaveSetCharacterSetWhenStoringOnlyCapturededTables
@FixFor("DBZ-1201") public void shouldSaveSetCharacterSetWhenStoringOnlyCapturededTables() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldProcessCreateUniqueIndex
@FixFor("DBZ-1246") public void shouldProcessCreateUniqueIndex() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldIgnoreAlterTableForNonCapturedTablesStoredInHistory
@FixFor("DBZ-977") public void shouldIgnoreAlterTableForNonCapturedTablesStoredInHistory() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldIgnoreCreateIndexForNonCapturedTablesNotStoredInHistory
@FixFor("DBZ-1264") public void shouldIgnoreCreateIndexForNonCapturedTablesNotStoredInHistory() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldReceiveSchemaForNonWhitelistedTablesAndDatabases
@FixFor("DBZ-683") public void shouldReceiveSchemaForNonWhitelistedTablesAndDatabases() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldHandleIncludeListTables
@FixFor("DBZ-1546") public void shouldHandleIncludeListTables() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldHandleIncludedTables
- Throws:
SQLException, InterruptedException
-
dropDatabases
- Throws:
SQLException
-
getAfter
private org.apache.kafka.connect.data.Struct getAfter(org.apache.kafka.connect.source.SourceRecord record) -
shouldConsumeEventsWithNoSnapshot
- Throws:
SQLException, InterruptedException
-
shouldConsumeEventsWithNonGracefulDisconnect
@FixFor("DBZ-7570 - workaround") public void shouldConsumeEventsWithNonGracefulDisconnect() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldConsumeEventsWithIncludedColumns
@FixFor("DBZ-1962") public void shouldConsumeEventsWithIncludedColumns() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldConsumeEventsWithIncludedColumnsForKeywordNamedTable
@FixFor("DBZ-2525") public void shouldConsumeEventsWithIncludedColumnsForKeywordNamedTable() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldConsumeEventsWithMaskedAndBlacklistedColumns
public void shouldConsumeEventsWithMaskedAndBlacklistedColumns() throws SQLException, InterruptedException
- Throws:
SQLException, InterruptedException
-
shouldConsumeEventsWithMaskedHashedColumns
@FixFor("DBZ-1692") public void shouldConsumeEventsWithMaskedHashedColumns() throws InterruptedException
- Throws:
InterruptedException
-
shouldConsumeEventsWithTruncatedColumns
@FixFor("DBZ-1972") public void shouldConsumeEventsWithTruncatedColumns() throws InterruptedException
- Throws:
InterruptedException
-
shouldEmitTombstoneOnDeleteByDefault
- Throws:
Exception
-
shouldEmitNoTombstoneOnDelete
- Throws:
Exception
-
shouldEmitNoSavepoints
- Throws:
Exception
-
shouldNotParseQueryIfServerOptionDisabled
This test case validates that if you disable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then the original SQL statement for an INSERT statement is NOT parsed into the resulting event.
- Throws:
Exception
-
shouldNotParseQueryIfConnectorNotConfiguredTo
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events but configure the connector to NOT include the query, it will not be included in the event.
- Throws:
Exception
-
shouldParseQueryIfAvailableAndConnectorOptionEnabled
@FixFor("DBZ-706") public void shouldParseQueryIfAvailableAndConnectorOptionEnabled() throws Exception
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then the original SQL statement for an INSERT statement is parsed into the resulting event.
- Throws:
Exception
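A sketch of the setup these query-parsing tests rely on: the server must log the original statement alongside its row events, and the connector must be told to propagate it. The `include.query` property is documented Debezium behavior; the captured statement then appears in the event's `source.query` field.

```properties
# Connector side: propagate the original DML statement into change events.
include.query=true
# Server side (not a connector property; shown here as comments):
#   MySQL:    SET GLOBAL binlog_rows_query_log_events = ON;
#   MariaDB:  SET GLOBAL binlog_annotate_row_events = ON;
```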
-
parseMultipleInsertStatements
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then when you issue multiple INSERT statements, the appropriate SQL statement is parsed into each resulting event.
- Throws:
Exception
-
parseMultipleRowInsertStatement
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then when you issue a single multi-row INSERT, the appropriate SQL statement is parsed into each resulting event.
- Throws:
Exception
-
parseDeleteQuery
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then the original SQL statement for a DELETE over a single row is parsed into the resulting event.
- Throws:
Exception
-
parseMultiRowDeleteQuery
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then when you issue a multi-row DELETE, each resulting event carries the original SQL statement.
- Throws:
Exception
-
parseUpdateQuery
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then the original SQL statement for an UPDATE over a single row is parsed into the resulting event.
- Throws:
Exception
-
parseMultiRowUpdateQuery
This test case validates that if you enable the MySQL option binlog_rows_query_log_events or the MariaDB option binlog_annotate_row_events, then the original SQL statement for a multi-row UPDATE is parsed into each resulting event.
- Throws:
Exception
-
shouldFailToValidateAdaptivePrecisionMode
Specifying the adaptive time.precision.mode is no longer valid and a configuration validation problem should be reported when that configuration option is used. -
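In other words, a configuration such as the commented-out first value below should be rejected, while the documented replacements remain valid (value names per Debezium's `time.precision.mode` documentation; shown as an illustrative fragment):

```properties
# Rejected: 'adaptive' is no longer a supported value.
# time.precision.mode=adaptive
# Valid alternative (database-column-driven precision):
time.precision.mode=adaptive_time_microseconds
# or, to use Kafka Connect's built-in time/date types:
# time.precision.mode=connect
```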
testEmptySchemaLogWarningWithDatabaseWhitelist
- Throws:
Exception
-
testNoEmptySchemaLogWarningWithDatabaseWhitelist
- Throws:
Exception
-
testEmptySchemaWarningWithTableWhitelist
- Throws:
Exception
-
testNoEmptySchemaWarningWithTableWhitelist
- Throws:
Exception
-
shouldRewriteIdentityKey
@FixFor("DBZ-1015") public void shouldRewriteIdentityKey() throws InterruptedException, SQLException
- Throws:
InterruptedException, SQLException
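The key rewriting exercised by these tests is driven by Debezium's documented `message.key.columns` property, which uses a `<fully-qualified table>:<columns>` syntax (the table and column names below are hypothetical, chosen only to illustrate the format):

```properties
# Use (first_name, last_name) instead of the primary key as the Kafka record
# key for inventory.customers; the table part may also be a regular expression.
message.key.columns=inventory.customers:first_name,last_name
```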
-
shouldRewriteIdentityKeyWithWhitespace
@FixFor("DBZ-2957") public void shouldRewriteIdentityKeyWithWhitespace() throws InterruptedException, SQLException
- Throws:
InterruptedException, SQLException
-
shouldRewriteIdentityKeyWithMsgKeyColumnsFieldRegexValidation
@FixFor("DBZ-2957") public void shouldRewriteIdentityKeyWithMsgKeyColumnsFieldRegexValidation() throws InterruptedException, SQLException
- Throws:
InterruptedException, SQLException
-
shouldOutputRecordsInCloudEventsFormat
- Throws:
Exception
-
waitForStreamingRunning
- Throws:
InterruptedException
-
recordsForTopicForRoProductsTable
private List<org.apache.kafka.connect.source.SourceRecord> recordsForTopicForRoProductsTable(AbstractConnectorTest.SourceRecords records) -
shouldEmitHeadersOnPrimaryKeyUpdate
- Throws:
Exception
-
shouldEmitNoEventsForSkippedCreateOperations
- Throws:
Exception
-
shouldEmitNoEventsForSkippedUpdateAndDeleteOperations
@FixFor("DBZ-1895") public void shouldEmitNoEventsForSkippedUpdateAndDeleteOperations() throws Exception
- Throws:
Exception
-
testNoEmptySchemaLogWarningWithSnapshotNever
- Throws:
Exception
-
shouldNotUseOffsetWhenSnapshotIsAlways
- Throws:
Exception
-
testDmlInChangeEvents
- Throws:
Exception
-
shouldNotSendTombstonesWhenNotSupportedByHandler
- Throws:
Exception
-
shouldEmitTruncateOperation
- Throws:
Exception
-
getExpectedQuery
-