Package io.debezium.connector.mysql
Class MySqlOffsetContext
- All Implemented Interfaces:
OffsetContext
-
Nested Class Summary
Nested Classes -
Field Summary
Fields
- private long currentEventLengthInBytes
- private String currentGtidSet
- static final String EVENTS_TO_SKIP_OFFSET_KEY
- static final String GTID_SET_KEY
- private final IncrementalSnapshotContext<TableId> incrementalSnapshotContext
- private boolean inTransaction
- static final String NON_GTID_TRANSACTION_ID_FORMAT
- private String restartBinlogFilename
- private long restartBinlogPosition
- private long restartEventsToSkip
- private String restartGtidSet
- private int restartRowsToSkip
- private static final String SNAPSHOT_COMPLETED_KEY
- private boolean snapshotCompleted
- private final org.apache.kafka.connect.data.Schema sourceInfoSchema
- static final String TIMESTAMP_KEY
- private final TransactionContext transactionContext
- private String transactionId
Fields inherited from class io.debezium.pipeline.CommonOffsetContext
sourceInfo -
Constructor Summary
Constructors
- MySqlOffsetContext(boolean snapshot, boolean snapshotCompleted, TransactionContext transactionContext, IncrementalSnapshotContext<TableId> incrementalSnapshotContext, SourceInfo sourceInfo)
- MySqlOffsetContext(MySqlConnectorConfig connectorConfig, boolean snapshot, boolean snapshotCompleted, SourceInfo sourceInfo) -
Method Summary
Methods
- void changeEventCompleted()
- void commitTransaction()
- void completeEvent(): Capture that we're starting a new event.
- void databaseEvent(String database, Instant timestamp)
- void event(DataCollectionId tableId, Instant timestamp)
- long eventsToSkipUponRestart(): Get the number of events after the last transaction BEGIN that we've already processed.
- IncrementalSnapshotContext<TableId> getIncrementalSnapshotContext()
- Map<String,?> getOffset()
- SourceInfo getSource()
- org.apache.kafka.connect.data.Schema getSourceInfoSchema()
- TransactionContext getTransactionContext()
- String getTransactionId()
- String gtidSet(): Get the string representation of the GTID range for the MySQL binary log file.
- static MySqlOffsetContext initial(MySqlConnectorConfig config)
- boolean isSnapshotCompleted()
- boolean isSnapshotRunning()
- Map<String,Object> offsetUsingPosition(long rowsToSkip)
- void preSnapshotCompletion()
- void preSnapshotStart()
- private void resetTransactionId()
- int rowsToSkipUponRestart(): Get the number of rows beyond the last completely processed event to be skipped upon restart.
- void setBinlogServerId(long serverId)
- void setBinlogStartPoint(String binlogFilename, long positionOfFirstEvent): Set the position in the MySQL binlog where we will start reading.
- void setBinlogThread(long threadId)
- void setCompletedGtidSet(String gtidSet): Set the GTID set that captures all of the GTID transactions that have been completely processed.
- void setEventPosition(long positionOfCurrentEvent, long eventSizeInBytes): Set the position within the MySQL binary log file of the current event.
- void setInitialSkips(long restartEventsToSkip, int restartRowsToSkip)
- void setQuery(String query): Set the original SQL query.
- void setRowNumber(int eventRowNumber, int totalNumberOfRows): Given the row number within a binlog event and the total number of rows in that event, compute the Kafka Connect offset to be included in the produced change event describing the row.
- private void setTransactionId()
- void startGtid(String gtid, String gtidSet): Record that a new GTID transaction has been started and has been included in the set of GTIDs known to the MySQL server.
- void startNextTransaction()
- void tableEvent(String database, Set<TableId> tableIds, Instant timestamp)
- String toString()
Methods inherited from class io.debezium.pipeline.CommonOffsetContext
getSourceInfo, incrementalSnapshotEvents, markSnapshotRecord, postSnapshotCompletion
-
Field Details
-
SNAPSHOT_COMPLETED_KEY
- See Also:
-
EVENTS_TO_SKIP_OFFSET_KEY
- See Also:
-
TIMESTAMP_KEY
- See Also:
-
GTID_SET_KEY
- See Also:
-
NON_GTID_TRANSACTION_ID_FORMAT
- See Also:
-
sourceInfoSchema
private final org.apache.kafka.connect.data.Schema sourceInfoSchema -
snapshotCompleted
private boolean snapshotCompleted -
transactionContext
-
incrementalSnapshotContext
-
restartGtidSet
-
currentGtidSet
-
restartBinlogFilename
-
restartBinlogPosition
private long restartBinlogPosition -
restartRowsToSkip
private int restartRowsToSkip -
restartEventsToSkip
private long restartEventsToSkip -
currentEventLengthInBytes
private long currentEventLengthInBytes -
inTransaction
private boolean inTransaction -
transactionId
-
-
Constructor Details
-
MySqlOffsetContext
public MySqlOffsetContext(boolean snapshot, boolean snapshotCompleted, TransactionContext transactionContext, IncrementalSnapshotContext<TableId> incrementalSnapshotContext, SourceInfo sourceInfo) -
MySqlOffsetContext
public MySqlOffsetContext(MySqlConnectorConfig connectorConfig, boolean snapshot, boolean snapshotCompleted, SourceInfo sourceInfo)
-
-
Method Details
-
getOffset
-
offsetUsingPosition
-
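Neither getOffset() nor offsetUsingPosition(long) carries a description here, so the following is a minimal, self-contained sketch of the kind of offset map they produce. The key names ("file", "pos", "gtids", "row") and the helper itself are assumptions modeled on typical Debezium MySQL offsets, not taken from this class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: builds the kind of Map<String, ?> that getOffset()
// returns for the MySQL connector. Key names are assumptions, not the
// class's own constants (GTID_SET_KEY etc. are not reproduced here).
public class OffsetMapSketch {

    static Map<String, Object> offsetUsingPosition(String binlogFilename,
                                                   long binlogPosition,
                                                   long rowsToSkip,
                                                   String gtidSet) {
        Map<String, Object> offset = new LinkedHashMap<>();
        offset.put("file", binlogFilename); // restart binlog filename
        offset.put("pos", binlogPosition);  // restart binlog position
        if (gtidSet != null) {
            offset.put("gtids", gtidSet);   // completed GTID set, if any
        }
        if (rowsToSkip != 0) {
            offset.put("row", rowsToSkip);  // rows to skip within the event
        }
        return offset;
    }
}
```

On restart, Kafka Connect hands a map of this shape back to the connector, which is why every value needed to resume reading must be encoded in it.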
getSourceInfoSchema
public org.apache.kafka.connect.data.Schema getSourceInfoSchema() -
isSnapshotRunning
public boolean isSnapshotRunning() -
isSnapshotCompleted
public boolean isSnapshotCompleted() -
preSnapshotStart
public void preSnapshotStart() -
preSnapshotCompletion
public void preSnapshotCompletion() -
setTransactionId
private void setTransactionId() -
resetTransactionId
private void resetTransactionId() -
getTransactionId
-
setInitialSkips
public void setInitialSkips(long restartEventsToSkip, int restartRowsToSkip) -
initial
-
event
-
databaseEvent
-
tableEvent
-
getTransactionContext
-
getIncrementalSnapshotContext
-
setBinlogStartPoint
Set the position in the MySQL binlog where we will start reading.
- Parameters:
binlogFilename - the name of the binary log file; may not be null
positionOfFirstEvent - the position in the binary log file to begin processing
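As a rough illustration of the contract above, here is a hypothetical sketch (plain Java, not the Debezium implementation) of the bookkeeping this method implies; the field names mirror the documented restartBinlogFilename and restartBinlogPosition fields:

```java
// Hypothetical stand-in for MySqlOffsetContext's restart bookkeeping.
// The null check enforces the documented "may not be null" contract.
public class BinlogStartSketch {
    String restartBinlogFilename;
    long restartBinlogPosition;

    void setBinlogStartPoint(String binlogFilename, long positionOfFirstEvent) {
        if (binlogFilename == null) {
            throw new IllegalArgumentException("binlogFilename may not be null");
        }
        this.restartBinlogFilename = binlogFilename;
        this.restartBinlogPosition = positionOfFirstEvent;
    }
}
```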
-
setCompletedGtidSet
Set the GTID set that captures all of the GTID transactions that have been completely processed.
- Parameters:
gtidSet - the string representation of the GTID set; may not be null, but may be an empty string if no GTIDs have been previously processed
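A sketch of what honoring this contract could look like. The single-line normalization is an assumption (MySQL reports GTID sets with line breaks, e.g. in SHOW MASTER STATUS output), not a documented behavior of this class:

```java
// Illustrative sketch: store the completed GTID set for inclusion in the
// offset. Normalizing to a single line is an assumed convenience, since
// MySQL's Executed_Gtid_Set value may span multiple lines.
public class GtidSetSketch {
    private String currentGtidSet;

    void setCompletedGtidSet(String gtidSet) {
        // Per the documented contract, an empty string means "no GTIDs yet";
        // this sketch leaves the stored set unchanged in that case.
        if (gtidSet != null && !gtidSet.trim().isEmpty()) {
            this.currentGtidSet = gtidSet.replace("\n", "").replace("\r", "");
        }
    }

    String gtidSet() {
        return currentGtidSet; // may be null if no GTIDs processed yet
    }
}
```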
-
gtidSet
Get the string representation of the GTID range for the MySQL binary log file.
- Returns:
- the string representation of the binlog GTID ranges; may be null
-
startGtid
Record that a new GTID transaction has been started and has been included in the set of GTIDs known to the MySQL server.
- Parameters:
gtid - the string representation of a specific GTID that has been begun; may not be null
gtidSet - the string representation of the GTID set that includes the newly begun GTID; may not be null
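The contract above can be sketched as follows; treating the begun GTID as the transaction identifier is an assumption for illustration, not a statement about the actual implementation:

```java
// Illustrative sketch: record the GTID of the newly begun transaction and
// the server's updated executed-GTID set, per the documented parameters.
public class StartGtidSketch {
    String transactionId;
    String currentGtidSet;

    void startGtid(String gtid, String gtidSet) {
        this.transactionId = gtid; // assumed: the begun GTID identifies the txn
        if (gtidSet != null && !gtidSet.trim().isEmpty()) {
            // assumed single-line normalization, as GTID sets may contain breaks
            this.currentGtidSet = gtidSet.replace("\n", "").replace("\r", "");
        }
    }
}
```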
-
getSource
-
startNextTransaction
public void startNextTransaction() -
commitTransaction
public void commitTransaction() -
completeEvent
public void completeEvent()
Capture that we're starting a new event. -
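Taken together, startNextTransaction(), commitTransaction(), and completeEvent() suggest the restart bookkeeping sketched below. This is a speculative reading of the API, not the class's actual code: each completed event inside a transaction is one more event that a restart, which replays from the transaction's BEGIN, must skip.

```java
// Speculative sketch: count events already processed after BEGIN so a
// restart that replays the transaction can skip them.
public class TxnLifecycleSketch {
    boolean inTransaction;
    long restartEventsToSkip;

    void startNextTransaction() {
        inTransaction = true;
        restartEventsToSkip = 0; // a replay would start from this BEGIN
    }

    void completeEvent() {
        if (inTransaction) {
            restartEventsToSkip++; // one more already-processed event to skip
        }
    }

    void commitTransaction() {
        inTransaction = false;
        restartEventsToSkip = 0; // transaction fully processed; nothing to skip
    }

    long eventsToSkipUponRestart() {
        return restartEventsToSkip;
    }
}
```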
setEventPosition
public void setEventPosition(long positionOfCurrentEvent, long eventSizeInBytes)
Set the position within the MySQL binary log file of the current event.
- Parameters:
positionOfCurrentEvent - the position within the binary log file of the current event
eventSizeInBytes - the size in bytes of this event
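One way to read this contract, sketched in plain Java (an assumption about behavior, not the actual implementation): outside a transaction it is safe to restart after the current event, so the restart position can advance past it; inside a transaction it must stay pinned at the transaction's start.

```java
// Assumed behavior, for illustration: the restart position only advances
// when we are not inside a transaction.
public class EventPositionSketch {
    boolean inTransaction;
    long restartBinlogPosition;
    long currentEventLengthInBytes;

    void setEventPosition(long positionOfCurrentEvent, long eventSizeInBytes) {
        this.currentEventLengthInBytes = eventSizeInBytes;
        if (!inTransaction) {
            // restart just past this event once it is fully handled
            this.restartBinlogPosition = positionOfCurrentEvent + eventSizeInBytes;
        }
        // inside a transaction: leave restartBinlogPosition at the txn start
    }
}
```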
-
setQuery
Set the original SQL query.
- Parameters:
query - the original SQL query that generated the event.
-
changeEventCompleted
public void changeEventCompleted() -
eventsToSkipUponRestart
public long eventsToSkipUponRestart()
Get the number of events after the last transaction BEGIN that we've already processed.
- Returns:
- the number of events in the transaction that have been processed completely
- See Also:
-
rowsToSkipUponRestart
public int rowsToSkipUponRestart()
Get the number of rows beyond the last completely processed event to be skipped upon restart.
- Returns:
- the number of rows to be skipped
-
setRowNumber
public void setRowNumber(int eventRowNumber, int totalNumberOfRows)
Given the row number within a binlog event and the total number of rows in that event, compute the Kafka Connect offset to be included in the produced change event describing the row. This method should always be called before
AbstractSourceInfo.struct().
- Parameters:
eventRowNumber - the 0-based row number within the event for which the offset is to be produced
totalNumberOfRows - the total number of rows within the event being processed
- See Also:
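The restart arithmetic this implies can be sketched as follows. The field name mirrors the documented restartRowsToSkip; the branch logic is an assumption drawn from the method's description, not the class's source:

```java
// Assumed restart arithmetic: after emitting row N of an event, a restart
// should skip the rows already produced; after the last row, it should
// skip the entire event's rows.
public class RowNumberSketch {
    int restartRowsToSkip;

    void setRowNumber(int eventRowNumber, int totalNumberOfRows) {
        if (eventRowNumber < totalNumberOfRows - 1) {
            // more rows follow in this event: resume at the next row
            restartRowsToSkip = eventRowNumber + 1;
        } else {
            // last row of the event: a restart skips all of its rows
            restartRowsToSkip = totalNumberOfRows;
        }
    }
}
```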
-
setBinlogServerId
public void setBinlogServerId(long serverId) -
setBinlogThread
public void setBinlogThread(long threadId) -
toString
-