@NotThreadSafe final class SourceInfo extends Object
The source partition information describes the database whose log is being consumed. Typically, the
database is identified by the host address and port number of the MySQL server and the name of the database. Here's a JSON-like
representation of an example partition:
{
"server" : "production-server"
}
The source offset information is included in each event and captures where the connector should restart
if this event's offset is the last one recorded. The offset includes the binlog filename, the position of the
first event in the binlog, the number of events to skip, and the number of rows to also skip.
Here's a JSON-like representation of an example offset:
{
"server_id": 112233,
"ts_sec": 1465937,
"gtid": "db58b0ae-2c10-11e6-b284-0242ac110002:199",
"file": "mysql-bin.000003",
"pos" = 990,
"event" = 0,
"row": 0,
"snapshot": true
}
The "gtids" field only appears in offsets produced when GTIDs are enabled. The "snapshot" field only appears in
offsets produced when the connector is in the middle of a snapshot. And finally, the "ts" field contains the
seconds since Unix epoch (since Jan 1, 1970) of the MySQL event; the message envelopes also have a
timestamp, but that timestamp is the milliseconds since since Jan 1, 1970.
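As an illustration of how such an offset map could be assembled, here is a minimal sketch using plain java.util collections. The class name OffsetSketch and its parameter names are hypothetical; this is not Debezium's actual implementation, only a sketch of the conditional-field rules described above:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper that assembles an offset map shaped like the example above.
// The keys mirror those shown in the JSON; "gtids" and "snapshot" are added only
// under the conditions the text describes.
public class OffsetSketch {
    public static Map<String, Object> offset(long serverId, long tsSec, String gtidSet,
                                             String binlogFile, long binlogPos,
                                             long eventsToSkip, int rowsToSkip,
                                             boolean snapshotInEffect) {
        Map<String, Object> offset = new HashMap<>();
        offset.put("server_id", serverId);
        offset.put("ts_sec", tsSec);
        offset.put("file", binlogFile);
        offset.put("pos", binlogPos);
        offset.put("event", eventsToSkip);
        offset.put("row", rowsToSkip);
        if (gtidSet != null) {
            offset.put("gtids", gtidSet);   // only present when GTIDs are enabled
        }
        if (snapshotInEffect) {
            offset.put("snapshot", true);   // only present mid-snapshot
        }
        return offset;
    }

    public static void main(String[] args) {
        Map<String, Object> o = offset(112233L, 1465937L,
                "db58b0ae-2c10-11e6-b284-0242ac110002:199",
                "mysql-bin.000003", 990L, 0L, 0, true);
        System.out.println(o.get("file") + ":" + o.get("pos"));
    }
}
```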
Each change event envelope also includes the source struct, which contains MySQL information about that
particular event, including a mixture of the fields from the partition (which are renamed in the source to
make more sense), the binlog filename and position where the event can be found, and, when GTIDs are enabled, the GTID of the
transaction in which the event occurs. As with the offset, the "snapshot" field only appears for events produced
while the connector is in the middle of a snapshot. Note that this information may differ from the offset information,
since the connector may need to restart from either just after the most recently completed transaction or the beginning
of the most recently started transaction (whichever appears later in the binlog).
Here's a JSON-like representation of the source metadata for an event that corresponds to the above partition and offset:
{
"name": "production-server",
"server_id": 112233,
"ts_sec": 1465937,
"gtid": "db58b0ae-2c10-11e6-b284-0242ac110002:199",
"file": "mysql-bin.000003",
"pos" = 1081,
"row": 0,
"snapshot": true,
"thread" : 1,
"db" : "inventory",
"table" : "products"
}
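The restart rule above (resume from just after the most recently completed transaction, or from the beginning of the most recently started one, whichever appears later in the binlog) can be sketched as follows. RestartPointSketch and its parameter names are hypothetical, for illustration only:

```java
// Illustrative sketch of the restart-point rule described above: restart from
// whichever appears later in the binlog, the end of the last completed
// transaction or the BEGIN of the transaction currently in progress.
public class RestartPointSketch {
    public static long restartPosition(long lastCompletedTxnEnd, long currentTxnStart,
                                       boolean inTransaction) {
        if (inTransaction) {
            // Re-read the in-progress transaction from its BEGIN event if that
            // appears later in the binlog than the last commit.
            return Math.max(lastCompletedTxnEnd, currentTxnStart);
        }
        return lastCompletedTxnEnd;
    }

    public static void main(String[] args) {
        // Mid-transaction: the BEGIN at 990 is later than the last commit at 750.
        System.out.println(restartPosition(750L, 990L, true)); // prints 990
    }
}
```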
| Constructor and Description |
|---|
| SourceInfo() |
| Modifier and Type | Method and Description |
|---|---|
| String | binlogFilename() Get the name of the MySQL binary log file that has last been processed. |
| long | binlogPosition() Get the position within the MySQL binary log file of the next event to be processed. |
| private boolean | booleanOffsetValue(Map<String,?> values, String key) |
| void | commitTransaction() |
| void | completeEvent() Capture that we're starting a new event. |
| void | completeSnapshot() Denote that a snapshot has completed successfully. |
| long | eventsToSkipUponRestart() Get the number of events after the last transaction BEGIN that we've already processed. |
| String | gtidSet() Get the string representation of the GTID range for the MySQL binary log file. |
| static boolean | isPositionAtOrBefore(Document recorded, Document desired, Predicate<String> gtidFilter) |
| boolean | isSnapshotInEffect() Determine whether a snapshot is currently in effect. |
| private long | longOffsetValue(Map<String,?> values, String key) |
| void | markLastSnapshot() Denote that a snapshot will be complete after one last record. |
| Map<String,?> | offset() Get the Kafka Connect detail about the source "offset", which describes the position within the source where we have last read. |
| Map<String,?> | offsetForRow(int eventRowNumber, int totalNumberOfRows) Given the row number within a binlog event and the total number of rows in that event, compute and return the Kafka Connect offset that is to be included in the produced change event describing the row. |
| private Map<String,?> | offsetUsingPosition(long rowsToSkip) |
| Map<String,String> | partition() Get the Kafka Connect detail about the source "partition", which describes the portion of the source that we are consuming. |
| protected long | restartBinlogPosition() Get the position within the MySQL binary log file of the most recently processed event. |
| int | rowsToSkipUponRestart() Get the number of rows beyond the last completely processed event to be skipped upon restart. |
| org.apache.kafka.connect.data.Schema | schema() |
| String | serverName() Get the logical identifier of the database that is the source of the events. |
| void | setBinlogServerId(long serverId) Set the server ID as found within the MySQL binary log file. |
| void | setBinlogStartPoint(String binlogFilename, long positionOfFirstEvent) Set the position in the MySQL binlog where we will start reading. |
| void | setBinlogThread(long threadId) Set the identifier of the MySQL thread that generated the most recent event. |
| void | setBinlogTimestampSeconds(long timestampInSeconds) Set the number of seconds since Unix epoch (January 1, 1970) as found within the MySQL binary log file. |
| void | setCompletedGtidSet(String gtidSet) Set the GTID set that captures all of the GTID transactions that have been completely processed. |
| void | setEventPosition(long positionOfCurrentEvent, long eventSizeInBytes) Set the position within the MySQL binary log file of the current event. |
| void | setOffset(Map<String,?> sourceOffset) Set the source offset, as read from Kafka Connect. |
| void | setServerName(String logicalId) Set the database identifier. |
| void | startGtid(String gtid, String gtidSet) Record that a new GTID transaction has been started and has been included in the set of GTIDs known to the MySQL server. |
| void | startNextTransaction() |
| void | startSnapshot() Denote that a snapshot is being (or has been) started. |
| org.apache.kafka.connect.data.Struct | struct() |
| org.apache.kafka.connect.data.Struct | struct(TableId tableId) |
| String | toString() |
public static final String SERVER_ID_KEY
public static final String SERVER_NAME_KEY
public static final String SERVER_PARTITION_KEY
public static final String GTID_SET_KEY
public static final String GTID_KEY
public static final String EVENTS_TO_SKIP_OFFSET_KEY
public static final String BINLOG_FILENAME_OFFSET_KEY
public static final String BINLOG_POSITION_OFFSET_KEY
public static final String BINLOG_ROW_IN_EVENT_OFFSET_KEY
public static final String TIMESTAMP_KEY
public static final String SNAPSHOT_KEY
public static final String THREAD_KEY
public static final String DB_NAME_KEY
public static final String TABLE_NAME_KEY
public static final org.apache.kafka.connect.data.Schema SCHEMA
private String currentGtidSet
private String currentGtid
private String currentBinlogFilename
private long currentBinlogPosition
private int currentRowNumber
private long currentEventLengthInBytes
private String restartGtidSet
private String restartBinlogFilename
private long restartBinlogPosition
private long restartEventsToSkip
private int restartRowsToSkip
private boolean inTransaction
private String serverName
private long serverId
private long binlogTimestampSeconds
private long threadId
private boolean lastSnapshot
private boolean nextSnapshot
public void setServerName(String logicalId)
Set the database identifier.
logicalId - the logical identifier for the database; may not be null

public Map<String,String> partition()
Get the Kafka Connect detail about the source "partition", which describes the portion of the source that we are consuming; in this case it identifies the database server.
The resulting map is mutable for efficiency reasons (this information rarely changes), but should not be mutated.
public void setBinlogStartPoint(String binlogFilename, long positionOfFirstEvent)
Set the position in the MySQL binlog where we will start reading.
binlogFilename - the name of the binary log file; may not be null
positionOfFirstEvent - the position in the binary log file to begin processing

public void setEventPosition(long positionOfCurrentEvent, long eventSizeInBytes)
Set the position within the MySQL binary log file of the current event.
positionOfCurrentEvent - the position within the binary log file of the current event
eventSizeInBytes - the size in bytes of this event

public Map<String,?> offset()
Get the Kafka Connect detail about the source "offset", which describes the position within the source where we have last read.
public Map<String,?> offsetForRow(int eventRowNumber, int totalNumberOfRows)
Given the row number within a binlog event and the total number of rows in that event, compute and return the Kafka Connect offset that is to be included in the produced change event describing the row.
This method should always be called before struct().
eventRowNumber - the 0-based row number within the event for which the offset is to be produced
totalNumberOfRows - the total number of rows within the event being processed
See also: struct()

public org.apache.kafka.connect.data.Schema schema()
Returns: the Schema; never null
See also: struct()

public org.apache.kafka.connect.data.Struct struct()
Get the Struct representation of the source partition() and offset() information. The Struct complies with the SCHEMA for the MySQL connector.
This method should always be called after offsetForRow(int, int).
Returns: the Struct; never null
See also: schema()

public org.apache.kafka.connect.data.Struct struct(TableId tableId)
Get the Struct representation of the source partition() and offset() information. The Struct complies with the SCHEMA for the MySQL connector.
This method should always be called after offsetForRow(int, int).
tableId - the table that should be included in the struct; may be null
Returns: the Struct; never null
See also: schema()

public boolean isSnapshotInEffect()
Determine whether a snapshot is currently in effect.
Returns: true if a snapshot is in effect, or false otherwise

public void startNextTransaction()

public void completeEvent()
Capture that we're starting a new event.

public long eventsToSkipUponRestart()
Get the number of events after the last transaction BEGIN that we've already processed.
See also: completeEvent(), startNextTransaction()

public void commitTransaction()
public void startGtid(String gtid, String gtidSet)
Record that a new GTID transaction has been started and has been included in the set of GTIDs known to the MySQL server.
gtid - the string representation of a specific GTID that has been begun; may not be null
gtidSet - the string representation of the GTID set that includes the newly begun GTID; may not be null

public void setCompletedGtidSet(String gtidSet)
Set the GTID set that captures all of the GTID transactions that have been completely processed.
gtidSet - the string representation of the GTID set; may not be null, but may be an empty string if no GTIDs have been previously processed

public void setBinlogServerId(long serverId)
Set the server ID as found within the MySQL binary log file.
serverId - the server ID found within the binary log file

public void setBinlogTimestampSeconds(long timestampInSeconds)
Set the number of seconds since Unix epoch (January 1, 1970) as found within the MySQL binary log file.
timestampInSeconds - the timestamp in seconds found within the binary log file

public void setBinlogThread(long threadId)
Set the identifier of the MySQL thread that generated the most recent event.
threadId - the thread identifier; may be negative if not known

public void startSnapshot()
Denote that a snapshot is being (or has been) started.

public void markLastSnapshot()
Denote that a snapshot will be complete after one last record.

public void completeSnapshot()
Denote that a snapshot has completed successfully.

public void setOffset(Map<String,?> sourceOffset)
Set the source offset, as read from Kafka Connect.
sourceOffset - the previously-recorded Kafka Connect source offset
Throws: org.apache.kafka.connect.errors.ConnectException - if any offset parameter values are missing, invalid, or of the wrong type

public String gtidSet()
Get the string representation of the GTID range for the MySQL binary log file.
public String binlogFilename()
Get the name of the MySQL binary log file that has last been processed.
See also: setBinlogStartPoint(String, long)

public long binlogPosition()
Get the position within the MySQL binary log file of the next event to be processed.

protected long restartBinlogPosition()
Get the position within the MySQL binary log file of the most recently processed event.

public int rowsToSkipUponRestart()
Get the number of rows beyond the last completely processed event to be skipped upon restart.

public String serverName()
Get the logical identifier of the database that is the source of the events.
See also: setServerName(String)

public static boolean isPositionAtOrBefore(Document recorded, Document desired, Predicate<String> gtidFilter)
Determine whether the first offset is at or before the point in time of the second offset, where the offsets are given in the JSON representation of the maps returned by offset().
This logic makes a significant assumption: once a MySQL server/cluster has GTIDs enabled, they will never be disabled. This is the only way to compare a position that has a GTID with one that does not, and we conclude that any position with a GTID is *after* the position without one.
When both positions have GTIDs, we compare the positions using only the GTIDs. Of course, if the GTIDs are the same, then we also look at whether they have snapshots enabled.
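A rough sketch of this comparison rule follows, under the stated assumption that GTIDs are never disabled once enabled. PositionSketch is a hypothetical stand-in for the Document offsets; it omits the gtidFilter and uses a simplified stand-in for real GTID-set containment:

```java
// Illustrative sketch of the comparison rule described above. A position is a
// binlog coordinate plus an optional GTID set; the real method operates on
// Document offsets and supports GTID source filtering, both omitted here.
public class PositionSketch {
    final String gtids;        // null when GTIDs are not enabled
    final String binlogFile;
    final long binlogPos;

    PositionSketch(String gtids, String binlogFile, long binlogPos) {
        this.gtids = gtids;
        this.binlogFile = binlogFile;
        this.binlogPos = binlogPos;
    }

    // True if 'recorded' is at or before 'desired'.
    static boolean isAtOrBefore(PositionSketch recorded, PositionSketch desired) {
        if (recorded.gtids != null && desired.gtids == null) {
            // A position with a GTID is assumed to be after any position without one.
            return false;
        }
        if (recorded.gtids == null && desired.gtids != null) {
            return true;
        }
        if (recorded.gtids != null) {
            // Both have GTIDs: compare only the GTIDs (substring check here;
            // the real logic understands GTID-set containment).
            return desired.gtids.contains(recorded.gtids);
        }
        // Neither has GTIDs: fall back to binlog filename and position.
        int byFile = recorded.binlogFile.compareTo(desired.binlogFile);
        return byFile < 0 || (byFile == 0 && recorded.binlogPos <= desired.binlogPos);
    }

    public static void main(String[] args) {
        PositionSketch a = new PositionSketch(null, "mysql-bin.000003", 990L);
        PositionSketch b = new PositionSketch(null, "mysql-bin.000003", 1081L);
        System.out.println(isAtOrBefore(a, b)); // prints true
    }
}
```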
recorded - the position obtained from recorded history; never null
desired - the desired position that we want to obtain, which should be after some recorded positions, at some recorded positions, and before other recorded positions; never null
gtidFilter - the predicate function that will return true if a GTID source is to be included, or false if a GTID source is to be excluded; may be null if no filtering is to be done
Returns: true if the recorded position is at or before the desired position; or false otherwise

Copyright © 2017 JBoss by Red Hat. All rights reserved.