public class SnapshotReader extends AbstractReader

A component that performs a snapshot of a MySQL server, and records the schema changes in MySqlSchema.

Nested Class Summary

| Modifier and Type | Class and Description |
|---|---|
| protected static interface | SnapshotReader.RecordRecorder |
Field Summary

| Modifier and Type | Field and Description |
|---|---|
| private boolean | minimalBlocking |
| private Runnable | onSuccessfulCompletion |
| private SnapshotReader.RecordRecorder | recorder |
| private Thread | thread |
Fields inherited from class AbstractReader: context, logger

Constructor Summary

| Constructor and Description |
|---|
| SnapshotReader(MySqlTaskContext context) Create a snapshot reader. |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| private Statement | createStatement(Connection connection) |
| private Statement | createStatementWithLargeResultSet(Connection connection) Create a JDBC statement that can be used for large result sets. |
| protected void | doCleanup() The reader has completed sending all enqueued records, so clean up any resources that remain. |
| protected void | doStart() Start the snapshot and return immediately. |
| protected void | doStop() Stop the snapshot from running. |
| protected void | enqueueSchemaChanges(String dbName, String ddlStatement) |
| protected void | execute() Perform the snapshot using the same logic as the "mysqldump" utility. |
| SnapshotReader | generateInsertEvents() Set this reader's execution to produce an Envelope.Operation.CREATE event for each row. |
| SnapshotReader | generateReadEvents() Set this reader's execution to produce an Envelope.Operation.READ event for each row. |
| private void | logRolesForCurrentUser(JdbcConnection mysql) |
| private void | logServerInformation(JdbcConnection mysql) |
| SnapshotReader | onSuccessfulCompletion(Runnable onSuccessfulCompletion) Set the non-blocking function that should be called upon successful completion of the snapshot, after the snapshot generates its final record and all such records have been polled. |
| protected void | recordRowAsInsert(RecordMakers.RecordsForTable recordMaker, Object[] row, long ts) |
| protected void | recordRowAsRead(RecordMakers.RecordsForTable recordMaker, Object[] row, long ts) |
| protected org.apache.kafka.connect.source.SourceRecord | replaceOffset(org.apache.kafka.connect.source.SourceRecord record) Utility method to replace the offset in the given record with the latest. |
| SnapshotReader | useMinimalBlocking(boolean minimalBlocking) Set whether this reader's execution should block other transactions as minimally as possible by releasing the read lock as early as possible. |
Methods inherited from class AbstractReader: completeSuccessfully, enqueueRecord, failed, failed, isRunning, poll, start, stop, wrap

Field Detail

private boolean minimalBlocking
private SnapshotReader.RecordRecorder recorder
private volatile Thread thread
private volatile Runnable onSuccessfulCompletion
Constructor Detail

public SnapshotReader(MySqlTaskContext context)
Create a snapshot reader.
Parameters: context - the task context in which this reader is running; may not be null

Method Detail

public SnapshotReader onSuccessfulCompletion(Runnable onSuccessfulCompletion)
Set the non-blocking function that should be called upon successful completion of the snapshot, after the snapshot generates its final record and all such records have been polled.
Parameters: onSuccessfulCompletion - the function; may be null

public SnapshotReader useMinimalBlocking(boolean minimalBlocking)
Set whether this reader's execution should block other transactions as minimally as possible by releasing the read lock as early as possible. Although the snapshot process should obtain a consistent snapshot even when releasing the lock as early as possible, it may be desirable to explicitly hold the read lock until execution completes; doing so prevents all updates to the database during the snapshot process.
Parameters: minimalBlocking - true if the lock is to be released as early as possible, or false if the lock is to be held for the entire execution

public SnapshotReader generateReadEvents()
Set this reader's execution to produce an Envelope.Operation.READ event for each row.

public SnapshotReader generateInsertEvents()
Set this reader's execution to produce an Envelope.Operation.CREATE event for each row.

protected void doStart()
Start the snapshot and return immediately. Once started, records can be obtained by repeatedly calling AbstractReader.poll() until that method returns null.
Specified by: doStart in class AbstractReader

protected void doStop()
Stop the snapshot from running.
Specified by: doStop in class AbstractReader

protected void doCleanup()
The reader has completed sending all enqueued records, so clean up any resources that remain.
Specified by: doCleanup in class AbstractReader

protected void execute()
Perform the snapshot using the same logic as the "mysqldump" utility.
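The configuration methods above (useMinimalBlocking, generateReadEvents, generateInsertEvents, onSuccessfulCompletion) each return the reader itself, so they can be chained into a single fluent configuration expression before the reader is started. A minimal self-contained sketch of that pattern; FluentReaderSketch is an illustrative stand-in, not a Debezium type:

```java
// Stand-in illustrating the fluent configuration pattern used by
// SnapshotReader: each setter returns `this` so calls can be chained,
// and the completion callback runs once the final record has been polled.
class FluentReaderSketch {
    private boolean minimalBlocking = true;
    private Runnable onSuccessfulCompletion;
    private String eventType = "READ";

    FluentReaderSketch useMinimalBlocking(boolean minimalBlocking) {
        this.minimalBlocking = minimalBlocking;
        return this;
    }

    FluentReaderSketch generateReadEvents() {
        this.eventType = "READ"; // snapshot rows reported as READ events
        return this;
    }

    FluentReaderSketch generateInsertEvents() {
        this.eventType = "CREATE"; // snapshot rows reported as CREATE events
        return this;
    }

    FluentReaderSketch onSuccessfulCompletion(Runnable callback) {
        this.onSuccessfulCompletion = callback; // may be null
        return this;
    }

    // Invoked after the last enqueued record has been polled.
    void completeSuccessfully() {
        if (onSuccessfulCompletion != null) onSuccessfulCompletion.run();
    }

    String eventType() {
        return eventType;
    }
}
```

With the real class, configuration then reads as one expression, e.g. new SnapshotReader(context).useMinimalBlocking(true).generateReadEvents().onSuccessfulCompletion(cleanup), after which start() begins the snapshot and poll() is called until it returns null.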
private Statement createStatementWithLargeResultSet(Connection connection) throws SQLException
Create a JDBC statement that can be used for large result sets.
By default, the MySQL Connector/J driver retrieves all rows for a ResultSet and stores them in memory. In most cases this is the most efficient way to operate and, due to the design of the MySQL network protocol, is easier to implement. However, when a ResultSet has a large number of rows or large values, the driver may be unable to allocate sufficient heap space in the JVM, resulting in an OutOfMemoryError. See DBZ-94 for details.
This method handles such cases using the technique recommended for MySQL: creating the JDBC Statement with forward-only cursor and read-only concurrency flags, and with a minimum-value fetch size hint.
Parameters: connection - the JDBC connection; may not be null
Throws: SQLException - if there is a problem creating the statement
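The streaming technique described above can be sketched as follows. createStreamingStatement is an illustrative name (the real method is private to the reader), and the treatment of Integer.MIN_VALUE as a row-streaming hint is specific to MySQL Connector/J:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class StreamingStatements {
    // Forward-only, read-only statement whose fetch size of Integer.MIN_VALUE
    // tells MySQL Connector/J to stream rows one at a time instead of
    // buffering the entire ResultSet in the JVM heap (the DBZ-94 scenario).
    static Statement createStreamingStatement(Connection connection) throws SQLException {
        Statement stmt = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                    ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE); // streaming hint; MySQL-specific
        return stmt;
    }
}
```

Note that while such a statement is streaming, the connection cannot issue other queries until its ResultSet is fully consumed or closed, so callers should read the rows promptly.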
private Statement createStatement(Connection connection) throws SQLException
Throws: SQLException

private void logServerInformation(JdbcConnection mysql)

private void logRolesForCurrentUser(JdbcConnection mysql)

protected org.apache.kafka.connect.source.SourceRecord replaceOffset(org.apache.kafka.connect.source.SourceRecord record)
Utility method to replace the offset in the given record with the latest.
Parameters: record - the record

protected void recordRowAsRead(RecordMakers.RecordsForTable recordMaker, Object[] row, long ts) throws InterruptedException
Throws: InterruptedException

protected void recordRowAsInsert(RecordMakers.RecordsForTable recordMaker, Object[] row, long ts) throws InterruptedException
Throws: InterruptedException

Copyright © 2016 JBoss by Red Hat. All rights reserved.