Uses of Class
io.debezium.connector.cassandra.RowData
Packages that use RowData

  io.debezium.connector.cassandra
Uses of RowData in io.debezium.connector.cassandra
Fields in io.debezium.connector.cassandra declared as RowData

  private RowData
      Record.rowData

Methods in io.debezium.connector.cassandra that return RowData

  RowData
      FieldFilterSelector.FieldFilter.apply(RowData rowData)

  RowData
      RowData.copy()

  private static RowData
      SnapshotProcessor.extractRowData(com.datastax.driver.core.Row row,
          List<com.datastax.driver.core.ColumnMetadata> columns,
          Set<String> partitionKeyNames, Set<String> clusteringKeyNames,
          Object executionTime)
      Extracts the relevant row data from Row and updates the maximum
      writetime for each row.

  RowData
      Record.getRowData()

Methods in io.debezium.connector.cassandra with parameters of type RowData

  RowData
      FieldFilterSelector.FieldFilter.apply(RowData rowData)

  private void
      RecordMaker.createRecord(String cluster, OffsetPosition offsetPosition,
          KeyspaceTable keyspaceTable, boolean snapshot, Instant tsMicro,
          RowData data, org.apache.kafka.connect.data.Schema keySchema,
          org.apache.kafka.connect.data.Schema valueSchema, boolean markOffset,
          io.debezium.function.BlockingConsumer<Record> consumer,
          Record.Operation operation)

  void
      RecordMaker.delete(String cluster, OffsetPosition offsetPosition,
          KeyspaceTable keyspaceTable, boolean snapshot, Instant tsMicro,
          RowData data, org.apache.kafka.connect.data.Schema keySchema,
          org.apache.kafka.connect.data.Schema valueSchema, boolean markOffset,
          io.debezium.function.BlockingConsumer<Record> consumer)

  void
      RecordMaker.insert(String cluster, OffsetPosition offsetPosition,
          KeyspaceTable keyspaceTable, boolean snapshot, Instant tsMicro,
          RowData data, org.apache.kafka.connect.data.Schema keySchema,
          org.apache.kafka.connect.data.Schema valueSchema, boolean markOffset,
          io.debezium.function.BlockingConsumer<Record> consumer)

  private void
      CommitLogReadHandlerImpl.populateClusteringColumns(RowData after,
          org.apache.cassandra.db.rows.Row row,
          org.apache.cassandra.db.partitions.PartitionUpdate pu)

  private void
      CommitLogReadHandlerImpl.populatePartitionColumns(RowData after,
          org.apache.cassandra.db.partitions.PartitionUpdate pu)

  private void
      CommitLogReadHandlerImpl.populateRegularColumns(RowData after,
          org.apache.cassandra.db.rows.Row row,
          CommitLogReadHandlerImpl.RowType rowType, KeyValueSchema schema)

  void
      RecordMaker.update(String cluster, OffsetPosition offsetPosition,
          KeyspaceTable keyspaceTable, boolean snapshot, Instant tsMicro,
          RowData data, org.apache.kafka.connect.data.Schema keySchema,
          org.apache.kafka.connect.data.Schema valueSchema, boolean markOffset,
          io.debezium.function.BlockingConsumer<Record> consumer)

Constructors in io.debezium.connector.cassandra with parameters of type RowData

  ChangeRecord(SourceInfo source, RowData rowData,
      org.apache.kafka.connect.data.Schema keySchema,
      org.apache.kafka.connect.data.Schema valueSchema,
      Record.Operation op, boolean markOffset)

  Record(SourceInfo source, RowData rowData,
      org.apache.kafka.connect.data.Schema keySchema,
      org.apache.kafka.connect.data.Schema valueSchema,
      Record.Operation op, boolean shouldMarkOffset, long ts)

  TombstoneRecord(SourceInfo source, RowData rowData,
      org.apache.kafka.connect.data.Schema keySchema)