All Classes and Interfaces
Class
Description
Tracks the number of active partitions per task
Provides all configuration properties for the Spanner connector
A variant of BiConsumer that can be blocked and interrupted.
Represents a supplier of results.
Publishes the latest buffered value once per time period, except when the value must be published immediately.
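The entry above describes a throttled-publishing pattern. A minimal sketch of that pattern, with hypothetical names (this is not the connector's internal API):

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch: buffers the latest value and publishes it at most once
// per period, unless an immediate publish is requested. Names are hypothetical.
public class ThrottledPublisher<T> {
    private final long periodMillis;
    private final AtomicReference<T> buffer = new AtomicReference<>();
    private volatile long lastPublished = 0L;

    public ThrottledPublisher(long periodMillis) {
        this.periodMillis = periodMillis;
    }

    /** Buffers the value; returns it if published now, or null if still buffered. */
    public T offer(T value, boolean immediate) {
        buffer.set(value);
        long now = System.currentTimeMillis();
        if (immediate || now - lastPublished >= periodMillis) {
            lastPublished = now;
            return buffer.getAndSet(null);
        }
        return null; // buffered; goes out on the next eligible call
    }
}
```

Intermediate values offered within the same period are overwritten, so only the latest one is ever published.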
The ChangeStream interface should be implemented by classes that query partitions.
Executes streaming queries to the Spanner database
Common contract for all types of the Spanner Change Stream events
The ChangeStreamEventConsumer is a callback from the ChangeStream class that supplies Spanner events.
Superclass for change stream exceptions
Maps Change Stream events from the raw format into specific DTOs
Wrapper on top of the Spanner result set that provides additional information
Metadata for the Spanner record
Holds the schema of all database tables tracked by a Change Stream
Used to validate the Spanner Change Stream provided in configuration
Checks whether the same partition is already being streamed by other tasks.
A child partition represents a new partition that should be queried.
This operation determines whether a new partition will be processed by the current task or shared with others
Specific DTO for Spanner Change Stream Child event
Tracks number of partitions
Clears a partition from the shared section of the task state after the partition has been picked up by another task
DTO for Spanner DB column
DTO for Spanner DB column type
Helper for parsing column data
Generates Kafka Connect Schema for Spanner data types
CommittingRecordsStreamingChangeEventSource<P extends io.debezium.pipeline.spi.Partition,O extends io.debezium.pipeline.spi.OffsetContext>
Validates all the properties of the Connector configuration
Context to store validation results and config
When a partition merge happens, several tasks may process the same child partition.
Checks whether a connection to the database can be established with the given configuration
Notifies the user when the connector finishes its work
Factory for ChangeStreamDao
Factory for DatabaseClient
Specific DTO for the Spanner Change Stream Data-change event
Data types supported by Spanner
Tracks the delay for which the Spanner connector waits for the next Change Stream event
A wrapper class for the epoch offset
The change stream failure exception.
Maps field values from a JsonNode based on the field schema
Validates specific configuration fields
Checks what partitions are ready for streaming
Tracks the finish state of a partition when handling Kafka Connect commit and finish events.
Specific DTO for Spanner Finish partition event
Determines when to finish a Spanner partition: right after streaming has finished for the partition, or only after the record has been successfully committed to Kafka
A stub version of the scaler monitor
Specific DTO for Spanner Heartbeat event
Utility class containing constants and methods for determining the initial partition.
Utility to parse JsonNode data into various data types
Serializes Java objects into JSON
Creates a Kafka Admin Client based on the configuration.
Utility to retrieve information about Kafka Consumer Group
Provides functionality to create and change Rebalance and Sync topics
Uses the Kafka Admin Client to retrieve the collection of partitions for a Kafka topic.
Kafka record schema for Spanner DB tables
Kafka record schema for Spanner DB table
Builds Kafka record schema for Spanner DB table
Utility to calculate various connector latencies
Tracks different latencies during the record lifecycle, starting from the commit in the database and finishing after receiving confirmation that the record has been committed to Kafka.
This class contains all the logic for the leader task functionality.
Provides Leader Task functionality after a rebalance event happens.
Utility for logging objects in JSON format
Creates threads for watermark calculations
Calculates watermark based on offsets of all partitions
A wrapper class for watermark data
The LowWatermarkProvider interface should be implemented by classes that calculate the low watermark.
Generates watermark update messages to output topics with the latest watermark value
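The watermark entries above describe computing a low watermark from per-partition offsets. A minimal sketch of that idea, with a hypothetical helper (not the connector's LowWatermarkCalculator API), where the low watermark is the minimum committed offset (here, a commit timestamp in millis) across all tracked partitions:

```java
import java.util.Map;

// Illustrative sketch: the low watermark is the minimum offset across
// all partitions; no record older than it is still in flight.
public final class LowWatermarkSketch {
    public static long lowWatermark(Map<String, Long> offsetsByPartition) {
        return offsetsByPartition.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(0L); // no partitions yet: fall back to zero
    }
}
```

The slowest partition therefore gates the watermark, which is what makes the value safe to publish downstream.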
Helper for parsing JSON data
Used to update metric values of the Spanner Connector.
Publishes MetricEvent
Represents a modification in a table emitted within a DataChangeEvent.
Represents the type of modification applied in the DataChangeEvent.
Notifies that a new partition has been created
Tracks total number of issued queries
Tracks the time duration between requesting and receiving
offsets from Kafka Connect.
Provides an interface for actions that should be performed after the task state has changed
The change stream failure exception.
Exception thrown while parsing json string in DTOs classes
A partition represents a Spanner partition.
A listener for the various states of partition querying.
Creates Partition from PartitionState and retrieves the offset for it.
Provides an API for operations on Spanner partitions
Stores offsets and builds the offset map for Kafka Connect
Tracks time difference between now and the start of the partition
Retrieves offsets from Kafka Connect
and publishes appropriate metrics
Monitors partition querying.
Contains information about the current state
of the Spanner partition
Notifies that the status of the partition has changed
Changes the status of a partition: PartitionStateEnum
Partition thread pool; a thread is created for each partition token
Creates a Kafka producer based on the configuration
Utility to calculate quantiles for streaming data
DTO for transferring Rebalance Event information between application layers.
Provides logic for processing Rebalance Events.
Tracks number of actual and expected responses during the last Rebalance Event
Creates a Kafka consumer for the Rebalance topic based on the configuration.
Listens for Rebalance Events from the Rebalance topic and propagates information about them (Member ID, Generation ID, whether the current task is the Leader) for further processing
Removes a finished partition from the task state, as it is no longer needed
Executes an action after a specified delay, unless the action is overridden by a new one.
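The entry above describes a delayed action that a newer action can replace before it runs. A minimal sketch of that pattern, with hypothetical names (not the connector's actual class):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: scheduling a new action cancels the previously
// scheduled action if it has not yet run. Names are hypothetical.
public class OverridableDelayedExecutor implements AutoCloseable {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    public synchronized void schedule(Runnable action, long delayMillis) {
        if (pending != null) {
            pending.cancel(false); // override the previous pending action
        }
        pending = scheduler.schedule(action, delayMillis, TimeUnit.MILLISECONDS);
    }

    @Override
    public void close() {
        scheduler.shutdownNow();
    }
}
```

This is essentially a debounce: only the most recently scheduled action survives the delay window.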
Tracks total number of Runtime errors
Provides functionality to read Spanner DB table and stream schema
Stores schema of the Spanner change stream and database tables
Validates incoming row against stored schema
Information provided by Spanner connector in source field or offsets
Creates SourceInfo from the input DataChangeEvent.
Provides basic functionality for the Spanner SourceTask implementations
Creates Spanner Data Change Events
Coordinates Spanner ChangeEventSources to execute them in order
Creates SpannerStreamingChangeEventSource
and SnapshotChangeEventSource
Creates SnapshotChangeEventSourceMetrics and StreamingChangeEventSourceMetrics.
Represents a change applied to the Spanner database and emits one or more corresponding change records.
This class queries the change stream and sends the received records to the ChangeStream Service.
Factory for SpannerChangeStream
This class queries the change stream, sends child partitions to the SynchronizedPartitionManager, and updates the last commit timestamp for each partition.
Provides implementation for the Spanner Source Connector
Configuration API for the Spanner connector
Common Spanner runtime exception
Spanner implementation for Debezium's CDC SourceTask
Handles all types of errors during the connector runtime and propagates them to the ChangeEventQueue
Spanner dispatcher for data change and schema change events.
Enables metrics metadata to be extracted
from the Spanner event
Tracks total and remaining capacity of the Spanner Queue
Generates Spanner Heartbeat messages
Creates SpannerHeartbeat based on configured properties.
Collects metrics of the Spanner connector
Spanner metrics which are available on JMX
Implementation of OffsetContext.
Describes the Spanner source partition
Contains schema for each DB table
Metrics related to the snapshot phase of the Spanner connector.
Builds Struct from the SourceInfo
Contains contextual information and objects scoped to the lifecycle of Debezium's SourceTask implementations.
Processes all types of Spanner Stream Events
Implementation of metrics related to the streaming phase of the Spanner connector
Checks whether a database table is included for streaming
Validates that the connector's start and end timestamps are consistent with each other
This class provides functionality to calculate statistics:
min, max, avg values, percentiles.
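The statistics entry above mentions min, max, avg, and percentiles over streaming data. A minimal sketch of a percentile/average calculation using the nearest-rank method, with hypothetical names (not the connector's internal implementation):

```java
import java.util.Arrays;

// Illustrative sketch: nearest-rank percentile and average over a sample.
// Names are hypothetical; a streaming implementation would use a sliding
// window or a quantile sketch rather than sorting the full sample.
public final class StatsSketch {
    public static double percentile(double[] values, double p) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        // nearest-rank: ceil(p/100 * n), 1-based index into the sorted sample
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank, 1) - 1];
    }

    public static double avg(double[] values) {
        return Arrays.stream(values).average().orElse(Double.NaN);
    }
}
```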
Contains all information about the spanner event
Internal queue which holds Spanner Events before they are processed
Tracks the number of intervals spent waiting without receiving Heartbeat records from the Spanner Change Stream
The change stream failure exception.
What to do if a partition gets stuck
Notifies that the state of the task has changed and that other tasks should be aware of it
Creates a Kafka consumer for the Sync topic based on the configuration.
Maps the SyncEventProtos.SyncEvent protocol buffer to the TaskSyncEvent class
Provides logic for processing Sync Events of different types: New Epoch, Rebalance Answers, and Regular events
Utility to merge incoming task states
with the current task state
DTO for transferring Sync Event information between application layers.
Protobuf enum io.debezium.connector.spanner.scale.proto.MessageType
Protobuf type io.debezium.connector.spanner.scale.proto.PartitionState
Protobuf type io.debezium.connector.spanner.scale.proto.PartitionState
Protobuf enum io.debezium.connector.spanner.scale.proto.State
Protobuf type io.debezium.connector.spanner.scale.proto.SyncEvent
Protobuf type io.debezium.connector.spanner.scale.proto.SyncEvent
Protobuf type io.debezium.connector.spanner.scale.proto.TaskState
Protobuf type io.debezium.connector.spanner.scale.proto.TaskState
Maps the TaskSyncEvent class to the SyncEventProtos.SyncEvent protocol buffer, which is the storage format used for the internal Sync Topic.
This class coordinates between the connector producers and consumers:
The RebalancingEventListener producer produces events that are consumed by the RebalanceHandler.
This class produces events depending on the type of record received from the change
stream (i.e.
Provides the identifier for Spanner DB table
Contains schema for all table columns
Checks which partitions are ready for streaming and schedules them
Takes over a partition that was shared by another task
This task contains the functionality to rebalance change stream partitions from obsolete tasks to
survived tasks after a rebalance event.
This task contains the functionality to rebalance change stream partitions from obsolete tasks
to survived tasks after a rebalance event.
Rebalancing partitions across tasks
Calculates the new number of tasks that should be present in the connector after scaling
Checks whether the current task count is adequate for the current load or needs to be scaled out/in
Monitors task states to decide when tasks should be scaled out/in
Creates TaskScalerMonitor based on configuration.
This class returns the initial task count upon startup.
Utility to calculate metrics required for
task auto-scaling, based on internal states
of current tasks
Represents the internal state of connector task
Interface to mark all types of internal state change events
This class processes all types of TaskStateChangeEvents (i.e.
Owns the queue of TaskStateChangeEvent elements, polls them in a separate thread, and sends them to TaskStateChangeEventHandler for further processing.
Utility for grouping and filtering tasks that did or did not survive the Rebalance Event
Represents the state of the current task and the collected incremental states of other tasks taken from the Sync Topic
Holds the current state of the connector's task.
Exposes internal state of the task
Tasks exchange information about their internal states using TaskSyncEvents published to the Sync topic.
Consumes messages from the Sync Topic
Sends Sync Events with task internal state updates to the Kafka Sync topic
Utility to generate unique connector task identifiers
Utility for waiting a specified duration of time
Represents the capture type of a change stream.