Class KafkaCluster.Usage

java.lang.Object
io.debezium.kafka.KafkaCluster.Usage
Enclosing class:
KafkaCluster

public class KafkaCluster.Usage extends Object
A set of methods to use a running KafkaCluster.
  • Constructor Details

    • Usage

      public Usage()
  • Method Details

    • getConsumerProperties

      public Properties getConsumerProperties(String groupId, String clientId, org.apache.kafka.clients.consumer.OffsetResetStrategy autoOffsetReset)
      Get a new set of properties for consumers that want to talk to this server.
      Parameters:
      groupId - the group ID for the consumer; may not be null
      clientId - the optional identifier for the client; may be null if not needed
      autoOffsetReset - how to pick a starting offset when there is no initial offset in ZooKeeper or if an offset is out of range; may be null for the default to be used
      Returns:
      the mutable consumer properties
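The returned Properties can be handed directly to a Kafka consumer. As a hedged sketch only, the snippet below mirrors what a method with this signature would typically populate (standard Kafka client keys such as `group.id`, `client.id`, and `auto.offset.reset`); the exact entries set by KafkaCluster.Usage are not listed in this document, so the helper here is hypothetical.

```java
import java.util.Properties;

public class ConsumerPropsSketch {
    // Hypothetical mirror of getConsumerProperties(groupId, clientId, autoOffsetReset);
    // the exact keys set by KafkaCluster.Usage are an assumption, not confirmed by this doc.
    static Properties consumerProperties(String groupId, String clientId, String autoOffsetReset) {
        if (groupId == null) throw new IllegalArgumentException("groupId may not be null");
        Properties props = new Properties();
        props.setProperty("group.id", groupId);                          // required
        if (clientId != null) props.setProperty("client.id", clientId);  // optional
        if (autoOffsetReset != null) {
            // OffsetResetStrategy constants correspond to "earliest", "latest", or "none"
            props.setProperty("auto.offset.reset", autoOffsetReset.toLowerCase());
        }
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProperties("test-group", null, "EARLIEST");
        System.out.println(props.getProperty("group.id"));          // test-group
        System.out.println(props.getProperty("auto.offset.reset")); // earliest
        System.out.println(props.containsKey("client.id"));         // false
    }
}
```

Because the result is mutable, a test can adjust individual settings before constructing the consumer.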
    • getProducerProperties

      public Properties getProducerProperties(String clientId)
      Get a new set of properties for producers that want to talk to this server.
      Parameters:
      clientId - the optional identifier for the client; may be null if not needed
      Returns:
      the mutable producer properties
    • createProducer

      public <K, V> KafkaCluster.InteractiveProducer<K,V> createProducer(String producerName, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer)
      Create a simple producer that can be used to write messages to the cluster.
      Parameters:
      producerName - the name of the producer; may not be null
      keySerializer - the serializer for the keys; may not be null
      valueSerializer - the serializer for the values; may not be null
      Returns:
      the object that can be used to produce messages; never null
    • createProducer

      public KafkaCluster.InteractiveProducer<String,io.debezium.document.Document> createProducer(String producerName)
      Create a simple producer that can be used to write Document messages to the cluster.
      Parameters:
      producerName - the name of the producer; may not be null
      Returns:
      the object that can be used to produce messages; never null
    • createConsumer

      public <K, V> KafkaCluster.InteractiveConsumer<K,V> createConsumer(String groupId, String clientId, String topicName, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer, Runnable completion)
      Create a simple consumer that can be used to read messages from the cluster.
      Parameters:
      groupId - the name of the group; may not be null
      clientId - the name of the client; may not be null
      topicName - the name of the topic to read; may not be null and may not be empty
      keyDeserializer - the deserializer for the keys; may not be null
      valueDeserializer - the deserializer for the values; may not be null
      completion - the function to call when the consumer terminates; may be null
      Returns:
      the running interactive consumer; never null
    • createConsumer

      public <K, V> KafkaCluster.InteractiveConsumer<K,V> createConsumer(String groupId, String clientId, Set<String> topicNames, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer, Runnable completion)
      Create a simple consumer that can be used to read messages from the cluster.
      Parameters:
      groupId - the name of the group; may not be null
      clientId - the name of the client; may not be null
      topicNames - the names of the topics to read; may not be null and may not be empty
      keyDeserializer - the deserializer for the keys; may not be null
      valueDeserializer - the deserializer for the values; may not be null
      completion - the function to call when the consumer terminates; may be null
      Returns:
      the running interactive consumer; never null
    • createConsumer

      public KafkaCluster.InteractiveConsumer<String,io.debezium.document.Document> createConsumer(String groupId, String clientId, String topicName, Runnable completion)
      Create a simple consumer that can be used to read messages from the cluster.
      Parameters:
      groupId - the name of the group; may not be null
      clientId - the name of the client; may not be null
      topicName - the name of the topic to read; may not be null and may not be empty
      completion - the function to call when the consumer terminates; may be null
      Returns:
      the running interactive consumer; never null
    • createConsumer

      public KafkaCluster.InteractiveConsumer<String,io.debezium.document.Document> createConsumer(String groupId, String clientId, Set<String> topicNames, Runnable completion)
      Create a simple consumer that can be used to read messages from the cluster.
      Parameters:
      groupId - the name of the group; may not be null
      clientId - the name of the client; may not be null
      topicNames - the names of the topics to read; may not be null and may not be empty
      completion - the function to call when the consumer terminates; may be null
      Returns:
      the running interactive consumer; never null
    • produce

      public <K, V> void produce(String producerName, Consumer<KafkaCluster.InteractiveProducer<String,io.debezium.document.Document>> producer)
      Use the supplied function to asynchronously produce Document messages and write them to the cluster.
      Parameters:
      producerName - the name of the producer; may not be null
      producer - the function that will asynchronously use the supplied producer to write messages; may not be null
    • produce

      public <K, V> void produce(String producerName, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer, Consumer<KafkaCluster.InteractiveProducer<K,V>> producer)
      Use the supplied function to asynchronously produce messages and write them to the cluster.
      Parameters:
      producerName - the name of the producer; may not be null
      keySerializer - the serializer for the keys; may not be null
      valueSerializer - the serializer for the values; may not be null
      producer - the function that will asynchronously use the supplied producer to write messages; may not be null
    • produce

      public <K, V> void produce(String producerName, int messageCount, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valueSerializer, Runnable completionCallback, Supplier<org.apache.kafka.clients.producer.ProducerRecord<K,V>> messageSupplier)
      Use the supplied function to asynchronously produce messages and write them to the cluster.
      Parameters:
      producerName - the name of the producer; may not be null
      messageCount - the number of messages to produce; must be positive
      keySerializer - the serializer for the keys; may not be null
      valueSerializer - the serializer for the values; may not be null
      completionCallback - the function to be called when the producer is completed; may be null
      messageSupplier - the function to produce messages; may not be null
    • produceStrings

      public void produceStrings(int messageCount, Runnable completionCallback, Supplier<org.apache.kafka.clients.producer.ProducerRecord<String,String>> messageSupplier)
      Use the supplied function to asynchronously produce messages with String keys and values, and write them to the cluster.
      Parameters:
      messageCount - the number of messages to produce; must be positive
      completionCallback - the function to be called when the producer is completed; may be null
      messageSupplier - the function to produce messages; may not be null
    • produceDocuments

      public void produceDocuments(int messageCount, Runnable completionCallback, Supplier<org.apache.kafka.clients.producer.ProducerRecord<String,io.debezium.document.Document>> messageSupplier)
      Use the supplied function to asynchronously produce messages with String keys and Document values, and write them to the cluster.
      Parameters:
      messageCount - the number of messages to produce; must be positive
      completionCallback - the function to be called when the producer is completed; may be null
      messageSupplier - the function to produce messages; may not be null
    • produceIntegers

      public void produceIntegers(int messageCount, Runnable completionCallback, Supplier<org.apache.kafka.clients.producer.ProducerRecord<String,Integer>> messageSupplier)
      Use the supplied function to asynchronously produce messages with String keys and Integer values, and write them to the cluster.
      Parameters:
      messageCount - the number of messages to produce; must be positive
      completionCallback - the function to be called when the producer is completed; may be null
      messageSupplier - the function to produce messages; may not be null
    • produceIntegers

      public void produceIntegers(String topic, int messageCount, int initialValue, Runnable completionCallback)
      Asynchronously produce messages with String keys and sequential Integer values, and write them to the cluster.
      Parameters:
      topic - the name of the topic to which the messages should be written; may not be null
      messageCount - the number of messages to produce; must be positive
      initialValue - the first integer value to produce
      completionCallback - the function to be called when the producer is completed; may be null
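The sequential values this method writes can be sketched with a plain Supplier backed by an AtomicInteger; this is an illustrative reconstruction of the value sequence only, not the method's actual implementation.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class SequentialValues {
    // Hypothetical sketch of the value sequence implied by
    // produceIntegers(topic, messageCount, initialValue, completionCallback):
    // each produced message carries the next integer, starting at initialValue.
    static Supplier<Integer> sequentialFrom(int initialValue) {
        AtomicInteger next = new AtomicInteger(initialValue);
        return next::getAndIncrement;
    }

    public static void main(String[] args) {
        Supplier<Integer> values = sequentialFrom(100);
        System.out.println(values.get()); // 100
        System.out.println(values.get()); // 101
        System.out.println(values.get()); // 102
    }
}
```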
    • produceStrings

      public void produceStrings(String topic, int messageCount, Runnable completionCallback, Supplier<String> valueSupplier)
      Asynchronously produce messages with monotonically increasing String keys and values obtained from the supplied function, and write them to the cluster.
      Parameters:
      topic - the name of the topic to which the messages should be written; may not be null
      messageCount - the number of messages to produce; must be positive
      completionCallback - the function to be called when the producer is completed; may be null
      valueSupplier - the value supplier; may not be null
    • produceDocuments

      public void produceDocuments(String topic, int messageCount, Runnable completionCallback, Supplier<io.debezium.document.Document> valueSupplier)
      Asynchronously produce messages with monotonically increasing String keys and values obtained from the supplied function, and write them to the cluster.
      Parameters:
      topic - the name of the topic to which the messages should be written; may not be null
      messageCount - the number of messages to produce; must be positive
      completionCallback - the function to be called when the producer is completed; may be null
      valueSupplier - the value supplier; may not be null
    • consume

      public <K, V> void consume(String groupId, String clientId, org.apache.kafka.clients.consumer.OffsetResetStrategy autoOffsetReset, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer, BooleanSupplier continuation, org.apache.kafka.clients.consumer.OffsetCommitCallback offsetCommitCallback, Runnable completion, Collection<String> topics, Consumer<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>> consumerFunction)
      Use the supplied function to asynchronously consume messages from the cluster.
      Parameters:
      groupId - the name of the group; may not be null
      clientId - the name of the client; may not be null
      autoOffsetReset - how to pick a starting offset when there is no initial offset in ZooKeeper or if an offset is out of range; may be null for the default to be used
      keyDeserializer - the deserializer for the keys; may not be null
      valueDeserializer - the deserializer for the values; may not be null
      continuation - the function that determines if the consumer should continue; may not be null
      offsetCommitCallback - the callback that should be used after committing offsets; may be null if offsets are not to be committed
      completion - the function to call when the consumer terminates; may be null
      topics - the set of topics to consume; may not be null or empty
      consumerFunction - the function to consume the messages; may not be null
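A common shape for the `continuation` argument is a predicate that keeps the consumer polling until an expected number of records has been handled. The sketch below uses only the standard library and simulates the loop the cluster would run; the names are illustrative, not taken from KafkaCluster.Usage.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class ContinuationSketch {
    // Hedged sketch of a `continuation` for consume(...): return true while
    // fewer than `expected` records have been seen. The counter would be
    // incremented inside the consumerFunction in real usage.
    public static void main(String[] args) {
        AtomicInteger seen = new AtomicInteger();
        int expected = 3;
        BooleanSupplier continuation = () -> seen.get() < expected;

        // Simulate the polling loop the cluster runs on our behalf.
        while (continuation.getAsBoolean()) {
            seen.incrementAndGet(); // stand-in for consumerFunction.accept(record)
        }
        System.out.println(seen.get()); // 3
    }
}
```

Because `completion` may be null, a continuation like this is often the only termination signal the caller supplies.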
    • consumeDocuments

      public void consumeDocuments(BooleanSupplier continuation, Runnable completion, Collection<String> topics, Consumer<org.apache.kafka.clients.consumer.ConsumerRecord<String,io.debezium.document.Document>> consumerFunction)
      Asynchronously consume all messages from the cluster.
      Parameters:
      continuation - the function that determines if the consumer should continue; may not be null
      completion - the function to call when all messages have been consumed; may be null
      topics - the set of topics to consume; may not be null or empty
      consumerFunction - the function to consume the messages; may not be null
    • consumeStrings

      public void consumeStrings(BooleanSupplier continuation, Runnable completion, Collection<String> topics, Consumer<org.apache.kafka.clients.consumer.ConsumerRecord<String,String>> consumerFunction)
      Asynchronously consume all messages from the cluster.
      Parameters:
      continuation - the function that determines if the consumer should continue; may not be null
      completion - the function to call when all messages have been consumed; may be null
      topics - the set of topics to consume; may not be null or empty
      consumerFunction - the function to consume the messages; may not be null
    • consumeIntegers

      public void consumeIntegers(BooleanSupplier continuation, Runnable completion, Collection<String> topics, Consumer<org.apache.kafka.clients.consumer.ConsumerRecord<String,Integer>> consumerFunction)
      Asynchronously consume all messages from the cluster.
      Parameters:
      continuation - the function that determines if the consumer should continue; may not be null
      completion - the function to call when all messages have been consumed; may be null
      topics - the set of topics to consume; may not be null or empty
      consumerFunction - the function to consume the messages; may not be null
    • consumeStrings

      public void consumeStrings(String topicName, int count, long timeout, TimeUnit unit, Runnable completion, BiPredicate<String,String> consumer)
      Asynchronously consume all messages on the given topic from the cluster.
      Parameters:
      topicName - the name of the topic; may not be null
      count - the expected number of messages to read before terminating; must be positive
      timeout - the maximum time that this consumer should run before terminating; must be positive
      unit - the unit of time for the timeout; may not be null
      completion - the function to call when all messages have been consumed; may be null
      consumer - the function to consume the messages; may not be null
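The `consumer` argument receives each record's key and value. As a hedged illustration, the BiPredicate below counts records whose value matches a condition; whether its boolean return value influences the message count toward `count` is not specified in this document.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BiPredicate;

public class MatchingConsumerSketch {
    public static void main(String[] args) {
        // Illustrative BiPredicate<String,String> in the role of the `consumer`
        // parameter: count records whose value starts with "ok".
        AtomicInteger matches = new AtomicInteger();
        BiPredicate<String, String> consumer = (key, value) -> {
            boolean matched = value.startsWith("ok");
            if (matched) matches.incrementAndGet();
            return matched;
        };

        // Simulate three delivered key/value pairs.
        Map<String, String> records = Map.of("k1", "ok-1", "k2", "fail", "k3", "ok-2");
        records.forEach(consumer::test);
        System.out.println(matches.get()); // 2
    }
}
```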
    • consumeDocuments

      public void consumeDocuments(String topicName, int count, long timeout, TimeUnit unit, Runnable completion, BiPredicate<String,io.debezium.document.Document> consumer)
      Asynchronously consume all messages on the given topic from the cluster.
      Parameters:
      topicName - the name of the topic; may not be null
      count - the expected number of messages to read before terminating; must be positive
      timeout - the maximum time that this consumer should run before terminating; must be positive
      unit - the unit of time for the timeout; may not be null
      completion - the function to call when all messages have been consumed; may be null
      consumer - the function to consume the messages; may not be null
    • consumeIntegers

      public void consumeIntegers(String topicName, int count, long timeout, TimeUnit unit, Runnable completion, BiPredicate<String,Integer> consumer)
      Asynchronously consume all messages on the given topic from the cluster.
      Parameters:
      topicName - the name of the topic; may not be null
      count - the expected number of messages to read before terminating; must be positive
      timeout - the maximum time that this consumer should run before terminating; must be positive
      unit - the unit of time for the timeout; may not be null
      completion - the function to call when all messages have been consumed; may be null
      consumer - the function to consume the messages; may not be null
    • consumeStrings

      public void consumeStrings(String topicName, int count, long timeout, TimeUnit unit, Runnable completion)
      Asynchronously consume all messages on the given topic from the cluster.
      Parameters:
      topicName - the name of the topic; may not be null
      count - the expected number of messages to read before terminating; must be positive
      timeout - the maximum time that this consumer should run before terminating; must be positive
      unit - the unit of time for the timeout; may not be null
      completion - the function to call when all messages have been consumed; may be null
    • consumeDocuments

      public void consumeDocuments(String topicName, int count, long timeout, TimeUnit unit, Runnable completion)
      Asynchronously consume all messages on the given topic from the cluster.
      Parameters:
      topicName - the name of the topic; may not be null
      count - the expected number of messages to read before terminating; must be positive
      timeout - the maximum time that this consumer should run before terminating; must be positive
      unit - the unit of time for the timeout; may not be null
      completion - the function to call when all messages have been consumed; may be null
    • consumeIntegers

      public void consumeIntegers(String topicName, int count, long timeout, TimeUnit unit, Runnable completion)
      Asynchronously consume all messages on the given topic from the cluster.
      Parameters:
      topicName - the name of the topic; may not be null
      count - the expected number of messages to read before terminating; must be positive
      timeout - the maximum time that this consumer should run before terminating; must be positive
      unit - the unit of time for the timeout; may not be null
      completion - the function to call when all messages have been consumed; may be null
    • continueIfNotExpired

      protected BooleanSupplier continueIfNotExpired(BooleanSupplier continuation, long timeout, TimeUnit unit)
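No description accompanies this method, but its signature suggests it wraps a continuation so that it also fails once a timeout elapses. The following is a hedged, standard-library-only reconstruction of that plausible behavior, not Debezium's actual implementation.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class ExpiringContinuation {
    // Hypothetical reconstruction of continueIfNotExpired(continuation, timeout, unit):
    // the wrapped supplier returns false once the timeout has elapsed, measured
    // lazily from the first call, even if the original continuation still says true.
    static BooleanSupplier continueIfNotExpired(BooleanSupplier continuation, long timeout, TimeUnit unit) {
        return new BooleanSupplier() {
            long deadline = 0L; // set on first call

            @Override
            public boolean getAsBoolean() {
                if (deadline == 0L) deadline = System.currentTimeMillis() + unit.toMillis(timeout);
                return continuation.getAsBoolean() && System.currentTimeMillis() <= deadline;
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        BooleanSupplier keepGoing = continueIfNotExpired(() -> true, 50, TimeUnit.MILLISECONDS);
        System.out.println(keepGoing.getAsBoolean()); // true (within the timeout)
        Thread.sleep(80);
        System.out.println(keepGoing.getAsBoolean()); // false (timeout elapsed)
    }
}
```

A wrapper like this is what lets the timed consumeStrings/consumeDocuments/consumeIntegers variants above terminate even when fewer than `count` messages arrive.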