public class KafkaProducer<K,V> extends Object implements WriteStream<KafkaProducerRecord<K,V>>
This class provides global control over writing a record.

NOTE: This class has been automatically generated from the original non RX-ified interface using Vert.x codegen.

| Modifier and Type | Field and Description |
|---|---|
| `static io.vertx.lang.rx.TypeArg<KafkaProducer>` | `__TYPE_ARG` |
| `io.vertx.lang.rx.TypeArg<K>` | `__typeArg_0` |
| `io.vertx.lang.rx.TypeArg<V>` | `__typeArg_1` |
| Constructor and Description |
|---|
| `KafkaProducer(KafkaProducer delegate)` |
| `KafkaProducer(Object delegate, io.vertx.lang.rx.TypeArg<K> typeArg_0, io.vertx.lang.rx.TypeArg<V> typeArg_1)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `close()` Close the producer. |
| `void` | `close(Handler<AsyncResult<Void>> completionHandler)` Close the producer. |
| `void` | `close(long timeout, Handler<AsyncResult<Void>> completionHandler)` Close the producer. |
| `static <K,V> KafkaProducer<K,V>` | `create(Vertx vertx, Map<String,String> config)` Create a new KafkaProducer instance. |
| `static <K,V> KafkaProducer<K,V>` | `create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)` Create a new KafkaProducer instance. |
| `static <K,V> KafkaProducer<K,V>` | `createShared(Vertx vertx, String name, Map<String,String> config)` Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name. |
| `static <K,V> KafkaProducer<K,V>` | `createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType)` Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name. |
| `KafkaProducer<K,V>` | `drainHandler(Handler<Void> handler)` Set a drain handler on the stream. |
| `void` | `end()` Ends the stream. |
| `void` | `end(Handler<AsyncResult<Void>> handler)` Same as `WriteStream.end()` but with a handler called when the operation completes. |
| `void` | `end(KafkaProducerRecord<K,V> data)` Same as `WriteStream.end()` but writes some data to the stream before ending. |
| `void` | `end(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler)` Same as `end(KafkaProducerRecord)` but with a handler called when the operation completes. |
| `boolean` | `equals(Object o)` |
| `KafkaProducer<K,V>` | `exceptionHandler(Handler<Throwable> handler)` Set an exception handler on the write stream. |
| `KafkaProducer<K,V>` | `flush(Handler<Void> completionHandler)` Invoking this method makes all buffered records immediately available to write. |
| `KafkaProducer` | `getDelegate()` |
| `int` | `hashCode()` |
| `static <K,V> KafkaProducer<K,V>` | `newInstance(KafkaProducer arg)` |
| `static <K,V> KafkaProducer<K,V>` | `newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)` |
| `KafkaProducer<K,V>` | `partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)` Get the partition metadata for the given topic. |
| `Completable` | `rxClose()` Close the producer. |
| `Completable` | `rxClose(long timeout)` Close the producer. |
| `Completable` | `rxEnd()` Same as `WriteStream.end()` but with a handler called when the operation completes. |
| `Completable` | `rxEnd(KafkaProducerRecord<K,V> data)` Same as `end(KafkaProducerRecord)` but with a handler called when the operation completes. |
| `Single<List<PartitionInfo>>` | `rxPartitionsFor(String topic)` Get the partition metadata for the given topic. |
| `Single<RecordMetadata>` | `rxSend(KafkaProducerRecord<K,V> record)` Asynchronously write a record to a topic. |
| `Completable` | `rxWrite(KafkaProducerRecord<K,V> data)` |
| `KafkaProducer<K,V>` | `send(KafkaProducerRecord<K,V> record)` Asynchronously write a record to a topic. |
| `KafkaProducer<K,V>` | `send(KafkaProducerRecord<K,V> record, Handler<AsyncResult<RecordMetadata>> handler)` Asynchronously write a record to a topic. |
| `KafkaProducer<K,V>` | `setWriteQueueMaxSize(int i)` Set the maximum size of the write queue to `maxSize`. |
| `<any>` | `toObserver()` |
| `String` | `toString()` |
| `<any>` | `toSubscriber()` |
| `KafkaProducer<K,V>` | `write(KafkaProducerRecord<K,V> kafkaProducerRecord)` Write some data to the stream. |
| `KafkaProducer<K,V>` | `write(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler)` Same as `write(KafkaProducerRecord)` but with a handler called when the operation completes. |
| `boolean` | `writeQueueFull()` This will return `true` if there are more bytes in the write queue than the value set using `WriteStream.setWriteQueueMaxSize(int)`. |
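Before the per-method details, a minimal sketch of creating a producer and sending a record. This is an illustrative assembly of the `create`, `createShared`, and `send` methods listed above; the import paths assume the RxJava variant under `io.vertx.reactivex.…`, and the broker address, topic, shared-producer name, and config values are assumptions for the example.

```java
import java.util.HashMap;
import java.util.Map;

import io.vertx.reactivex.core.Vertx;
import io.vertx.reactivex.kafka.client.producer.KafkaProducer;
import io.vertx.reactivex.kafka.client.producer.KafkaProducerRecord;

public class CreateAndSendExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka producer configuration; the broker address is an assumption.
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("acks", "1");

    // create(...): a producer private to this instance.
    // createShared(...): one underlying producer shared by every caller using the same name.
    KafkaProducer<String, String> producer = KafkaProducer.create(vertx, config);
    KafkaProducer<String, String> shared =
        KafkaProducer.createShared(vertx, "the-producer", config);

    // send(record, handler): asynchronously write a record to a topic;
    // on success the handler receives the RecordMetadata for the written record.
    KafkaProducerRecord<String, String> record =
        KafkaProducerRecord.create("my-topic", "my-key", "my-value");
    producer.send(record, ar -> {
      if (ar.succeeded()) {
        System.out.println("partition=" + ar.result().getPartition()
            + " offset=" + ar.result().getOffset());
      } else {
        ar.cause().printStackTrace();
      }
    });
  }
}
```

The shared form is useful when several verticles produce to the same cluster: each call with the same name returns a wrapper over the same stream, so connections are pooled rather than duplicated.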
**Methods inherited from class java.lang.Object:** `clone`, `finalize`, `getClass`, `notify`, `notifyAll`, `wait`, `wait`, `wait`

**Field Detail**

`public static final io.vertx.lang.rx.TypeArg<KafkaProducer> __TYPE_ARG`

`public final io.vertx.lang.rx.TypeArg<K> __typeArg_0`

`public final io.vertx.lang.rx.TypeArg<V> __typeArg_1`

**Constructor Detail**

`public KafkaProducer(KafkaProducer delegate)`

**Method Detail**

`public KafkaProducer getDelegate()`
Specified by: `getDelegate` in interface `StreamBase`; `getDelegate` in interface `WriteStream<KafkaProducerRecord<K,V>>`

`public <any> toObserver()`

`public <any> toSubscriber()`
`public void end()`
Ends the stream. Once the stream has ended, it cannot be used any more.
Specified by: `end` in interface `WriteStream<KafkaProducerRecord<K,V>>`

`public void end(Handler<AsyncResult<Void>> handler)`
Same as `WriteStream.end()` but with a handler called when the operation completes.
Specified by: `end` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `handler` - the completion handler

`public Completable rxEnd()`
Same as `WriteStream.end()` but with a handler called when the operation completes.

`public void end(KafkaProducerRecord<K,V> data)`
Same as `WriteStream.end()` but writes some data to the stream before ending.
Specified by: `end` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `data` - the data to write

`public void end(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler)`
Same as `end(KafkaProducerRecord)` but with a handler called when the operation completes.
Specified by: `end` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `data` - the data to write; `handler` - the completion handler

`public Completable rxEnd(KafkaProducerRecord<K,V> data)`
Same as `end(KafkaProducerRecord)` but with a handler called when the operation completes.
Parameters: `data` - the data to write

`public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config)`
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
Parameters: `vertx` - Vert.x instance to use; `name` - the producer name to identify it; `config` - Kafka producer configuration

`public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType)`
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
Parameters: `vertx` - Vert.x instance to use; `name` - the producer name to identify it; `config` - Kafka producer configuration; `keyType` - class type for the key serialization; `valueType` - class type for the value serialization

`public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config)`
Create a new KafkaProducer instance.
Parameters: `vertx` - Vert.x instance to use; `config` - Kafka producer configuration

`public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)`
Create a new KafkaProducer instance.
Parameters: `vertx` - Vert.x instance to use; `config` - Kafka producer configuration; `keyType` - class type for the key serialization; `valueType` - class type for the value serialization

`public KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler)`
Set an exception handler on the write stream.
Specified by: `exceptionHandler` in interface `StreamBase`; `exceptionHandler` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `handler` - the exception handler

`public KafkaProducer<K,V> write(KafkaProducerRecord<K,V> kafkaProducerRecord)`
Write some data to the stream. To avoid running out of memory by putting too much on the write queue, check the `WriteStream.writeQueueFull()` method before writing. This is done automatically if using a Pump.
Specified by: `write` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `kafkaProducerRecord` - the data to write

`public KafkaProducer<K,V> setWriteQueueMaxSize(int i)`
Set the maximum size of the write queue to `maxSize`. You will still be able to write to the stream even if there are more than `maxSize` items in the write queue. This is used as an indicator by classes such as `Pump` to provide flow control. The value is defined by the implementation of the stream, e.g. in bytes for a `NetSocket`, the number of `Message` for a `MessageProducer`, etc.
Specified by: `setWriteQueueMaxSize` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `i` - the max size of the write stream

`public boolean writeQueueFull()`
This will return `true` if there are more bytes in the write queue than the value set using `WriteStream.setWriteQueueMaxSize(int)`.
Specified by: `writeQueueFull` in interface `WriteStream<KafkaProducerRecord<K,V>>`

`public KafkaProducer<K,V> drainHandler(Handler<Void> handler)`
Set a drain handler on the stream. See `Pump` for an example of this being used. The stream implementation defines when the drain handler is called; for example, it could be when the queue size has been reduced to `maxSize / 2`.
Specified by: `drainHandler` in interface `WriteStream<KafkaProducerRecord<K,V>>`
Parameters: `handler` - the handler

`public KafkaProducer<K,V> write(KafkaProducerRecord<K,V> data, Handler<AsyncResult<Void>> handler)`
Same as `write(KafkaProducerRecord)` but with a handler called when the operation completes.
Specified by: `write` in interface `WriteStream<KafkaProducerRecord<K,V>>`

`public Completable rxWrite(KafkaProducerRecord<K,V> data)`
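The `setWriteQueueMaxSize` / `writeQueueFull` / `drainHandler` trio above implements the `WriteStream` flow-control contract. A minimal back-pressure sketch, assuming the RxJava variant under `io.vertx.reactivex.…`; the queue size, topic, and record values are illustrative:

```java
import io.vertx.reactivex.kafka.client.producer.KafkaProducer;
import io.vertx.reactivex.kafka.client.producer.KafkaProducerRecord;

public class FlowControlExample {

  // Writes `remaining` records, pausing whenever the write queue is full
  // and resuming from the drain handler once the queue has drained.
  static void writeRemaining(KafkaProducer<String, String> producer, int remaining) {
    // An indicator for flow control, not a hard limit: writes past
    // maxSize still succeed, but writeQueueFull() starts returning true.
    producer.setWriteQueueMaxSize(100);
    while (remaining > 0) {
      if (producer.writeQueueFull()) {
        final int rest = remaining;
        // Called when the queue is ready to accept records again
        // (e.g. once it has drained to maxSize / 2).
        producer.drainHandler(v -> writeRemaining(producer, rest));
        return;
      }
      producer.write(KafkaProducerRecord.create("my-topic", "key", "value-" + remaining));
      remaining--;
    }
  }
}
```

This is the same pattern a `Pump` applies automatically when piping a `ReadStream` into this producer.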
`public KafkaProducer<K,V> send(KafkaProducerRecord<K,V> record)`
Asynchronously write a record to a topic.
Parameters: `record` - record to write

`public KafkaProducer<K,V> send(KafkaProducerRecord<K,V> record, Handler<AsyncResult<RecordMetadata>> handler)`
Asynchronously write a record to a topic.
Parameters: `record` - record to write; `handler` - handler called on operation completed

`public Single<RecordMetadata> rxSend(KafkaProducerRecord<K,V> record)`
Asynchronously write a record to a topic.
Parameters: `record` - record to write

`public KafkaProducer<K,V> partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)`
Get the partition metadata for the given topic.
Parameters: `topic` - topic for which to get partition info; `handler` - handler called on operation completed

`public Single<List<PartitionInfo>> rxPartitionsFor(String topic)`
Get the partition metadata for the given topic.
Parameters: `topic` - topic for which to get partition info

`public KafkaProducer<K,V> flush(Handler<Void> completionHandler)`
Invoking this method makes all buffered records immediately available to write.
Parameters: `completionHandler` - handler called on operation completed

`public void close()`
Close the producer.

`public void close(Handler<AsyncResult<Void>> completionHandler)`
Close the producer.
Parameters: `completionHandler` - handler called on operation completed

`public Completable rxClose()`
Close the producer.

`public void close(long timeout, Handler<AsyncResult<Void>> completionHandler)`
Close the producer.
Parameters: `timeout` - timeout to wait for closing; `completionHandler` - handler called on operation completed

`public Completable rxClose(long timeout)`
Close the producer.
Parameters: `timeout` - timeout to wait for closing

`public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg)`

`public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)`
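The `rx`-prefixed variants return lazy RxJava types, so the underlying operation runs when the returned `Single` or `Completable` is subscribed to. A sketch chaining `rxSend` into `rxClose`, assuming the RxJava 2 variant under `io.vertx.reactivex.…`; the topic and record values are illustrative:

```java
import io.vertx.reactivex.kafka.client.producer.KafkaProducer;
import io.vertx.reactivex.kafka.client.producer.KafkaProducerRecord;

public class RxSendExample {

  // Sends one record, logs its metadata, then closes the producer.
  static void sendThenClose(KafkaProducer<String, String> producer) {
    KafkaProducerRecord<String, String> record =
        KafkaProducerRecord.create("my-topic", "my-key", "my-value");

    producer.rxSend(record)                  // Single<RecordMetadata>; send happens on subscribe
        .doOnSuccess(m -> System.out.println("offset=" + m.getOffset()))
        .ignoreElement()                     // drop the metadata, keep only completion
        .andThen(producer.rxClose())         // Completable: close once the send has settled
        .subscribe(
            () -> System.out.println("producer closed"),
            Throwable::printStackTrace);
  }
}
```

Because the rx types are cold, building the chain without subscribing sends nothing; the single `subscribe` call at the end triggers the whole sequence.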
Copyright © 2021 Eclipse. All rights reserved.