At the heart of Vert.x is a set of Java APIs that we call Vert.x Core.
Vert.x core provides functionality for things like:
- Writing TCP clients and servers
- Writing HTTP clients and servers including support for WebSockets
- The Event bus
- Shared data - local maps and clustered distributed maps
- Periodic and delayed actions
- Deploying and undeploying Verticles
- Datagram Sockets
- DNS client
- File system access
- High availability
- Clustering
The functionality in core is fairly low level - you won’t find stuff like database access, authorisation or high level web functionality here - that kind of stuff you’ll find in Vert.x ext (extensions).
Vert.x core is small and lightweight. You just use the parts you want. It’s also entirely embeddable in your existing applications - we don’t force you to structure your applications in a special way just so you can use Vert.x.
You can use core from any of the other languages that Vert.x supports. But here's a cool bit - we don't force you to use the Java API directly from, say, JavaScript or Ruby - after all, different languages have different conventions and idioms, and it would be odd to force Java idioms on Ruby developers (for example). Instead, we automatically generate an idiomatic equivalent of the core Java APIs for each language.
From now on we’ll just use the word core to refer to Vert.x core.
Let’s discuss the different concepts and features in core.
In the beginning there was Vert.x
You can't do much in Vert.x-land unless you can commune with a Vertx object!
It’s the control centre of Vert.x and is how you do pretty much everything, including creating clients and servers, getting a reference to the event bus, setting timers, as well as many other things.
So how do you get an instance?
If you’re embedding Vert.x then you simply create an instance as follows:
import io.vertx.groovy.core.Vertx
def vertx = Vertx.vertx()
If you're writing code in a verticle, a Vertx instance is already provided for you.
Note: Most applications will only need a single Vert.x instance, but it's possible to create multiple Vert.x instances if you require, for example, isolation between the event bus or different groups of servers and clients.
Specifying options when creating a Vertx object
When creating a Vertx object you can also specify options if the defaults aren’t right for you:
import io.vertx.groovy.core.Vertx
def vertx = Vertx.vertx([
workerPoolSize:40
])
The VertxOptions object has many settings and allows you to configure things like clustering, high availability, pool sizes and various other settings. The Javadoc describes all the settings in detail.
Creating a clustered Vert.x object
If you’re creating a clustered Vert.x (See the section on the event bus for more information on clustering the event bus), then you will normally use the asynchronous variant to create the Vertx object.
This is because it usually takes some time (maybe a few seconds) for the different Vert.x instances in a cluster to group together. During that time, we don’t want to block the calling thread, so we give the result to you asynchronously.
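For example, using the asynchronous variant (this mirrors the clustered example shown later in the event bus section; options are left at their defaults):

```groovy
import io.vertx.groovy.core.Vertx

def options = [:]
// The Vertx instance is delivered asynchronously, once the cluster has formed
Vertx.clusteredVertx(options, { res ->
  if (res.succeeded()) {
    def vertx = res.result()
    println("Clustered Vert.x instance is ready")
  } else {
    println("Failed to form a cluster: ${res.cause()}")
  }
})
```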
Are you fluent?
You may have noticed that in the previous examples a fluent API was used.
A fluent API is where multiple methods calls can be chained together. For example:
request.response().putHeader("Content-Type", "text/plain").write("some text").end()
This is a common pattern throughout Vert.x APIs, so get used to it.
Chaining calls like this allows you to write code that’s a little bit less verbose. Of course, if you don’t like the fluent approach we don’t force you to do it that way, you can happily ignore it if you prefer and write your code like this:
def response = request.response()
response.putHeader("Content-Type", "text/plain")
response.write("some text")
response.end()
Don’t call us, we’ll call you.
The Vert.x APIs are largely event driven. This means that when things happen in Vert.x that you are interested in, Vert.x will call you by sending you events.
Some example events are:
- a timer has fired
- some data has arrived on a socket
- some data has been read from disk
- an exception has occurred
- an HTTP server has received a request
You handle events by providing handlers to the Vert.x APIs. For example to receive a timer event every second you would do:
vertx.setPeriodic(1000, { id ->
// This handler will get called every second
println("timer fired!")
})
Or to receive an HTTP request:
// Respond to each http request with "Hello World"
server.requestHandler({ request ->
// This handler will be called every time an HTTP request is received at the server
request.response().end("hello world!")
})
Some time later, when Vert.x has an event to pass to your handler, Vert.x will call it asynchronously.
This leads us to some important concepts in Vert.x:
Don’t block me!
With very few exceptions (i.e. some file system operations ending in 'Sync'), none of the APIs in Vert.x block the calling thread.
If a result can be provided immediately, it will be returned immediately, otherwise you will usually provide a handler to receive events some time later.
Because none of the Vert.x APIs block threads that means you can use Vert.x to handle a lot of concurrency using just a small number of threads.
With a conventional blocking API the calling thread might block when:
- Reading data from a socket
- Writing data to disk
- Sending a message to a recipient and waiting for a reply
- … Many other situations
In all the above cases, when your thread is waiting for a result it can’t do anything else - it’s effectively useless.
This means that if you want a lot of concurrency using blocking APIs then you need a lot of threads to prevent your application grinding to a halt.
Threads have overhead in terms of the memory they require (e.g. for their stack) and in context switching.
For the levels of concurrency required in many modern applications, a blocking approach just doesn’t scale.
Reactor and Multi-Reactor
We mentioned before that Vert.x APIs are event driven - Vert.x passes events to handlers when they are available.
In most cases Vert.x calls your handlers using a thread called an event loop.
As nothing in Vert.x or your application blocks, the event loop can merrily run around delivering events to different handlers in succession as they arrive.
Because nothing blocks, an event loop can potentially deliver huge amounts of events in a short amount of time. For example a single event loop can handle many thousands of HTTP requests very quickly.
We call this the Reactor Pattern.
You may have heard of this before - for example Node.js implements this pattern.
In a standard reactor implementation there is a single event loop thread which runs around in a loop delivering all events to all handlers as they arrive.
The trouble with a single thread is it can only run on a single core at any one time, so if you want your single threaded reactor application (e.g. your Node.js application) to scale over your multi-core server you have to start up and manage many different processes.
Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.
This means a single Vertx process can scale across your server, unlike Node.js.
We call this pattern the Multi-Reactor Pattern to distinguish it from the single threaded reactor pattern.
Note: Even though a Vertx instance maintains multiple event loops, any particular handler will never be executed concurrently, and in most cases (with the exception of worker verticles) will always be called using the exact same event loop.
The Golden Rule - Don’t Block the Event Loop
We already know that the Vert.x APIs are non blocking and won’t block the event loop, but that’s not much help if you block the event loop yourself in a handler.
If you do that, then that event loop will not be able to do anything else while it's blocked. If you block all of the event loops in a Vertx instance then your application will grind to a complete halt!
So don’t do it! You have been warned.
Examples of blocking include:
- Thread.sleep()
- Waiting on a lock
- Waiting on a mutex or monitor (e.g. synchronized section)
- Doing a long lived database operation and waiting for a result
- Doing a complex calculation that takes some significant time
- Spinning in a loop
If any of the above stop the event loop from doing anything else for a significant amount of time then you should go immediately to the naughty step, and await further instructions.
So… what is a significant amount of time?
How long is a piece of string? It really depends on your application and the amount of concurrency you require.
If you have a single event loop, and you want to handle 10000 http requests per second, then it’s clear that each request can’t take more than 0.1 ms to process, so you can’t block for any more time than that.
The maths is not hard and shall be left as an exercise for the reader.
If your application is not responsive it might be a sign that you are blocking an event loop somewhere. To help you diagnose such issues, Vert.x will automatically log warnings if it detects an event loop hasn’t returned for some time. If you see warnings like these in your logs, then you should investigate.
Thread vertx-eventloop-thread-3 has been blocked for 20458 ms
Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.
If you want to turn off these warnings or change the settings, you can do that in the VertxOptions object before creating the Vertx object.
Running blocking code
In a perfect world, there will be no war or hunger, all APIs will be written asynchronously and bunny rabbits will skip hand-in-hand with baby lambs across sunny green meadows.
But… the real world is not like that. (Have you watched the news lately?)
Fact is, many, if not most, libraries, especially in the JVM ecosystem, have synchronous APIs, and many of their methods are likely to block. A good example is the JDBC API - it's inherently synchronous, and no matter how hard it tries, Vert.x cannot sprinkle magic pixie dust on it to make it asynchronous.
We're not going to rewrite everything to be asynchronous overnight, so we need to provide you with a way to use "traditional" blocking APIs safely within a Vert.x application.
As discussed before, you can’t call blocking operations directly from an event loop, as that would prevent it from doing any other useful work. So how can you do this?
It's done by calling executeBlocking, specifying both the blocking code to execute and a result handler to be called back asynchronously when the blocking code has been executed.
vertx.executeBlocking({ future ->
// Call some blocking API that takes a significant amount of time to return
def result = someAPI.blockingMethod("hello")
future.complete(result)
}, { res ->
println("The result is: ${res.result()}")
})
An alternative way to run blocking code is to use a worker verticle.
Verticles
Vert.x comes with a simple, scalable, actor-like deployment and concurrency model out of the box that you can use to save you writing your own.
This model is entirely optional and Vert.x does not force you to create your applications in this way if you don't want to.
The model does not claim to be a strict actor-model implementation, but it does share similarities especially with respect to concurrency, scaling and deployment.
To use this model, you write your code as a set of verticles.
Verticles are chunks of code that get deployed and run by Vert.x. Verticles can be written in any of the languages that Vert.x supports and a single application can include verticles written in multiple languages.
You can think of a verticle as a bit like an actor in the Actor Model.
An application would typically be composed of many verticle instances running in the same Vert.x instance at the same time. The different verticle instances communicate with each other by sending messages on the event bus.
Writing Verticles
Todo verticles for Groovy.
Verticle Types
There are three different types of verticles:
- Standard Verticles: These are the most common and useful type - they are always executed using an event loop thread. We'll discuss this more in the next section.
- Worker Verticles: These run using a thread from the worker pool. An instance is never executed concurrently by more than one thread.
- Multi-threaded Worker Verticles: These run using a thread from the worker pool. An instance can be executed concurrently by more than one thread.
Standard verticles
Standard verticles are assigned an event loop thread when they are created and the start method is called with that event loop. When you call any other method that takes a handler on a core API from an event loop, Vert.x will guarantee that those handlers, when called, will be executed on the same event loop.
This means we can guarantee that all the code in your verticle instance is always executed on the same event loop (as long as you don't create your own threads and call it from them!).
This means you can write all the code in your application as single threaded and let Vert.x worry about the threading and scaling. No more worrying about synchronized and volatile, and you also avoid many other cases of race conditions and deadlock so prevalent when doing hand-rolled 'traditional' multi-threaded application development.
Worker verticles
A worker verticle is just like a standard verticle but it’s executed not using an event loop, but using a thread from the Vert.x worker thread pool.
Worker verticles are designed for calling blocking code, as they won’t block any event loops.
If you don’t want to use a worker verticle to run blocking code, you can also run inline blocking code directly while on an event loop.
If you want to deploy a verticle as a worker verticle you do that by setting worker to true in the deployment options:
def options = [
worker:true
]
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options)
Worker verticle instances are never executed concurrently by Vert.x by more than one thread, but they can be executed by different threads at different times.
Multi-threaded worker verticles
A multi-threaded worker verticle is just like a normal worker verticle but it can be executed concurrently by different threads.
Warning: Multi-threaded worker verticles are an advanced feature and most applications will have no need for them. Because of the concurrency in these verticles you have to be very careful to keep the verticle in a consistent state using standard Java techniques for multi-threaded programming.
Deploying verticles programmatically
You can deploy a verticle using one of the deployVerticle methods, specifying a verticle name, or you can pass in a verticle instance you have already created yourself.
Note: Deploying Verticle instances is Java only.
You can also deploy verticles by specifying the verticle name.
The verticle name is used to look up the specific VerticleFactory that will be used to instantiate the actual verticle instance(s).
Different verticle factories are available for instantiating verticles in different languages and for various other reasons such as loading services and getting verticles from Maven at run-time.
This allows you to deploy verticles written in any language from any other language that Vert.x supports.
Here’s an example of deploying some different types of verticles:
// Deploy a Java verticle - the name is the fully qualified class name of the verticle class
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle")
// Deploy a JavaScript verticle
vertx.deployVerticle("verticles/myverticle.js")
// Deploy a Ruby verticle
vertx.deployVerticle("verticles/my_verticle.rb")
Rules for mapping a verticle name to a verticle factory
When deploying verticle(s) using a name, the name is used to select the actual verticle factory that will instantiate the verticle(s).
Verticle names can have a prefix - a string followed by a colon - which if present will be used to look up the factory, e.g.
js:foo.js                                       // Use the JavaScript verticle factory
groovy:com.mycompany.SomeGroovyCompiledVerticle // Use the Groovy verticle factory
service:com.mycompany:myorderservice            // Uses the service verticle factory
If no prefix is present, Vert.x will look for a suffix and use that to look up the factory, e.g.
foo.js            // Will also use the JavaScript verticle factory
SomeScript.groovy // Will use the Groovy verticle factory
If no prefix or suffix is present, Vert.x will assume it’s a Java fully qualified class name (FQCN) and try and instantiate that.
How are Verticle Factories located?
Most Verticle factories are loaded from the classpath and registered at Vert.x startup.
You can also programmatically register and unregister verticle factories using registerVerticleFactory and unregisterVerticleFactory if you wish.
Waiting for deployment to complete
Verticle deployment is asynchronous and may complete some time after the call to deploy has returned.
If you want to be notified when deployment is complete you can deploy specifying a completion handler:
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", { res ->
if (res.succeeded()) {
println("Deployment id is: ${res.result()}")
} else {
println("Deployment failed!")
}
})
The completion handler will be passed a result containing the deployment ID string, if deployment succeeded.
This deployment ID can be used later if you want to undeploy the deployment.
Undeploying verticle deployments
Deployments can be undeployed with undeploy.
Un-deployment is itself asynchronous, so if you want to be notified when un-deployment is complete you can undeploy specifying a completion handler:
vertx.undeploy(deploymentID, { res ->
if (res.succeeded()) {
println("Undeployed ok")
} else {
println("Undeploy failed!")
}
})
Specifying number of verticle instances
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
def options = [
instances:16
]
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options)
This is useful for scaling easily across multiple cores. For example you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
Passing configuration to a verticle
Configuration in the form of JSON can be passed to a verticle at deployment time:
def config = [
name:"tim",
directory:"/blah"
]
def options = [
config:config
]
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options)
This configuration is then available via the Context object.
TODO
Accessing environment variables in a Verticle
TODO
Verticle Isolation Groups
By default, Vert.x has a flat classpath. That is, it does everything, including deploying verticles, without messing with class-loaders. In the majority of cases this is the simplest, clearest and sanest thing to do.
However, in some cases you may want to deploy a verticle so the classes of that verticle are isolated from others in your application.
This might be the case, for example, if you want to deploy two different versions of a verticle with the same class name in the same Vert.x instance, or if you have two different verticles which use different versions of the same jar library.
Warning: Use this feature with caution. Class-loaders can be a can of worms, and can make debugging difficult, amongst other things.
Here’s an example of using an isolation group to isolate a verticle deployment.
Code not translatable
Isolation groups are identified by a name, and the name can be used between different deployments if you want them to share an isolated class-loader.
Extra classpath entries can also be provided with extraClasspath so they can locate resources that are isolated to them.
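Since the Java example doesn't translate automatically, here's a rough Groovy sketch; the isolationGroup and extraClasspath map keys are assumed to mirror the corresponding DeploymentOptions setters, and the class and jar names are hypothetical:

```groovy
// Hypothetical verticle class and jar path - for illustration only
def options = [
  isolationGroup:"mygroup",                    // deployments sharing this name share an isolated class-loader
  extraClasspath:["lib/jars/some-library.jar"] // extra entries visible only to this group
]
vertx.deployVerticle("com.mycompany.MyIsolatedVerticle", options)
```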
High Availability
Verticles can be deployed with High Availability (HA) enabled.
TODO
Running Verticles from the command line
You can use Vert.x directly in your Maven or Gradle projects in the normal way by adding a dependency to the Vert.x core library and hacking from there.
However you can also run Vert.x verticles directly from the command line if you wish.
To do this you need to download and install a Vert.x distribution, and add the bin directory of the installation to your PATH environment variable. Also make sure you have a Java 8 JDK on your PATH.
You can now run verticles by using the vertx run command. Here are some examples:
# Run a JavaScript verticle
vertx run my_verticle.js

# Run a Ruby verticle
vertx run a_n_other_verticle.rb

# Run a Groovy script verticle, clustered
vertx run FooVerticle.groovy -cluster
You can even run Java source verticles without compiling them first!
vertx run SomeJavaSourceFile.java
Vert.x will compile the Java source file on the fly before running it. This is really useful for quickly prototyping verticles and great for demos. No need to set-up a Maven or Gradle build first to get going!
For full information on the various options available when executing vertx on the command line, type vertx at the command line.
Causing Vert.x to exit
Threads maintained by Vert.x instances are not daemon threads so they will prevent the JVM from exiting.
If you are embedding Vert.x and you have finished with it, you can call close to close it down.
This will shut-down all internal thread pools and close other resources, and will allow the JVM to exit.
The Context object
TODO
Executing periodic and delayed actions
It’s very common in Vert.x to want to perform an action after a delay, or periodically.
In standard verticles you can’t just make the thread sleep to introduce a delay, as that will block the event loop thread.
Instead you use Vert.x timers. Timers can be one-shot or periodic. We'll discuss both.
One-shot Timers
A one shot timer calls an event handler after a certain delay, expressed in milliseconds.
To set a timer to fire once you use the setTimer method, passing in the delay and a handler:
def timerID = vertx.setTimer(1000, { id ->
println("And one second later this is printed")
})
println("First this is printed")
The return value is a unique timer id which can later be used to cancel the timer. The handler is also passed the timer id.
Periodic Timers
You can also set a timer to fire periodically by using setPeriodic.
There will be an initial delay equal to the period.
The return value of setPeriodic is a unique timer id (long). This can be used later if the timer needs to be cancelled.
The argument passed into the timer event handler is also the unique timer id:
def timerID = vertx.setPeriodic(1000, { id ->
println("And every second this is printed")
})
println("First this is printed")
Cancelling timers
To cancel a periodic timer, call cancelTimer, specifying the timer id. For example:
vertx.cancelTimer(timerID)
Automatic clean-up in verticles
If you’re creating timers from inside verticles, those timers will be automatically closed when the verticle is undeployed.
The Event Bus
The event bus is the nervous system of Vert.x.
There is a single event bus instance for every Vert.x instance and it is obtained using the method eventBus.
The event bus allows different parts of your application to communicate with each other irrespective of what language they are written in, and whether they’re in the same Vert.x instance, or in a different Vert.x instance.
It can even be bridged to allow client side JavaScript running in a browser to communicate on the same event bus.
The event bus forms a distributed peer-to-peer messaging system spanning multiple server nodes and multiple browsers.
The event bus supports publish/subscribe, point to point, and request-response messaging.
The event bus API is very simple. It basically involves registering handlers, unregistering handlers and sending and publishing messages.
First some theory:
The Theory
Addressing
Messages are sent on the event bus to an address.
Vert.x doesn’t bother with any fancy addressing schemes. In Vert.x an address is simply a string. Any string is valid. However it is wise to use some kind of scheme, e.g. using periods to demarcate a namespace.
Some examples of valid addresses are europe.news.feed1, acme.games.pacman, sausages, and X.
Handlers
Messages are received in handlers. You register a handler at an address.
Many different handlers can be registered at the same address.
A single handler can be registered at many different addresses.
Publish / subscribe messaging
The event bus supports publishing messages.
Messages are published to an address. Publishing means delivering the message to all handlers that are registered at that address.
This is the familiar publish/subscribe messaging pattern.
Point to point and Request-Response messaging
The event bus also supports point to point messaging.
Messages are sent to an address. Vert.x will then route it to just one of the handlers registered at that address.
If there is more than one handler registered at the address, one will be chosen using a non-strict round-robin algorithm.
With point to point messaging, an optional reply handler can be specified when sending the message.
When a message is received by a recipient, and has been handled, the recipient can optionally decide to reply to the message. If they do so the reply handler will be called.
When the reply is received back at the sender, it too can be replied to. This can be repeated ad-infinitum, and allows a dialog to be set-up between two different verticles.
This is a common messaging pattern called the request-response pattern.
Best-effort delivery
Vert.x does its best to deliver messages and won't consciously throw them away. This is called best-effort delivery.
However, in case of failure of all or parts of the event bus, there is a possibility messages will be lost.
If your application cares about lost messages, you should code your handlers to be idempotent, and your senders to retry after recovery.
Types of messages
Out of the box Vert.x allows any primitive/simple type, String, or buffers to be sent as messages.
However it's a convention and common practice in Vert.x to send messages as JSON.
JSON is very easy to create, read and parse in all the languages that Vert.x supports so it has become a kind of lingua franca for Vert.x.
However you are not forced to use JSON if you don’t want to.
The event bus is very flexible and also supports sending arbitrary objects over the event bus.
You do this by defining a codec for the objects you want to send.
The Event Bus API
Let’s jump into the API
Getting the event bus
You get a reference to the event bus as follows:
def eb = vertx.eventBus()
There is a single instance of the event bus per Vert.x instance.
Registering Handlers
The simplest way to register a handler is using consumer.
Here’s an example:
def eb = vertx.eventBus()
eb.consumer("news.uk.sport", { message ->
println("I have received a message: ${message.body()}")
})
When a message arrives for your handler, your handler will be called, passing in the message.
The object returned from the call to consumer() is an instance of MessageConsumer. This object can subsequently be used to unregister the handler, or to use the handler as a stream.
Alternatively you can use consumer to return a MessageConsumer with no handler set, and then set the handler on that. For example:
def eb = vertx.eventBus()
def consumer = eb.consumer("news.uk.sport")
consumer.handler({ message ->
println("I have received a message: ${message.body()}")
})
When registering a handler on a clustered event bus, it can take some time for the registration to reach all nodes of the cluster.
If you want to be notified when this has completed, you can register a completion handler on the MessageConsumer object:
consumer.completionHandler({ res ->
if (res.succeeded()) {
println("The handler registration has reached all nodes")
} else {
println("Registration failed!")
}
})
Un-registering Handlers
To unregister a handler, call unregister.
If you are on a clustered event bus, un-registering can take some time to propagate across the nodes; if you want to be notified when this is complete, use unregister specifying a completion handler:
consumer.unregister({ res ->
if (res.succeeded()) {
println("The handler un-registration has reached all nodes")
} else {
println("Un-registration failed!")
}
})
Publishing messages
Publishing a message is simple. Just use publish, specifying the address to publish it to:
eventBus.publish("news.uk.sport", "Yay! Someone kicked a ball")
That message will then be delivered to all handlers registered against the address news.uk.sport.
Sending messages
Sending a message will result in only one handler registered at the address receiving the message. This is the point to point messaging pattern. The handler is chosen in a non-strict round-robin fashion.
You can send a message with send:
eventBus.send("news.uk.sport", "Yay! Someone kicked a ball")
Setting headers on messages
Messages sent over the event bus can also contain headers. These can be specified by providing a DeliveryOptions when sending or publishing:
Code not translatable
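Since the example doesn't translate automatically, here's a sketch in the map-based options style used elsewhere in this document; the headers key is assumed to mirror DeliveryOptions.addHeader:

```groovy
// 'headers' is assumed to map to DeliveryOptions.addHeader
def options = [
  headers:[
    "some-header":"some-value"
  ]
]
eventBus.send("news.uk.sport", "Yay! Someone kicked a ball", options)
```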
The Message object
The object you receive in a message handler is a Message.
The body of the message corresponds to the object that was sent or published.
The headers of the message are available with headers.
Replying to messages
Sometimes after you send a message you want to receive a reply from the recipient. This is known as the request-response pattern.
To do this you can specify a reply handler when sending the message.
When the receiver receives the message they can reply to it by calling reply.
When this happens it causes a reply to be sent back to the sender and the reply handler is invoked with the reply.
An example will make this clear:
The receiver:
def consumer = eventBus.consumer("news.uk.sport")
consumer.handler({ message ->
println("I have received a message: ${message.body()}")
message.reply("how interesting!")
})
The sender:
eventBus.send("news.uk.sport", "Yay! Someone kicked a ball across a patch of grass", { ar ->
if (ar.succeeded()) {
println("Received reply: ${ar.result().body()}")
}
})
The replies themselves can also be replied to so you can create a dialog between two different parties consisting of multiple rounds.
Sending with timeouts
When sending a message with a reply handler you can specify a timeout in the DeliveryOptions.
If a reply is not received within that time, the reply handler will be called with a failure.
The default timeout is 30 seconds.
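As a sketch, assuming the sendTimeout key maps to DeliveryOptions.setSendTimeout (in milliseconds):

```groovy
// sendTimeout is assumed to map to DeliveryOptions.setSendTimeout (ms)
def options = [sendTimeout:5000]
eventBus.send("news.uk.sport", "ping", options, { ar ->
  if (ar.succeeded()) {
    println("Received reply: ${ar.result().body()}")
  } else {
    // Called with a failure if no reply arrives within 5 seconds
    println("No reply: ${ar.cause()}")
  }
})
```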
Send Failures
Message sends can fail for other reasons, including:
- There are no handlers available to send the message to
- The recipient has explicitly failed the message using fail
In all cases the reply handler will be called with the specific failure.
Message Codecs
Message codecs are available exclusively with the Java API.
Clustered Event Bus
The event bus doesn’t just exist in a single Vert.x instance. By clustering different Vert.x instances together on your network they can form a single, distributed, event bus.
Clustering programmatically
If you're creating your Vert.x instance programmatically you get a clustered event bus by configuring the Vert.x instance as clustered:
import io.vertx.groovy.core.Vertx
def options = [:]
Vertx.clusteredVertx(options, { res ->
if (res.succeeded()) {
def vertx = res.result()
def eventBus = vertx.eventBus()
println("We now have a clustered event bus: ${eventBus}")
} else {
println("Failed: ${res.cause()}")
}
})
You should also make sure you have a ClusterManager implementation on your classpath, for example the default HazelcastClusterManager.
Clustering on the command line
You can run Vert.x clustered on the command line with
vertx run MyVerticle -cluster
Automatic clean-up in verticles
If you’re registering event bus handlers from inside verticles, those handlers will be automatically unregistered when the verticle is undeployed.
JSON
Todo json for Groovy.
Buffers
Most data is shuffled around inside Vert.x using buffers.
A buffer is a sequence of zero or more bytes that can be read from or written to, and which expands automatically as necessary to accommodate any bytes written to it. You can perhaps think of a buffer as a smart byte array.
Creating buffers
Buffers can be created by using one of the static Buffer.buffer methods.
Buffers can be initialised from strings or byte arrays, or empty buffers can be created.
Here are some examples of creating buffers:
Create a new empty buffer:
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer()
Create a buffer from a String. The String will be encoded in the buffer using UTF-8.
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer("some string")
Create a buffer from a String. The String will be encoded using the specified encoding, e.g.:
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer("some string", "UTF-16")
Create a buffer with an initial size hint. If you know your buffer will have a certain amount of data written to it you can create the buffer and specify this size. This makes the buffer initially allocate that much memory and is more efficient than the buffer automatically resizing multiple times as data is written to it.
Note that buffers created this way are empty. It does not create a buffer filled with zeros up to the specified size.
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer(10000)
Writing to a Buffer
There are two ways to write to a buffer: appending, and random access. In either case buffers will always expand automatically to encompass the bytes. It’s not possible to get an IndexOutOfBoundsException with a buffer.
Appending to a Buffer
To append to a buffer, you use one of the append methods. Append methods exist for appending various different types.
The return value of the methods is the buffer itself, so these can be chained:
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer()
buff.appendInt(123).appendString("hello\n")
socket.write(buff)
Random access buffer writes
You can also write into the buffer at a specific index, by using one of the set methods. Set methods exist for various different data types. All the set methods take an index as the first argument - this represents the position in the buffer where to start writing the data.
The buffer will always expand as necessary to accommodate the data.
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer()
buff.setInt(1000, 123)
buff.setString(0, "hello")
Reading from a Buffer
Data is read from a buffer using one of the get methods. Get methods exist for various datatypes. The first argument to these methods is an index in the buffer from where to get the data.
import io.vertx.groovy.core.buffer.Buffer
def buff = Buffer.buffer()
for (def i = 0; i < buff.length(); i += 4) {
println("int value at ${i} is ${buff.getInt(i)}")
}
Buffer length
Use length to obtain the length of the buffer.
The length of a buffer is the index of the byte in the buffer with the largest index + 1.
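For example, writing past the current end of an empty buffer expands it, and the length then reflects the largest written index plus one (a sketch):

```groovy
import io.vertx.groovy.core.buffer.Buffer

def buff = Buffer.buffer()
buff.setInt(4, 123)    // writes 4 bytes at indices 4..7; the buffer expands
println(buff.length()) // 8 - the largest written index (7) + 1
```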
Copying buffers
Use copy to make a copy of the buffer.
Slicing buffers
A sliced buffer is a new buffer which backs onto the original buffer, i.e. it does not copy the underlying data.
Use slice to create a sliced buffer.
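The difference between the two can be sketched as follows (the string content is illustrative):

```groovy
import io.vertx.groovy.core.buffer.Buffer

def buff = Buffer.buffer("hello world")

// copy duplicates the underlying bytes - modifying the copy
// leaves the original untouched
def copied = buff.copy()

// slice shares the underlying bytes with the original buffer,
// so no data is copied
def sliced = buff.slice(0, 5)
println(sliced.toString("UTF-8")) // hello
```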
Buffer re-use
After writing a buffer to a socket or other similar place, it cannot be re-used.
Writing TCP servers and clients
Vert.x allows you to easily write non blocking TCP clients and servers.
Creating a TCP server
The simplest way to create a TCP server, using all default options is as follows:
def server = vertx.createNetServer()
Configuring a TCP server
If you don’t want the default, a server can be configured by passing in a NetServerOptions
instance when creating it:
def options = [
port:4321
]
def server = vertx.createNetServer(options)
Start the Server Listening
To tell the server to listen for incoming requests you use one of the listen
alternatives.
To tell the server to listen at the host and port as specified in the options:
def server = vertx.createNetServer()
server.listen()
Or to specify the host and port in the call to listen, ignoring what is configured in the options:
def server = vertx.createNetServer()
server.listen(1234, "localhost")
The default host is 0.0.0.0 which means 'listen on all available addresses' and the default port is 0, which is a special value that instructs the server to find a random unused local port and use that.
The actual bind is asynchronous so the server might not actually be listening until some time after the call to listen has returned.
If you want to be notified when the server is actually listening you can provide a handler to the listen
call.
For example:
def server = vertx.createNetServer()
server.listen(1234, "localhost", { res ->
if (res.succeeded()) {
println("Server is now listening!")
} else {
println("Failed to bind!")
}
})
Listening on a random port
If 0
is used as the listening port, the server will find an unused random port to listen on.
To find out the real port the server is listening on you can call actualPort.
def server = vertx.createNetServer()
server.listen(0, "localhost", { res ->
if (res.succeeded()) {
println("Server is now listening on actual port: ${server.actualPort()}")
} else {
println("Failed to bind!")
}
})
Getting notified of incoming connections
To be notified when a connection is made you need to set a connectHandler:
def server = vertx.createNetServer()
server.connectHandler({ socket ->
// Handle the connection in here
})
When a connection is made the handler will be called with an instance of NetSocket.
This is a socket-like interface to the actual connection, and allows you to read and write data as well as do various other things like close the socket.
Reading data from the socket
To read data from the socket you set the handler on the socket.
This handler will be called with an instance of Buffer every time data is received on the socket.
def server = vertx.createNetServer()
server.connectHandler({ socket ->
socket.handler({ buffer ->
println("I received some bytes: ${buffer.length()}")
})
})
Writing data to a socket
You write to a socket using one of the write methods.
import io.vertx.groovy.core.buffer.Buffer
// Write a buffer
def buffer = Buffer.buffer().appendFloat(12.34f).appendInt(123)
socket.write(buffer)
// Write a string in UTF-8 encoding
socket.write("some data")
// Write a string using the specified encoding
socket.write("some data", "UTF-16")
Write operations are asynchronous and may not occur until some time after the call to write has returned.
Closed handler
If you want to be notified when a socket is closed, you can set a closeHandler
on it:
socket.closeHandler({ v ->
println("The socket has been closed")
})
Handling exceptions
You can set an exceptionHandler to receive any exceptions that happen on the socket.
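For example (a sketch, assuming socket is a NetSocket in scope):

```groovy
// Log any exception raised on the socket
socket.exceptionHandler({ t ->
  println("Something went wrong on the socket: ${t.getMessage()}")
})
```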
Event bus write handler
Every socket automatically registers a handler on the event bus, and when any buffers are received in this handler, it writes them to the socket.
This enables you to write data to a socket which is potentially in a completely different verticle or even in a different Vert.x instance by sending the buffer to the address of that handler.
The address of the handler is given by writeHandlerID.
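As a sketch, a verticle holding that ID could write to the socket like this (the variable names are illustrative, and socket is assumed in scope where the ID is captured):

```groovy
import io.vertx.groovy.core.buffer.Buffer

// The socket's owner obtains the ID...
def writeHandlerID = socket.writeHandlerID()

// ...and any verticle holding that ID can write to the socket
// by sending a buffer to it over the event bus
vertx.eventBus().send(writeHandlerID, Buffer.buffer("hello from afar"))
```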
Local and remote addresses
The local address of a NetSocket can be retrieved using localAddress.
The remote address (i.e. the address of the other end of the connection) of a NetSocket can be retrieved using remoteAddress.
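A sketch of inspecting both addresses inside a connect handler (server is assumed in scope):

```groovy
// Print both ends of each incoming connection
server.connectHandler({ socket ->
  println("Local: ${socket.localAddress().host()}:${socket.localAddress().port()}")
  println("Remote: ${socket.remoteAddress().host()}:${socket.remoteAddress().port()}")
})
```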
Sending files
Files can be written to the socket directly using sendFile. This can be a very efficient way to send files, as it can be handled by the OS kernel directly where supported by the operating system.
socket.sendFile("myfile.dat")
Streaming sockets
Instances of NetSocket
are also ReadStream
and
WriteStream
instances so they can be used to pump data to or from other
read and write streams.
See the chapter on streams and pumps for more information.
Upgrading connections to SSL/TLS
A non SSL/TLS connection can be upgraded to SSL/TLS using upgradeToSsl.
The server or client must be configured for SSL/TLS for this to work correctly. Please see the chapter on SSL/TLS for more information.
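A sketch of the upgrade, assuming the socket's server or client was created with SSL configured and socket is in scope:

```groovy
// Upgrade an established connection to SSL/TLS.
// The handler is called once the TLS handshake has completed.
socket.upgradeToSsl({ v ->
  println("Connection is now secure: ${socket.isSsl()}")
})
```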
Closing a TCP Server
Call close to close the server. Closing the server closes any open connections and releases all server resources.
The close is actually asynchronous and might not complete until some time after the call has returned. If you want to be notified when the actual close has completed then you can pass in a handler.
This handler will then be called when the close has fully completed.
server.close({ res ->
if (res.succeeded()) {
println("Server is now closed")
} else {
println("close failed")
}
})
Automatic clean-up in verticles
If you’re creating TCP servers and clients from inside verticles, those servers and clients will be automatically closed when the verticle is undeployed.
Scaling - sharing TCP servers
The handlers of any TCP server are always executed on the same event loop thread.
This means that if you are running on a server with a lot of cores, and you only have this one instance deployed then you will have at most one core utilised on your server.
In order to utilise more cores of your server you will need to deploy more instances of the server.
You can instantiate more instances programmatically in your code:
// Create a few instances so we can utilise cores
for (def i = 0;i < 10;i++) {
def server = vertx.createNetServer()
server.connectHandler({ socket ->
socket.handler({ buffer ->
// Just echo back the data
socket.write(buffer)
})
})
server.listen(1234, "localhost")
}
or, if you are using verticles you can simply deploy more instances of your server verticle by using the -instances
option
on the command line:
vertx run com.mycompany.MyVerticle -instances 10
or when programmatically deploying your verticle
def options = [
instances:10
]
vertx.deployVerticle("com.mycompany.MyVerticle", options)
Once you do this you will find the echo server works functionally identically to before, but all your cores on your server can be utilised and more work can be handled.
At this point you might be asking yourself 'How can you have more than one server listening on the same host and port? Surely you will get port conflicts as soon as you try and deploy more than one instance?'
Vert.x does a little magic here.
When you deploy another server on the same host and port as an existing server it doesn’t actually try and create a new server listening on the same host/port.
Instead it internally maintains just a single server, and, as incoming connections arrive it distributes them in a round-robin fashion to any of the connect handlers.
Consequently Vert.x TCP servers can scale over available cores while each instance remains single threaded.
Creating a TCP client
The simplest way to create a TCP client, using all default options is as follows:
def client = vertx.createNetClient()
Configuring a TCP client
If you don’t want the default, a client can be configured by passing in a NetClientOptions
instance when creating it:
def options = [
connectTimeout:10000
]
def client = vertx.createNetClient(options)
Making connections
To make a connection to a server you use connect
,
specifying the port and host of the server and a handler that will be called with a result containing the
NetSocket
when connection is successful or with a failure if connection failed.
def options = [
connectTimeout:10000
]
def client = vertx.createNetClient(options)
client.connect(4321, "localhost", { res ->
if (res.succeeded()) {
println("Connected!")
def socket = res.result()
} else {
println("Failed to connect: ${res.cause().getMessage()}")
}
})
Configuring connection attempts
A client can be configured to automatically retry connecting to the server in the event that it cannot connect.
This is configured with reconnectInterval and reconnectAttempts.
Note
|
Currently Vert.x will not attempt to reconnect if a connection fails, reconnect attempts and interval only apply to creating initial connections. |
def options = [:]
options.reconnectAttempts = 10
options.reconnectInterval = 500
def client = vertx.createNetClient(options)
By default, multiple connection attempts are disabled.
Configuring servers and clients to work with SSL/TLS
TCP clients and servers can be configured to use Transport Layer Security - earlier versions of TLS were known as SSL.
The APIs of the servers and clients are identical whether or not SSL/TLS is used, and it’s enabled by configuring
the NetClientOptions
or NetServerOptions
instances used
to create the servers or clients.
Specifying key/certificate for the server
SSL/TLS servers usually provide certificates to clients in order to verify their identity.
Certificates/keys can be configured for servers in several ways:
The first method is by specifying the location of a Java key-store which contains the certificate and private key.
Java key stores can be managed with the keytool utility which ships with the JDK.
The password for the key store should also be provided:
def options = [
ssl:true,
keyStoreOptions:[
path:"/path/to/your/server-keystore.jks",
password:"password-of-your-keystore"
]
]
def server = vertx.createNetServer(options)
Alternatively you can read the key store yourself as a buffer and provide that directly:
def myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-keystore.jks")
def jksOptions = [
value:myKeyStoreAsABuffer,
password:"password-of-your-keystore"
]
def options = [
ssl:true,
keyStoreOptions:jksOptions
]
def server = vertx.createNetServer(options)
Key/certificate in PKCS#12 format (http://en.wikipedia.org/wiki/PKCS_12), usually with the .pfx or the .p12 extension, can also be loaded in a similar fashion to JKS key stores:
def options = [
ssl:true,
pfxKeyCertOptions:[
path:"/path/to/your/server-keystore.pfx",
password:"password-of-your-keystore"
]
]
def server = vertx.createNetServer(options)
Buffer configuration is also supported:
def myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-keystore.pfx")
def pfxOptions = [
value:myKeyStoreAsABuffer,
password:"password-of-your-keystore"
]
def options = [
ssl:true,
pfxKeyCertOptions:pfxOptions
]
def server = vertx.createNetServer(options)
Another way is to provide the server private key and certificate separately, using .pem files:
def options = [
ssl:true,
pemKeyCertOptions:[
keyPath:"/path/to/your/server-key.pem",
certPath:"/path/to/your/server-cert.pem"
]
]
def server = vertx.createNetServer(options)
Buffer configuration is also supported:
def myKeyAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-key.pem")
def myCertAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-cert.pem")
def pemOptions = [
keyValue:myKeyAsABuffer,
certValue:myCertAsABuffer
]
def options = [
ssl:true,
pemKeyCertOptions:pemOptions
]
def server = vertx.createNetServer(options)
Keep in mind that with PEM configuration, the private key is not encrypted.
Specifying trust for the server
SSL/TLS servers can use a certificate authority in order to verify the identity of the clients.
Certificate authorities can be configured for servers in several ways:
The first method is by specifying the location of a Java trust store which contains the certificate authority.
Java trust stores can be managed with the keytool utility which ships with the JDK.
The password for the trust store should also be provided:
def options = [
ssl:true,
clientAuthRequired:true,
trustStoreOptions:[
path:"/path/to/your/truststore.jks",
password:"password-of-your-truststore"
]
]
def server = vertx.createNetServer(options)
Alternatively you can read the trust store yourself as a buffer and provide that directly:
def myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.jks")
def options = [
ssl:true,
clientAuthRequired:true,
trustStoreOptions:[
value:myTrustStoreAsABuffer,
password:"password-of-your-truststore"
]
]
def server = vertx.createNetServer(options)
Certificate authority in PKCS#12 format (http://en.wikipedia.org/wiki/PKCS_12), usually with the .pfx or the .p12 extension, can also be loaded in a similar fashion to JKS trust stores:
def options = [
ssl:true,
clientAuthRequired:true,
pfxTrustOptions:[
path:"/path/to/your/truststore.pfx",
password:"password-of-your-truststore"
]
]
def server = vertx.createNetServer(options)
Buffer configuration is also supported:
def myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.pfx")
def options = [
ssl:true,
clientAuthRequired:true,
pfxTrustOptions:[
value:myTrustStoreAsABuffer,
password:"password-of-your-truststore"
]
]
def server = vertx.createNetServer(options)
Another way is to provide the server certificate authority as a list of .pem files:
def options = [
ssl:true,
clientAuthRequired:true,
pemTrustOptions:[
certPaths:[
"/path/to/your/server-ca.pem"
]
]
]
def server = vertx.createNetServer(options)
Buffer configuration is also supported:
def myCaAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-ca.pem")
def options = [
ssl:true,
clientAuthRequired:true,
pemTrustOptions:[
certValues:[
myCaAsABuffer
]
]
]
def server = vertx.createNetServer(options)
Enabling SSL/TLS on the client
Net clients can also be easily configured to use SSL. They have the exact same API when using SSL as when using standard sockets.
To enable SSL on a NetClient, set ssl to true in the options used to create the client.
Client trust configuration
If trustAll is set to true on the client, then the client will trust all server certificates. The connection will still be encrypted, but this mode is vulnerable to 'man in the middle' attacks, i.e. you can't be sure who you are connecting to. Use this with caution. The default value is false.
def options = [
ssl:true,
trustAll:true
]
def client = vertx.createNetClient(options)
If trustAll
is not set then a client trust store must be
configured and should contain the certificates of the servers that the client trusts.
As with server configuration, the client trust can be configured in several ways:
The first method is by specifying the location of a Java trust-store which contains the certificate authority.
It is just a standard Java key store, the same as the key stores on the server side. The client trust store location is set by using the function path on the jks options. If a server presents a certificate during connection which is not in the client trust store, the connection attempt will not succeed.
def options = [
ssl:true,
trustStoreOptions:[
path:"/path/to/your/truststore.jks",
password:"password-of-your-truststore"
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.jks")
def options = [
ssl:true,
trustStoreOptions:[
value:myTrustStoreAsABuffer,
password:"password-of-your-truststore"
]
]
def client = vertx.createNetClient(options)
Certificate authority in PKCS#12 format (http://en.wikipedia.org/wiki/PKCS_12), usually with the .pfx or the .p12 extension, can also be loaded in a similar fashion to JKS trust stores:
def options = [
ssl:true,
pfxTrustOptions:[
path:"/path/to/your/truststore.pfx",
password:"password-of-your-truststore"
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.pfx")
def options = [
ssl:true,
pfxTrustOptions:[
value:myTrustStoreAsABuffer,
password:"password-of-your-truststore"
]
]
def client = vertx.createNetClient(options)
Another way is to provide the server certificate authority as a list of .pem files:
def options = [
ssl:true,
pemTrustOptions:[
certPaths:[
"/path/to/your/ca-cert.pem"
]
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/ca-cert.pem")
def options = [
ssl:true,
pemTrustOptions:[
certValues:[
myTrustStoreAsABuffer
]
]
]
def client = vertx.createNetClient(options)
Specifying key/certificate for the client
If the server requires client authentication then the client must present its own certificate to the server when connecting. The client can be configured in several ways:
The first method is by specifying the location of a Java key-store which contains the key and certificate.
Again it’s just a regular Java key store. The client keystore location is set by using the function path on the jks options.
def options = [
ssl:true,
keyStoreOptions:[
path:"/path/to/your/client-keystore.jks",
password:"password-of-your-keystore"
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-keystore.jks")
def jksOptions = [
value:myKeyStoreAsABuffer,
password:"password-of-your-keystore"
]
def options = [
ssl:true,
keyStoreOptions:jksOptions
]
def client = vertx.createNetClient(options)
Key/certificate in PKCS#12 format (http://en.wikipedia.org/wiki/PKCS_12), usually with the .pfx or the .p12 extension, can also be loaded in a similar fashion to JKS key stores:
def options = [
ssl:true,
pfxKeyCertOptions:[
path:"/path/to/your/client-keystore.pfx",
password:"password-of-your-keystore"
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-keystore.pfx")
def pfxOptions = [
value:myKeyStoreAsABuffer,
password:"password-of-your-keystore"
]
def options = [
ssl:true,
pfxKeyCertOptions:pfxOptions
]
def client = vertx.createNetClient(options)
Another way is to provide the client private key and certificate separately, using .pem files:
def options = [
ssl:true,
pemKeyCertOptions:[
keyPath:"/path/to/your/client-key.pem",
certPath:"/path/to/your/client-cert.pem"
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myKeyAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-key.pem")
def myCertAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-cert.pem")
def pemOptions = [
keyValue:myKeyAsABuffer,
certValue:myCertAsABuffer
]
def options = [
ssl:true,
pemKeyCertOptions:pemOptions
]
def client = vertx.createNetClient(options)
Keep in mind that with PEM configuration, the private key is not encrypted.
Revoking certificate authorities
Trust can be configured to use a certificate revocation list (CRL) for revoked certificates that should no longer be trusted. The crlPath option configures the CRL list to use:
def options = [
ssl:true,
trustStoreOptions:trustOptions,
crlPaths:[
"/path/to/your/crl.pem"
]
]
def client = vertx.createNetClient(options)
Buffer configuration is also supported:
def myCrlAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/crl.pem")
def options = [
ssl:true,
trustStoreOptions:trustOptions,
crlValues:[
myCrlAsABuffer
]
]
def client = vertx.createNetClient(options)
Writing HTTP servers and clients
Vert.x allows you to easily write non blocking HTTP clients and servers.
Creating an HTTP Server
The simplest way to create an HTTP server, using all default options is as follows:
def server = vertx.createHttpServer()
Configuring an HTTP server
If you don’t want the default, a server can be configured by passing in a HttpServerOptions
instance when creating it:
def options = [
maxWebsocketFrameSize:1000000
]
def server = vertx.createHttpServer(options)
Start the Server Listening
To tell the server to listen for incoming requests you use one of the listen
alternatives.
To tell the server to listen at the host and port as specified in the options:
def server = vertx.createHttpServer()
server.listen()
Or to specify the host and port in the call to listen, ignoring what is configured in the options:
def server = vertx.createHttpServer()
server.listen(8080, "myhost.com")
The default host is 0.0.0.0 which means 'listen on all available addresses' and the default port is 80.
The actual bind is asynchronous so the server might not actually be listening until some time after the call to listen has returned.
If you want to be notified when the server is actually listening you can provide a handler to the listen
call.
For example:
def server = vertx.createHttpServer()
server.listen(8080, "myhost.com", { res ->
if (res.succeeded()) {
println("Server is now listening!")
} else {
println("Failed to bind!")
}
})
Getting notified of incoming requests
To be notified when a request arrives you need to set a requestHandler:
def server = vertx.createHttpServer()
server.requestHandler({ request ->
// Handle the request in here
})
Handling requests
When a request arrives, the request handler is called passing in an instance of HttpServerRequest
.
This object represents the server side HTTP request.
The handler is called when the headers of the request have been fully read.
If the request contains a body, that body will arrive at the server some time after the request handler has been called.
The server request object allows you to retrieve the uri, path, params and headers, amongst other things.
Each server request object is associated with one server response object. You use response to get a reference to the HttpServerResponse object.
Here’s a simple example of a server handling a request and replying with "hello world" to it.
vertx.createHttpServer().requestHandler({ request ->
request.response().end("Hello world")
}).listen(8080)
Request version
The version of HTTP specified in the request can be retrieved with version.
Request method
Use method
to retrieve the HTTP method of the request.
(i.e. whether it’s GET, POST, PUT, DELETE, HEAD, OPTIONS, etc).
Request URI
Use uri
to retrieve the URI of the request.
Note that this is the actual URI as passed in the HTTP request, and it’s almost always a relative URI.
The URI is as defined in Section 5.1.2 of the HTTP specification - Request-URI
Request path
Use path to return the path part of the URI.
For example, if the request URI was:
a/b/c/page.html?param1=abc&param2=xyz
Then the path would be
/a/b/c/page.html
Request query
Use query to return the query part of the URI.
For example, if the request URI was:
a/b/c/page.html?param1=abc&param2=xyz
Then the query would be
param1=abc&param2=xyz
Request headers
Use headers
to return the headers of the HTTP request.
This returns an instance of MultiMap
- which is like a normal Map or Hash but allows multiple
values for the same key - this is because HTTP allows multiple header values with the same key.
It also has case-insensitive keys, which means you can do the following:
def headers = request.headers()
// Get the User-Agent:
println("User agent is ${headers.get("user-agent")}")
// You can also do this and get the same result:
println("User agent is ${headers.get("User-Agent")}")
Request parameters
Use params
to return the parameters of the HTTP request.
Just like headers
this returns an instance of MultiMap
as there can be more than one parameter with the same name.
Request parameters are sent on the request URI, after the path. For example if the URI was:
/page.html?param1=abc&param2=xyz
Then the parameters would contain the following:
param1: 'abc'
param2: 'xyz'
Note that these request parameters are retrieved from the URL of the request. If you have form attributes that have been sent as part of the submission of an HTML form submitted in the body of a multipart/form-data request then they will not appear in the params here.
Remote address
The address of the sender of the request can be retrieved with remoteAddress.
Absolute URI
The URI passed in an HTTP request is usually relative. If you wish to retrieve the absolute URI corresponding to the request, you can get it with absoluteURI.
End handler
The endHandler
of the request is invoked when the entire request,
including any body has been fully read.
Reading Data from the Request Body
Often an HTTP request contains a body that we want to read. As previously mentioned the request handler is called when just the headers of the request have arrived so the request object does not have a body at that point.
This is because the body may be very large (e.g. a file upload) and we don’t generally want to buffer the entire body in memory before handing it to you, as that could cause the server to exhaust available memory.
To receive the body, you can use the handler
on the request,
this will get called every time a chunk of the request body arrives. Here’s an example:
request.handler({ buffer ->
println("I have received a chunk of the body of length ${buffer.length()}")
})
The object passed into the handler is a Buffer
, and the handler can be called
multiple times as data arrives from the network, depending on the size of the body.
In some cases (e.g. if the body is small) you will want to aggregate the entire body in memory, so you could do the aggregation yourself as follows:
import io.vertx.groovy.core.buffer.Buffer
// Create an empty buffer
def totalBuffer = Buffer.buffer()
request.handler({ buffer ->
println("I have received a chunk of the body of length ${buffer.length()}")
totalBuffer.appendBuffer(buffer)
})
request.endHandler({ v ->
println("Full body received, length = ${totalBuffer.length()}")
})
This is such a common case, that Vert.x provides a bodyHandler
to do this
for you. The body handler is called once when all the body has been received:
request.bodyHandler({ totalBuffer ->
println("Full body received, length = ${totalBuffer.length()}")
})
Pumping requests
The request object is a ReadStream
so you can pump the request body to any
WriteStream
instance.
See the chapter on streams and pumps for a detailed explanation.
Handling HTML forms
HTML forms can be submitted with either a content type of application/x-www-form-urlencoded
or multipart/form-data
.
For url encoded forms, the form attributes are encoded in the url, just like normal query parameters.
For multi-part forms they are encoded in the request body, and as such are not available until the entire body has been read from the wire.
Multi-part forms can also contain file uploads.
If you want to retrieve the attributes of a multi-part form you should tell Vert.x that you expect to receive
such a form before any of the body is read by calling setExpectMultipart
with true, and then you should retrieve the actual attributes using formAttributes
once the entire body has been read:
server.requestHandler({ request ->
request.setExpectMultipart(true)
request.endHandler({ v ->
// The body has now been fully read, so retrieve the form attributes
def formAttributes = request.formAttributes()
})
})
Handling form file uploads
Vert.x can also handle file uploads which are encoded in a multi-part request body.
To receive file uploads you tell Vert.x to expect a multi-part form and set an
uploadHandler
on the request.
This handler will be called once for every upload that arrives on the server.
The object passed into the handler is a HttpServerFileUpload
instance.
server.requestHandler({ request ->
request.setExpectMultipart(true)
request.uploadHandler({ upload ->
println("Got a file upload ${upload.name()}")
})
})
File uploads can be large, so we don’t provide the entire upload in a single buffer as that might result in memory exhaustion. Instead, the upload data is received in chunks:
request.uploadHandler({ upload ->
upload.handler({ chunk ->
println("Received a chunk of the upload of length ${chunk.length()}")
})
})
The upload object is a ReadStream
so you can pump the request body to any
WriteStream
instance. See the chapter on streams and pumps for a
detailed explanation.
If you just want to upload the file to disk somewhere you can use streamToFileSystem:
request.uploadHandler({ upload ->
upload.streamToFileSystem("myuploads_directory/${upload.filename()}")
})
Warning
|
Make sure you check the filename in a production system to avoid malicious clients uploading files to arbitrary places on your filesystem. See security notes for more information. |
Sending back responses
The server response object is an instance of HttpServerResponse and is obtained from the request with response.
You use the response object to write a response back to the HTTP client.
Setting status code and message
The default HTTP status code for a response is 200
, representing OK
.
Use setStatusCode
to set a different code.
You can also specify a custom status message with setStatusMessage
.
If you don’t specify a status message, the default one corresponding to the status code will be used.
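For example, a minimal sketch of sending back a non-default status code with a custom status message:

```groovy
def response = request.response()
// Set a non-default status code and a custom status message
response.setStatusCode(404).setStatusMessage("Resource Not Found")
response.end()
```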
Writing HTTP responses
To write data to an HTTP response, you use one of the write
operations.
These can be invoked multiple times before the response is ended. They can be invoked in a few ways:
With a single buffer:
def response = request.response()
response.write(buffer)
With a string. In this case the string will be encoded using UTF-8 and the result written to the wire.
def response = request.response()
response.write("hello world!")
With a string and an encoding. In this case the string will be encoded using the specified encoding and the result written to the wire.
def response = request.response()
response.write("hello world!", "UTF-16")
Writing to a response is asynchronous and always returns immediately after the write has been queued.
If you are just writing a single string or buffer to the HTTP response you can write it and end the response in a
single call to one of the end methods.
The first call to write results in the response headers being written to the response. Consequently, if you are
not using HTTP chunking then you must set the Content-Length
header before writing to the response, since it will
be too late otherwise. If you are using HTTP chunking you do not have to worry.
Ending HTTP responses
Once you have finished with the HTTP response you should end
it.
This can be done in several ways:
With no arguments, the response is simply ended.
def response = request.response()
response.write("hello world!")
response.end()
It can also be called with a string or buffer in the same way write
is called. In this case it’s just the same as
calling write with a string or buffer followed by calling end with no arguments. For example:
def response = request.response()
response.end("hello world!")
Closing the underlying connection
You can close the underlying TCP connection with close
.
Non keep-alive connections will be automatically closed by Vert.x when the response is ended.
Keep-alive connections are not automatically closed by Vert.x by default. If you want keep-alive connections to be
closed after an idle time, then you configure idleTimeout
.
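As a sketch, assuming the idleTimeout option on the server options map (specified in seconds, per the underlying TCP options):

```groovy
// Close connections that have been idle for more than 60 seconds
// (assumes the idleTimeout option is expressed in seconds)
def server = vertx.createHttpServer([
  idleTimeout: 60
])
```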
Setting response headers
HTTP response headers can be added to the response by adding them directly to the
headers
:
def response = request.response()
def headers = response.headers()
headers.set("content-type", "text/html")
headers.set("other-header", "wibble")
Or you can use putHeader
def response = request.response()
response.putHeader("content-type", "text/html").putHeader("other-header", "wibble")
Headers must all be added before any parts of the response body are written.
Chunked HTTP responses and trailers
Vert.x supports HTTP Chunked Transfer Encoding.
This allows the HTTP response body to be written in chunks, and is normally used when a large response body is being streamed to a client and the total size is not known in advance.
You put the HTTP response into chunked mode as follows:
def response = request.response()
response.setChunked(true)
Default is non-chunked. When in chunked mode, each call to one of the write
methods will result in a new HTTP chunk being written out.
When in chunked mode you can also write HTTP response trailers to the response. These are actually written in the final chunk of the response.
To add trailers to the response, add them directly to the trailers
.
def response = request.response()
response.setChunked(true)
def trailers = response.trailers()
trailers.set("X-wibble", "woobble").set("X-quux", "flooble")
Or use putTrailer
.
def response = request.response()
response.setChunked(true)
response.putTrailer("X-wibble", "woobble").putTrailer("X-quux", "flooble")
Serving files directly from disk
If you were writing a web server, one way to serve a file from disk would be to open it as an AsyncFile
and pump it to the HTTP response.
Or you could load it in one go using readFile
and write it straight to the response.
Alternatively, Vert.x provides a method which allows you to serve a file from disk to an HTTP response in one operation. Where supported by the underlying operating system this may result in the OS directly transferring bytes from the file to the socket without being copied through user-space at all.
This is done by using sendFile
, and is usually more efficient for large
files, but may be slower for small files.
Here’s a very simple web server that serves files from the file system using sendFile:
vertx.createHttpServer().requestHandler({ request ->
def file = ""
if (request.path() == "/") {
file = "index.html"
} else if (!request.path().contains("..")) {
file = request.path()
}
request.response().sendFile("web/${file}")
}).listen(8080)
Sending a file is asynchronous and may not complete until some time after the call has returned. If you want to
be notified when the file has been fully written you can use the variant of sendFile that takes a completion handler.
Note
|
If you use sendFile while using HTTPS it will copy through user-space, since if the kernel is copying data
directly from disk to socket it doesn’t give us an opportunity to apply any encryption.
|
Warning
|
If you’re going to write web servers directly using Vert.x be careful that users cannot exploit the path to access files outside the directory from which you want to serve them. It may be safer instead to use Vert.x Apex. |
Pumping responses
The server response is a WriteStream
instance so you can pump to it from any
ReadStream
, e.g. AsyncFile
, NetSocket
,
WebSocket
or HttpServerRequest
.
Here’s an example which echoes the request body back in the response for any PUT methods. It uses a pump for the body, so it will work even if the HTTP request body is much larger than can fit in memory at any one time:
import io.vertx.core.http.HttpMethod
import io.vertx.groovy.core.streams.Pump
vertx.createHttpServer().requestHandler({ request ->
def response = request.response()
if (request.method() == HttpMethod.PUT) {
response.setChunked(true)
Pump.pump(request, response).start()
request.endHandler({ v ->
response.end()})
} else {
response.setStatusCode(400).end()
}
}).listen(8080)
HTTP Compression
Vert.x comes with support for HTTP Compression out of the box.
This means you are able to automatically compress the body of the responses before they are sent back to the client.
If the client does not support HTTP compression the responses are sent back without compressing the body.
This allows you to handle clients that support HTTP compression and those that don’t at the same time.
To enable compression you can configure it with compressionSupported
.
By default compression is not enabled.
When HTTP compression is enabled the server will check if the client includes an Accept-Encoding
header which
includes the supported compressions. Commonly used are deflate and gzip. Both are supported by Vert.x.
If such a header is found the server will automatically compress the body of the response with one of the supported compressions and send it back to the client.
Be aware that compression may be able to reduce network traffic but is more CPU-intensive.
Creating an HTTP client
You create an HttpClient
instance with default options as follows:
def client = vertx.createHttpClient()
If you want to configure options for the client, you create it as follows:
def options = [
keepAlive:false
]
def client = vertx.createHttpClient(options)
Making requests
The http client is very flexible and there are various ways you can make requests with it.
Often you want to make many requests to the same host/port with an http client. To avoid you repeating the host/port every time you make a request you can configure the client with a default host/port:
// Set the default host
def options = [
defaultHost:"wibble.com"
]
// Can also set default port if you want...
def client = vertx.createHttpClient(options)
client.getNow("/some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
Alternatively if you find yourself making lots of requests to different host/ports with the same client you can simply specify the host/port when doing the request.
def client = vertx.createHttpClient()
// Specify both port and host name
client.getNow(8080, "myserver.mycompany.com", "/some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
// This time use the default port 80 but specify the host name
client.getNow("foo.othercompany.com", "/other-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
Both methods of specifying host/port are supported for all the different ways of making requests with the client.
Simple requests with no request body
Often, you’ll want to make HTTP requests with no request body. This is usually the case with HTTP GET, OPTIONS and HEAD requests.
The simplest way to do this with the Vert.x http client is using the methods prefixed with Now
. For example
getNow
.
These methods create the http request and send it in a single method call and allow you to provide a handler that will be called with the http response when it comes back.
def client = vertx.createHttpClient()
// Send a GET request
client.getNow("/some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
// Send a HEAD request
client.headNow("/other-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
Writing general requests
At other times you don’t know the request method you want to send until run-time. For that use case we provide
general purpose request methods such as request
which allow you to specify
the HTTP method at run-time:
import io.vertx.core.http.HttpMethod
def client = vertx.createHttpClient()
client.request(HttpMethod.GET, "some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
}).end()
client.request(HttpMethod.POST, "foo-uri", { response ->
println("Received response with status code ${response.statusCode()}")
}).end("some-data")
Writing request bodies
Sometimes you’ll want to write requests which have a body, or perhaps you want to write headers to a request before sending it.
To do this you can call one of the specific request methods such as post
or
one of the general purpose request methods such as request
.
These methods don’t send the request immediately, but instead return an instance of HttpClientRequest
which can be used to write to the request body or write headers.
Here are some examples of writing a POST request with a body:
def client = vertx.createHttpClient()
def request = client.post("some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
// Now do stuff with the request
request.putHeader("content-length", "1000")
request.putHeader("content-type", "text/plain")
request.write(body)
// Make sure the request is ended when you're done with it
request.end()
// Or fluently:
client.post("some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
}).putHeader("content-length", "1000").putHeader("content-type", "text/plain").write(body).end()
// Or even more simply:
client.post("some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
}).putHeader("content-type", "text/plain").end(body)
Methods exist to write strings in UTF-8 encoding and in any specific encoding and to write buffers:
import io.vertx.groovy.core.buffer.Buffer
// Write string encoded in UTF-8
request.write("some data")
// Write string encoded in specific encoding
request.write("some other data", "UTF-16")
// Write a buffer
def buffer = Buffer.buffer()
buffer.appendInt(123).appendLong(245L)
request.write(buffer)
If you are just writing a single string or buffer to the HTTP request you can write it and end the request in a
single call to the end
function.
import io.vertx.groovy.core.buffer.Buffer
// Write string and end the request (send it) in a single call
request.end("some simple data")
// Write buffer and end the request (send it) in a single call
def buffer = Buffer.buffer().appendDouble(12.34d).appendLong(432L)
request.end(buffer)
When you’re writing to a request, the first call to write
will result in the request headers being written
out to the wire.
The actual write is asynchronous and might not occur until some time after the call has returned.
Non-chunked HTTP requests with a request body require a Content-Length
header to be provided.
Consequently, if you are not using chunked HTTP then you must set the Content-Length
header before writing
to the request, as it will be too late otherwise.
If you are calling one of the end
methods that take a string or buffer then Vert.x will automatically calculate
and set the Content-Length
header before writing the request body.
If you are using HTTP chunking a Content-Length
header is not required, so you do not have to calculate the size
up-front.
Writing request headers
You can write headers to a request using the headers
multi-map as follows:
// Write some headers using the headers() multimap
def headers = request.headers()
headers.set("content-type", "application/json").set("other-header", "foo")
The headers are an instance of MultiMap
which provides operations for adding, setting and removing
entries. Http headers allow more than one value for a specific key.
You can also write headers using putHeader
// Write some headers using the putHeader method
request.putHeader("content-type", "application/json").putHeader("other-header", "foo")
If you wish to write headers to the request you must do so before any part of the request body is written.
Ending HTTP requests
Once you have finished with the HTTP request you must end it with one of the end
operations.
Ending a request causes any headers to be written, if they have not already been written and the request to be marked as complete.
Requests can be ended in several ways. With no arguments the request is simply ended:
request.end()
Or a string or buffer can be provided in the call to end
. This is like calling write
with the string or buffer
before calling end
with no arguments
import io.vertx.groovy.core.buffer.Buffer
// End the request with a string
request.end("some-data")
// End it with a buffer
def buffer = Buffer.buffer().appendFloat(12.3f).appendInt(321)
request.end(buffer)
Chunked HTTP requests
Vert.x supports HTTP Chunked Transfer Encoding for requests.
This allows the HTTP request body to be written in chunks, and is normally used when a large request body is being streamed to the server, whose size is not known in advance.
You put the HTTP request into chunked mode using setChunked
.
In chunked mode each call to write will cause a new chunk to be written to the wire. In chunked mode there is
no need to set the Content-Length
of the request up-front.
request.setChunked(true)
// Write some chunks
for (def i = 0;i < 10;i++) {
request.write("this-is-chunk-${i}")
}
request.end()
Request timeouts
You can set a timeout for a specific http request using setTimeout
.
If the request does not return any data within the timeout period an exception will be passed to the exception handler (if provided) and the request will be closed.
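A sketch of combining setTimeout with an exception handler (the URI is illustrative):

```groovy
def request = client.get("some-uri", { response ->
  println("Received response with status code ${response.statusCode()}")
})
// Fail the request if no data is returned within 10 seconds
request.setTimeout(10000)
request.exceptionHandler({ e ->
  println("Request failed or timed out: ${e.getMessage()}")
})
request.end()
```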
Handling exceptions
You can handle exceptions corresponding to a request by setting an exception handler on the HttpClientRequest
instance:
def request = client.post("some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
request.exceptionHandler({ e ->
println("Received exception: ${e.getMessage()}")
e.printStackTrace()
})
TODO - what about exceptions in the getNow methods where no exception handler can be provided??
Maybe need a catch all exception handler??
Specifying a handler on the client request
Instead of providing a response handler in the call to create the client request object, alternatively, you can
not provide a handler when the request is created and set it later on the request object itself, using
handler
, for example:
def request = client.post("some-uri")
request.handler({ response ->
println("Received response with status code ${response.statusCode()}")
})
Using the request as a stream
The HttpClientRequest
instance is also a WriteStream
which means
you can pump to it from any ReadStream
instance.
For, example, you could pump a file on disk to a http request body as follows:
import io.vertx.groovy.core.streams.Pump
request.setChunked(true)
def pump = Pump.pump(file, request)
file.endHandler({ v ->
request.end()})
pump.start()
Handling http responses
You receive an instance of HttpClientResponse
into the handler that you specify in one of
the request methods or by setting a handler directly on the HttpClientRequest
object.
You can query the status code and the status message of the response with statusCode
and statusMessage
.
client.getNow("some-uri", { response ->
// the status code - e.g. 200 or 404
println("Status code is ${response.statusCode()}")
// the status message e.g. "OK" or "Not Found".
println("Status message is ${response.statusMessage()}")
})
Using the response as a stream
The HttpClientResponse
instance is also a ReadStream
which means
you can pump it to any WriteStream
instance.
Response headers and trailers
Http responses can contain headers. Use headers
to get the headers.
The object returned is a MultiMap
as HTTP headers can contain multiple values for single keys.
def contentType = response.headers().get("content-type")
def contentLength = response.headers().get("content-length")
Chunked HTTP responses can also contain trailers - these are sent in the last chunk of the response body.
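Since trailers are sent in the last chunk, they are only available once the response has been fully read, for example in the end handler (the trailer name here is illustrative):

```groovy
client.getNow("some-uri", { response ->
  response.endHandler({ v ->
    // Trailers are only populated once the whole response has been read
    def trailerValue = response.trailers().get("X-wibble")
  })
})
```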
Reading the request body
The response handler is called when the headers of the response have been read from the wire.
If the response has a body this might arrive in several pieces some time after the headers have been read. We don’t wait for all the body to arrive before calling the response handler as the response could be very large and we might be waiting a long time, or run out of memory for large responses.
As parts of the response body arrive, the handler
is called with
a Buffer
representing the piece of the body:
client.getNow("some-uri", { response ->
response.handler({ buffer ->
println("Received a part of the response body: ${buffer}")
})
})
If you know the response body is not very large and want to aggregate it all in memory before handling it, you can either aggregate it yourself:
import io.vertx.groovy.core.buffer.Buffer
client.getNow("some-uri", { response ->
// Create an empty buffer
def totalBuffer = Buffer.buffer()
response.handler({ buffer ->
println("Received a part of the response body: ${buffer.length()}")
totalBuffer.appendBuffer(buffer)
})
response.endHandler({ v ->
// Now all the body has been read
println("Total response body length is ${totalBuffer.length()}")
})
})
Or you can use the convenience bodyHandler
which
is called with the entire body when the response has been fully read:
client.getNow("some-uri", { response ->
response.bodyHandler({ totalBuffer ->
// Now all the body has been read
println("Total response body length is ${totalBuffer.length()}")
})
})
Response end handler
The response endHandler
is called when the entire response body has been read
or immediately after the headers have been read and the response handler has been called if there is no body.
Reading cookies from the response
You can retrieve the list of cookies from a response using cookies
.
Alternatively you can just parse the Set-Cookie
headers yourself in the response.
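For example, a minimal sketch of reading the cookies:

```groovy
client.getNow("some-uri", { response ->
  // Each entry corresponds to a Set-Cookie header in the response
  response.cookies().each { cookie ->
    println("Got cookie: ${cookie}")
  }
})
```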
100-Continue handling
According to the HTTP 1.1 specification a client can set a
header Expect: 100-Continue
and send the request header before sending the rest of the request body.
The server can then respond with an interim response status Status: 100 (Continue)
to signify to the client that
it is ok to send the rest of the body.
The idea here is it allows the server to authorise and accept/reject the request before large amounts of data are sent. Sending large amounts of data if the request might not be accepted is a waste of bandwidth and ties up the server in reading data that it will just discard.
Vert.x allows you to set a continueHandler
on the
client request object.
This will be called if the server sends back a Status: 100 (Continue)
response to signify that it is ok to send
the rest of the request.
This is used in conjunction with `sendHead` to send the head of the request.
Here’s an example:
def request = client.put("some-uri", { response ->
println("Received response with status code ${response.statusCode()}")
})
request.putHeader("Expect", "100-Continue")
request.continueHandler({ v ->
// OK to send rest of body
request.write("Some data")
request.write("Some more data")
request.end()
})
Enabling compression on the client
The http client comes with support for HTTP Compression out of the box.
This means the client can let the remote http server know that it supports compression, and will be able to handle compressed response bodies.
An http server is free to either compress the body with one of the supported compression algorithms or to send it back without compressing it at all. So this is only a hint to the http server, which it may ignore at will.
To tell the http server which compressions the client supports, the client will include an Accept-Encoding
header with
the supported compression algorithms as its value. Multiple compression algorithms are supported. In the case of Vert.x this
will result in the following header being added:
Accept-Encoding: gzip, deflate
The server will then choose one of these. You can detect if a server compressed the body by checking for the
Content-Encoding
header in the response sent back from it.
If the body of the response was compressed via gzip it will include for example the following header:
Content-Encoding: gzip
To enable compression set tryUseCompression
on the options
used when creating the client.
By default compression is disabled.
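For example:

```groovy
// Advertise compression support to servers via the Accept-Encoding header
def client = vertx.createHttpClient([
  tryUseCompression: true
])
```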
Pooling and keep alive
Http keep alive allows http connections to be used for more than one request. This can be a more efficient use of connections when you’re making multiple requests to the same server.
The http client supports pooling of connections, allowing you to reuse connections between requests.
For pooling to work, keep alive must be true using keepAlive
on the options used when configuring the client. The default value is true.
When keep alive is enabled, Vert.x will add a Connection: Keep-Alive
header to each HTTP request sent.
The maximum number of connections to pool for each server is configured using maxPoolSize
When making a request with pooling enabled, Vert.x will create a new connection if there are less than the maximum number of connections already created for that server, otherwise it will add the request to a queue.
When a response returns, if there are pending requests for the server, then the connection will be reused, otherwise it will be closed.
This gives the benefits of keep alive when the client is loaded but means we don’t keep connections hanging around unnecessarily when there would be no benefits anyway.
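A sketch of configuring the pool:

```groovy
// keepAlive is true by default; maxPoolSize caps the connections per server
def client = vertx.createHttpClient([
  keepAlive: true,
  maxPoolSize: 10
])
```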
Pipe-lining
The client also supports pipe-lining of requests on a connection.
Pipe-lining means another request is sent on the same connection before the response from the preceding one has returned. Pipe-lining is not appropriate for all requests.
To enable pipe-lining, it must be enabled using pipelining
.
By default pipe-lining is disabled.
When pipe-lining is enabled requests will be written to connections without waiting for previous responses to return.
When pipe-line responses return at the client, the connection will be automatically closed when all in-flight responses have returned and there are no outstanding pending requests to write.
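For example:

```groovy
// Pipe-lining requires keep-alive connections
def client = vertx.createHttpClient([
  keepAlive: true,
  pipelining: true
])
```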
Server sharing
TODO round robin requests etc
Using HTTPS with Vert.x
Vert.x http servers and clients can be configured to use HTTPS in exactly the same way as net servers.
Please see configuring net servers to use SSL for more information.
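As a sketch, mirroring the net server SSL configuration (the keystore path and password are illustrative):

```groovy
// Enable HTTPS using a Java keystore; path and password are placeholders
def options = [
  ssl: true,
  keyStoreOptions: [
    path: "server-keystore.jks",
    password: "secret"
  ]
]
def server = vertx.createHttpServer(options)
```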
WebSockets
WebSockets are a web technology that allows a full duplex socket-like connection between HTTP servers and HTTP clients (typically browsers).
Vert.x supports WebSockets on both the client and server-side.
WebSockets on the server
There are two ways of handling WebSockets on the server side.
WebSocket handler
The first way involves providing a websocketHandler
on the server instance.
When a WebSocket connection is made to the server, the handler will be called, passing in an instance of
ServerWebSocket
.
server.websocketHandler({ websocket ->
println("Connected!")
})
You can choose to reject the WebSocket by calling reject
.
server.websocketHandler({ websocket ->
if (websocket.path() == "/myapi") {
websocket.reject()
} else {
// Do something
}
})
Upgrading to WebSocket
The second way of handling WebSockets is to handle the HTTP Upgrade request that was sent from the client, and
call upgrade
on the server request.
server.requestHandler({ request ->
if (request.path() == "/myapi") {
def websocket = request.upgrade()
// Do something
} else {
// Reject
request.response().setStatusCode(400).end()
}
})
The server WebSocket
The ServerWebSocket
instance enables you to retrieve the headers, path, query and URI of the HTTP request of the WebSocket handshake.
WebSockets on the client
The Vert.x HttpClient
supports WebSockets.
You can connect a WebSocket to a server using one of the websocket
operations and
providing a handler.
The handler will be called with an instance of WebSocket
when the connection has been made:
client.websocket("/some-uri", { websocket ->
println("Connected!")
})
Writing messages to WebSockets
If you wish to write a single binary WebSocket message containing a single WebSocket frame to the WebSocket (a
common case) the simplest way to do this is to use writeMessage
:
import io.vertx.groovy.core.buffer.Buffer
// Write a simple message
def buffer = Buffer.buffer().appendInt(123).appendFloat(1.23f)
websocket.writeMessage(buffer)
If the websocket message is larger than the maximum websocket frame size as configured with
maxWebsocketFrameSize
then Vert.x will split it into multiple WebSocket frames before sending it on the wire.
Writing frames to WebSockets
A WebSocket message can be composed of multiple frames. In this case the first frame is either a binary or text frame followed by one or more continuation frames.
The last frame in the message is marked as final.
To send a message consisting of multiple frames you create frames using
WebSocketFrame.binaryFrame
, WebSocketFrame.textFrame
or
WebSocketFrame.continuationFrame
and write them
to the WebSocket using writeFrame
.
Here’s an example for binary frames:
import io.vertx.groovy.core.http.WebSocketFrame
def frame1 = WebSocketFrame.binaryFrame(buffer1, false)
websocket.writeFrame(frame1)
def frame2 = WebSocketFrame.continuationFrame(buffer2, false)
websocket.writeFrame(frame2)
// Write the final frame
def frame3 = WebSocketFrame.continuationFrame(buffer2, true)
websocket.writeFrame(frame3)
Reading frames from WebSockets
To read frames from a WebSocket you use the frameHandler
.
The frame handler will be called with instances of WebSocketFrame
when a frame arrives,
for example:
websocket.frameHandler({ frame ->
println("Received a frame of size!")
})
Closing WebSockets
Use close
to close the WebSocket connection when you have finished with it.
Streaming WebSockets
The WebSocket
instance is also a ReadStream
and a
WriteStream
so it can be used with pumps.
When using a WebSocket as a write stream or a read stream it can only be used with WebSocket connections where the data is passed as binary frames that are not split over multiple frames.
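For example, a sketch of a server that echoes binary messages back to the client by pumping each WebSocket to itself:

```groovy
import io.vertx.groovy.core.streams.Pump

server.websocketHandler({ websocket ->
  // Pump everything read from the socket straight back out of it
  Pump.pump(websocket, websocket).start()
})
```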
Automatic clean-up in verticles
If you’re creating http servers and clients from inside verticles, those servers and clients will be automatically closed when the verticle is undeployed.
Using Shared Data with Vert.x
Shared data contains functionality that allows you to safely share data between different parts of your application, or different applications in the same Vert.x instance or across a cluster of Vert.x instances.
Shared data includes local shared maps, distributed, cluster-wide maps, asynchronous cluster-wide locks and asynchronous cluster-wide counters.
Local shared maps
Local shared maps
allow you to share data safely between different event
loops (e.g. different verticles) in the same Vert.x instance.
Local shared maps only allow certain data types to be used as keys and values. Those types must either be immutable,
or certain other types that can be copied like Buffer
. In the latter case the key/value
will be copied before putting it in the map.
This way we can ensure there is no shared access to mutable state between different threads in your Vert.x application so you don’t have to worry about protecting that state by synchronising access to it.
Here’s an example of using a shared local map:
import io.vertx.groovy.core.buffer.Buffer
def sd = vertx.sharedData()
def map1 = sd.getLocalMap("mymap1")
map1.put("foo", "bar")
def map2 = sd.getLocalMap("mymap2")
map2.put("eek", Buffer.buffer().appendInt(123))
// Then... in another part of your application:
map1 = sd.getLocalMap("mymap1")
def val = map1.get("foo")
map2 = sd.getLocalMap("mymap2")
def buff = map2.get("eek")
Cluster-wide asynchronous maps
Cluster-wide asynchronous maps allow data to be put in the map from any node of the cluster and retrieved from any other node.
This makes them really useful for things like storing session state in a farm of servers hosting a Vert.x web application.
You get an instance of AsyncMap
with
getClusterWideMap
.
Getting the map is asynchronous and the result is returned to you in the handler that you specify. Here’s an example:
def sd = vertx.sharedData()
sd.getClusterWideMap("mymap", { res ->
if (res.succeeded()) {
def map = res.result()
} else {
// Something went wrong!
}
})
Putting data in a map
You put data in a map with put
.
The actual put is asynchronous and the handler is notified once it is complete:
map.put("foo", "bar", { resPut ->
if (resPut.succeeded()) {
// Successfully put the value
} else {
// Something went wrong!
}
})
Getting data from a map
You get data from a map with get
.
The actual get is asynchronous and the handler is notified with the result some time later:
map.get("foo", { resGet ->
if (resGet.succeeded()) {
// Successfully got the value
def val = resGet.result()
} else {
// Something went wrong!
}
})
Other map operations
You can also remove entries from an asynchronous map, clear them and get the size.
See the API docs
for more information.
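For example, a sketch of removing an entry and querying the size (both asynchronous, like the other map operations):

```groovy
map.remove("foo", { resRemove ->
  if (resRemove.succeeded()) {
    // The previous value associated with the key, if any
    def previous = resRemove.result()
  }
})
map.size({ resSize ->
  if (resSize.succeeded()) {
    println("Map contains ${resSize.result()} entries")
  }
})
```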
Cluster-wide locks
Cluster wide locks
allow you to obtain exclusive locks across the cluster -
this is useful when you want to do something or access a resource on only one node of a cluster at any one time.
Cluster wide locks have an asynchronous API unlike most lock APIs which block the calling thread until the lock is obtained.
To obtain a lock use getLock
.
This won’t block, but when the lock is available, the handler will be called with an instance of Lock
,
signifying that you now own the lock.
While you own the lock no other caller, anywhere on the cluster will be able to obtain the lock.
When you’ve finished with the lock, you call release
to release it, so
another caller can obtain it.
sd.getLock("mylock", { res ->
if (res.succeeded()) {
// Got the lock!
def lock = res.result()
// 5 seconds later we release the lock so someone else can get it
vertx.setTimer(5000, { tid ->
lock.release()})
} else {
// Something went wrong
}
})
You can also get a lock with a timeout. If it fails to obtain the lock within the timeout the handler will be called with a failure:
sd.getLockWithTimeout("mylock", 10000, { res ->
if (res.succeeded()) {
// Got the lock!
def lock = res.result()
} else {
// Failed to get lock
}
})
Cluster-wide counters
It’s often useful to maintain an atomic counter across the different nodes of your application.
You can do this with Counter
.
You obtain an instance with getCounter
:
sd.getCounter("mycounter", { res ->
if (res.succeeded()) {
def counter = res.result()
} else {
// Something went wrong!
}
})
Once you have an instance you can retrieve the current count, atomically increment it, decrement it, and add a value to it using the various methods.
See the API docs
for more information.
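For example, atomically incrementing the counter looks like this (a sketch, assuming counter was obtained from getCounter as shown above):

```groovy
counter.incrementAndGet({ res ->
  if (res.succeeded()) {
    // The new count after the increment
    def newCount = res.result()
  } else {
    // Something went wrong!
  }
})
```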
Using the file system with Vert.x
The Vert.x FileSystem
object provides many operations for manipulating the file system.
There is one file system object per Vert.x instance, and you obtain it with fileSystem
.
A blocking and a non-blocking version of each operation is provided.
The non-blocking versions take a handler which is called when the operation completes or an error occurs.
Here’s an example of asynchronously copying a file:
def fs = vertx.fileSystem()
// Copy file from foo.txt to bar.txt
fs.copy("foo.txt", "bar.txt", { res ->
if (res.succeeded()) {
// Copied ok!
} else {
// Something went wrong
}
})
The blocking versions are named with a Blocking suffix, and return their results or throw exceptions directly.
In many cases, depending on the operating system and file system, some of the potentially blocking operations can return quickly, which is why we provide them. But it’s highly recommended that you test how long they take to return in your particular application before using them from an event loop, so as not to break the Golden Rule.
Here’s the copy using the blocking API:
def fs = vertx.fileSystem()
// Copy file from foo.txt to bar.txt synchronously
fs.copyBlocking("foo.txt", "bar.txt")
Many operations exist to copy, move, truncate and chmod files, among many other file operations.
We won’t list them all here, please consult the API docs
for the full list.
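As a further example, moving a file and reading its properties follow the same pattern (a sketch; foo.txt and bar.txt are hypothetical files):

```groovy
def fs = vertx.fileSystem()

// Move (rename) foo.txt to bar.txt
fs.move("foo.txt", "bar.txt", { res ->
  if (res.succeeded()) {
    // Get the properties of the file, e.g. its size
    fs.props("bar.txt", { resProps ->
      if (resProps.succeeded()) {
        println("File size is ${resProps.result().size()} bytes")
      }
    })
  } else {
    // Something went wrong
  }
})
```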
Asynchronous files
Vert.x provides an asynchronous file abstraction that allows you to manipulate a file on the file system.
You open an AsyncFile
as follows:
def options = [:]
fileSystem.open("myfile.txt", options, { res ->
if (res.succeeded()) {
def file = res.result()
} else {
// Something went wrong!
}
})
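Once opened, the AsyncFile lets you write and read at arbitrary positions asynchronously. A minimal sketch, assuming file was obtained as shown above:

```groovy
import io.vertx.groovy.core.buffer.Buffer

// Write a buffer at position 0 in the file
file.write(Buffer.buffer("some data"), 0, { resWrite ->
  if (resWrite.succeeded()) {
    // Read 9 bytes back from position 0 into a fresh buffer
    file.read(Buffer.buffer(), 0, 0, 9, { resRead ->
      if (resRead.succeeded()) {
        println("Read: ${resRead.result()}")
      }
    })
  } else {
    // Something went wrong!
  }
})
```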
TODO
Datagram sockets (UDP)
Using User Datagram Protocol (UDP) with Vert.x is a piece of cake.
UDP is a connection-less transport which basically means you have no persistent connection to a remote peer.
Instead you send and receive packets, and the remote address is contained in each of them.
Besides this, UDP is not as reliable as TCP, which means there is no guarantee that a sent datagram packet will reach its endpoint at all.
The only guarantee is that it will either arrive complete or not at all.
Also, you usually can’t send data bigger than the MTU size of your network interface; this is because each write will be sent as one packet.
But be aware that even if the packet size is smaller than the MTU it may still fail.
The size at which it fails depends on the operating system etc., so the rule of thumb is to send small packets.
Because of the nature of UDP, it is best suited to applications where you are allowed to drop packets (for example a monitoring application).
The benefit is that it has a lot less overhead compared to TCP (which is handled by the NetServer and NetClient, see above).
Creating a DatagramSocket
To use UDP you first need to create a DatagramSocket
. It does not matter here if you only want to send data or send
and receive.
def socket = vertx.createDatagramSocket([:])
The returned DatagramSocket
will not be bound to a specific port. This is not a
problem if you only want to send data (like a client), but more on this in the next section.
Sending Datagram packets
As mentioned before, User Datagram Protocol (UDP) sends data in packets to remote peers but is not connected to them in a persistent fashion.
This means each packet can be sent to a different remote peer.
Sending packets is as easy as shown here:
import io.vertx.groovy.core.buffer.Buffer
def socket = vertx.createDatagramSocket([:])
def buffer = Buffer.buffer("content")
// Send a Buffer
socket.send(buffer, 1234, "10.0.0.1", { asyncResult ->
println("Send succeeded? ${asyncResult.succeeded()}")
})
// Send a String
socket.send("A string used as content", 1234, "10.0.0.1", { asyncResult ->
println("Send succeeded? ${asyncResult.succeeded()}")
})
Receiving Datagram packets
If you want to receive packets you need to bind the DatagramSocket
by calling
listen(…)
on it.
This way you will be able to receive DatagramPackets that were sent to the address and port on
which the DatagramSocket
listens.
Besides this you also want to set a Handler
which will be called for each received DatagramPacket
.
The DatagramPacket
has the following methods:
-
sender()
: The InetSocketAddress which represents the sender of the packet -
data()
: The Buffer which holds the data which was received.
So to listen on a specific address and port you would do something like shown here:
def socket = vertx.createDatagramSocket([:])
socket.listen(1234, "0.0.0.0", { asyncResult ->
if (asyncResult.succeeded()) {
socket.handler({ packet ->
// Do something with the packet
})
} else {
println("Listen failed${asyncResult.cause()}")
}
})
Be aware that even if the AsyncResult is succeeded it only means the packet was written to the network stack; it gives no guarantee that it ever reached, or will reach, the remote peer at all.
If you need such a guarantee then you want to use TCP with some handshaking logic built on top.
Multicast
Sending Multicast packets
Multicast allows multiple sockets to receive the same packets. This works by having the sockets join a multicast group, to which you can then send packets.
We will look at how you can join a multicast group and so receive packets in the next section.
For now let us focus on how to send them. Sending multicast packets is no different from sending normal datagram packets.
The only difference is that you pass in a multicast group address to the send method.
This is shown here:
import io.vertx.groovy.core.buffer.Buffer
def socket = vertx.createDatagramSocket([:])
def buffer = Buffer.buffer("content")
// Send a Buffer to a multicast address
socket.send(buffer, 1234, "230.0.0.1", { asyncResult ->
println("Send succeeded? ${asyncResult.succeeded()}")
})
All sockets that have joined the multicast group 230.0.0.1 will receive the packet.
Receiving Multicast packets
If you want to receive packets for a specific multicast group you need to bind the DatagramSocket
by
calling listen(…)
on it and join the multicast group.
This way you will be able to receive DatagramPackets that were sent to the address and port on which the
DatagramSocket
listens and also those sent to the multicast group.
Besides this you also want to set a Handler which will be called for each received DatagramPacket.
The DatagramPacket
has the following methods:
-
sender()
: The InetSocketAddress which represents the sender of the packet -
data()
: The Buffer which holds the data which was received.
So to listen on a specific address and port and also receive packets for the Multicast group 230.0.0.1 you would do something like shown here:
def socket = vertx.createDatagramSocket([:])
socket.listen(1234, "0.0.0.0", { asyncResult ->
if (asyncResult.succeeded()) {
socket.handler({ packet ->
// Do something with the packet
})
// join the multicast group
socket.listenMulticastGroup("230.0.0.1", { asyncResult2 ->
println("Listen succeeded? ${asyncResult2.succeeded()}")
})
} else {
println("Listen failed${asyncResult.cause()}")
}
})
Unlisten / leave a Multicast group
There are sometimes situations where you want to receive packets for a multicast group for a limited time.
In these situations you can first start listening for them and then later unlisten.
This is shown here:
def socket = vertx.createDatagramSocket([:])
socket.listen(1234, "0.0.0.0", { asyncResult ->
if (asyncResult.succeeded()) {
socket.handler({ packet ->
// Do something with the packet
})
// join the multicast group
socket.listenMulticastGroup("230.0.0.1", { asyncResult2 ->
if (asyncResult2.succeeded()) {
// will now receive packets for group
// do some work
socket.unlistenMulticastGroup("230.0.0.1", { asyncResult3 ->
println("Unlisten succeeded? ${asyncResult3.succeeded()}")
})
} else {
println("Listen failed${asyncResult2.cause()}")
}
})
} else {
println("Listen failed${asyncResult.cause()}")
}
})
Blocking multicast
Besides unlistening from a multicast address, it’s also possible to block multicast for a specific sender address.
Be aware this only works on some operating systems and kernel versions, so please check the operating system documentation to see if it’s supported.
This is an expert feature.
To block multicast from a specific address you can call blockMulticastGroup(…)
on the DatagramSocket
like shown here:
def socket = vertx.createDatagramSocket([:])
// Some code
// This would block packets which are sent from 10.0.0.2
socket.blockMulticastGroup("230.0.0.1", "10.0.0.2", { asyncResult ->
println("block succeeded? ${asyncResult.succeeded()}")
})
DatagramSocket properties
When creating a DatagramSocket
there are multiple properties you can set to
change its behaviour with the DatagramSocketOptions
object. Those are listed here:
-
sendBufferSize
Sets the send buffer size in bytes. -
receiveBufferSize
Sets the receive buffer size in bytes. -
reuseAddress
If true then addresses in TIME_WAIT state can be reused after they have been closed. -
broadcast
Sets or clears the SO_BROADCAST socket option. When this option is set, Datagram (UDP) packets may be sent to a local interface’s broadcast address. -
multicastNetworkInterface
Sets the network interface to be used for multicast packets. -
multicastTimeToLive
Sets the IP_MULTICAST_TTL socket option. TTL stands for "Time to Live," but in this context it specifies the number of IP hops that a packet is allowed to go through, specifically for multicast traffic. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded.
DatagramSocket Local Address
You can find out the local address of the socket (i.e. the address of this side of the UDP Socket) by calling
localAddress
. This will only return an InetSocketAddress
if you
bound the DatagramSocket
with listen(…)
before, otherwise it will return null.
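A sketch of checking the local address, assuming the socket was previously bound with listen(…):

```groovy
def addr = socket.localAddress()
if (addr != null) {
  println("Bound to ${addr.host()}:${addr.port()}")
} else {
  println("Socket is not bound")
}
```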
Closing a DatagramSocket
You can close a socket by invoking the close
method. This will close
the socket and release all its resources.
DNS client
Often you will find yourself in situations where you need to obtain DNS information in an asynchronous fashion.
Unfortunately this is not possible with the API that ships with the Java Virtual Machine itself. Because of this, Vert.x offers its own API for DNS resolution which is fully asynchronous.
To obtain a DnsClient instance you create a new one via the Vertx instance:
def client = vertx.createDnsClient(53, "10.0.0.1")
Be aware that you can pass in a varargs of InetSocketAddress arguments to specify more than one DNS server to try to query for DNS resolution. The DNS servers will be queried in the same order as specified here, with the next server used once the previous one produces an error.
lookup
Try to look up the A (IPv4) or AAAA (IPv6) record for a given name. The first record returned will be used, so it behaves the same way as you may be used to when using "nslookup" on your operating system.
To lookup the A / AAAA record for "vertx.io" you would typically use it like:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.lookup("vertx.io", { ar ->
if (ar.succeeded()) {
println(ar.result())
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
lookup4
Try to look up the A (IPv4) record for a given name. The first record returned will be used, so it behaves the same way as you may be used to when using "nslookup" on your operating system.
To lookup the A record for "vertx.io" you would typically use it like:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.lookup4("vertx.io", { ar ->
if (ar.succeeded()) {
println(ar.result())
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
lookup6
Try to look up the AAAA (IPv6) record for a given name. The first record returned will be used, so it behaves the same way as you may be used to when using "nslookup" on your operating system.
To look up the AAAA record for "vertx.io" you would typically use it like:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.lookup6("vertx.io", { ar ->
if (ar.succeeded()) {
println(ar.result())
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
resolveA
Try to resolve all A (IPv4) records for a given name. This is quite similar to using "dig" on Unix-like operating systems.
To lookup all the A records for "vertx.io" you would typically do:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveA("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
resolveAAAA
Try to resolve all AAAA (IPv6) records for a given name. This is quite similar to using "dig" on Unix-like operating systems.
To look up all the AAAA records for "vertx.io" you would typically do:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveAAAA("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
resolveCNAME
Try to resolve all CNAME records for a given name. This is quite similar to using "dig" on Unix-like operating systems.
To lookup all the CNAME records for "vertx.io" you would typically do:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveCNAME("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
resolveMX
Try to resolve all MX records for a given name. The MX records are used to define which mail server accepts emails for a given domain.
To lookup all the MX records for "vertx.io" you would typically do:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveMX("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
Be aware that the List will contain the MxRecord
s sorted by their priority, which
means MX records with smaller priority come first in the List.
The MxRecord
allows you to access the priority and the name of the MX record via the methods it offers:
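A sketch of accessing those fields, assuming record is one of the MxRecords received in the handler above:

```groovy
record.priority() // the priority of the MX record
record.name()     // the name of the mail server
```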
resolveTXT
Try to resolve all TXT records for a given name. TXT records are often used to define extra information for a domain.
To resolve all the TXT records for "vertx.io" you could use something along these lines:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveTXT("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
resolveNS
Try to resolve all NS records for a given name. The NS records specify which DNS server hosts the DNS information for a given domain.
To resolve all the NS records for "vertx.io" you could use something along these lines:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveNS("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
resolveSRV
Try to resolve all SRV records for a given name. The SRV records are used to define extra information like the port and hostname of services. Some protocols need this extra information.
To lookup all the SRV records for "vertx.io" you would typically do:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolveSRV("vertx.io", { ar ->
if (ar.succeeded()) {
def records = ar.result()
records.each { record ->
println(record)
}
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
Be aware that the List will contain the SrvRecords sorted by their priority, which means SrvRecords with smaller priority come first in the List.
The SrvRecord allows you to access all the information contained in the SRV record itself:
Please refer to the API docs for the exact details.
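A sketch of the accessor methods, assuming record is one of the SrvRecords received in the handler above:

```groovy
record.priority() // the priority of this record
record.weight()   // the weight of this record
record.port()     // the port the service is running on
record.name()     // the name for the server being queried
record.protocol() // the protocol for the service being queried
record.service()  // the service's name
record.target()   // the name of the host for the service
```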
resolvePTR
Try to resolve the PTR record for a given name. The PTR record maps an IP address to a name.
To resolve the PTR record for the IP address 10.0.0.1 you would use the PTR notation "1.0.0.10.in-addr.arpa":
def client = vertx.createDnsClient(53, "10.0.0.1")
client.resolvePTR("1.0.0.10.in-addr.arpa", { ar ->
if (ar.succeeded()) {
def record = ar.result()
println(record)
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
reverseLookup
Try to do a reverse lookup for an IP address. This is basically the same as resolving a PTR record, but allows you to just pass in the IP address and not a valid PTR query string.
To do a reverse lookup for the IP address 10.0.0.1 do something similar to this:
def client = vertx.createDnsClient(53, "10.0.0.1")
client.reverseLookup("10.0.0.1", { ar ->
if (ar.succeeded()) {
def record = ar.result()
println(record)
} else {
println("Failed to resolve entry${ar.cause()}")
}
})
Streams
There are several objects in Vert.x that allow items to be read from and written to.
In previous versions the streams package worked exclusively with Buffer
objects. From now on, streams are no longer coupled to buffers and work with any kind of object.
In Vert.x, calls to write an item return immediately and writes are internally queued.
It’s not hard to see that if you write to an object faster than it can actually write the data to its underlying resource then the write queue could grow without bound - eventually resulting in exhausting available memory.
To solve this problem a simple flow control capability is provided by some objects in the Vert.x API.
Any flow control aware object that can be written-to implements WriteStream
,
and any flow control object that can be read-from is said to implement ReadStream
.
Let’s take an example where we want to read from a ReadStream
and write the data to a WriteStream
.
A very simple example would be reading from a NetSocket
on a server and writing back to the
same NetSocket
- since NetSocket
implements both ReadStream
and WriteStream
, but you can
do this between any ReadStream
and any WriteStream
, including HTTP requests and response,
async files, WebSockets, etc.
A naive way to do this would be to directly take the data that’s been read and immediately write it
to the NetSocket
, for example:
def server = vertx.createNetServer([
port:1234,
host:"localhost"
])
server.connectHandler({ sock ->
sock.handler({ buffer ->
// Write the data straight back
sock.write(buffer)
})
}).listen()
There’s a problem with the above example: If data is read from the socket faster than it can be
written back to the socket, it will build up in the write queue of the NetSocket
, eventually
running out of RAM. This might happen, for example if the client at the other end of the socket
wasn’t reading very fast, effectively putting back-pressure on the connection.
Since NetSocket
implements WriteStream
, we can check if the WriteStream
is full before
writing to it:
def server = vertx.createNetServer([
port:1234,
host:"localhost"
])
server.connectHandler({ sock ->
sock.handler({ buffer ->
if (!sock.writeQueueFull()) {
sock.write(buffer)
}
})
}).listen()
This example won’t run out of RAM but we’ll end up losing data if the write queue gets full. What we
really want to do is pause the NetSocket
when the write queue is full. Let’s do that:
def server = vertx.createNetServer([
port:1234,
host:"localhost"
])
server.connectHandler({ sock ->
sock.handler({ buffer ->
sock.write(buffer)
if (sock.writeQueueFull()) {
sock.pause()
}
})
}).listen()
We’re almost there, but not quite. The NetSocket
now gets paused when the write queue is full, but we also need to resume
it when the write queue has processed its backlog:
def server = vertx.createNetServer([
port:1234,
host:"localhost"
])
server.connectHandler({ sock ->
sock.handler({ buffer ->
sock.write(buffer)
if (sock.writeQueueFull()) {
sock.pause()
sock.drainHandler({ done ->
sock.resume()
})
}
})
}).listen()
And there we have it. The drainHandler
event handler will
get called when the write queue is ready to accept more data, this resumes the NetSocket
which
allows it to read more data.
It’s very common to want to do this when writing Vert.x applications, so we provide a helper class
called Pump
which does all this hard work for you. You just feed it the ReadStream
and
the WriteStream
and tell it to start:
import io.vertx.groovy.core.streams.Pump
def server = vertx.createNetServer([
port:1234,
host:"localhost"
])
server.connectHandler({ sock ->
Pump.pump(sock, sock).start()
}).listen()
This does exactly the same thing as the more verbose example.
Let’s look at the methods on ReadStream
and WriteStream
in more detail:
ReadStream
ReadStream
is implemented by HttpClientResponse
, DatagramSocket
,
HttpClientRequest
, HttpServerFileUpload
,
HttpServerRequest
, HttpServerRequestStream
,
MessageConsumer
, NetSocket
, NetSocketStream
,
WebSocket
, WebSocketStream
, TimeoutStream
,
AsyncFile
.
Functions:
-
handler
: set a handler which will receive items from the ReadStream. -
pause
: pause the handler. When paused no items will be received in the handler. -
resume
: resume the handler. The handler will be called if any item arrives. -
exceptionHandler
: Will be called if an exception occurs on the ReadStream. -
endHandler
: Will be called when end of stream is reached. This might be when EOF is reached if the ReadStream represents a file, or when end of request is reached if it’s an HTTP request, or when the connection is closed if it’s a TCP socket.
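The handler-related functions above can be combined; a sketch, using a hypothetical AsyncFile obtained from fileSystem.open as the ReadStream:

```groovy
// Called if something goes wrong while reading
file.exceptionHandler({ t ->
  println("Oops, something went wrong: ${t.getMessage()}")
})
// Called for each chunk of data read from the file
file.handler({ buffer ->
  println("Received ${buffer.length()} bytes")
})
// Called once EOF is reached
file.endHandler({ v ->
  println("End of stream reached")
})
```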
WriteStream
WriteStream
is implemented by HttpClientRequest
, HttpServerResponse ,
WebSocket
, NetSocket
, AsyncFile
,
PacketWritestream
and MessageProducer .
Functions:
-
write
: write an object to the WriteStream. This method will never block. Writes are queued internally and asynchronously written to the underlying resource. -
setWriteQueueMaxSize
: set the number of objects at which the write queue is considered full, and the method writeQueueFull
returns true
. Note that, when the write queue is considered full, if write is called the data will still be accepted and queued. The actual number depends on the stream implementation; for Buffer
the size represents the actual number of bytes written and not the number of buffers. -
writeQueueFull
: returnstrue
if the write queue is considered full. -
exceptionHandler
: Will be called if an exception occurs on theWriteStream
. -
drainHandler
: The handler will be called if theWriteStream
is considered no longer full.
Pump
Instances of Pump have the following methods:
-
start
: Start the pump. -
stop
: Stops the pump. When the pump starts it is in stopped mode. -
setWriteQueueMaxSize
: This has the same meaning assetWriteQueueMaxSize
on theWriteStream
.
A pump can be started and stopped multiple times.
When a pump is first created it is not started. You need to call the start()
method to start it.
Parse tools
TODO
Thread safety
Notes on thread safety of Vert.x objects
Metrics SPI
By default Vert.x does not record any metrics. Instead it provides an SPI for others to implement which can be added
to the classpath. The metrics SPI is an advanced feature which allows implementers to capture events from Vert.x in
order to gather metrics. For more information on this, please consult the
API Documentation
.
Clustering
Trouble-shooting clustering
High Availability
Security notes
Warn about file uploads and serving files from arbitrary locations
Vert.x is a tool kit
Run in a security sandbox
Use Apex