- Are you fluent?
- Don’t call us, we’ll call you.
- Don’t block me!
- Reactor and Multi-Reactor
- The Golden Rule - Don’t Block the Event Loop
- Running blocking code
- Async coordination
- Verticles
- Verticle Types
- Standard verticles
- Worker verticles
- Deploying verticles programmatically
- Rules for mapping a verticle name to a verticle factory
- How are Verticle Factories located?
- Waiting for deployment to complete
- Undeploying verticle deployments
- Specifying number of verticle instances
- Verticle Isolation Groups
- High Availability
- Running Verticles from the command line
- Causing Vert.x to exit
- The Context object
- Executing periodic and delayed actions
- Verticle worker pool
- Thread safety
- Metrics SPI
- OSGi
- The 'vertx' command line
- Cluster Managers
- Logging
- High Availability and Fail-Over
- Native transports
- Security notes
- Vert.x Command Line Interface API
- The vert.x Launcher
- Configuring Vert.x cache
At the heart of Vert.x is a set of Java APIs that we call Vert.x Core
Vert.x core provides functionality for things like:
- Writing TCP clients and servers
- Writing HTTP clients and servers including support for WebSockets
- The Event bus
- Shared data - local maps and clustered distributed maps
- Periodic and delayed actions
- Deploying and undeploying Verticles
- Datagram Sockets
- DNS client
- File system access
- High availability
- Native transports
- Clustering
The functionality in core is fairly low level - you won’t find stuff like database access, authorisation or high level web functionality here - that kind of stuff you’ll find in Vert.x ext (extensions).
Vert.x core is small and lightweight. You just use the parts you want. It’s also entirely embeddable in your existing applications - we don’t force you to structure your applications in a special way just so you can use Vert.x.
You can use core from any of the other languages that Vert.x supports. But here’s a cool bit - we don’t force you to use the Java API directly from, say, JavaScript or Ruby - after all, different languages have different conventions and idioms, and it would be odd to force Java idioms on Ruby developers (for example). Instead, we automatically generate an idiomatic equivalent of the core Java APIs for each language.
From now on we’ll just use the word core to refer to Vert.x core.
Let’s discuss the different concepts and features in core.
You may have noticed that in the previous examples a fluent API was used.
A fluent API is where multiple methods calls can be chained together. For example:
{@link examples.CoreExamples#example3}
This is a common pattern throughout Vert.x APIs, so get used to it.
Chaining calls like this allows you to write code that’s a little bit less verbose. Of course, if you don’t like the fluent approach we don’t force you to do it that way, you can happily ignore it if you prefer and write your code like this:
{@link examples.CoreExamples#example4}
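To make the contrast concrete, here is a minimal sketch of the two styles (the ports and handler body are illustrative assumptions, not taken from the linked examples):

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServer;

public class FluentExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Fluent style: method calls chained together
    vertx.createHttpServer()
      .requestHandler(request -> request.response().end("hello"))
      .listen(8080);

    // Equivalent non-fluent style: keep a reference to each intermediate object
    HttpServer server = vertx.createHttpServer();
    server.requestHandler(request -> request.response().end("hello"));
    server.listen(8081);
  }
}
```

Both variants do exactly the same thing; the fluent form just avoids naming the intermediate `HttpServer`.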
The Vert.x APIs are largely event driven. This means that when things happen in Vert.x that you are interested in, Vert.x will call you by sending you events.
Some example events are:
- a timer has fired
- some data has arrived on a socket
- some data has been read from disk
- an exception has occurred
- an HTTP server has received a request
You handle events by providing handlers to the Vert.x APIs. For example to receive a timer event every second you would do:
{@link examples.CoreExamples#example5}
Or to receive an HTTP request:
{@link examples.CoreExamples#example6}
Some time later when Vert.x has an event to pass to your handler Vert.x will call it asynchronously.
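Both kinds of handler registration look roughly like this sketch (the period, port, and response body are illustrative assumptions):

```java
import io.vertx.core.Vertx;

public class HandlerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // A timer handler: Vert.x calls it back once per second
    vertx.setPeriodic(1000, id -> System.out.println("timer fired: " + id));

    // An HTTP request handler: called whenever a request arrives
    vertx.createHttpServer()
      .requestHandler(request -> request.response().end("Hello!"))
      .listen(8080);
  }
}
```

Note that `setPeriodic` and `listen` both return immediately; the handlers run later, when Vert.x has an event to deliver.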
This leads us to some important concepts in Vert.x:
With very few exceptions (i.e. some file system operations ending in 'Sync'), none of the APIs in Vert.x block the calling thread.
If a result can be provided immediately, it will be returned immediately, otherwise you will usually provide a handler to receive events some time later.
Because none of the Vert.x APIs block threads that means you can use Vert.x to handle a lot of concurrency using just a small number of threads.
With a conventional blocking API the calling thread might block when:
-
Reading data from a socket
-
Writing data to disk
-
Sending a message to a recipient and waiting for a reply.
-
… Many other situations
In all the above cases, when your thread is waiting for a result it can’t do anything else - it’s effectively useless.
This means that if you want a lot of concurrency using blocking APIs then you need a lot of threads to prevent your application grinding to a halt.
Threads have overhead in terms of the memory they require (e.g. for their stack) and in context switching.
For the levels of concurrency required in many modern applications, a blocking approach just doesn’t scale.
We mentioned before that Vert.x APIs are event driven - Vert.x passes events to handlers when they are available.
In most cases Vert.x calls your handlers using a thread called an event loop.
As nothing in Vert.x or your application blocks, the event loop can merrily run around delivering events to different handlers in succession as they arrive.
Because nothing blocks, an event loop can potentially deliver huge amounts of events in a short amount of time. For example a single event loop can handle many thousands of HTTP requests very quickly.
We call this the Reactor Pattern.
You may have heard of this before - for example Node.js implements this pattern.
In a standard reactor implementation there is a single event loop thread which runs around in a loop delivering all events to all handlers as they arrive.
The trouble with a single thread is it can only run on a single core at any one time, so if you want your single threaded reactor application (e.g. your Node.js application) to scale over your multi-core server you have to start up and manage many different processes.
Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.
This means a single Vertx process can scale across your server, unlike Node.js.
We call this pattern the Multi-Reactor Pattern to distinguish it from the single threaded reactor pattern.
Note: Even though a Vertx instance maintains multiple event loops, any particular handler will never be executed concurrently, and in most cases (with the exception of worker verticles) will always be called using the exact same event loop.
We already know that the Vert.x APIs are non blocking and won’t block the event loop, but that’s not much help if you block the event loop yourself in a handler.
If you do that, then that event loop will not be able to do anything else while it’s blocked. If you block all of the event loops in a Vertx instance then your application will grind to a complete halt!
So don’t do it! You have been warned.
Examples of blocking include:
- Thread.sleep()
- Waiting on a lock
- Waiting on a mutex or monitor (e.g. a synchronized block)
- Doing a long-lived database operation and waiting for a result
- Doing a complex calculation that takes some significant time
- Spinning in a loop
If any of the above stop the event loop from doing anything else for a significant amount of time then you should go immediately to the naughty step, and await further instructions.
So… what is a significant amount of time?
How long is a piece of string? It really depends on your application and the amount of concurrency you require.
If you have a single event loop, and you want to handle 10000 http requests per second, then it’s clear that each request can’t take more than 0.1 ms to process, so you can’t block for any more time than that.
The maths is not hard and shall be left as an exercise for the reader.
If your application is not responsive it might be a sign that you are blocking an event loop somewhere. To help you diagnose such issues, Vert.x will automatically log warnings if it detects an event loop hasn’t returned for some time. If you see warnings like these in your logs, then you should investigate.
Thread vertx-eventloop-thread-3 has been blocked for 20458 ms
Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.
If you want to turn off these warnings or change the settings, you can do that in the {@link io.vertx.core.VertxOptions} object before creating the Vertx object.
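Such a configuration might look like the following sketch (the limits are illustrative; in Vert.x 3.x `setMaxEventLoopExecuteTime` is expressed in nanoseconds and `setBlockedThreadCheckInterval` in milliseconds):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class BlockedThreadCheckerExample {
  public static void main(String[] args) {
    VertxOptions options = new VertxOptions()
      // Warn only if an event loop is blocked for more than 2 seconds
      .setMaxEventLoopExecuteTime(2L * 1000 * 1000 * 1000)
      // Run the blocked thread checker every 5 seconds
      .setBlockedThreadCheckInterval(5000);

    Vertx vertx = Vertx.vertx(options);
  }
}
```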
In a perfect world, there will be no war or hunger, all APIs will be written asynchronously and bunny rabbits will skip hand-in-hand with baby lambs across sunny green meadows.
But… the real world is not like that. (Have you watched the news lately?)
Fact is, many, if not most libraries, especially in the JVM ecosystem have synchronous APIs and many of the methods are likely to block. A good example is the JDBC API - it’s inherently synchronous, and no matter how hard it tries, Vert.x cannot sprinkle magic pixie dust on it to make it asynchronous.
We’re not going to rewrite everything to be asynchronous overnight so we need to provide you a way to use "traditional" blocking APIs safely within a Vert.x application.
As discussed before, you can’t call blocking operations directly from an event loop, as that would prevent it from doing any other useful work. So how can you do this?
It’s done by calling {@link io.vertx.core.Vertx#executeBlocking} specifying both the blocking code to execute and a result handler to be called back asynchronously when the blocking code has been executed.
{@link examples.CoreExamples#example7}
Warning: Blocking code should block for a reasonable amount of time (i.e. no more than a few seconds). Long blocking operations or polling operations (i.e. a thread that spins in a loop polling events in a blocking fashion) are precluded. When a blocking operation lasts more than 10 seconds, a message will be printed on the console by the blocked thread checker. Long blocking operations should use a dedicated thread managed by the application, which can interact with verticles using the event bus or {@link io.vertx.core.Context#runOnContext(io.vertx.core.Handler)}.
By default, if executeBlocking is called several times from the same context (e.g. the same verticle instance) then the different executeBlocking are executed serially (i.e. one after another).
If you don’t care about ordering you can call {@link io.vertx.core.Vertx#executeBlocking(io.vertx.core.Handler,boolean,io.vertx.core.Handler)} specifying false as the argument to ordered. In this case any executeBlocking may be executed in parallel on the worker pool.
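Put together, the two variants look roughly like this sketch (`someBlockingCall` is a hypothetical stand-in for a synchronous API such as a JDBC query):

```java
import io.vertx.core.Vertx;

public class ExecuteBlockingExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Ordered (default): calls from the same context run one after another
    vertx.<String>executeBlocking(future -> {
      // This runs on a worker thread, so blocking here is safe
      String result = someBlockingCall();
      future.complete(result);
    }, res -> System.out.println("Got: " + res.result()));

    // Unordered: pass ordered = false to allow parallel execution
    vertx.<String>executeBlocking(
      future -> future.complete(someBlockingCall()),
      false,
      res -> System.out.println("Got: " + res.result()));
  }

  // Hypothetical stand-in for a synchronous, blocking API
  private static String someBlockingCall() {
    return "value";
  }
}
```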
An alternative way to run blocking code is to use a worker verticle.
A worker verticle is always executed with a thread from the worker pool.
By default blocking code is executed on the Vert.x worker pool, configured with {@link io.vertx.core.VertxOptions#setWorkerPoolSize(int)}.
Additional pools can be created for different purposes:
{@link examples.CoreExamples#workerExecutor1}
The worker executor must be closed when it’s not necessary anymore:
{@link examples.CoreExamples#workerExecutor2}
When several workers are created with the same name, they will share the same pool. The worker pool is destroyed when all the worker executors using it are closed.
When an executor is created in a Verticle, Vert.x will close it automatically for you when the Verticle is undeployed.
Worker executors can be configured when created:
{@link examples.CoreExamples#workerExecutor3}
Note: The configuration is set when the worker pool is created.
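The lifecycle described above might be sketched as follows (the pool name and size are illustrative assumptions):

```java
import io.vertx.core.Vertx;
import io.vertx.core.WorkerExecutor;

public class WorkerExecutorExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // A named pool: executors created with the same name share it
    WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10);

    executor.<String>executeBlocking(future -> {
      // Runs on a thread from "my-worker-pool"
      future.complete("done");
    }, res -> {
      System.out.println(res.result());
      // Close the executor once it is no longer needed
      executor.close();
    });
  }
}
```

The pool itself is only destroyed once every executor using that name has been closed.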
Coordination of multiple asynchronous results can be achieved with Vert.x {@link io.vertx.core.Future futures}. It supports concurrent composition (run several async operations in parallel) and sequential composition (chain async operations).
{@link io.vertx.core.CompositeFuture#all} takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are and failed when at least one of the futures is failed:
{@link examples.CoreExamples#exampleFutureAll1}
The operations run concurrently, the {@link io.vertx.core.Handler} attached to the returned future is invoked upon completion of the composition. When one of the operation fails (one of the passed future is marked as a failure), the resulting future is marked as failed too. When all the operations succeed, the resulting future is completed with a success.
Alternatively, you can pass a list (potentially empty) of futures:
{@link examples.CoreExamples#exampleFutureAll2}
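A minimal sketch of the all composition (the two futures here merely stand in for real async operations; the names are illustrative):

```java
import io.vertx.core.CompositeFuture;
import io.vertx.core.Future;

public class CompositeAllExample {
  public static void main(String[] args) {
    // Two futures standing in for two async operations
    Future<String> httpServerFuture = Future.future();
    Future<String> dbFuture = Future.future();

    CompositeFuture.all(httpServerFuture, dbFuture).setHandler(ar -> {
      if (ar.succeeded()) {
        // Both operations succeeded
        System.out.println("all done");
      } else {
        // At least one operation failed
        ar.cause().printStackTrace();
      }
    });

    // Elsewhere, the operations eventually complete their futures:
    httpServerFuture.complete("server started");
    dbFuture.complete("db connected");
  }
}
```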
While the all composition waits until all futures are successful (or one fails), the any composition waits for the first succeeded future. {@link io.vertx.core.CompositeFuture#any} takes several futures arguments (up to 6) and returns a future that is succeeded when one of the futures is, and failed when all the futures are failed:
{@link examples.CoreExamples#exampleFutureAny1}
A list of futures can also be used:
{@link examples.CoreExamples#exampleFutureAny2}
The join composition waits until all futures are completed, either with a success or a failure. {@link io.vertx.core.CompositeFuture#join} takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed:
{@link examples.CoreExamples#exampleFutureJoin1}
A list of futures can also be used:
{@link examples.CoreExamples#exampleFutureJoin2}
While all and any implement concurrent composition, {@link io.vertx.core.Future#compose} can be used for chaining futures (sequential composition).
{@link examples.CoreExamples#exampleFuture6}
In this example, 3 operations are chained:
- a file is created (fut1)
- something is written in the file (fut2)
- the file is moved (startFuture)
When these 3 steps are successful, the final future (startFuture) is succeeded. However, if one of the steps fails, the final future is failed.
This example uses:
- {@link io.vertx.core.Future#compose(java.util.function.Function)}: when the current future completes, run the given function, that returns a future. When this returned future completes, it completes the composition.
- {@link io.vertx.core.Future#compose(io.vertx.core.Handler,io.vertx.core.Future)}: when the current future completes, run the given handler that completes the given future (next).
In the second case, the {@link io.vertx.core.Handler} should complete the next future to report its success or failure.
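The chained example described above follows a pattern like this sketch (the file paths are illustrative; completer() adapts a future into the AsyncResult handler expected by the file system API):

```java
import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.file.FileSystem;

public class ComposeExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    FileSystem fs = vertx.fileSystem();

    Future<Void> startFuture = Future.future();
    Future<Void> fut1 = Future.future();
    fs.createFile("/tmp/example.tmp", fut1.completer());

    fut1.compose(v -> {
      // File created: write something into it, returning the next future
      Future<Void> fut2 = Future.future();
      fs.writeFile("/tmp/example.tmp", Buffer.buffer("hello"), fut2.completer());
      return fut2;
    }).compose(v -> {
      // Write done: move the file; this handler completes startFuture
      fs.move("/tmp/example.tmp", "/tmp/example.txt", startFuture.completer());
    }, startFuture);

    startFuture.setHandler(ar ->
      System.out.println(ar.succeeded() ? "chain succeeded" : "chain failed"));
  }
}
```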
Vert.x comes with a simple, scalable, actor-like deployment and concurrency model out of the box that you can use to save you writing your own.
This model is entirely optional and Vert.x does not force you to create your applications in this way if you don’t want to.
The model does not claim to be a strict actor-model implementation, but it does share similarities especially with respect to concurrency, scaling and deployment.
To use this model, you write your code as a set of verticles.
Verticles are chunks of code that get deployed and run by Vert.x. By default, a Vert.x instance maintains N event loop threads, where N is 2 × the number of cores. Verticles can be written in any of the languages that Vert.x supports, and a single application can include verticles written in multiple languages.
You can think of a verticle as a bit like an actor in the Actor Model.
An application would typically be composed of many verticle instances running in the same Vert.x instance at the same time. The different verticle instances communicate with each other by sending messages on the event bus.
There are three different types of verticles:
- Standard Verticles: These are the most common and useful type - they are always executed using an event loop thread. We’ll discuss this more in the next section.
- Worker Verticles: These run using a thread from the worker pool. An instance is never executed concurrently by more than one thread.
- Multi-threaded worker verticles: These run using a thread from the worker pool. An instance can be executed concurrently by more than one thread.
Standard verticles are assigned an event loop thread when they are created and the start method is called with that event loop. When you call any other method that takes a handler on a core API from an event loop, Vert.x will guarantee that those handlers, when called, will be executed on the same event loop.
This means we can guarantee that all the code in your verticle instance is always executed on the same event loop (as long as you don’t create your own threads and call it from them!).
This means you can write all the code in your application as single threaded and let Vert.x worry about the threading and scaling. No more worrying about synchronized and volatile, and you also avoid many of the race conditions and deadlocks that are so prevalent when doing hand-rolled 'traditional' multi-threaded application development.
A worker verticle is just like a standard verticle but it’s executed using a thread from the Vert.x worker thread pool, rather than using an event loop.
Worker verticles are designed for calling blocking code, as they won’t block any event loops.
If you don’t want to use a worker verticle to run blocking code, you can also run inline blocking code directly while on an event loop.
If you want to deploy a verticle as a worker verticle you do that with {@link io.vertx.core.DeploymentOptions#setWorker}.
{@link examples.CoreExamples#example7_1}
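A minimal sketch of deploying a worker verticle (the verticle class and its body are illustrative):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class WorkerDeployExample {
  // Started on a worker thread, so blocking in start() is acceptable
  public static class MyWorkerVerticle extends AbstractVerticle {
    @Override
    public void start() {
      System.out.println("running on: " + Thread.currentThread().getName());
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    DeploymentOptions options = new DeploymentOptions().setWorker(true);
    vertx.deployVerticle(new MyWorkerVerticle(), options);
  }
}
```

Printing the thread name makes it easy to confirm that start() runs on a worker thread rather than an event loop.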
Worker verticle instances are never executed concurrently by Vert.x by more than one thread, but can be executed by different threads at different times.
You can deploy a verticle using one of the {@link io.vertx.core.Vertx#deployVerticle} methods, specifying a verticle name, or you can pass in a verticle instance you have already created yourself.
Note: Deploying Verticle instances is Java only.
{@link examples.CoreExamples#example8}
You can also deploy verticles by specifying the verticle name.
The verticle name is used to look up the specific {@link io.vertx.core.spi.VerticleFactory} that will be used to instantiate the actual verticle instance(s).
Different verticle factories are available for instantiating verticles in different languages and for various other reasons such as loading services and getting verticles from Maven at run-time.
This allows you to deploy verticles written in any language from any other language that Vert.x supports.
Here’s an example of deploying some different types of verticles:
{@link examples.CoreExamples#example9}
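The two deployment styles might look like this sketch (the verticle class and the commented-out names are illustrative assumptions; named deployments only work if the named artifact actually exists on the classpath or filesystem):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class DeployExample {
  public static class MyVerticle extends AbstractVerticle {
    @Override
    public void start() {
      System.out.println("MyVerticle started");
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Java only: deploy an instance you created yourself
    vertx.deployVerticle(new MyVerticle());

    // Or deploy by name; the prefix or suffix selects the verticle factory:
    // vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle"); // Java FQCN
    // vertx.deployVerticle("verticles/myverticle.js");                // JavaScript factory
  }
}
```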
When deploying verticle(s) using a name, the name is used to select the actual verticle factory that will instantiate the verticle(s).
Verticle names can have a prefix - which is a string followed by a colon, which if present will be used to look-up the factory, e.g.
js:foo.js                                        // Use the JavaScript verticle factory
groovy:com.mycompany.SomeGroovyCompiledVerticle  // Use the Groovy verticle factory
service:com.mycompany:myorderservice             // Uses the service verticle factory
If no prefix is present, Vert.x will look for a suffix and use that to lookup the factory, e.g.
foo.js             // Will also use the JavaScript verticle factory
SomeScript.groovy  // Will use the Groovy verticle factory
If no prefix or suffix is present, Vert.x will assume it’s a Java fully qualified class name (FQCN) and try and instantiate that.
Most Verticle factories are loaded from the classpath and registered at Vert.x startup.
You can also programmatically register and unregister verticle factories using {@link io.vertx.core.Vertx#registerVerticleFactory} and {@link io.vertx.core.Vertx#unregisterVerticleFactory} if you wish.
Verticle deployment is asynchronous and may complete some time after the call to deploy has returned.
If you want to be notified when deployment is complete you can deploy specifying a completion handler:
{@link examples.CoreExamples#example10}
The completion handler will be passed a result containing the deployment ID string, if deployment succeeded.
This deployment ID can be used later if you want to undeploy the deployment.
Deployments can be undeployed with {@link io.vertx.core.Vertx#undeploy}.
Un-deployment is itself asynchronous, so if you want to be notified when un-deployment is complete you can specify a completion handler:
{@link examples.CoreExamples#example11}
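The full deploy-then-undeploy round trip might be sketched like this (the empty verticle is illustrative):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class UndeployExample {
  public static class MyVerticle extends AbstractVerticle { }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    vertx.deployVerticle(new MyVerticle(), res -> {
      if (res.succeeded()) {
        String deploymentID = res.result(); // keep this to undeploy later
        vertx.undeploy(deploymentID, res2 -> {
          if (res2.succeeded()) {
            System.out.println("Undeployed ok");
          } else {
            res2.cause().printStackTrace();
          }
        });
      }
    });
  }
}
```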
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
{@link examples.CoreExamples#example12}
This is useful for scaling easily across multiple cores. For example you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
By default, Vert.x has a flat classpath. That is, when Vert.x deploys verticles it does so with the current classloader - it doesn’t create a new one. In the majority of cases this is the simplest, clearest, and sanest thing to do.
However, in some cases you may want to deploy a verticle so the classes of that verticle are isolated from others in your application.
This might be the case, for example, if you want to deploy two different versions of a verticle with the same class name in the same Vert.x instance, or if you have two different verticles which use different versions of the same jar library.
When using an isolation group you provide a list of the class names that you want isolated using {@link io.vertx.core.DeploymentOptions#setIsolatedClasses(java.util.List)} - an entry can be a fully qualified class name such as com.mycompany.myproject.engine.MyClass, or it can be a wildcard which will match any classes in a package and any sub-packages, e.g. com.mycompany.myproject.* would match any classes in the package com.mycompany.myproject or any sub-packages.
Please note that only the classes that match will be isolated - any other classes will be loaded by the current class loader.
Extra classpath entries can also be provided with {@link io.vertx.core.DeploymentOptions#setExtraClasspath}, so if you want to load classes or resources that aren’t already present on the main classpath you can add them here.
Warning: Use this feature with caution. Class-loaders can be a can of worms, and can make debugging difficult, amongst other things.
Here’s an example of using an isolation group to isolate a verticle deployment.
{@link examples.CoreExamples#example14}
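Configuring an isolation group might look like this sketch (the group name and class patterns are illustrative assumptions):

```java
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import java.util.Arrays;

public class IsolationGroupExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Group name and class patterns are illustrative
    DeploymentOptions options = new DeploymentOptions()
      .setIsolationGroup("my_group")
      .setIsolatedClasses(Arrays.asList(
        "com.mycompany.myproject.engine.MyClass", // a single class
        "com.mycompany.myproject.*"));            // a package and its sub-packages

    // Deployment would then pass these options along, e.g.:
    // vertx.deployVerticle("com.mycompany.myproject.MainVerticle", options);
  }
}
```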
Verticles can be deployed with High Availability (HA) enabled. In that context, when a verticle is deployed on a vert.x instance that dies abruptly, the verticle is redeployed on another vert.x instance from the cluster.
To run a verticle with high availability enabled, just append the -ha switch:
vertx run my-verticle.js -ha
When enabling high availability, there is no need to add -cluster.
More details about the high availability feature and its configuration can be found in the High Availability and Fail-Over section.
You can use Vert.x directly in your Maven or Gradle projects in the normal way by adding a dependency to the Vert.x core library and hacking from there.
However you can also run Vert.x verticles directly from the command line if you wish.
To do this you need to download and install a Vert.x distribution, and add the bin directory of the installation to your PATH environment variable. Also make sure you have a Java 8 JDK on your PATH.
Note: The JDK is required to support on-the-fly compilation of Java code.
You can now run verticles by using the vertx run command. Here are some examples:

# Run a JavaScript verticle
vertx run my_verticle.js

# Run a Ruby verticle
vertx run a_n_other_verticle.rb

# Run a Groovy script verticle, clustered
vertx run FooVerticle.groovy -cluster
You can even run Java source verticles without compiling them first!
vertx run SomeJavaSourceFile.java
Vert.x will compile the Java source file on the fly before running it. This is really useful for quickly prototyping verticles and great for demos. No need to set-up a Maven or Gradle build first to get going!
For full information on the various options available when executing vertx on the command line, type vertx at the command line.
Threads maintained by Vert.x instances are not daemon threads so they will prevent the JVM from exiting.
If you are embedding Vert.x and you have finished with it, you can call {@link io.vertx.core.Vertx#close} to close it down.
This will shut-down all internal thread pools and close other resources, and will allow the JVM to exit.
When Vert.x provides an event to a handler or calls the start or stop methods of a {@link io.vertx.core.Verticle}, the execution is associated with a Context. Usually a context is an event-loop context and is tied to a specific event loop thread. So executions for that context always occur on that exact same event loop thread. In the case of worker verticles and running inline blocking code, a worker context will be associated with the execution, which will use a thread from the worker thread pool.
To retrieve the context, use the {@link io.vertx.core.Vertx#getOrCreateContext()} method:
{@link examples.CoreExamples#retrieveContext(io.vertx.core.Vertx)}
If the current thread has a context associated with it, it reuses the context object. If not a new instance of context is created. You can test the type of context you have retrieved:
{@link examples.CoreExamples#retrieveContextType(io.vertx.core.Vertx)}
When you have retrieved the context object, you can run code in this context asynchronously. In other words, you submit a task that will be eventually run in the same context, but later:
{@link examples.CoreExamples#runInContext(io.vertx.core.Vertx)}
When several handlers run in the same context, they may want to share data. The context object offers methods to store and retrieve data shared in the context. For instance, it lets you pass data to some action run with {@link io.vertx.core.Context#runOnContext(io.vertx.core.Handler)}:
{@link examples.CoreExamples#runInContextWithData(io.vertx.core.Vertx)}
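A small sketch of storing data on a context and reading it back from a task scheduled on that same context (the key and value are illustrative):

```java
import io.vertx.core.Context;
import io.vertx.core.Vertx;

public class ContextDataExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Context context = vertx.getOrCreateContext();

    // Store data on the context...
    context.put("data", "hello");

    // ...and retrieve it later from a task running on the same context
    context.runOnContext(v -> {
      String hello = context.get("data");
      System.out.println(hello);
    });
  }
}
```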
The context object also let you access verticle configuration using the {@link io.vertx.core.Context#config()} method. Check the [Passing configuration to a verticle] section for more details about this configuration.
It’s very common in Vert.x to want to perform an action after a delay, or periodically.
In standard verticles you can’t just make the thread sleep to introduce a delay, as that will block the event loop thread.
Instead you use Vert.x timers. Timers can be one-shot or periodic. We’ll discuss both.
A one shot timer calls an event handler after a certain delay, expressed in milliseconds.
To set a timer to fire once you use the {@link io.vertx.core.Vertx#setTimer} method, passing in the delay and a handler:
{@link examples.CoreExamples#example15}
The return value is a unique timer id which can later be used to cancel the timer. The handler is also passed the timer id.
You can also set a timer to fire periodically by using {@link io.vertx.core.Vertx#setPeriodic}.
There will be an initial delay equal to the period.
The return value of setPeriodic is a unique timer id (long). This can be used later if the timer needs to be cancelled. The argument passed into the timer event handler is also the unique timer id:
Keep in mind that the timer will fire on a periodic basis. If your periodic treatment takes a long time to complete, your timer events could run continuously or, even worse, stack up.
In this case, you should consider using {@link io.vertx.core.Vertx#setTimer} instead: once your treatment has finished, you can set the next timer.
{@link examples.CoreExamples#example16}
To cancel a periodic timer, call {@link io.vertx.core.Vertx#cancelTimer} specifying the timer id. For example:
{@link examples.CoreExamples#example17}
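All three timer operations can be sketched together (the 1000 ms delays are illustrative):

```java
import io.vertx.core.Vertx;

public class TimerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // One-shot: fires once after 1000 ms; the handler receives the timer id
    long oneShotId = vertx.setTimer(1000, id ->
      System.out.println("one-shot fired: " + id));

    // Periodic: first fires after 1000 ms, then every 1000 ms
    long periodicId = vertx.setPeriodic(1000, id ->
      System.out.println("periodic fired"));

    // Either kind can be cancelled with its id
    vertx.cancelTimer(periodicId);
  }
}
```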
Verticles use the Vert.x worker pool for executing blocking actions, i.e. {@link io.vertx.core.Context#executeBlocking} or worker verticles.
A different worker pool can be specified in deployment options:
{@link examples.CoreExamples#deployVerticleWithDifferentWorkerPool}
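Deploying a verticle onto a dedicated worker pool might look like this sketch (the pool name and size are illustrative assumptions):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class VerticleWorkerPoolExample {
  public static class MyVerticle extends AbstractVerticle { }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Blocking actions run by this verticle will use the named pool
    // instead of the default Vert.x worker pool
    DeploymentOptions options = new DeploymentOptions()
      .setWorkerPoolName("my-dedicated-pool")
      .setWorkerPoolSize(5);

    vertx.deployVerticle(new MyVerticle(), options);
  }
}
```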
Most Vert.x objects are safe to access from different threads. However performance is optimised when they are accessed from the same context they were created from.
For example if you have deployed a verticle which creates a {@link io.vertx.core.net.NetServer} which provides {@link io.vertx.core.net.NetSocket} instances in its handler, then it’s best to always access that socket instance from the event loop of the verticle.
If you stick to the standard Vert.x verticle deployment model and avoid sharing objects between verticles then this should be the case without you having to think about it.
By default Vert.x does not record any metrics. Instead it provides an SPI for others to implement which can be added to the classpath. The metrics SPI is an advanced feature which allows implementers to capture events from Vert.x in order to gather metrics. For more information on this, please consult the {@link io.vertx.core.spi.metrics.VertxMetrics API Documentation}.
You can also specify a metrics factory programmatically if embedding Vert.x using {@link io.vertx.core.metrics.MetricsOptions#setFactory(io.vertx.core.spi.VertxMetricsFactory)}.
Vert.x Core is packaged as an OSGi bundle, so it can be used in any OSGi R4.2+ environment such as Apache Felix or Eclipse Equinox. The bundle exports io.vertx.core*.
However, the bundle has some dependencies on Jackson and Netty. To get the Vert.x core bundle resolved, deploy:
- Jackson Annotation [2.6.0,3)
- Jackson Core [2.6.2,3)
- Jackson Databind [2.6.2,3)
- Netty Buffer [4.0.31,5)
- Netty Codec [4.0.31,5)
- Netty Codec/Socks [4.0.31,5)
- Netty Codec/Common [4.0.31,5)
- Netty Codec/Handler [4.0.31,5)
- Netty Codec/Transport [4.0.31,5)
Here is a working deployment on Apache Felix 5.2.0:
14|Active | 1|Jackson-annotations (2.6.0)
15|Active | 1|Jackson-core (2.6.2)
16|Active | 1|jackson-databind (2.6.2)
18|Active | 1|Netty/Buffer (4.0.31.Final)
19|Active | 1|Netty/Codec (4.0.31.Final)
20|Active | 1|Netty/Codec/HTTP (4.0.31.Final)
21|Active | 1|Netty/Codec/Socks (4.0.31.Final)
22|Active | 1|Netty/Common (4.0.31.Final)
23|Active | 1|Netty/Handler (4.0.31.Final)
24|Active | 1|Netty/Transport (4.0.31.Final)
25|Active | 1|Netty/Transport/SCTP (4.0.31.Final)
26|Active | 1|Vert.x Core (3.1.0)
On Equinox, you may want to disable the ContextFinder with the following framework property:
eclipse.bundle.setTCCL=false
The vertx command is used to interact with Vert.x from the command line. Its main use is to run Vert.x verticles. To do this you need to download and install a Vert.x distribution, and add the bin directory of the installation to your PATH environment variable. Also make sure you have a Java 8 JDK on your PATH.
Note: The JDK is required to support on-the-fly compilation of Java code.
You can run raw Vert.x verticles directly from the command line using vertx run. Here are a couple of examples of the run command:
vertx run my-verticle.js (1)
vertx run my-verticle.groovy (2)
vertx run my-verticle.rb (3)
vertx run io.vertx.example.MyVerticle (4)
vertx run io.vertx.example.MVerticle -cp my-verticle.jar (5)
vertx run MyVerticle.java (6)
1. Deploys a JavaScript verticle
2. Deploys a Groovy verticle
3. Deploys a Ruby verticle
4. Deploys an already compiled Java verticle. The classpath root is the current directory
5. Deploys a verticle packaged in a jar; the jar needs to be on the classpath
6. Compiles the Java source and deploys it
As you can see in the case of Java, the name can either be the fully qualified class name of the verticle, or you can specify the Java Source file directly and Vert.x compiles it for you.
You can also prefix the verticle with the name of the language implementation to use. For example, if the verticle is a compiled Groovy class, you prefix it with groovy: so that Vert.x knows it’s a Groovy class, not a Java class.
vertx run groovy:io.vertx.example.MyGroovyVerticle
The vertx run
command can take a few optional parameters, they are:
-
-options <options>
- Provides the Vert.x options. options
is the name of a JSON file that represents the Vert.x options, or a JSON string. This is optional. -
-conf <config>
- Provides some configuration to the verticle. config
is the name of a JSON file that represents the configuration for the verticle, or a JSON string. This is optional. -
-cp <path>
- The path on which to search for the verticle and any other resources used by the verticle. This defaults to.
(current directory). If your verticle references other scripts, classes or other resources (e.g. jar files) then make sure these are on this path. The path can contain multiple path entries separated by:
(colon) or;
(semi-colon) depending on the operating system. Each path entry can be an absolute or relative path to a directory containing scripts, or absolute or relative filenames for jar or zip files. An example path might be-cp classes:lib/otherscripts:jars/myjar.jar:jars/otherjar.jar
. Always use the path to reference any resources that your verticle requires. Do not put them on the system classpath as this can cause isolation issues between deployed verticles. -
-instances <instances>
- The number of instances of the verticle to instantiate. Each verticle instance is strictly single threaded so to scale your application across available cores you might want to deploy more than one instance. If omitted a single instance will be deployed. -
-worker
- This option determines whether the verticle is a worker verticle or not. -
-cluster
- This option determines whether the Vert.x instance will attempt to form a cluster with other Vert.x instances on the network. Clustering Vert.x instances allows Vert.x to form a distributed event bus with other nodes. Default isfalse
(not clustered). -
-cluster-port
- If thecluster
option has also been specified then this determines which port will be bound for cluster communication with other Vert.x instances. Default is0
- which means 'choose a free random port'. You don’t usually need to specify this parameter unless you really need to bind to a specific port. -
-cluster-host
- If thecluster
option has also been specified then this determines which host address will be bound for cluster communication with other Vert.x instances. By default it will try and pick one from the available interfaces. If you have more than one interface and you want to use a specific one, specify it here. -
-cluster-public-port
- If thecluster
option has also been specified then this determines which port will be advertised for cluster communication with other Vert.x instances. Default is-1
, which means same ascluster-port
. -
-cluster-public-host
- If thecluster
option has also been specified then this determines which host address will be advertised for cluster communication with other Vert.x instances. If not specified, Vert.x uses the value ofcluster-host
. -
-ha
- if specified the verticle will be deployed as high availability (HA) deployment. See related section for more details -
-quorum
- used in conjunction with-ha
. It specifies the minimum number of nodes in the cluster for any HA deployment IDs to be active. Defaults to 0. -
-hagroup
- used in conjunction with-ha
. It specifies the HA group this node will join. There can be multiple HA groups in a cluster. Nodes will only failover to other nodes in the same group. The default value is __DEFAULT__
You can also set system properties using: -Dkey=value
.
Here are some more examples:
Run a JavaScript verticle server.js with default settings
vertx run server.js
Run 10 instances of a pre-compiled Java verticle specifying classpath
vertx run com.acme.MyVerticle -cp "classes:lib/myjar.jar" -instances 10
Run 10 instances of a Java verticle by source file
vertx run MyVerticle.java -instances 10
Run 20 instances of a ruby worker verticle
vertx run order_worker.rb -instances 20 -worker
Run two JavaScript verticles on the same machine and let them cluster together with each other and any other servers on the network
vertx run handler.js -cluster
vertx run sender.js -cluster
Run a Ruby verticle passing it some config
vertx run my_verticle.rb -conf my_verticle.conf
Where my_verticle.conf
might contain something like:
{
"name": "foo",
"num_widgets": 46
}
The config will be available inside the verticle via the core API.
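Inside a Java verticle, that configuration can be read through the config() method. A minimal sketch (the field names come from the example config above; vertx-core must be on the classpath):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class MyVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // config() returns the JSON passed with -conf (an empty JsonObject if none was given)
    JsonObject conf = config();
    String name = conf.getString("name", "unknown");
    Integer numWidgets = conf.getInteger("num_widgets", 0);
    System.out.println("name=" + name + ", num_widgets=" + numWidgets);
  }
}
```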
When using the high-availability feature of vert.x you may want to create a bare instance of vert.x. This instance does not deploy any verticles when launched, but will receive a verticle if another node of the cluster dies. To create a bare instance, launch:
vertx bare
Depending on your cluster configuration, you may have to append the cluster-host
and cluster-port
parameters.
A fat jar is an executable jar embedding its dependencies. This means you don’t have to have Vert.x pre-installed on the machine on which you execute the jar. Like any executable Java jar it can be executed with:
java -jar my-application-fat.jar
There is nothing really Vert.x specific about this; you could do this with any Java application.
You can create your own main class and specify that in the manifest, but it’s recommended that you write your
code as verticles and use the Vert.x {@link io.vertx.core.Launcher} class (io.vertx.core.Launcher
) as your main
class. This is the same main class used when running Vert.x at the command line and therefore allows you to
specify command line arguments, such as -instances
in order to scale your application more easily.
To deploy your verticle in a fatjar like this you must have a manifest with:
-
Main-Class
set toio.vertx.core.Launcher
-
Main-Verticle
specifying the main verticle (fully qualified class name or script file name)
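For example, a minimal MANIFEST.MF for such a fat jar might look like this (the verticle name com.example.MainVerticle is a placeholder):

```text
Main-Class: io.vertx.core.Launcher
Main-Verticle: com.example.MainVerticle
```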
You can also provide the usual command line arguments that you would pass to vertx run
:
java -jar my-verticle-fat.jar -cluster -conf myconf.json
java -jar my-verticle-fat.jar -cluster -conf myconf.json -cp path/to/dir/conf/cluster_xml
Note
|
Please consult the Maven/Gradle simplest and Maven/Gradle verticle examples in the examples repository for examples of building applications as fatjars. |
A fat jar executes the run
command, by default.
The vertx
command line and the Launcher
also provide other commands in addition to run
and version
:
You can create a bare
instance using:
vertx bare
# or
java -jar my-verticle-fat.jar bare
You can also start an application in background using:
java -jar my-verticle-fat.jar start --vertx-id=my-app-name
If my-app-name
is not set, a random id will be generated, and printed on the command prompt. You can pass run
options to the start
command:
java -jar my-verticle-fat.jar start --vertx-id=my-app-name -cluster
Once launched in background, you can stop it with the stop
command:
java -jar my-verticle-fat.jar stop my-app-name
You can also list the vert.x applications launched in background using:
java -jar my-verticle-fat.jar list
The start
, stop
and list
commands are also available from the vertx
tool. The start command supports a couple of options:
-
vertx-id
: the application id, uses a random UUID if not set -
java-opts
: the Java Virtual Machine options, uses theJAVA_OPTS
environment variable if not set. -
redirect-output
: redirect the spawned process output and error streams to the parent process streams. If option values contain spaces, don't forget to wrap the value between `""` (double-quotes).
As the `start` command spawns a new process, the java options passed to the JVM are not propagated, so you **must** use `java-opts` to configure the JVM (`-X`, `-D`...). If you use the `CLASSPATH` environment variable, be sure it contains all the required jars (vertx-core, your jars and all the dependencies).
The set of commands is extensible, refer to the Extending the vert.x Launcher section.
When developing it may be convenient to automatically redeploy your application upon file changes. The vertx
command line tool and more generally the {@link io.vertx.core.Launcher} class offers this feature. Here are some
examples:
vertx run MyVerticle.groovy --redeploy="**/*.groovy" --launcher-class=io.vertx.core.Launcher
vertx run MyVerticle.groovy --redeploy="**/*.groovy,**/*.rb" --launcher-class=io.vertx.core.Launcher
java io.vertx.core.Launcher run org.acme.MyVerticle --redeploy="**/*.class" --launcher-class=io.vertx.core.Launcher -cp ...
The redeployment process is implemented as follows. First your application is launched as a background application
(with the start
command). On matching file changes, the process is stopped and the application is restarted.
This avoids leaks, as the process is restarted.
To enable the live redeploy, pass the --redeploy
option to the run
command. The --redeploy
option indicates the
set of files to watch. This set can use Ant-style patterns (with **
, *
and ?
). You can specify
several sets by separating them using a comma (,
). Patterns are relative to the current working directory.
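The Ant-style patterns follow the usual glob semantics; the following self-contained JDK sketch (plain java.nio, not Vert.x code, shown only to illustrate how the patterns match) demonstrates the difference between ** and *:

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class GlobDemo {
  public static void main(String[] args) {
    // "**" crosses directory boundaries, "*" stays within a single path segment
    PathMatcher groovyFiles = FileSystems.getDefault().getPathMatcher("glob:**/*.groovy");
    System.out.println(groovyFiles.matches(Paths.get("src/verticles/MyVerticle.groovy"))); // true
    System.out.println(groovyFiles.matches(Paths.get("src/verticles/MyVerticle.java")));   // false
  }
}
```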
Parameters passed to the run
command are passed to the application. Java Virtual Machine options can be
configured using --java-opts
. For instance, to pass the conf
parameter or a system property, you need to
use: --java-opts="-conf=my-conf.json -Dkey=value"
The --launcher-class
option determines which main class the application is launched with. It’s generally
{@link io.vertx.core.Launcher}, but you can use your own main class.
The redeploy feature can be used in your IDE:
-
Eclipse - create a Run configuration, using the
io.vertx.core.Launcher
class as the main class. In the Program arguments area (in the Arguments tab), write run your-verticle-fully-qualified-name --redeploy=**/*.java --launcher-class=io.vertx.core.Launcher
. You can also add other parameters. The redeployment works smoothly as Eclipse incrementally compiles your files on save. -
IntelliJ - create a Run configuration (Application), set the Main class to
io.vertx.core.Launcher
. In the Program arguments write: run your-verticle-fully-qualified-name --redeploy=**/*.class --launcher-class=io.vertx.core.Launcher
. To trigger the redeployment, you need to build the project or the module explicitly (Build menu → Make Project).
To debug your application, create your run configuration as a remote application and configure the debugger
using --java-opts
. However, don’t forget to re-plug the debugger after every redeployment as a new process is
created every time.
You can also hook your build process in the redeploy cycle:
java -jar target/my-fat-jar.jar --redeploy="**/*.java" --on-redeploy="mvn package"
java -jar build/libs/my-fat-jar.jar --redeploy="src/**/*.java" --on-redeploy='./gradlew shadowJar'
The "on-redeploy" option specifies a command invoked after the shutdown of the application and before the
restart. So you can hook your build tool if it updates some runtime artifacts. For instance, you can launch gulp
or grunt
to update your resources. Don’t forget that passing parameters to your application requires the
--java-opts
param:
java -jar target/my-fat-jar.jar --redeploy="**/*.java" --on-redeploy="mvn package" --java-opts="-Dkey=val"
java -jar build/libs/my-fat-jar.jar --redeploy="src/**/*.java" --on-redeploy='./gradlew shadowJar' --java-opts="-Dkey=val"
The redeploy feature also supports the following settings:
-
redeploy-scan-period
: the file system check period (in milliseconds), 250ms by default -
redeploy-grace-period
: the amount of time (in milliseconds) to wait between 2 re-deployments, 1000ms by default -
redeploy-termination-period
: the amount of time to wait after having stopped the application (before launching user command). This is useful on Windows, where the process is not killed immediately. The time is given in milliseconds. 0 ms by default.
In Vert.x a cluster manager is used for various functions including:
-
Discovery and group membership of Vert.x nodes in a cluster
-
Maintaining cluster wide topic subscriber lists (so we know which nodes are interested in which event bus addresses)
-
Distributed Map support
-
Distributed Locks
-
Distributed Counters
Cluster managers do not handle the event bus inter-node transport, this is done directly by Vert.x with TCP connections.
The default cluster manager used in the Vert.x distributions is one that uses Hazelcast but this can be easily replaced by a different implementation as Vert.x cluster managers are pluggable.
A cluster manager must implement the interface {@link io.vertx.core.spi.cluster.ClusterManager}. Vert.x locates cluster managers at run-time by using the Java Service Loader functionality to locate instances of {@link io.vertx.core.spi.cluster.ClusterManager} on the classpath.
If you are using Vert.x at the command line and you want to use clustering you should make sure the lib
directory
of the Vert.x installation contains your cluster manager jar.
If you are using Vert.x from a Maven or Gradle project just add the cluster manager jar as a dependency of your project.
You can also specify cluster managers programmatically if embedding Vert.x using {@link io.vertx.core.VertxOptions#setClusterManager(io.vertx.core.spi.cluster.ClusterManager)}.
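As a sketch of the programmatic route - assuming the vertx-hazelcast dependency is on the classpath - you pass the cluster manager through VertxOptions and create the clustered instance asynchronously:

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class ClusteredMain {
  public static void main(String[] args) {
    // Explicitly select the cluster manager instead of relying on ServiceLoader discovery
    ClusterManager mgr = new HazelcastClusterManager();
    VertxOptions options = new VertxOptions().setClusterManager(mgr);

    // Clustered Vert.x instances are created asynchronously
    Vertx.clusteredVertx(options, res -> {
      if (res.succeeded()) {
        Vertx vertx = res.result();
        // deploy your verticles here
      } else {
        res.cause().printStackTrace();
      }
    });
  }
}
```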
Vert.x logs using its internal logging API and supports various logging backends.
The logging backend is selected as follows:
-
the backend denoted by the
vertx.logger-delegate-factory-class-name
system property if present or, -
JDK logging when a
vertx-default-jul-logging.properties
file is in the classpath or, -
a backend present in the classpath, in the following order of preference:
-
SLF4J
-
Log4J
-
Log4J2
-
Otherwise Vert.x defaults to JDK logging.
Set the vertx.logger-delegate-factory-class-name
system property to:
-
io.vertx.core.logging.SLF4JLogDelegateFactory
for SLF4J or, -
io.vertx.core.logging.Log4jLogDelegateFactory
for Log4J or, -
io.vertx.core.logging.Log4j2LogDelegateFactory
for Log4J2 or, -
io.vertx.core.logging.JULLogDelegateFactory
for JDK logging
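For instance, to force SLF4J when launching a fat jar (the jar name is a placeholder, and an SLF4J binding must also be on the classpath):

```text
java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.SLF4JLogDelegateFactory -jar my-app-fat.jar
```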
A JUL logging configuration file can be specified in the normal JUL way, by providing a system property named java.util.logging.config.file
with the value being your configuration file.
For more information on this and the structure of a JUL config file please consult the JDK logging documentation.
Vert.x also provides a slightly more convenient way to specify a configuration file without having to set a system property.
Just provide a JUL config file with the name vertx-default-jul-logging.properties
on your classpath (e.g. inside your fatjar) and Vert.x will use that to configure JUL.
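A minimal vertx-default-jul-logging.properties might look like this (standard JUL keys; the levels shown are only an example):

```text
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=INFO
.level=INFO
io.vertx.level=INFO
```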
Netty does not rely on external logging configuration (e.g. system properties). Instead, it implements a logging configuration based on the logging libraries visible from the Netty classes:
-
use
SLF4J
library if it is visible -
otherwise use
Log4j
if it is visible -
otherwise use
Log4j2
if it is visible -
otherwise fallback to
java.util.logging
Note
|
The eagle eyes among you might have noticed that Vert.x follows the same order of preference. |
The logger implementation can be forced to a specific implementation by setting Netty’s internal logger implementation directly on io.netty.util.internal.logging.InternalLoggerFactory
:
// Force logging to Log4j
InternalLoggerFactory.setDefaultFactory(Log4JLoggerFactory.INSTANCE);
If, when you start your application, you see the following message:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
It means that you have SLF4J-API in your classpath but no actual binding. Messages logged with SLF4J will be dropped. You should add a binding to your classpath. Check https://www.slf4j.org/manual.html#swapping to pick a binding and configure it.
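For instance, assuming you pick Logback as the binding, adding a Maven dependency along these lines (the version shown is only indicative) resolves the warning:

```xml
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.2.11</version>
</dependency>
```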
Be aware that Netty looks for the SLF4J-API jar and uses it by default.
If your logs show a bunch of:
io.vertx.core.net.impl.ConnectionBase SEVERE: java.io.IOException: Connection reset by peer
It means that the client is resetting the HTTP connection instead of closing it. This message also indicates that you may not have consumed the complete payload (the connection was cut before you were able to).
Vert.x allows you to run your verticles with high availability (HA) support. In that case, when a vert.x instance running a verticle dies abruptly, the verticle is migrated to another vertx instance. The vert.x instances must be in the same cluster.
When vert.x runs with HA enabled, if a vert.x instance where a verticle runs fails or dies, the verticle is redeployed automatically on another vert.x instance of the cluster. We call this verticle fail-over.
To run vert.x with the HA enabled, just add the -ha
flag to the command line:
vertx run my-verticle.js -ha
Now for HA to work, you need more than one Vert.x instance in the cluster, so let’s say you have another Vert.x instance that you have already started, for example:
vertx run my-other-verticle.js -ha
If the Vert.x instance that is running my-verticle.js
now dies (you can test this by killing the process
with kill -9
), the Vert.x instance that is running my-other-verticle.js
will automatically deploy my-verticle.js
so now that Vert.x instance is running both verticles.
Note
|
the migration is only possible if the second vert.x instance has access to the verticle file (here
my-verticle.js).
|
Important
|
Please note that cleanly closing a Vert.x instance will not cause failover to occur, e.g. CTRL-C
or kill -SIGINT
|
You can also start bare Vert.x instances - i.e. instances that are not initially running any verticles - they will also fail over for nodes in the cluster. To start a bare instance you simply do:
vertx run -ha
When using the -ha
switch you do not need to provide the -cluster
switch, as a cluster is assumed if you
want HA.
Note
|
depending on your cluster configuration, you may need to customize the cluster manager configuration
(Hazelcast by default), and/or add the cluster-host and cluster-port parameters.
|
When running a Vert.x instance with HA you can also optionally specify an HA group. An HA group denotes a
logical group of nodes in the cluster. Only nodes with the same HA group will failover onto one another. If
you don’t specify a HA group the default group __DEFAULT__
is used.
To specify an HA group you use the -hagroup
switch when running the verticle, e.g.
vertx run my-verticle.js -ha -hagroup my-group
Let’s look at an example:
In a first terminal:
vertx run my-verticle.js -ha -hagroup g1
In a second terminal, let’s run another verticle using the same group:
vertx run my-other-verticle.js -ha -hagroup g1
Finally, in a third terminal, launch another verticle using a different group:
vertx run yet-another-verticle.js -ha -hagroup g2
If we kill the instance in terminal 1, it will fail over to the instance in terminal 2, not the instance in terminal 3 as that has a different group.
If we kill the instance in terminal 3, it won’t get failed over as there is no other vert.x instance in that group.
The HA implementation also supports quora. A quorum is the minimum number of votes that a distributed transaction has to obtain in order to be allowed to perform an operation in a distributed system.
When starting a Vert.x instance you can instruct it that it requires a quorum
before any HA deployments will
be deployed. In this context, a quorum is a minimum number of nodes for a particular group in the cluster.
Typically you choose your quorum size as Q = 1 + N/2
where N
is the number of nodes in the group. If there
are less than Q
nodes in the cluster the HA deployments will undeploy. They will redeploy again if/when a
quorum is re-attained. By doing this you can protect against network partitions, a.k.a. split brain.
There is more information on quora here.
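As a quick sanity check of the formula, here is a trivial (non-Vert.x) helper computing Q for a few group sizes:

```java
public class QuorumMath {
  // Smallest majority of a group of n nodes: Q = 1 + n/2 (integer division)
  static int quorum(int n) {
    return 1 + n / 2;
  }

  public static void main(String[] args) {
    System.out.println(quorum(3)); // 2: a 3-node group survives the loss of one node
    System.out.println(quorum(4)); // 3
    System.out.println(quorum(5)); // 3: a 5-node group survives the loss of two nodes
  }
}
```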
To run vert.x instances with a quorum you specify -quorum
on the command line, e.g.
In a first terminal:
vertx run my-verticle.js -ha -quorum 3
At this point the Vert.x instance will start but not deploy the verticle (yet) because there is only one node in the cluster, not 3.
In a second terminal:
vertx run my-other-verticle.js -ha -quorum 3
At this point the Vert.x instance will start but not deploy the verticle (yet) because there are only two nodes in the cluster, not 3.
In a third console, you can start another instance of vert.x:
vertx run yet-another-verticle.js -ha -quorum 3
Yay! - we have three nodes, that’s a quorum. At this point the verticles will automatically deploy on all instances.
If we now close or kill one of the nodes the verticles will automatically undeploy on the other nodes, as there is no longer a quorum.
Quora can also be used in conjunction with ha groups. In that case, quora are resolved for each particular group.
Vert.x can run with native transports (when available) on BSD (OSX) and Linux:
You need to add the following dependency in your classpath:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>4.1.15.Final</version>
<classifier>linux-x86_64</classifier>
</dependency>
Native on Linux gives you extra networking options:
-
SO_REUSEPORT
-
TCP_QUICKACK
-
TCP_CORK
-
TCP_FASTOPEN
{@link examples.CoreExamples#configureLinuxOptions}
You need to add the following dependency in your classpath:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-kqueue</artifactId>
<version>4.1.15.Final</version>
<classifier>osx-x86_64</classifier>
</dependency>
MacOS Sierra and above are supported.
Native on BSD gives you extra networking options:
-
SO_REUSEPORT
{@link examples.CoreExamples#configureBSDOptions}
Natives provide domain sockets support for servers:
{@link examples.CoreExamples#tcpServerWithDomainSockets}
or for http:
{@link examples.CoreExamples#httpServerWithDomainSockets}
As well as clients:
{@link examples.CoreExamples#tcpClientWithDomainSockets}
or for http:
{@link examples.CoreExamples#httpClientWithDomainSockets}
Vert.x is a toolkit, not an opinionated framework where we force you to do things in a certain way. This gives you great power as a developer but with that comes great responsibility.
As with any toolkit, it’s possible to write insecure applications, so you should always be careful when developing your application especially if it’s exposed to the public (e.g. over the internet).
If writing a web application it’s highly recommended that you use Vert.x-Web instead of Vert.x core directly for serving resources and handling file uploads.
Vert.x-Web normalises the path in requests to prevent malicious clients from crafting URLs to access resources outside of the web root.
Similarly for file uploads Vert.x-Web provides functionality for uploading to a known place on disk and does not rely on the filename provided by the client in the upload which could be crafted to upload to a different place on disk.
Vert.x core itself does not provide such checks so it would be up to you as a developer to implement them yourself.
When clustering the event bus between different Vert.x nodes on a network, the traffic is sent un-encrypted across the wire, so do not use this if you have confidential data to send and your Vert.x nodes are not on a trusted network.
Any service can potentially have vulnerabilities whether it’s written using Vert.x or any other toolkit, so always follow security best practice, especially if your service is public facing.
For example you should always run them in a DMZ and with a user account that has limited rights in order to limit the extent of damage in case the service is compromised.
The vert.x {@link io.vertx.core.Launcher} is used in fat jars as the main class, and by the vertx
command line
utility. It executes a set of commands such as run, bare, start…
You can extend the set of commands by implementing your own {@link io.vertx.core.spi.launcher.Command} (in Java only):
@Name("my-command")
@Summary("A simple hello command.")
public class MyCommand extends DefaultCommand {
private String name;
@Option(longName = "name", required = true)
public void setName(String n) {
this.name = n;
}
@Override
public void run() throws CLIException {
System.out.println("Hello " + name);
}
}
You also need an implementation of {@link io.vertx.core.spi.launcher.CommandFactory}:
public class MyCommandFactory extends DefaultCommandFactory<MyCommand> {
public MyCommandFactory() {
super(MyCommand.class);
}
}
Then, create the src/main/resources/META-INF/services/io.vertx.core.spi.launcher.CommandFactory
file and add a line
indicating the fully qualified name of the factory:
io.vertx.core.launcher.example.MyCommandFactory
Build the jar containing the command. Be sure to include the SPI file
(META-INF/services/io.vertx.core.spi.launcher.CommandFactory
).
Then, place the jar containing the command into the classpath of your fat-jar (or include it inside) or in the lib
directory of your vert.x distribution, and you would be able to execute:
vertx hello vert.x
java -jar my-fat-jar.jar hello vert.x
To use the {@link io.vertx.core.Launcher} class in a fat-jar just set the Main-Class
of the MANIFEST to
io.vertx.core.Launcher
. In addition, set the Main-Verticle
MANIFEST entry to the name of your main verticle.
By default, it executes the run
command. However, you can configure the default command by setting the
Main-Command
MANIFEST entry. The default command is used if the fat jar is launched without a command.
You can also create a sub-class of {@link io.vertx.core.Launcher} to start your application. The class has been designed to be easily extensible.
A {@link io.vertx.core.Launcher} sub-class can:
-
customize the vert.x configuration in {@link io.vertx.core.Launcher#beforeStartingVertx(io.vertx.core.VertxOptions)}
-
retrieve the vert.x instance created by the "run" or "bare" command by overriding {@link io.vertx.core.Launcher#afterStartingVertx(io.vertx.core.Vertx)}
-
configure the default verticle and command with {@link io.vertx.core.impl.launcher.VertxCommandLauncher#getMainVerticle()} and {@link io.vertx.core.impl.launcher.VertxCommandLauncher#getDefaultCommand()}
-
add / remove commands using {@link io.vertx.core.impl.launcher.VertxCommandLauncher#register(java.lang.Class)} and {@link io.vertx.core.impl.launcher.VertxCommandLauncher#unregister(java.lang.String)}
When you use the {@link io.vertx.core.Launcher} class as main class, it uses the following exit codes:
-
0
if the process ends smoothly, or if an uncaught error is thrown -
1
for a general purpose error -
11
if Vert.x cannot be initialized -
12
if a spawned process cannot be started, found or stopped. This error code is used by the start
andstop
command -
14
if the system configuration does not meet the system requirements (such as java not found) -
15
if the main verticle cannot be deployed
When Vert.x needs to read a file from the classpath (embedded in a fat jar, in a jar from the classpath or a file that is on the classpath), it copies it to a cache directory. The reason behind this is simple: reading a file from a jar or from an input stream is blocking. So to avoid paying that price every time, Vert.x copies the file to its cache directory and reads it from there on every subsequent read. This behavior can be configured.
First, by default, Vert.x uses $CWD/.vertx
as cache directory. It creates a unique directory inside this one to
avoid conflicts. This location can be configured by using the vertx.cacheDirBase
system property. For instance
if the current working directory is not writable (such as in an immutable container context), launch your
application with:
vertx run my.Verticle -Dvertx.cacheDirBase=/tmp/vertx-cache
# or
java -Dvertx.cacheDirBase=/tmp/vertx-cache -jar my-fat.jar
Important
|
the directory must be writable. |
When you are editing resources such as HTML, CSS or JavaScript, this cache mechanism can be annoying as it serves
only the first version of the file (and so you won’t see your edits if you reload your page). To avoid this
behavior, launch your application with -Dvertx.disableFileCaching=true
. With this setting, Vert.x still uses
the cache, but always refreshes the version stored in the cache with the original source. So if you edit a file
served from the classpath and refresh your browser, Vert.x reads it from the classpath, copies it to the cache
directory and serves it from there. Do not use this setting in production, it can kill your performance.
Finally, you can completely disable the cache by using -Dvertx.disableFileCPResolving=true
. This setting is not
without consequences: Vert.x would be unable to read any files from the classpath (only from the file system). Be
very careful when using this setting.