
"Reactive inter-process communication" with Spring WebFlux and Spring Boot 2.0

Introduction

We are a group of software developers with a roughly monthly meetup in Lucerne (Switzerland). We are interested in programming questions around concurrency, distributed systems, consistency and more.

In 2017 we explored different kinds of async, non-blocking programming models and libraries for communication between objects / functions inside the same JVM process; see https://github.com/ReactiveMeetupLucerne/AsyncNonBlockingExamplesJVM for the results.

Especially important for us were the libraries implementing the Reactive Streams interfaces (http://www.reactive-streams.org): RxJava, Reactor and Akka Streams.

With the availability of Spring WebFlux in Spring 5 (Fall 2017) and Spring Boot 2.0 (Spring 2018), one of the first "big mainstream" implementations for "reactive inter-process communication" is now available.

Spring WebFlux fully embraces Reactor and RxJava, but not Akka Streams.

To figure out how Spring WebFlux works, we defined some questions regarding features, behaviour and performance. We try to answer these questions by providing working code examples.

Have fun viewing the results and feel free to provide feedback, improvements or contributions via GitHub issues and pull requests.

Sincerely,
the participating members of the Lucerne reactive meetup group

Goals for us

  • Get some more exercise using Reactor and RxJava

  • Get first impressions about how to use Spring WebFlux and Spring Boot 2.0

  • Try to get a deeper understanding by asking and answering questions

  • Have fun

Getting started

Reactive communication inside the same process

The "reactive" libraries for the JVM like Reactor or RxJava offer a lot of power to implement parallelism, flow-control, backpressure handling, etc. Think of Java 8 CompletableFuture mixed with Java 8 Streams, but on steroids.

Questions

Questions regarding "reactive inter-process communication" between two JVM processes

For each question below we list the question itself, our answer and the code example(s).

Question 1: Regarding serialization.

In the hello world example, there is an object of the type TextDto transported between the two processes.

What are the requirements for a data type (Java class) to be transported? E.g. must it implement Serializable? Or must it follow the JavaBean convention (getter/setter)?

Answer: The default serialization/deserialization mechanism in Spring Boot 2.0 is Jackson.

So the requirement for a data type (Java class) to "be transportable" is that it can be handled by Jackson. Jackson supports e.g. classes following the JavaBean convention (getter/setter) out of the box.

For more complicated cases, such as classes with ignorable properties, final fields or multiple constructors, there are annotations like @JsonCreator or @JsonProperty to tell Jackson how to deal with them (a small sketch follows below).

The standard Java serialization via e.g. the java.io.Serializable interface does NOT come into play with Spring WebFlux.

Code examples: Q1Client.java, Q1Server.java
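A minimal sketch of the two cases described above; the class and field names are just assumptions for illustration and are not taken from the actual Q1 example:

[source,java]
----
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

// A JavaBean-style DTO: a no-arg constructor plus getters/setters is enough for Jackson.
public class TextDto {

    private String text;

    public TextDto() {
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }
}

// An immutable variant with a final field: Jackson needs hints via @JsonCreator / @JsonProperty.
class ImmutableTextDto {

    private final String text;

    @JsonCreator
    public ImmutableTextDto(@JsonProperty("text") String text) {
        this.text = text;
    }

    public String getText() {
        return text;
    }
}
----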

Question 2: Regarding serialization.

In addition to question 1: What mechanism does Spring WebFlux use to marshal/unmarshal the objects? (e.g. JAX-RS, Jackson, GSON, …)

Answer: See the answer to question 1.

Question 3: Regarding "composability".

Can you create an example showing how to fetch the price for the flight, the hotel and the car "in parallel"?

Is this "inter-process" somehow different than in the "inside same process" example from RxJavaCodeExamples.java?

Answer: That’s simple. From the API perspective there is no difference between "calling something" within the same process and calling another process.

A couple of things are interesting in this case, where we are calling a second process:

a) WebClient is immutable (see https://stackoverflow.com/a/49107545/1662412), so it is thread-safe and we can share it across threads "inside our process"

b) Since WebClient is non-blocking, we don’t need to subscribe to the Mono on a separate thread (e.g. via subscribeOn(..)) to "make it async" (see the sketch below).

Code examples: ClientApplication.java, ServerApplication.java
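A minimal client-side sketch of the "fetch in parallel" idea, assuming a server on localhost:8080 with hypothetical /flight, /hotel and /car endpoints that each return a price:

[source,java]
----
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class ParallelPriceClientSketch {

    // WebClient is immutable and thread-safe, so a single instance can be shared.
    private static final WebClient client = WebClient.create("http://localhost:8080");

    public static void main(String[] args) {
        Mono<Double> flight = fetchPrice("/flight");
        Mono<Double> hotel  = fetchPrice("/hotel");
        Mono<Double> car    = fetchPrice("/car");

        // zip subscribes to all three Monos; since WebClient is non-blocking,
        // the three HTTP calls run concurrently without any explicit subscribeOn(..).
        Mono<Double> total = Mono.zip(flight, hotel, car)
                .map(prices -> prices.getT1() + prices.getT2() + prices.getT3());

        // block() only to keep this sketch runnable as a plain main method.
        System.out.println("Total price: " + total.block());
    }

    private static Mono<Double> fetchPrice(String path) {
        return client.get()
                .uri(path)
                .retrieve()
                .bodyToMono(Double.class);
    }
}
----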

Question 4: Regarding cancellation.

A server-side producer creates a Flux with 1'000'000 values. But the client only takes 1'000 of them (using the take(1000) operator).

How many values does the server actually produce?

How many values does the client actually receive?

Answer: When running the example for the first time, the server produces around 1013 values and then stops; the client receives exactly 1000 values. When running the client a couple more times, the server produces around 7000 values and then stops; the client always receives exactly 1000 values. A sketch of the client side follows below.

Works fine!

Code examples: ClientApplication.java, ServerApplication.java
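A sketch of the client side, assuming a hypothetical /numbers endpoint behind which the server exposes the Flux with 1'000'000 values:

[source,java]
----
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class TakeClientSketch {

    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");

        Flux<Long> numbers = client.get()
                .uri("/numbers")       // assumed endpoint serving the large Flux
                .retrieve()
                .bodyToFlux(Long.class);

        long received = numbers
                .take(1000)   // cancels the subscription (and the HTTP connection) after 1000 elements
                .count()
                .block();     // block() just to keep this sketch runnable as a plain main method

        System.out.println("Received " + received + " values");
    }
}
----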

Question 5: Regarding timeouts/cancellation.

A server-side producer creates a Mono with 1 value, but the value is produced with a delay of 5 seconds.

The client aborts its call after 1 second using the timeout operator.

Is the cancellation of the client propagated to the server? Is the delayed creation of the value on the server-side aborted?

Answer: There seems to be an issue here.

The cancellation (unsubscribing) due to the timeout is not propagated to the server side in this example.

Although the cancellation worked in the example for question 4 with many elements, it doesn’t seem to work when only one element is in play (see the sketch below for the client side).

We created an issue for this: https://jira.spring.io/browse/SPR-16768

Code examples: ClientApplication.java, ServerApplication.java
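A sketch of the client side of this scenario, assuming a hypothetical /delayed endpoint whose single value arrives only after 5 seconds:

[source,java]
----
import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class TimeoutClientSketch {

    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");

        Mono<String> delayedValue = client.get()
                .uri("/delayed")                              // assumed endpoint with the delayed Mono
                .retrieve()
                .bodyToMono(String.class)
                .timeout(Duration.ofSeconds(1))               // cancel (unsubscribe) after 1 second
                .onErrorReturn("timed out after 1 second");   // turn the TimeoutException into a fallback value

        System.out.println(delayedValue.block());
    }
}
----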

Question 6: Regarding flow-control.

There is a fast producer on the server side and a slow client. The slow client can only process 1 value every 100ms. The producer works at "full speed".

Does the producer react to this and eventually start producing fewer values per unit of time? Do server and client find some common pace?

Answer: The fast producer does react to the slow client, but only because the TCP buffer fills up and blocks it.

When the buffer is no longer full, the producer continues to produce at full speed until the buffer is full again. This is a kind of "poor man’s backpressure": blocking backpressure. A sketch of the slow client follows below.

We can monitor the TCP buffers using:

netstat -n -p tcp | grep 0 | grep 8080

Code examples: ClientApplication.java, ServerApplication.java
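A sketch of the slow-client side, assuming a hypothetical /values endpoint behind which the server produces values at full speed:

[source,java]
----
import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;

public class SlowClientSketch {

    public static void main(String[] args) throws InterruptedException {
        WebClient client = WebClient.create("http://localhost:8080");

        client.get()
                .uri("/values")                        // assumed endpoint with the fast producer
                .retrieve()
                .bodyToFlux(Long.class)
                .delayElements(Duration.ofMillis(100)) // the client processes only ~10 values per second
                .subscribe(value -> System.out.println("processed " + value));

        // Keep the JVM alive; meanwhile the server fills the TCP send buffer and then stalls
        // until the slow client has drained it ("blocking backpressure" on the transport level).
        Thread.sleep(60_000);
    }
}
----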

Question 7: Regarding flow-control.

How is the flow-control from question 6 implemented?

Is there some kind of two-way communication using e.g. HTTP/2? Or some kind of backpressure on the network level (TCP)?

See https://stackoverflow.com/questions/41772711/backpressure-for-rest-endpoints-with-spring-functional-web-framework#comment70845288_41825959 and https://www.youtube.com/watch?v=oS9w3VenDW0 (watch between 28:20 and 29:20) for some hints.

Answer: See the answer to question 6.

However, there seems to be some activity around "more sophisticated" transport mechanisms (e.g. non-blocking, two-way communication, …):

https://jira.spring.io/browse/SPR-16751 https://jira.spring.io/browse/SPR-16358 https://jira.spring.io/browse/SPR-16466 https://jira.spring.io/browse/SPR-15776

…​

Question 8: Regarding flow-control.

Assuming there is some kind of buffering used internally as the solution in question 6: is there a way to configure the "buffer size"?

With "buffer size" I think of number of objects or number of bytes on the network level.

Answer: TODO

Code examples: TODO

Question 9: Regarding flow-control.

Slow producer (server), fast consumer (client). Is the consumer (client) somehow blocked? E.g. a thread wasted?

Answer: No threads in the client are blocked by this (see the sketch below).

We can’t see any blocked threads using e.g. JVisualVM.

"Under the hood" is netty at work, which is async/non-blocking.

Code examples: ClientApplication.java, ServerApplication.java
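A sketch illustrating the non-blocking behaviour on the client, assuming a hypothetical /slow endpoint on which the server emits values slowly:

[source,java]
----
import org.springframework.web.reactive.function.client.WebClient;

public class NonBlockingClientSketch {

    public static void main(String[] args) throws InterruptedException {
        WebClient client = WebClient.create("http://localhost:8080");

        client.get()
                .uri("/slow")   // assumed endpoint with the slow producer
                .retrieve()
                .bodyToFlux(String.class)
                .subscribe(value -> System.out.println(Thread.currentThread().getName() + ": " + value));

        // The subscribe(..) call returns immediately; no caller thread waits for the slow server.
        // Values are delivered later on one of Reactor Netty's event-loop threads
        // (visible in the thread name printed above).
        System.out.println("subscribed, main thread is free");
        Thread.sleep(30_000); // keep the JVM alive to receive values
    }
}
----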

Question 10: Regarding flow-control.

A client zips two Flux from a server together. The two Flux work at different speeds: one Flux works at full speed, the second Flux emits only one value every 100ms.

Does the faster server Flux eventually react to that and start producing fewer values per unit of time?

Answer: TODO

Code examples: TODO
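Since the answer is still TODO, here is only a sketch of how the client-side setup could look, assuming hypothetical /fast and /slow endpoints on the server:

[source,java]
----
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class ZipClientSketch {

    public static void main(String[] args) throws InterruptedException {
        WebClient client = WebClient.create("http://localhost:8080");

        // /fast emits values at full speed, /slow emits one value every 100ms (both paths are assumptions).
        Flux<Long> fast = client.get().uri("/fast").retrieve().bodyToFlux(Long.class);
        Flux<Long> slow = client.get().uri("/slow").retrieve().bodyToFlux(Long.class);

        // zip pairs elements one-to-one, so it can only request further elements from the
        // fast Flux as quickly as the slow Flux delivers its counterparts.
        Flux.zip(fast, slow)
            .subscribe(pair -> System.out.println(pair.getT1() + " / " + pair.getT2()));

        Thread.sleep(30_000); // keep the JVM alive to observe the paired output
    }
}
----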

Question 11: Regarding flow-control.

We have 3 processes involved: A fake persistence process (think of a database with a reactive REST API), an API gateway process and a client process.

The three processes work at different speeds: The client is very slow and can only process one value per 1000ms. The API gateway process has a bad day and can only process one value per ms. The fake persistence process works at full speed.

Do the faster producers eventually react to the slower consumers? Do the three processes find some common speed? What’s the resulting common speed?

Answer: Same behaviour as in question 6.

Code examples: ClientApplication.java, API Gateway ServerApplication.java, Fake DB ServerApplication.java

Question 12: Regarding performance.

We have a producer (server) and a consumer (client), both working at full speed.

How many kb/sec are transported? How many objects/sec?

Answer: TODO

Code examples: TODO

Question 13: Regarding latency.

We have a producer (server) with a Mono (single value) and a consumer (client).

What’s the approximate latency from subscription time until the value is received?

Answer: TODO

Code examples: TODO

Question 14: Regarding errors.

We have a producer (server) with a Flux which first delivers 10 values and then one error, a RuntimeException("Boom").

How does the error arrive at the client? Type, stack trace, etc.?

Answer: TODO

Code examples: TODO

Question 15: Regarding errors.

We have a producer (server) with a Flux which first delivers 10 values and then one error, an exception of the custom type MyRuntimeException("Boom").

How does the error arrive at the client? Type, stack trace, etc.?

Answer: TODO

Code examples: TODO

Question 16: Regarding errors.

We have a producer (server) which produces an endless stream of values.

The client processes the first 10 values and then hits an exception in its own "stream handling" (e.g. a RuntimeException in a map operator).

Is the producer (server) notified of this? Does the producer (server) stop producing values? How many values does the producer (server) produce?

Answer: TODO

Code examples: TODO

Question 17: Regarding errors.

Same as question 16, but this time the client "crashes" (Runtime#halt) instead of the server.

Is the producer (server) notified of this? Does the producer (server) stop producing values? How many values does the producer (server) produce?

Answer: TODO

Code examples: TODO

Question 18: Regarding errors.

We have a producer (server) which produces an endless stream of values. But after 10 values, it crashes (Runtime#halt).

How is the client notified of this? What kind of error does the client get? How many values does the client get?

Answer: TODO

Code examples: TODO

Questions regarding "reactive inter-process communication" between a JVM process and a web browser process

TODO
