In recent years, reactive programming has found its place in the world of software development. This is especially obvious on mobile platforms (Android and iOS) and in JavaScript, with libraries like RxJava, RxSwift, and RxJS. Handling user interactions with a UI, making API calls in parallel, and maybe even sending events to an analytics server, all while keeping your app from stuttering in an environment with limited resources, is not an easy job. Also, it’s not like you can just add another phone or browser to make your app run faster. This might be one of the reasons why reactive programming didn’t make significant progress in the world of backend development - there it’s easy to just add another server instance running your app and let the load balancer work things out.
But it seems people have realized they could save a few bucks by changing the way they write code instead of spinning up another server instance. Interest in reactive programming has grown in recent years, and a lot of development effort is now focused on supporting it across different technologies and programming languages.
As this series of articles is Spring Boot oriented, our focus will be solely on Project Reactor since Spring Boot ships with it, but most of the principles explained in this article can be applied to any other implementation of the Reactive Streams specification.
Reactive Streams specification
The Reactive Streams page sums up the intention of the specification nicely:
Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking backpressure.
We will come back to the term backpressure later, but for now, it’s important to understand that we are talking about asynchronous/non-blocking stream processing (usually some type of transformation that involves business logic). It’s also important to notice that the specification does not provide any details on how to achieve non-blocking processing or backpressure - that is an implementation detail and it’s up to the implementer of the specification to decide.
For now, we will look at the 4 Java interfaces that are part of the specification.
Publisher
public interface Publisher<T> {
    public void subscribe(Subscriber<? super T> s);
}
Publisher<T> is one of the two main “parties” involved in the data exchange. As its name suggests, Publisher<T> is an interface defining a producer of data and, since it is parameterized with a generic type, it can produce anything (strings, numbers, custom types, etc.). Each Publisher<T> lets us subscribe to its stream of data (keep in mind that a stream can contain anything from zero to an infinite number of elements) by providing a consumer - a Subscriber<T>.
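As a quick illustration, here is a minimal sketch of subscribing to a Publisher<T>. It uses Reactor’s Flux<T> (introduced later in this article) as the concrete Publisher<T>; the class name, values, and printed message are purely illustrative.

import reactor.core.publisher.Flux;

public class PublisherExample {

    public static void main(String[] args) {
        // Flux<T> from Project Reactor implements Publisher<T>.
        Flux<String> publisher = Flux.just("one", "two", "three");

        // The convenience subscribe overload takes a lambda that is invoked for every element.
        publisher.subscribe(element -> System.out.println("Received: " + element));
    }
}

Note that with a cold Publisher<T> like this one, nothing is emitted until subscribe is called.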
Subscriber
public interface Subscriber<T> {
    public void onSubscribe(Subscription s);
    public void onNext(T t);
    public void onError(Throwable t);
    public void onComplete();
}
Subscriber<T> is the second main “party” and sits on the consumer side of a stream. It has a method that is called when the subscription happens, handing the Subscriber<T> a Subscription it can use to tell the Publisher<T> to start sending data, along with three other methods that are triggered as the stream is being processed. onNext is called every time there’s a new item in the stream ready for the consumer to process, while onError and onComplete are called when there’s an error in the stream processing or when there are no more items in the stream, respectively. The important thing here is that once onError or onComplete is called, there will be no more onNext calls. We can say that onError and onComplete are terminating operations on a stream.
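To make the contract concrete, here is a bare-bones Subscriber<String> sketch; the class name and log messages are made up for illustration, and demand is simplified by requesting an unbounded number of items (the request method is explained in the next section).

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public class LoggingSubscriber implements Subscriber<String> {

    @Override
    public void onSubscribe(Subscription subscription) {
        // Ask the Publisher for everything it has; bounded requests are covered below.
        subscription.request(Long.MAX_VALUE);
    }

    @Override
    public void onNext(String item) {
        System.out.println("Next item: " + item);
    }

    @Override
    public void onError(Throwable throwable) {
        // Terminal signal - no onNext calls after this.
        System.err.println("Stream failed: " + throwable.getMessage());
    }

    @Override
    public void onComplete() {
        // Terminal signal - no onNext calls after this.
        System.out.println("Stream completed");
    }
}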
Subscription
public interface Subscription {
    public void request(long n);
    public void cancel();
}
The Subscription interface links a Publisher<T> with a Subscriber<T>. It enables the Subscriber<T> to request more data from the Publisher<T> or to cancel the stream, and requesting data is what enables backpressure. Backpressure is the ability to slow down a producer when a consumer is too slow. So, imagine a Subscriber<T> that can consume 100 items per second and a Publisher<T> that can produce 10000 items per second. Without backpressure support, the Subscriber<T> would be overwhelmed relatively quickly, but with it, the Subscriber<T> can request exactly the amount of data it can consume and keep running normally. Of course, the extra data that cannot be processed immediately has to be handled in some way, and how that is done can be customized depending on the application’s needs.
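Here is a sketch of how a Subscriber<T> might use its Subscription to apply backpressure by requesting items in fixed-size batches; the class name, batch size, and processing logic are made up for illustration.

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public class BatchingSubscriber implements Subscriber<String> {

    private static final long BATCH_SIZE = 100;

    private Subscription subscription;
    private long receivedInBatch = 0;

    @Override
    public void onSubscribe(Subscription subscription) {
        this.subscription = subscription;
        // Initial demand: only ask for what we can handle.
        subscription.request(BATCH_SIZE);
    }

    @Override
    public void onNext(String item) {
        process(item);
        receivedInBatch++;
        if (receivedInBatch == BATCH_SIZE) {
            receivedInBatch = 0;
            // Ask for the next batch only once the current one is processed.
            subscription.request(BATCH_SIZE);
        }
    }

    @Override
    public void onError(Throwable throwable) {
        System.err.println("Stream failed: " + throwable.getMessage());
    }

    @Override
    public void onComplete() {
        System.out.println("Stream completed");
    }

    private void process(String item) {
        // Placeholder for the (slow) business logic.
        System.out.println("Processing: " + item);
    }
}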
Processor
public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}
Finally, there’s Processor<T, R>, which is both a Subscriber<T> and a Publisher<R>, and it is used when you need a bridge between a non-reactive and a reactive API. Notice that the interface does not define any new methods - it just combines the inherited ones. In an upcoming post, we will see an example of Processor<T, R> usage, so stay tuned!
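In the meantime, just to show the shape of the type, here is a tiny sketch using Reactor’s DirectProcessor, one of its Processor implementations; the pushed value is purely illustrative.

import reactor.core.publisher.DirectProcessor;

public class ProcessorExample {

    public static void main(String[] args) {
        // DirectProcessor is a Subscriber (we can push items into it)
        // and a Publisher (others can subscribe to it).
        DirectProcessor<String> processor = DirectProcessor.create();

        // Publisher side: subscribe a consumer to the stream.
        processor.subscribe(item -> System.out.println("Received: " + item));

        // Subscriber side: push items from non-reactive code into the stream.
        processor.onNext("bridged item");
        processor.onComplete();
    }
}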
Project Reactor
Project Reactor provides one of the many available implementations of the Reactive Streams specification, and it comes bundled with the Spring WebFlux starter dependency. The two projects bundled in it are Reactor Core and Reactor Test, which give you basic support for writing and testing reactive applications. There’s also the Reactor Netty project, which provides reactive client/server capabilities on top of Netty.
Reactor Core’s awesome documentation does an amazing job explaining the issues of both blocking and asynchronous programming and gradually builds up a case for using a reactive style. After that, it introduces the main types that implement the Reactive Streams specification: Mono<T>, Flux<T> and 4 types of Processor<T>. There’s also a section on testing, as well as a few sections on more advanced topics that we will go through in upcoming posts.
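As a small taste (assuming the reactor-core and reactor-test dependencies are on the classpath), here is a minimal sketch of the two main types and of how Reactor Test’s StepVerifier checks what a Publisher emits; the values are made up for illustration.

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

public class ReactorTypesExample {

    public static void main(String[] args) {
        // Mono<T>: a Publisher of 0..1 elements.
        Mono<String> mono = Mono.just("hello");

        // Flux<T>: a Publisher of 0..N elements.
        Flux<Integer> flux = Flux.just(1, 2, 3).map(i -> i * 2);

        // StepVerifier subscribes to the Publisher and asserts the emitted signals.
        StepVerifier.create(flux)
                    .expectNext(2, 4, 6)
                    .verifyComplete();

        StepVerifier.create(mono)
                    .expectNext("hello")
                    .verifyComplete();
    }
}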
Conclusion
Reactive programming is slowly finding its place in backend development. In a modern microservice/event-driven architecture where the system needs to handle a lot of requests, being able to process everything in a non-blocking fashion can have huge benefits for system performance. In the next article, we will explore the basic building blocks of Project Reactor, and later we will see how we can leverage those building blocks in a Spring application and what the benefits are compared to a blocking implementation.