Currently, when using zipWith (and perhaps other operators), data is always requested at PlatformDependent.XS_BUFFER_SIZE. In some cases this is too much data. For example, when implementing an exponential back-off, you might want a Stream.range() concatenated with a Mono.error to flag when too many retries have happened. If the number of elements in the range is smaller than XS_BUFFER_SIZE, the error is generated immediately rather than when the end of the range is reached. An example of this behavior:
import reactor.core.publisher.Mono;
import reactor.rx.Stream;

import java.util.concurrent.CountDownLatch;

import static java.util.concurrent.TimeUnit.SECONDS;

public final class Test {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        Stream
            .interval(0, 1, SECONDS)
            .zipWith(Stream
                .range(0, 5)
                .concatWith(Mono.error(new IllegalStateException("A request too far"))))
            .consume(
                System.out::println,
                (throwable) -> {
                    throwable.printStackTrace();
                    latch.countDown();
                },
                latch::countDown);

        latch.await(); // Should print 0,0, 1,1, 2,2, and 3,3 before printing an IllegalStateException
    }
}
There should be a way to ensure that a subscriber can request as much data as it wants, but only in batches of a certain size, with producers able to specify that size.
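The batched-request idea can be sketched without any Reactor types. The following is a minimal, self-contained model, not the Reactor API: the names BatchedRequestSketch, RangeThenError, and runDemo are hypothetical and exist only to illustrate the demand protocol. The consumer requests a small batch, and only asks for the next batch after draining the previous one, so the error is delivered after all range values have been consumed rather than up front.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.IntConsumer;

// Hypothetical sketch of bounded request batches (not the Reactor API).
public final class BatchedRequestSketch {

    /** A producer that emits a range, then an error, honoring bounded demand. */
    static final class RangeThenError {
        private final int end;
        private int next = 0;
        private boolean done = false;
        private final IntConsumer onNext;
        private final Consumer<Throwable> onError;

        RangeThenError(int end, IntConsumer onNext, Consumer<Throwable> onError) {
            this.end = end;
            this.onNext = onNext;
            this.onError = onError;
        }

        void request(long n) {
            for (long i = 0; i < n && !done; i++) {
                if (next < end) {
                    onNext.accept(next++);
                } else {
                    done = true;
                    // The error is only produced once demand reaches past the
                    // range, i.e. after every range value has been consumed.
                    onError.accept(new IllegalStateException("A request too far"));
                }
            }
        }
    }

    /** Consumes the whole sequence, but only ever requests batchSize at a time. */
    static List<String> runDemo(int rangeEnd, int batchSize) {
        List<String> events = new ArrayList<>();
        RangeThenError[] holder = new RangeThenError[1];
        int[] received = {0};

        holder[0] = new RangeThenError(
                rangeEnd,
                value -> {
                    events.add(String.valueOf(value));
                    if (++received[0] % batchSize == 0) {
                        // Next batch only after the previous one is drained.
                        holder[0].request(batchSize);
                    }
                },
                error -> events.add("error: " + error.getMessage()));

        // Initial bounded request instead of an XS_BUFFER_SIZE-sized one.
        holder[0].request(batchSize);
        return events;
    }

    public static void main(String[] args) {
        System.out.println(runDemo(5, 2));
        // prints [0, 1, 2, 3, 4, error: A request too far]
    }
}
```

With a batch size of 2, the five range values are delivered first and the error arrives last; a producer-specified batch cap on zipWith's internal requests would give the same ordering in the original example.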