
#69 7.0.0 documentation
vladimir-bukhtoyarov committed Dec 15, 2021
1 parent b1224c5 commit 5ee3d17
Showing 26 changed files with 139 additions and 196 deletions.
4 changes: 3 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -30,6 +30,8 @@ In addition to local in-memory buckets, the Bucket4j supports clustered usage sc
| ```Infinispan``` | Yes | Yes | No |
| ```Oracle Coherence``` | Yes | Yes | No |

## [Documentation](https://bucket4j.com)

## Get Bucket4j library
#### You can add Bucket4j to your project as maven dependency
Bucket4j is distributed through [Maven Central](http://search.maven.org/):
Expand All @@ -51,7 +53,7 @@ mvn clean install
Feel free to ask via:
* [Bucket4j discussions](https://github.com/vladimir-bukhtoyarov/bucket4j/discussions) for questions, feature proposals, sharing of experience.
* [Bucket4j issue tracker](https://github.com/vladimir-bukhtoyarov/bucket4j/issues/new) to report a bug.
* [Vladimir Bukhtoyarov - Upwork Profile](https://www.upwork.com/freelancers/~013d8e02a32ffdd5f5) if you want to get one-time paid support.

## License
Copyright 2015-2021 Vladimir Bukhtoyarov
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -12,12 +12,12 @@ It is not possible to add, remove or change the limits for already created confi
2. There is another problem when you are choosing <<tokens-inheritance-strategy-proportionally, PROPORTIONALLY>>, <<tokens-inheritance-strategy-as-is, AS_IS>> or <<tokens-inheritance-strategy-additive, ADDITIVE>> and the bucket has more than one bandwidth. For example, how should the replaceConfiguration implementation bind bandwidths to each other in the following example?
[source, java]
----
Bucket bucket = Bucket4j.builder()
Bucket bucket = Bucket.builder()
.addLimit(Bandwidth.simple(10, Duration.ofSeconds(1)))
.addLimit(Bandwidth.simple(10000, Duration.ofHours(1)))
.build();
...
BucketConfiguration newConfiguration = Bucket4j.configurationBuilder()
BucketConfiguration newConfiguration = BucketConfiguration.builder()
.addLimit(Bandwidth.simple(5000, Duration.ofHours(1)))
.addLimit(Bandwidth.simple(100, Duration.ofSeconds(10)))
.build();
Expand All @@ -30,12 +30,12 @@ Instead of inventing the backward magic Bucket4j provides you the ability to dea
so in case of multiple bandwidth configuration the replacement code can copy available tokens by bandwidth ID. So it is better to rewrite the code above as follows:
[source, java]
----
Bucket bucket = Bucket4j.builder()
Bucket bucket = Bucket.builder()
.addLimit(Bandwidth.simple(10, Duration.ofSeconds(1)).withId("technical-limit"))
.addLimit(Bandwidth.simple(10000, Duration.ofHours(1)).withId("business-limit"))
.build();
...
BucketConfiguration newConfiguration = Bucket4j.configurationBuilder()
BucketConfiguration newConfiguration = BucketConfiguration.builder()
.addLimit(Bandwidth.simple(5000, Duration.ofHours(1)).withId("business-limit"))
.addLimit(Bandwidth.simple(100, Duration.ofSeconds(10)).withId("technical-limit"))
.build();
Expand Down
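The inheritance strategies referenced above can be approximated for a single bandwidth with a few lines of plain Java. This is an illustrative sketch of the idea only, not Bucket4j's actual `TokensInheritanceStrategy` implementation; exact corner-case behavior (rounding, overflow handling) may differ:

```java
public class TokensInheritanceModel {
    // Approximate how many tokens survive a capacity change for one bandwidth.
    public static long inherit(String strategy, long currentTokens,
                               long oldCapacity, long newCapacity) {
        switch (strategy) {
            case "AS_IS":
                // Keep the token count, clamped to the new capacity.
                return Math.min(currentTokens, newCapacity);
            case "PROPORTIONALLY":
                // Keep the same fill ratio relative to capacity.
                return currentTokens * newCapacity / oldCapacity;
            case "ADDITIVE":
                // Adjust tokens by the capacity delta, clamped to [0, newCapacity].
                long adjusted = currentTokens + (newCapacity - oldCapacity);
                return Math.max(0, Math.min(adjusted, newCapacity));
            default:
                throw new IllegalArgumentException(strategy);
        }
    }

    public static void main(String[] args) {
        // Bucket had 40 of 100 tokens; capacity shrinks to 50.
        System.out.println(inherit("AS_IS", 40, 100, 50));           // 40
        System.out.println(inherit("PROPORTIONALLY", 40, 100, 50));  // 20
        // Capacity grows from 100 to 150; ADDITIVE credits the difference.
        System.out.println(inherit("ADDITIVE", 40, 100, 150));       // 90
    }
}
```

Under these assumed semantics, binding bandwidths by ID (as in the rewritten snippet above) tells the replacement which old bandwidth's `currentTokens` each new bandwidth inherits.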
2 changes: 1 addition & 1 deletion asciidoc/src/main/docs/asciidoc/advanced/listener.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -40,7 +40,7 @@ The bucket can be decorated by a listener via the ``toListenable`` method.
----
BucketListener listener = new MyListener();
Bucket bucket = Bucket4j.builder()
Bucket bucket = Bucket.builder()
.addLimit(Bandwidth.simple(100, Duration.ofMinutes(1)))
.build()
.toListenable(listener);
Expand Down
8 changes: 4 additions & 4 deletions asciidoc/src/main/docs/asciidoc/basic/api-reference.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -188,12 +188,12 @@ long getAvailableTokens();
 * For example, how should the replaceConfiguration implementation bind bandwidths to each other in the following example?
* <pre>
* <code>
* Bucket bucket = Bucket4j.builder()
* Bucket bucket = Bucket.builder()
* .addLimit(Bandwidth.simple(10, Duration.ofSeconds(1)))
* .addLimit(Bandwidth.simple(10000, Duration.ofHours(1)))
* .build();
* ...
* BucketConfiguration newConfiguration = Bucket4j.configurationBuilder()
* BucketConfiguration newConfiguration = BucketConfiguration.builder()
* .addLimit(Bandwidth.simple(5000, Duration.ofHours(1)))
* .addLimit(Bandwidth.simple(100, Duration.ofSeconds(10)))
* .build();
Expand All @@ -205,12 +205,12 @@ long getAvailableTokens();
 * so in case of multiple bandwidth configuration the replacement code can copy available tokens by bandwidth ID. So it is better to rewrite the code above as follows:
* <pre>
* <code>
* Bucket bucket = Bucket4j.builder()
* Bucket bucket = Bucket.builder()
* .addLimit(Bandwidth.simple(10, Duration.ofSeconds(1)).withId("technical-limit"))
* .addLimit(Bandwidth.simple(10000, Duration.ofHours(1)).withId("business-limit"))
* .build();
* ...
* BucketConfiguration newConfiguration = Bucket4j.configurationBuilder()
* BucketConfiguration newConfiguration = BucketConfiguration.builder()
* .addLimit(Bandwidth.simple(5000, Duration.ofHours(1)).withId("business-limit"))
* .addLimit(Bandwidth.simple(100, Duration.ofSeconds(10)).withId("technical-limit"))
* .build();
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -14,10 +14,10 @@ you need to pay close attention to the throttling time window.
To protect from this kind of attack, you should specify multiple limits like below
[source, java]
----
Bucket bucket = Bucket4j.jCacheBuilder(RecoveryStrategy.RECONSTRUCT)
Bucket bucket = Bucket.builder()
    .addLimit(Bandwidth.simple(10000, Duration.ofSeconds(3_600)))
    .addLimit(Bandwidth.simple(20, Duration.ofSeconds(1))) // attacker is unable to achieve 1000RPS and crash service in short time
.build(cache, bucketId);
.build();
----
The number of limits specified per bucket does not impact the performance.
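Why the short-window limit stops the attack can be shown with a self-contained model: with multiple bandwidths, a request is granted only if every bandwidth still has a token, so a single-instant burst is capped by the smallest capacity. This is a sketch of the token-bucket idea, not Bucket4j code (refill is omitted because no tokens are refilled within a single instant):

```java
public class MultiLimitModel {
    // Count how many requests of a single-instant burst are granted when
    // every bandwidth must have a token available.
    public static int burst(long[] capacities, int attempts) {
        long[] tokens = capacities.clone();
        int granted = 0;
        outer:
        for (int i = 0; i < attempts; i++) {
            for (long t : tokens) if (t < 1) break outer; // some limit exhausted
            for (int j = 0; j < tokens.length; j++) tokens[j]--;
            granted++;
        }
        return granted;
    }

    public static void main(String[] args) {
        // Hourly budget of 10000 plus a 20-per-second guard:
        // an attacker's 1000-request burst gets only 20 through.
        System.out.println(burst(new long[]{10_000, 20}, 1000)); // 20
    }
}
```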

Expand Down
14 changes: 7 additions & 7 deletions asciidoc/src/main/docs/asciidoc/basic/quick-start.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -28,7 +28,7 @@ But acquiring stacktraces is a very costly operation by itself, and you want to do i
// define the limit 1 time per 10 minute
Bandwidth limit = Bandwidth.simple(1, Duration.ofMinutes(10));
// construct the bucket
Bucket bucket = Bucket4j.builder().addLimit(limit).build();
Bucket bucket = Bucket.builder().addLimit(limit).build();
...
Expand All @@ -53,7 +53,7 @@ and by contract with provider you should poll not more often than 100 times per 1 min
// define the limit 100 times per 1 minute
Bandwidth limit = Bandwidth.simple(100, Duration.ofMinutes(1));
// construct the bucket
Bucket bucket = Bucket4j.builder().addLimit(limit).build();
Bucket bucket = Bucket.builder().addLimit(limit).build();
...
volatile double exchangeRate;
Expand Down Expand Up @@ -89,7 +89,7 @@ public class ThrottlingFilter implements javax.servlet.Filter {
long overdraft = 50;
Refill refill = Refill.greedy(10, Duration.ofSeconds(1));
Bandwidth limit = Bandwidth.classic(overdraft, refill);
return Bucket4j.builder().addLimit(limit).build();
return Bucket.builder().addLimit(limit).build();
}
@Override
Expand Down Expand Up @@ -147,7 +147,7 @@ To solve the problem you can construct the following bucket:
static final long MAX_WAIT_NANOS = TimeUnit.HOURS.toNanos(1);
// ...
Bucket bucket = Bucket4j.builder()
Bucket bucket = Bucket.builder()
// allows 1000 tokens per 1 minute
.addLimit(Bandwidth.simple(1000, Duration.ofMinutes(1)))
    // but not more often than 50 tokens per 1 second
Expand All @@ -173,7 +173,7 @@ int initialTokens = 42;
Bandwidth limit = Bandwidth
.simple(1000, Duration.ofHours(1))
.withInitialTokens(initialTokens);
Bucket bucket = Bucket4j.builder()
Bucket bucket = Bucket.builder()
.addLimit(limit)
.build();
----
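The effect of ``withInitialTokens`` described above can be modeled in a few lines: by default a bucket starts full (tokens equal to capacity), while an explicit initial count just overrides that starting value. These are assumed semantics for illustration, not library code:

```java
public class InitialTokensModel {
    // Starting token count: full by default, or the explicit override.
    public static long initialTokens(long capacity, Long explicitInitial) {
        return explicitInitial == null ? capacity : explicitInitial;
    }

    public static void main(String[] args) {
        System.out.println(initialTokens(1000, null)); // 1000 (full by default)
        System.out.println(initialTokens(1000, 42L));  // 42
    }
}
```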
Expand Down Expand Up @@ -230,7 +230,7 @@ By default Bucket4j uses millisecond time resolution, which is the preferred time measu
But rarely (for example, benchmarking) you may wish for nanosecond precision:
[source, java]
----
Bucket4j.builder().withNanosecondPrecision()
Bucket.builder().withNanosecondPrecision()
----
Be very careful when choosing this time measurement strategy, because ``System.nanoTime()`` produces inaccurate results;
use this strategy only if the bandwidth period is so small that millisecond resolution would be insufficient.
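The reason millisecond resolution can be too coarse is integer truncation of elapsed time in the refill arithmetic. The sketch below models greedy refill with plain integer math (an approximation for illustration, not Bucket4j's internals):

```java
public class TimeResolutionModel {
    // Greedy refill: tokens refilled over `elapsed` time units for a
    // bandwidth of `tokensPerPeriod` tokens per `period` time units.
    public static long refilled(long tokensPerPeriod, long period, long elapsed) {
        return tokensPerPeriod * elapsed / period;
    }

    public static void main(String[] args) {
        // Bandwidth: 10 tokens per 1 ms; 500 microseconds elapse.
        // Millisecond clock: elapsed time truncates to 0 ms -> nothing refilled.
        System.out.println(refilled(10, 1, 0));               // 0
        // Nanosecond clock: 500_000 ns of a 1_000_000 ns period -> 5 tokens.
        System.out.println(refilled(10, 1_000_000, 500_000)); // 5
    }
}
```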
Expand All @@ -252,7 +252,7 @@ public class ClusteredTimeMeter implements TimeMeter {
}
Bandwidth limit = Bandwidth.simple(100, Duration.ofMinutes(1));
Bucket bucket = Bucket4j.builder()
Bucket bucket = Bucket.builder()
.withCustomTimePrecision(new ClusteredTimeMeter())
.addLimit(limit)
.build();
Expand Down
21 changes: 11 additions & 10 deletions asciidoc/src/main/docs/asciidoc/distributed/asynchronous.adoc
Original file line number Diff line number Diff line change
@@ -1,13 +1,15 @@
=== Asynchronous API
Since version ``3.0`` Bucket4j provides asynchronous analogs for the majority of API methods.
Async view of bucket is availble through ``asAsync()`` method:
Async view of proxyManager is available through ``asAsync()`` method:
[source, java]
----
Bucket bucket = ...;
AsyncBucket asyncBucket = bucket.asAsync();
ProxyManager proxyManager = ...;
AsyncProxyManager asyncProxyManager = proxyManager.asAsync();

BucketConfiguration configuration = ...;
AsyncBucketProxy asyncBucket = asyncProxyManager.builder().build(key, configuration);
----
Each method of class [AsyncBucket](https://github.com/vladimir-bukhtoyarov/bucket4j/blob/3.1/bucket4j-core/src/main/java/io/github/bucket4j/AsyncBucket.java)
has full equivalence with same semantic in synchronous version in the [Bucket](https://github.com/vladimir-bukhtoyarov/bucket4j/blob/3.0/bucket4j-core/src/main/java/io/github/bucket4j/Bucket.java) class.
Each method of the ```AsyncBucketProxy``` class has a fully equivalent method with the same semantics in the synchronous ```Bucket``` class.

==== Example - limiting the rate of access to asynchronous servlet
Imagine that you develop an SMS service which allows sending SMS via an HTTP interface.
Expand Down Expand Up @@ -36,11 +38,10 @@ non-blocking architecture means that both SMS sending and limit checking should
**Mockup of service based on top of Servlet API and bucket4j-infinispan**:
[source, java]
----
public class SmsServlet extends javax.servlet.http.HttpServlet {
private SmsSender smsSender;
private ProxyManager<String> buckets;
private AsyncProxyManager<String> buckets;
private Supplier<BucketConfiguration> configuration;
@Override
Expand All @@ -51,13 +52,13 @@ public class SmsServlet extends javax.servlet.http.HttpServlet {
smsSender = (SmsSender) ctx.getAttribute("sms-sender");
FunctionalMapImpl<String, byte[]> bucketMap = (FunctionalMapImpl<String, byte[]>) ctx.getAttribute("bucket-map");
this.buckets = Bucket4j.extension(Infinispan.class).proxyManagerForMap(bucketMap);
this.buckets = new InfinispanProxyManager(bucketMap).asAsync();
this.configuration = () -> {
long overdraft = 20;
Refill refill = Refill.greedy(10, Duration.ofMinutes(1));
Bandwidth limit = Bandwidth.classic(overdraft, refill);
return Bucket4j.configurationBuilder()
            return BucketConfiguration.builder()
.addLimit(limit)
.build();
};
Expand All @@ -71,7 +72,7 @@ public class SmsServlet extends javax.servlet.http.HttpServlet {
String toNumber = req.getParameter("to");
String text = req.getParameter("text");
Bucket bucket = buckets.getProxy(fromNumber, configuration);
AsyncBucketProxy bucket = buckets.builder().build(fromNumber, configuration);
        CompletableFuture<ConsumptionProbe> limitCheckingFuture = bucket.tryConsumeAndReturnRemaining(1);
final AsyncContext asyncContext = req.startAsync();
limitCheckingFuture.thenCompose(probe -> {
Expand Down
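The non-blocking shape used in the servlet above — check the limit asynchronously, then either perform the action or reject, without blocking a thread — can be sketched with plain ``CompletableFuture`` composition. All names below are illustrative mocks, not Bucket4j API:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncLimitSketch {
    // Mocked asynchronous limit check: true if a token was consumed.
    static CompletableFuture<Boolean> tryConsumeAsync(boolean tokenAvailable) {
        return CompletableFuture.completedFuture(tokenAvailable);
    }

    // Mocked asynchronous side effect (e.g. sending the SMS).
    static CompletableFuture<String> sendAsync(String text) {
        return CompletableFuture.completedFuture("sent:" + text);
    }

    // Compose: consume a token, then either send or short-circuit a rejection.
    public static CompletableFuture<String> handle(String text, boolean tokenAvailable) {
        return tryConsumeAsync(tokenAvailable).thenCompose(consumed ->
                consumed ? sendAsync(text)
                         : CompletableFuture.completedFuture("429 Too Many Requests"));
    }

    public static void main(String[] args) {
        System.out.println(handle("hello", true).join());  // sent:hello
        System.out.println(handle("hello", false).join()); // 429 Too Many Requests
    }
}
```

The same `thenCompose` chaining is what lets the servlet complete the `AsyncContext` only after both the limit check and the send have finished.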
53 changes: 12 additions & 41 deletions asciidoc/src/main/docs/asciidoc/distributed/jcache/coherence.adoc
Original file line number Diff line number Diff line change
@@ -1,15 +1,5 @@
[[bucket4j-coherence, Bucket4j-Coherence]]
=== Oracle Coherence integration
Before use ``bucket4j-coherence`` module please read [bucket4j-jcache documentation](jcache-usage.md),
because ``bucket4j-coherence`` is just a follow-up of ``bucket4j-jcache``.

**Question:** Bucket4j already supports JCache since version ``1.2``. Why it was needed to introduce direct support for ``Oracle Coherence``?
**Answer:** Because https://www.jcp.org/en/jsr/detail?id=107[JCache API (JSR 107)] does not specify asynchronous API,
developing the dedicated module ``bucket4j-coherence`` was the only way to provide asynchrony for users who use ``Bucket4j`` and ``Oracle Coherence`` together.

**Question:** Should I migrate from ``bucket4j-jcache`` to ``bucketj-coherence`` If I do not need in asynchronous API?
**Answer:** No, you should not migrate to ``bucketj-coherence`` in this case.

==== Dependencies
To use the ``bucket4j-coherence`` extension you need to add the following dependency:
[source, xml, subs=attributes+]
Expand All @@ -25,30 +15,25 @@ To use ``bucket4j-coherence`` extension you need to add following dependency:
[source, java]
----
com.tangosol.net.NamedCache<K, byte[]> cache = ...;
...
private static final CoherenceProxyManager<K> proxyManager = new CoherenceProxyManager(map);
Bucket bucket = Bucket4j.extension(Coherence.class).builder()
.addLimit(Bandwidth.simple(1_000, Duration.ofMinutes(1)))
.build(cache, key, RecoveryStrategy.RECONSTRUCT);
----

==== Example of ProxyManager instantiation
[source, java]
----
com.tangosol.net.NamedCache<K, byte[]> cache = ...;
...
BucketConfiguration configuration = BucketConfiguration.builder()
.addLimit(Bandwidth.simple(1_000, Duration.ofMinutes(1)))
.build(key, configuration);
ProxyManager proxyManager = Bucket4j.extension(Coherence.class).proxyManagerForCache(cache);
Bucket bucket = proxyManager.builder().build(configuration);
----

==== Configuring POF serialization for Bucket4j library classes
If you configure nothing, then by default Java serialization will be used for serialization of Bucket4j library classes. Java serialization can be rather slow and should be avoided in general.
``Bucket4j`` provides https://docs.oracle.com/cd/E24290_01/coh.371/e22837/api_pof.htm#COHDG1363[custom POF serializers] for all library classes that could be transferred over network.
To let Coherence know about POF serializers you should register the serializer in the POF configuration file:
* ``CoherenceEntryProcessorAdapterPofSerializer`` for class ``CoherenceEntryProcessorAdapter``
* ``GridBucketStatePofSerializer`` for class ``GridBucketState``
* ``CommandResultPofSerializer`` for class ``CommandResult``
====
``io.github.bucket4j.grid.coherence.pof.CoherenceEntryProcessorPofSerializer`` for class ``io.github.bucket4j.grid.coherence.CoherenceProcessor``
====

*Example of POF serialization:*
.Example of POF serialization config:
[source, xml]
----
<pof-config xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
Expand All @@ -62,23 +47,9 @@ To let Coherence know about POF serializers you should register three serializer
<!-- Define serializers for Bucket4j classes -->
<user-type>
<type-id>1001</type-id>
<class-name>io.github.bucket4j.grid.coherence.CoherenceEntryProcessorAdapter</class-name>
<serializer>
<class-name>io.github.bucket4j.grid.coherence.pof.CoherenceEntryProcessorAdapterPofSerializer</class-name>
</serializer>
</user-type>
<user-type>
<type-id>1002</type-id>
<class-name>io.github.bucket4j.grid.GridBucketState</class-name>
<serializer>
<class-name>io.github.bucket4j.grid.coherence.pof.GridBucketStatePofSerializer</class-name>
</serializer>
</user-type>
<user-type>
<type-id>1003</type-id>
<class-name>io.github.bucket4j.grid.CommandResult</class-name>
<class-name>io.github.bucket4j.grid.coherence.CoherenceProcessor</class-name>
<serializer>
<class-name>io.github.bucket4j.grid.coherence.pof.CommandResultPofSerializer</class-name>
<class-name>io.github.bucket4j.grid.coherence.pof.CoherenceEntryProcessorPofSerializer</class-name>
</serializer>
</user-type>
</user-type-list>
Expand Down
32 changes: 8 additions & 24 deletions asciidoc/src/main/docs/asciidoc/distributed/jcache/hazelcast.adoc
Original file line number Diff line number Diff line change
@@ -1,15 +1,5 @@
[[bucket4j-hazelcast, Bucket4j-Hazelcast]]
=== Hazelcast integration
Before use ``bucket4j-hazelcast`` module please read [bucket4j-jcache documentation](jcache-usage.md),
because ``bucket4j-hazelcast`` is just a follow-up of ``bucket4j-jcache``.

**Question:** Bucket4j already supports JCache since version ``1.2``. Why it was needed to introduce direct support for ``Hazelcast``?
**Answer:** Because https://www.jcp.org/en/jsr/detail?id=107[JCache API (JSR 107)] does not specify asynchronous API,
developing the dedicated module ``bucket4j-hazelcast`` was the only way to provide asynchrony for users who use ``Bucket4j`` and ``Hazelcast`` together.

**Question:** Should I migrate from ``bucket4j-jcache`` to ``bucket4j-hazelcast`` If I do not need in asynchronous API?
**Answer:** No, you should not migrate to ``bucket4j-hazelcast`` in this case.

==== Dependencies
To use the Bucket4j extension for Hazelcast with ``Hazelcast 4.x`` you need to add the following dependency:
[source, xml, subs=attributes+]
Expand Down Expand Up @@ -39,20 +29,14 @@ just log an issue to https://github.com/vladimir-bukhtoyarov/bucket4j/issues[bug tr
[source, java]
----
IMap<K, byte[]> map = ...;
...
private static final HazelcastProxyManager<K> proxyManager = new HazelcastProxyManager(map);
Bucket bucket = Bucket4j.extension(Hazelcast.class).builder()
.addLimit(Bandwidth.simple(1_000, Duration.ofMinutes(1)))
.build(map, key, RecoveryStrategy.RECONSTRUCT);
----

==== Example of ProxyManager instantiation
[source, java]
----
IMap<K, byte[]> map = ...;
...
BucketConfiguration configuration = BucketConfiguration.builder()
.addLimit(Bandwidth.simple(1_000, Duration.ofMinutes(1)))
.build(key, configuration);
ProxyManager proxyManager = Bucket4j.extension(Hazelcast.class).proxyManagerForMap(map);
Bucket bucket = proxyManager.builder().build(configuration);
----

==== Configuring Custom Serialization for Bucket4j library classes
Expand All @@ -74,9 +58,9 @@ import io.github.bucket4j.grid.hazelcast.serialization.HazelcastSerializer;
// and may use more types in the future, so leave enough empty space after baseTypeIdNumber
int baseTypeIdNumber = 10000;
HazelcastSerializer.addCustomSerializers(serializationConfig, baseTypeIdNumber);
HazelcastProxyManager.addCustomSerializers(serializationConfig, baseTypeIdNumber);
----

==== Known issues related with Docker and(or) SpringBoot
* [#186 HazelcastEntryProcessorAdapter class not found](https://github.com/vladimir-bukhtoyarov/bucket4j/discussions/186) - check file permissins inside your image.
* [#182 HazelcastSerializationException with Hazelcast 4.2](https://github.com/vladimir-bukhtoyarov/bucket4j/issues/162) - properly setup classloader for Hazelcast client configuration.
* https://github.com/vladimir-bukhtoyarov/bucket4j/discussions/186[#186 HazelcastEntryProcessor class not found] - check file permissions inside your image.
* https://github.com/vladimir-bukhtoyarov/bucket4j/issues/162[#182 HazelcastSerializationException with Hazelcast 4.2] - properly setup classloader for Hazelcast client configuration.
