From 64f5f1080880baa7faa5eec242df8b9451cb4fdc Mon Sep 17 00:00:00 2001
From: Abel Salgado Romero
Date: Wed, 3 Aug 2022 21:24:39 +0200
Subject: [PATCH] #287 Fix asciidoctor build messages

* Fix invalid <> cross-references: used autogenerated id instead
* Fix invalid section numbers in jdbc-integrations.adoc
* Fix broken TokensInheritanceStrategy cross-references
* Correctly rename jdbc-integraions.adoc to jdbc-integrations.adoc
* Set java as default language for code blocks in `toc.adoc`: applied short notation for code blocks where applicable (i.e. xml, sql)
* Chain paragraphs ('+') to fix misalignment of the numbered section in configuration-replacement.adoc
* Reorder TokensInheritanceStrategy.RESET to first position to respect the order in which strategies are introduced
---
 .../advanced/configuration-replacement.adoc   | 15 ++++++++----
 .../main/docs/asciidoc/advanced/listener.adoc |  1 -
 .../docs/asciidoc/basic/api-reference.adoc    |  4 ++--
 .../main/docs/asciidoc/basic/quick-start.adoc |  2 +-
 .../distributed/distributed-index.adoc        |  4 ++--
 .../distributed/jcache/coherence.adoc         |  4 ++--
 .../asciidoc/distributed/jcache/ignite.adoc   |  2 +-
 ...ntegraions.adoc => jdbc-integrations.adoc} | 23 +++++++++++--------
 .../asciidoc/distributed/redis/redis.adoc     |  4 ++--
 asciidoc/src/main/docs/asciidoc/toc.adoc      |  3 ++-
 10 files changed, 36 insertions(+), 26 deletions(-)
 rename asciidoc/src/main/docs/asciidoc/distributed/jdbc/{jdbc-integraions.adoc => jdbc-integrations.adoc} (95%)
diff --git a/asciidoc/src/main/docs/asciidoc/advanced/configuration-replacement.adoc b/asciidoc/src/main/docs/asciidoc/advanced/configuration-replacement.adoc
index f930aeff..6ca5dff4 100644
--- a/asciidoc/src/main/docs/asciidoc/advanced/configuration-replacement.adoc
+++ b/asciidoc/src/main/docs/asciidoc/advanced/configuration-replacement.adoc
@@ -4,12 +4,13 @@ As previously mentioned in the definition for <> it is an
 It is not possible to add, remove or change the limits for already created configuration, however, you can replace the configuration of the bucket via creating a new configuration instance and calling `bucket.replaceConfiguration(newConfiguration, tokensInheritanceStrategy)`.

 ==== Why configuration replacement is not trivial?

-1. The first problem of configuration replacement is deciding on how to propagate available tokens from a bucket with a previous configuration to the bucket with a new configuration. If you don't care about previous the bucket state then use `TokensInheritanceStrategy.RESET`. But it becomes a tricky problem when we expect that previous consumption(that has not been compensated by refill yet) should take effect on the bucket with a new configuration. In this case, you need to choose between:
+1. The first problem of configuration replacement is deciding on how to propagate available tokens from a bucket with a previous configuration to the bucket with a new configuration. If you don't care about the previous bucket state then use <>. But it becomes a tricky problem when we expect that previous consumption (that has not been compensated by refill yet) should take effect on the bucket with a new configuration. In this case, you need to choose between:
 * <>
 * <>
 * <>
 2. There is another problem when you are choosing <>, <> or <> or <> and a bucket has more than one bandwidth. For example, how does replaceConfiguration implementation bind bandwidths to each other in the following example?
++
 [source, java]
 ----
 Bucket bucket = Bucket.builder()
@@ -23,6 +24,7 @@ BucketConfiguration newConfiguration = BucketConfiguration.configurationBuilder(
     .build();
 bucket.replaceConfiguration(newConfiguration, TokensInheritanceStrategy.AS_IS);
 ----
++
 It is obvious that a simple strategy - copying tokens by bandwidth index will not work well in this case, because it highly depends on the order in which bandwidths were mentioned in the new and previous configuration.

 ==== Taking control over replacement process via bandwidth identifiers
@@ -50,6 +52,12 @@ Bucket bucket = Bucket.builder()
 *TokensInheritanceStrategy* specifies the rules for inheritance of available tokens during configuration replacement process.

 .There are four strategies:
+
+[[tokens-inheritance-strategy-reset]]
+RESET::
+Use this mode when you just want to forget about the previous bucket state. RESET just instructs to erase all previous states. Using this strategy equals removing a bucket and creating again with a new configuration.
+
+[[tokens-inheritance-strategy-proportionally]]
 PROPORTIONALLY::
 Makes to copy available tokens proportional to bandwidth capacity by following formula: *newAvailableTokens = availableTokensBeforeReplacement * (newBandwidthCapacity / capacityBeforeReplacement)*
 +
@@ -63,6 +71,7 @@ After replacing this bandwidth by following `Bandwidth.classic(200, Refill.gread
 ** *Example 2:* imagine bandwidth that was created by `Bandwidth.classic(100, Refill.gready(10, Duration.ofMinutes(1)))`. At the moment of config replacement, there were 40 available tokens.
 After replacing this bandwidth by following `Bandwidth.classic(20, Refill.gready(10, Duration.ofMinutes(1)))` 40 available tokens will be multiplied by 0.2(20/100), and after replacement, we will have 8 available tokens.

+[[tokens-inheritance-strategy-as-is]]
 AS_IS::
 Instructs to copy available tokens as is, but with one exclusion: if available tokens are greater than new capacity, available tokens will be decreased to new capacity.
 +
@@ -79,9 +88,7 @@ At the moment of config replacement, it was 40 available tokens.
 +
 +
 After replacing this bandwidth by following `Bandwidth.classic(20, Refill.gready(10, Duration.ofMinutes(1)))` 40 available tokens can not be copied as is because it is greater than new capacity, so available tokens will be reduced to 20.
-RESET::
-Use this mode when you want just to forget about the previous bucket state. RESET just instructs to erase all previous states. Using this strategy equals removing a bucket and creating again with a new configuration.
-
+[[tokens-inheritance-strategy-additive]]
 ADDITIVE::
 Instructs to copy available tokens as is, but with one exclusion: if new bandwidth capacity is greater than old capacity, available tokens will be increased by the difference between the old and the new configuration.
 +
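For readers following the TokensInheritanceStrategy hunks above, the sketch below shows a complete configuration replacement end to end. It is a minimal illustration, not part of the patch: it reuses only calls that appear in these docs (`Bucket.builder()`, `Bandwidth.classic`, `bucket.replaceConfiguration`, `TokensInheritanceStrategy`), while `BucketConfiguration.builder()`, `getAvailableTokens()` and the concrete numbers (capacity 100 shrunk to 20 with 40 tokens left) are illustrative assumptions chosen to mirror the PROPORTIONALLY example; note the library method is `Refill.greedy`, which the prose above spells `gready`.

[source, java]
----
import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.BucketConfiguration;
import io.github.bucket4j.Refill;
import io.github.bucket4j.TokensInheritanceStrategy;

import java.time.Duration;

public class ReplaceConfigurationExample {

    public static void main(String[] args) {
        // Local bucket with a single classic bandwidth: capacity 100, refill 10 tokens per minute.
        Bucket bucket = Bucket.builder()
                .addLimit(Bandwidth.classic(100, Refill.greedy(10, Duration.ofMinutes(1))))
                .build();

        // Consume 60 tokens so that 40 remain, as in the PROPORTIONALLY example above.
        bucket.tryConsume(60);

        // New configuration with capacity 20 for the same bandwidth.
        BucketConfiguration newConfiguration = BucketConfiguration.builder()
                .addLimit(Bandwidth.classic(20, Refill.greedy(10, Duration.ofMinutes(1))))
                .build();

        // PROPORTIONALLY: 40 * (20 / 100) = 8 tokens survive the replacement.
        bucket.replaceConfiguration(newConfiguration, TokensInheritanceStrategy.PROPORTIONALLY);

        // Expected to print a value close to 8 (refill may add tokens as time passes).
        System.out.println("Available tokens after replacement: " + bucket.getAvailableTokens());
    }
}
----

Swapping in `TokensInheritanceStrategy.AS_IS` or `TokensInheritanceStrategy.RESET` in the same sketch illustrates the other behaviours described above: AS_IS clamps the 40 remaining tokens to the new capacity of 20, while RESET discards the previous state entirely and starts over with the new configuration.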
diff --git a/asciidoc/src/main/docs/asciidoc/advanced/listener.adoc b/asciidoc/src/main/docs/asciidoc/advanced/listener.adoc
index 7c7d880c..9bf4d012 100644
--- a/asciidoc/src/main/docs/asciidoc/advanced/listener.adoc
+++ b/asciidoc/src/main/docs/asciidoc/advanced/listener.adoc
@@ -1,4 +1,3 @@
-[[listener]]
 === Listening for bucket events

 ==== What can be listened
diff --git a/asciidoc/src/main/docs/asciidoc/basic/api-reference.adoc b/asciidoc/src/main/docs/asciidoc/basic/api-reference.adoc
index f2efc8e6..742f4e16 100644
--- a/asciidoc/src/main/docs/asciidoc/basic/api-reference.adoc
+++ b/asciidoc/src/main/docs/asciidoc/basic/api-reference.adoc
@@ -308,7 +308,7 @@ See <> section for more details.
  */
 Bucket toListenable(BucketListener listener);
 ----
-See <> section for more details.
+See <> section for more details.

 [[blocking-bucket]]
 ==== io.github.bucket4j.BlockingBucket
@@ -634,4 +634,4 @@ CompletableFuture tryConsume(long numTokens, long maxWaitNanos, Schedul
  *
  */
 CompletableFuture consume(long numTokens, ScheduledExecutorService scheduler);
-----
\ No newline at end of file
+----
diff --git a/asciidoc/src/main/docs/asciidoc/basic/quick-start.adoc b/asciidoc/src/main/docs/asciidoc/basic/quick-start.adoc
index 55d30c9c..7df9e5c5 100644
--- a/asciidoc/src/main/docs/asciidoc/basic/quick-start.adoc
+++ b/asciidoc/src/main/docs/asciidoc/basic/quick-start.adoc
@@ -4,7 +4,7 @@ The Bucket4j is distributed through https://mvnrepository.com/artifact/com.bucke
 You need to add the dependency to your project as described below in order to be able to compile and run examples

 .Maven dependency
-[source, xml, subs=attributes+]
+[,xml,subs=attributes+]
 ----

   com.bucket4j
diff --git a/asciidoc/src/main/docs/asciidoc/distributed/distributed-index.adoc b/asciidoc/src/main/docs/asciidoc/distributed/distributed-index.adoc
index 9b4804f6..449fcf9a 100644
--- a/asciidoc/src/main/docs/asciidoc/distributed/distributed-index.adoc
+++ b/asciidoc/src/main/docs/asciidoc/distributed/distributed-index.adoc
@@ -14,10 +14,10 @@ include::jcache/coherence.adoc[]

 include::redis/redis.adoc[]

-include::jdbc/jdbc-integraions.adoc[]
+include::jdbc/jdbc-integrations.adoc[]

 include::asynchronous.adoc[]

 include::implement-custom-database/concept.adoc[]

-include::distributed-checklist.adoc[]
\ No newline at end of file
+include::distributed-checklist.adoc[]
diff --git a/asciidoc/src/main/docs/asciidoc/distributed/jcache/coherence.adoc b/asciidoc/src/main/docs/asciidoc/distributed/jcache/coherence.adoc
index a253ae8e..b8384563 100644
--- a/asciidoc/src/main/docs/asciidoc/distributed/jcache/coherence.adoc
+++ b/asciidoc/src/main/docs/asciidoc/distributed/jcache/coherence.adoc
@@ -2,7 +2,7 @@
 === Oracle Coherence integration
 ==== Dependencies
 To use ``bucket4j-coherence`` extension you need to add the following dependency:
-[source, xml, subs=attributes+]
+[,xml,subs=attributes+]
 ----

   com.bucket4j
@@ -35,7 +35,7 @@ To let Coherence know about POF serializers you should register three serializer
 ====

 .Example of POF serialization config:
-[source, xml]
+[,xml]
 ----

   com.bucket4j
diff --git a/asciidoc/src/main/docs/asciidoc/distributed/jdbc/jdbc-integraions.adoc b/asciidoc/src/main/docs/asciidoc/distributed/jdbc/jdbc-integrations.adoc
similarity index 95%
rename from asciidoc/src/main/docs/asciidoc/distributed/jdbc/jdbc-integraions.adoc
rename to asciidoc/src/main/docs/asciidoc/distributed/jdbc/jdbc-integrations.adoc
index 823749ab..778d2685 100644
--- a/asciidoc/src/main/docs/asciidoc/distributed/jdbc/jdbc-integraions.adoc
+++ b/asciidoc/src/main/docs/asciidoc/distributed/jdbc/jdbc-integrations.adoc
@@ -6,22 +6,24 @@ General principles to use each JDBC integration:
 * You should create a table, which includes the next required columns: BIGINT as a PRIMARY KEY, BYTEA as a state.
By default, Bucket4j works with the next structure: .PostgreSQL +[,sql] ---- CREATE TABLE IF NOT EXISTS buckets(id BIGINT PRIMARY KEY, state BYTEA); ---- .MySQL +[,sql] ---- CREATE TABLE IF NOT EXISTS buckets(id BIGINT PRIMARY KEY, state BLOB); ---- -[[listener]] -===== Configuring custom settings of SQLProxyManager +==== Configuring custom settings of SQLProxyManager * Each proxy manager takes `SQLProxyConfiguration` to customize work with database * To do that, you should use `SQLProxyConfigurationBuilder`, which includes the next parameters: +[source, java] ---- /** * @param clientSideConfig {@link ClientSideConfig} client-side configuration for proxy-manager. @@ -44,8 +46,7 @@ CREATE TABLE IF NOT EXISTS buckets(id BIGINT PRIMARY KEY, state BLOB); } ---- -[[listener]] -===== Overriding table configuration +==== Overriding table configuration You can override the names of the columns to set your custom name of columns, to do that, you should use `BucketTableSettings` to set into `SQLProxyConfigurationBuilder` of your JDBC implementation. * `SQLProxyConfigurationBuilder` Takes `BucketTableSettings` - is the class to define a configuration of the table to use as a buckets store. By default, under the hood uses `BucketTableSettings.getDefault()` @@ -60,17 +61,17 @@ Parameters: By default, uses: "buckets" as a `tableName`; "id" as a `idName`; "state" as a `stateName` -====== addTableSettings -Takes `BucketTableSettings` - See <>. +===== addTableSettings +Takes `BucketTableSettings` - See <>. -====== addClientSideConfig +===== addClientSideConfig Takes `ClientSideConfig` - is a client-side configuration for proxy-manager. By default, under the hood uses `ClientSideConfig.getDefault()` ==== PostgreSQL integration ===== Dependencies To use Bucket4j extension for PostgreSQL you need to add following dependency: -[source, xml, subs=attributes+] +[,xml,subs=attributes+] ---- com.bucket4j @@ -118,7 +119,8 @@ Within a SERIALIZABLE transaction, however, an error will be thrown if a row to ==== MySQL integration ===== Dependencies To use Bucket4j extension for MySQL you need to add following dependency: -[source, xml, subs=attributes+] + +[,xml,subs=attributes+] ---- com.bucket4j @@ -128,6 +130,7 @@ To use Bucket4j extension for MySQL you need to add following dependency: ---- ===== Example of Bucket instantiation + ---- Long key = 1L; MySQLSelectForUpdateBasedProxyManager proxyManager = new MySQLSelectForUpdateBasedProxyManager(new SQLProxyConfiguration(dataSource)); @@ -135,4 +138,4 @@ To use Bucket4j extension for MySQL you need to add following dependency: .addLimit(Bandwidth.simple(10, Duration.ofSeconds(1))) .build(); BucketProxy bucket = proxyManager.builder().build(key, bucketConfiguration); ----- \ No newline at end of file +---- diff --git a/asciidoc/src/main/docs/asciidoc/distributed/redis/redis.adoc b/asciidoc/src/main/docs/asciidoc/distributed/redis/redis.adoc index e54b69cf..851ce6d0 100644 --- a/asciidoc/src/main/docs/asciidoc/distributed/redis/redis.adoc +++ b/asciidoc/src/main/docs/asciidoc/distributed/redis/redis.adoc @@ -25,7 +25,7 @@ IMPORTANT: For all libraries mentioned above concurrent access to Redis is solve ==== Dependencies To use ``bucket4j-redis`` extension you need to add following dependency: -[source, xml, subs=attributes+] +[,xml,subs=attributes+] ---- com.bucket4j @@ -93,4 +93,4 @@ BucketConfiguration configuration = BucketConfiguration.builder() .addLimit(Bandwidth.simple(1_000, Duration.ofMinutes(1))) .build(); Bucket bucket = proxyManager.builder().build(key, 
configuration);
-----
\ No newline at end of file
+----
diff --git a/asciidoc/src/main/docs/asciidoc/toc.adoc b/asciidoc/src/main/docs/asciidoc/toc.adoc
index f86b5d3e..f27feb52 100644
--- a/asciidoc/src/main/docs/asciidoc/toc.adoc
+++ b/asciidoc/src/main/docs/asciidoc/toc.adoc
@@ -1,5 +1,6 @@
 = Bucket4j {revnumber} Reference
 :front-cover-image: images/white-logo.png
+:source-language: java

 include::about.adoc[]

@@ -7,4 +8,4 @@ include::basic/basic-functionality-index.adoc[]

 include::distributed/distributed-index.adoc[]

-include::advanced/advanced-features-index.adoc[]
\ No newline at end of file
+include::advanced/advanced-features-index.adoc[]
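As a companion to the JDBC integration section touched by this patch, the sketch below shows how the pieces named there (SQLProxyConfigurationBuilder with addTableSettings/addClientSideConfig, BucketTableSettings with its tableName/idName/stateName parameters, and MySQLSelectForUpdateBasedProxyManager) can fit together when the default table layout is overridden. It is a minimal sketch, not part of the patch: the builder entry point, the terminal build(dataSource) call, the BucketTableSettings.customSettings factory, the generic Long key type and the exact package names of the JDBC classes are assumptions that may differ between Bucket4j versions; only addTableSettings/addClientSideConfig, the defaults BucketTableSettings.getDefault()/ClientSideConfig.getDefault() and the proxyManager.builder().build(key, bucketConfiguration) call come from the documentation above.

[source, java]
----
import java.time.Duration;
import javax.sql.DataSource;

import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.BucketConfiguration;
import io.github.bucket4j.distributed.BucketProxy;
import io.github.bucket4j.distributed.proxy.ClientSideConfig;
// Package names of the JDBC-specific classes below are assumptions; check your bucket4j-mysql version.
import io.github.bucket4j.distributed.jdbc.BucketTableSettings;
import io.github.bucket4j.distributed.jdbc.SQLProxyConfiguration;
import io.github.bucket4j.distributed.jdbc.SQLProxyConfigurationBuilder;
import io.github.bucket4j.mysql.MySQLSelectForUpdateBasedProxyManager;

public class JdbcCustomTableExample {

    public static BucketProxy buildBucket(DataSource dataSource) {
        // Override the default "buckets"/"id"/"state" names described in the JDBC section.
        // customSettings(tableName, idName, stateName) is an assumed factory; the docs only
        // name BucketTableSettings.getDefault() explicitly.
        BucketTableSettings tableSettings =
                BucketTableSettings.customSettings("rate_limit_buckets", "bucket_id", "bucket_state");

        // addTableSettings/addClientSideConfig are the parameters documented for SQLProxyConfigurationBuilder;
        // the builder() entry point and the build(dataSource) terminal call are assumptions.
        SQLProxyConfiguration proxyConfiguration = SQLProxyConfigurationBuilder.builder()
                .addTableSettings(tableSettings)
                .addClientSideConfig(ClientSideConfig.getDefault())
                .build(dataSource);

        MySQLSelectForUpdateBasedProxyManager<Long> proxyManager =
                new MySQLSelectForUpdateBasedProxyManager<>(proxyConfiguration);

        // Same limit as the MySQL instantiation example in the section above.
        BucketConfiguration bucketConfiguration = BucketConfiguration.builder()
                .addLimit(Bandwidth.simple(10, Duration.ofSeconds(1)))
                .build();

        Long key = 1L;
        return proxyManager.builder().build(key, bucketConfiguration);
    }
}
----

For PostgreSQL the same shape applies with the PostgreSQL proxy manager and the BYTEA-based table shown at the top of the JDBC section; the table and column names passed to BucketTableSettings must match the DDL that was actually executed.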