
IllegalStateException in StreamProcessor : Cannot complete future, the future is already completed #10240

Closed
deepthidevaki opened this issue Sep 1, 2022 · 17 comments · Fixed by #10289
Assignees
Labels
  • kind/bug: Categorizes an issue or PR as a bug
  • severity/critical: Marks a stop-the-world bug, with a high impact and no existing workaround
  • version:8.1.0: Marks an issue as being completely or in parts released in 8.1.0

Comments

deepthidevaki (Contributor) commented Sep 1, 2022

Describe the bug

java.lang.IllegalStateException: Cannot complete future, the future is already completed exceptionally with 'Expected to claim segment of size 20033584, but can't claim more than 4194304 bytes.' 
	at io.camunda.zeebe.scheduler.future.CompletableActorFuture.completeExceptionally(CompletableActorFuture.java:186) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.future.CompletableActorFuture.completeExceptionally(CompletableActorFuture.java:193) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.retry.AbortableRetryStrategy.run(AbortableRetryStrategy.java:50) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:92) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:106) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:198) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
Caused by: java.lang.IllegalArgumentException: Expected to claim segment of size 20033584, but can't claim more than 4194304 bytes.
	at io.camunda.zeebe.dispatcher.Dispatcher.offer(Dispatcher.java:207) ~[zeebe-dispatcher-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.dispatcher.Dispatcher.claimFragmentBatch(Dispatcher.java:164) ~[zeebe-dispatcher-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.logstreams.impl.log.LogStreamBatchWriterImpl.claimBatchForEvents(LogStreamBatchWriterImpl.java:235) ~[zeebe-logstreams-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.logstreams.impl.log.LogStreamBatchWriterImpl.tryWrite(LogStreamBatchWriterImpl.java:212) ~[zeebe-logstreams-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.streamprocessor.ProcessingScheduleServiceImpl.lambda$toRunnable$4(ProcessingScheduleServiceImpl.java:127) ~[zeebe-workflow-engine-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.retry.ActorRetryMechanism.run(ActorRetryMechanism.java:36) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	at io.camunda.zeebe.scheduler.retry.AbortableRetryStrategy.run(AbortableRetryStrategy.java:45) ~[zeebe-scheduler-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT]
	... 6 more

The StreamProcessor actor failed and the partition became unhealthy.
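For context, the `IllegalArgumentException` in the cause comes from the dispatcher rejecting a batch that exceeds its maximum fragment length (4194304 bytes here). A minimal sketch of that kind of guard, with hypothetical class and method names, not Zeebe's actual `Dispatcher` code:

```java
// Minimal sketch (hypothetical names) of the guard that produces the
// "can't claim more than N bytes" failure seen in the stack trace above.
public class ClaimGuard {
  private final int maxFragmentLength;

  public ClaimGuard(int maxFragmentLength) {
    this.maxFragmentLength = maxFragmentLength;
  }

  /** Returns the claimed position, or throws if the batch can never fit. */
  public long claim(int batchLength) {
    if (batchLength > maxFragmentLength) {
      throw new IllegalArgumentException(
          "Expected to claim segment of size " + batchLength
              + ", but can't claim more than " + maxFragmentLength + " bytes.");
    }
    return 0L; // position of the claimed fragment (stubbed out here)
  }

  public static void main(String[] args) {
    var guard = new ClaimGuard(4 * 1024 * 1024); // 4194304 bytes, as in the log
    try {
      guard.claim(20_033_584); // the oversized batch from the stack trace
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

A ~20MB batch can never fit, so no amount of retrying helps; the retry strategy surfaces the error and the actor fails.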

To Reproduce

Observed this when running an e2e test with a new process added in this PR

Expected behavior

StreamProcessor should not fail.

Log/Stacktrace

Logs

Log explorer

Logs before this error that might be relevant

2022-08-31 16:28:33.521 CEST
Expected to find job with key 6755399441066071, but no job found
2022-08-31 16:28:33.521 CEST
Expected to find job with key 6755399441066104, but no job found
2022-08-31 16:28:33.521 CEST
Expected to find job with key 6755399441067170, but no job found
2022-08-31 16:28:33.521 CEST
Expected to find job with key 6755399441067940, but no job found
2022-08-31 16:31:10.218 CEST
Failed to write command MESSAGE_SUBSCRIPTION CREATE from 0 to logstream
2022-08-31 16:31:10.218 CEST
Failed to write command MESSAGE_SUBSCRIPTION CREATE from 0 to logstream
2022-08-31 16:31:10.219 CEST
Failed to write command MESSAGE_SUBSCRIPTION CREATE from 0 to logstream
2022-08-31 16:31:10.219 CEST
.....
Failed to write command MESSAGE_SUBSCRIPTION CREATE from 0 to logstream
2022-08-31 16:31:40.226 CEST
Failed to write command MESSAGE_SUBSCRIPTION CREATE from 0 to logstream
2022-08-31 16:31:40.226 CEST
Failed to write command MESSAGE_SUBSCRIPTION CREATE from 0 to logstream
2022-08-31 16:31:48.722 CEST
Actor Broker-2-StreamProcessor-3 failed in phase STARTED.
2022-08-31 16:31:48.728 CEST
Discard job io.camunda.zeebe.scheduler.retry.AbortableRetryStrategy$$Lambda$2505/0x00000008016d7530 QUEUED from fastLane of Actor Broker-2-StreamProcessor-3.
2022-08-31 16:31:48.728 CEST
Discard job io.camunda.zeebe.streamprocessor.StreamProcessor$$Lambda$2317/0x0000000801677500 QUEUED from fastLane of Actor Broker-2-StreamProcessor-3.

Environment:

  • OS: camunda cloud
  • Zeebe Version: 8.1.0-SNAPSHOT
deepthidevaki added the kind/bug label Sep 1, 2022
megglos (Contributor) commented Sep 2, 2022

Triage: the StreamProcessor got stuck in an unhealthy state and did not recover.
This seems related to the recent refactoring.

npepinpe (Member) commented Sep 6, 2022

Interesting that the error is that it tried to claim almost 20MB from the dispatcher 🤯 Is that expected from your test?

deepthidevaki (Contributor, author) replied:

> Interesting that the error is that it tried to claim almost 20MB from the dispatcher 🤯 Is that expected from your test?

Not expected, I guess 🤔. Variables are small. It is running workers with the default config (except for the timeout).

npepinpe (Member) commented Sep 6, 2022

Could it be that we have an unexpected loop? Perhaps using `submit` instead of `run` in the retry is causing more issues than expected. I'll have to reproduce it locally to confirm.

deepthidevaki (Contributor, author) commented Sep 6, 2022

It occurred again here. I can keep the cluster running if it is useful for you. Let me know if you need any other help in reproducing it.

java.lang.IllegalStateException: Cannot complete future, the future is already completed  with value true
	at io.camunda.zeebe.scheduler.future.CompletableActorFuture.completeExceptionally(CompletableActorFuture.java:186) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.future.CompletableActorFuture.completeExceptionally(CompletableActorFuture.java:193) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.retry.AbortableRetryStrategy.run(AbortableRetryStrategy.java:50) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:92) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:106) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:198) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
Caused by: java.lang.IllegalStateException: Cannot complete future, the future is already completed  with value true
	at io.camunda.zeebe.scheduler.future.CompletableActorFuture.complete(CompletableActorFuture.java:166) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.retry.ActorRetryMechanism.run(ActorRetryMechanism.java:37) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	at io.camunda.zeebe.scheduler.retry.AbortableRetryStrategy.run(AbortableRetryStrategy.java:45) ~[zeebe-scheduler-8.1.0-alpha5.jar:8.1.0-alpha5]
	... 6 more

npepinpe (Member) commented Sep 6, 2022

I think removing runUntilDone created some unexpected side effects with the current refactoring.

Since we moved from `runUntilDone` to a `run` followed by `submit` calls, it is possible, in theory, for records to be written to the dispatcher out of order (in practice it is hard to say; possibly no failure case triggers this).

Say the processing schedule service wants to write a result, but the first try fails for whatever reason. It then submits a job to retry, and at this point we can continue processing; possibly `readNextRecord` was already enqueued. So we process the next record, then write its results. That write goes through `run` via the `AbortableRetryStrategy`, so the write job is prepended. Since the commands we wrote may have been based on state from before the record was processed, this could be wrong. I think the only way to ensure correct ordering is to use `run` everywhere or `submit` everywhere, but not both.

It's kind of late though, so I'll think on it again in the morning with fresh eyes, maybe I'm missing something. It also doesn't quite explain why the error occurred, though it hints that there are some concurrent issues.
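The ordering problem described above can be modeled with a toy queue (hypothetical names, not the real `ActorControl`): `run` prepends when called from the actor itself, `submit` appends.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the job-queue semantics discussed above (not the real
// ActorControl): run() prepends when invoked from the actor itself,
// submit() appends.
public class ActorQueueModel {
  final Deque<String> jobs = new ArrayDeque<>();

  void run(String job) {
    jobs.addFirst(job); // same-actor run: prepend
  }

  void submit(String job) {
    jobs.addLast(job); // submit: append
  }

  public static void main(String[] args) {
    var actor = new ActorQueueModel();
    // readNextRecord was already enqueued while the first write attempt failed:
    actor.submit("readNextRecord");
    // the failed write is retried via submit, landing behind the read:
    actor.submit("retryWrite#1");
    // processing the next record writes its results via run, which prepends:
    actor.run("writeResults#2");
    // The second record's write now runs before the first record's retried
    // write -- records can hit the dispatcher out of order.
    System.out.println(actor.jobs); // [writeResults#2, readNextRecord, retryWrite#1]
  }
}
```

Mixing the two insertion points is what makes the interleaving possible; using only one of them keeps the relative order of writes stable.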

npepinpe (Member) commented Sep 7, 2022

I think our assumption was that any state machine should still guard against its execution being interleaved, but I don't know if we do that in every case 😄 It seems very brittle without something like Akka's facilities for declaring which messages an FSM handles in which state.

npepinpe (Member) commented Sep 7, 2022

I have a hint that the `ProcessingScheduleServiceImpl` may be an issue. It reuses its own `writeRetryStrategy` and calls `runWithRetry` every time a submitted runnable runs. This means we overwrite the underlying `ActorRetryMechanism` on every `runWithRetry` call. But what if we had already enqueued a retry?

Say we schedule two runnables, both of which want to write some commands, and say they expire/are executed one right after the other, i.e. the actor queue is [runnable1, runnable2]. The first one runs and calls `runWithRetry`, which creates a new future in `AbortableRetryStrategy` and sets it on the `ActorRetryMechanism`. This prepends `AbortableRetryStrategy::run` on the actor via `actor.run`, so the queue is now [AbortableRetryStrategy::run, runnable2]. Let's say the operation fails and we retry, so we submit the retried operation; the queue now looks like [runnable2, retry1] (not exactly, as we actually have two queues, fun times, but you get the idea). runnable2 runs and calls `runWithRetry`, which creates a new future (!) and overwrites it on the `ActorRetryMechanism` (!). Then it calls `actor.run(AbortableRetryStrategy::run)`, so the queue is now [AbortableRetryStrategy::run, retry1], and the first future will never be completed (sad trombone). If the run now succeeds, the future is completed, and then retry1 executes and tries to complete the future again. If the run didn't succeed, it is submitted again and the same thing happens (i.e. the future is "completed" twice).

I think this doesn't happen in the processing state machine because it's a state machine, so it's always in one state and the futures are always completed before the retry strategy is reused.
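A toy reproduction of this overwrite, with hypothetical classes standing in for the `AbortableRetryStrategy` and for `CompletableActorFuture` (which throws on double completion):

```java
// Toy reproduction (hypothetical names) of the race described above: a retry
// strategy that reuses a single "current future" field across runWithRetry
// calls, as the real AbortableRetryStrategy does via its ActorRetryMechanism.
public class ReusedFutureSketch {
  /** Minimal stand-in for CompletableActorFuture: throws on double completion. */
  static final class OneShotFuture {
    private boolean done;

    boolean isDone() {
      return done;
    }

    void complete() {
      if (done) {
        throw new IllegalStateException(
            "Cannot complete future, the future is already completed");
      }
      done = true;
    }
  }

  OneShotFuture currentFuture; // shared mutable state reused across calls -- the bug

  OneShotFuture runWithRetry() {
    currentFuture = new OneShotFuture(); // overwrites any in-flight future!
    return currentFuture;
  }

  public static void main(String[] args) {
    var strategy = new ReusedFutureSketch();
    var first = strategy.runWithRetry(); // call 1: write fails, a retry is submitted
    strategy.runWithRetry();             // call 2 runs first and overwrites the field
    strategy.currentFuture.complete();   // call 2's write succeeds
    try {
      strategy.currentFuture.complete(); // call 1's retry double-completes it
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage()); // matches the exception in this issue
    }
    System.out.println("first future completed: " + first.isDone()); // never completed
  }
}
```

The first caller's future is silently orphaned while the shared field is completed twice, which is exactly the `IllegalStateException` in the reports above.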

deepthidevaki (Contributor, author) replied:

Makes sense.

As I understood it, this happens because we submit for the retry. If the operation succeeds on the first attempt, this is not a problem, because the order is preserved due to `run`, so the retryMechanism and the currentFuture won't be overwritten. Is that correct?

npepinpe (Member) commented Sep 7, 2022

Correct. I think if we switch to `run` instead of `submit` it will be mostly fine, but as this is quite hard to test for (and the class itself is hard to test), I can't fully guarantee it without walking through more hand-made scenarios 😞

Maybe we should think about property-based testing this as well.

So if we replace submit with run, then we are always prepending the next retry to the fast lane queue, since we're always doing this from the same actor. And since it's from the same actor, no other calls can prepend a job before that job is executed, because:

  • call will submit a job
  • subscriptions (incl. timers) will also submit jobs to the fast lane (so append)
  • run will only prepend a job if the run call is being done by the current actor. If the run call is done outside of the actor, then it is appended.

So I would suggest as a quick fix to go back to using run instead of submit for places where we were using runUntilDone.

npepinpe (Member) commented Sep 7, 2022

Now I just need to figure out how to write a regression test for this before writing the fix 🤔

deepthidevaki (Contributor, author) commented Sep 7, 2022

I was also wondering why we reuse the currentFuture and the retryMechanism in `AbortableRetryStrategy`. Can we remove that? Each retry task would then keep track of its own future and the task to rerun, so concurrent retryable tasks would not be a problem.
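A rough sketch of that suggestion (hypothetical names, using a plain `CompletableFuture` and inline retries instead of the actor machinery): each `runWithRetry` call owns its future and its operation, so nothing is shared between concurrent retryable tasks.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BooleanSupplier;

// Sketch of per-call retry state (hypothetical, not the merged fix): each
// runWithRetry call owns its future and its operation, so concurrent
// retryable tasks cannot clobber each other's state.
public class PerCallRetry {
  CompletableFuture<Boolean> runWithRetry(BooleanSupplier operation) {
    var future = new CompletableFuture<Boolean>();
    attempt(operation, future);
    return future;
  }

  private void attempt(BooleanSupplier operation, CompletableFuture<Boolean> future) {
    if (operation.getAsBoolean()) {
      future.complete(true);
    } else {
      // The real actor would re-enqueue this via actor.run(...) to preserve
      // ordering; for the sketch we simply retry inline.
      attempt(operation, future);
    }
  }

  public static void main(String[] args) {
    var retry = new PerCallRetry();
    int[] tries = {0};
    var future = retry.runWithRetry(() -> ++tries[0] >= 3); // succeeds on the 3rd try
    System.out.println("attempts=" + tries[0] + ", done=" + future.isDone());
  }
}
```

Since the future and operation are captured per call rather than stored on the strategy, a second `runWithRetry` cannot overwrite a still-pending first one.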

npepinpe (Member) commented Sep 7, 2022

That's a good idea, but I'm wondering if that's enough (without reverting submit back to run) when it comes to side effects, since now the records (from the different `runWithRetry` calls) may be written in different orders. Is that... safe?

Another question: in the `ProcessingScheduleService`, shouldn't we reschedule the next run only once the write future is completed? Right now we reschedule it after the task runs. If the write needs to be retried, we may schedule the next iteration before the write has succeeded, and if it takes multiple attempts, the next task could well be scheduled in the meantime. It's probably safe at the moment, but only because of the nature of the scheduled tasks...?

deepthidevaki (Contributor, author) replied:

> That's a good idea, but I'm wondering if that's enough (without reverting submit to run) when it comes to side effects, since now the records (from the different runWithRetry calls) may be written in different orders. Is that... safe?

I'm not sure about the expected semantics for side effects.

> Another question, in the ProcessingScheduleService, shouldn't we reschedule the next run only when the write future is completed?

I don't understand. Rescheduling is done by the `AbortableRetryStrategy`, right? The `ProcessingScheduleService` is only waiting for the future so it can log in case it failed, and the write is only retried after it failed.

npepinpe (Member) commented Sep 7, 2022

Yes, but the scheduling service reschedules the original timer-based task immediately, before the write is retried. The next timer can then call `runWithRetry` again. So, for example, the message time-to-live checker might run, try to write some commands, and be retried, and then the next time-to-live checker run executes and writes more commands.

I think that's generally OK-ish, because at the moment we only write commands, which may simply be rejected; but if we ever wrote state updates, it would be a big problem, I think.
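A sketch of the safer ordering discussed here (hypothetical, not the service's actual code): chain the next `runAtFixedRate` iteration on the write future instead of rescheduling right after the task body runs, so a slow (retried) write blocks the next iteration.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Hypothetical sketch: only re-arm the next iteration once the write future
// (including any retries) has completed, so iterations cannot overlap.
public class RescheduleAfterWrite {
  static void runAtFixedRateSafely(Supplier<CompletableFuture<Void>> task, int remainingRuns) {
    if (remainingRuns == 0) {
      return;
    }
    // In the real service this would re-arm a timer; here we simply recurse
    // once the returned write future has completed.
    task.get().thenRun(() -> runAtFixedRateSafely(task, remainingRuns - 1));
  }

  public static void main(String[] args) {
    int[] runs = {0};
    runAtFixedRateSafely(
        () -> {
          runs[0]++;
          return CompletableFuture.completedFuture(null); // the "write" succeeded
        },
        3);
    System.out.println("runs=" + runs[0]); // runs=3
  }
}
```

With this shape, a write that needs several retry attempts simply delays the next checker run rather than racing with it.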

deepthidevaki (Contributor, author) replied:

OK, this is for `runAtFixedRate`. Yes, it is possible.

npepinpe added the severity/high label Sep 7, 2022
Zelldon mentioned this issue Sep 7, 2022
npepinpe added the severity/critical label and removed the severity/high label Sep 8, 2022
npepinpe (Member) commented Sep 8, 2022

Follow up issue: #10306

zeebe-bors-camunda bot added a commit that referenced this issue Sep 12, 2022
10289: Ensure retries are not interleaved even on multiple sequential calls r=npepinpe a=npepinpe

## Description

By using `ActorControl#submit` in some of the retry strategies, we can create race conditions if a retry strategy is reused. Since the initial call uses `run` to prepend a retry attempt, while further retries use `submit`, it is possible for one run to retry (thus submitting the retry job to the end of the queue) and for the next call to `runWithRetry` to overwrite its state, causing issues when completing the future (as well as potential shared state between the operations).

Additionally, this PR fixes an issue where on retry, we were not resetting the writer, causing the same command to be written multiple times.

There is a regression test added which isn't perfect, and I'd like some suggestions on how to improve it. The integration test added to `ProcessingScheduleServiceTest` is not amazing and likely to be flaky, as it's hard to write controlled tests with our timers. Suggestions are welcome 👍

## Related issues

closes #10240 
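A minimal illustration of the writer-reset part of the fix (hypothetical names, not the actual `LogStreamBatchWriter` code): if the batch writer is not reset before a retry, the commands appended by the failed attempt are written again on the next one.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (hypothetical names) of the second bug fixed in the PR: without
// resetting the batch writer before each attempt, a retried write flushes
// the same command multiple times.
public class WriterResetSketch {
  static final class BatchWriter {
    final List<String> pending = new ArrayList<>();

    void append(String command) {
      pending.add(command);
    }

    void reset() {
      pending.clear();
    }
  }

  static List<String> writeWithRetry(BatchWriter writer, int failingAttempts) {
    for (int attempt = 0; ; attempt++) {
      writer.reset(); // the fix: start each attempt from a clean batch
      writer.append("MESSAGE_SUBSCRIPTION CREATE");
      if (attempt >= failingAttempts) {
        return List.copyOf(writer.pending); // tryWrite succeeded, batch flushed
      }
      // otherwise: tryWrite failed, loop and retry
    }
  }

  public static void main(String[] args) {
    var written = writeWithRetry(new BatchWriter(), 2); // fails twice first
    System.out.println(written); // one command, not three
  }
}
```

Without the `reset()` call, each failed attempt would leave its command in the batch and the successful flush would contain duplicates.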



zeebe-bors-camunda bot added a commit that referenced this issue Sep 12, 2022
10289: Ensure retries are not interleaved even on multiple sequential calls r=npepinpe a=npepinpe

## Description

By using `ActorControl#submit` in some of the retry strategies, we can create race conditions if the retry strategy is reused. Since the initial call uses run to prepend a retry attempt, and further retries use submit, it's possible for one run to retry (thus submitting the retry job to the end of the queue) and the next call to `runWithRetry` cause its state to be overwritten, causing issues when it comes to completing the future (as well as potential shared state by the operations).

Additionally, this PR fixes an issue where on retry, we were not resetting the writer, causing the same command to be written multiple times.

There is a regression test added which isn't perfect, and I'd like some suggestions on how to improve it. The integration test added to the `ProcessingScheduleServiceTest` is not amazing and likely to flaky, as it's hard to write controlled tests with our timers. Suggestions are welcomed 👍 

## Related issues

closes #10240 



Co-authored-by: Nicolas Pepin-Perreault <nicolas.pepin-perreault@camunda.com>
zeebe-bors-camunda bot added a commit that referenced this issue Sep 12, 2022
10289: Ensure retries are not interleaved even on multiple sequential calls r=npepinpe a=npepinpe

## Description

By using `ActorControl#submit` in some of the retry strategies, we can create race conditions if the retry strategy is reused. Since the initial call uses run to prepend a retry attempt, and further retries use submit, it's possible for one run to retry (thus submitting the retry job to the end of the queue) and the next call to `runWithRetry` cause its state to be overwritten, causing issues when it comes to completing the future (as well as potential shared state by the operations).

Additionally, this PR fixes an issue where on retry, we were not resetting the writer, causing the same command to be written multiple times.

There is a regression test added which isn't perfect, and I'd like some suggestions on how to improve it. The integration test added to the `ProcessingScheduleServiceTest` is not amazing and likely to flaky, as it's hard to write controlled tests with our timers. Suggestions are welcomed 👍 

## Related issues

closes #10240 



10324: deps(maven): bump software.amazon.awssdk:bom from 2.17.269 to 2.17.271 r=npepinpe a=dependabot[bot]

Bumps [software.amazon.awssdk:bom](https://github.com/aws/aws-sdk-java-v2) from 2.17.269 to 2.17.271.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/aws/aws-sdk-java-v2/blob/master/CHANGELOG.md">software.amazon.awssdk:bom's changelog</a>.</em></p>
<blockquote>
<h1><strong>2.17.271</strong> <strong>2022-09-09</strong></h1>
<h2><strong>AWS CloudTrail</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release adds CloudTrail getChannel and listChannels APIs to allow customer to view the ServiceLinkedChannel configurations.</li>
</ul>
</li>
</ul>
<h2><strong>AWS Performance Insights</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Increases the maximum values of two RDS Performance Insights APIs. The maximum value of the Limit parameter of DimensionGroup is 25. The MaxResult maximum is now 25 for the following APIs: DescribeDimensionKeys, GetResourceMetrics, ListAvailableResourceDimensions, and ListAvailableResourceMetrics.</li>
</ul>
</li>
</ul>
<h2><strong>AWS SDK for Java v2</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Updated service endpoint metadata.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Lex Model Building V2</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release is for supporting Composite Slot Type feature in AWS Lex V2. Composite Slot Type will help developer to logically group coherent slots and maintain their inter-relationships in runtime conversation.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Lex Runtime V2</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release is for supporting Composite Slot Type feature in AWS Lex V2. Composite Slot Type will help developer to logically group coherent slots and maintain their inter-relationships in runtime conversation.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Redshift</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release updates documentation for AQUA features and other description updates.</li>
</ul>
</li>
</ul>
<h1><strong>2.17.270</strong> <strong>2022-09-08</strong></h1>
<h2><strong>AWS Common Runtime HTTP Client</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Adds support for Https proxy system properties: host, port, user, password</li>
</ul>
</li>
</ul>
<h2><strong>AWS Elemental MediaLive</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This change exposes API settings which allow Dolby Atmos and Dolby Vision to be used when running a channel using Elemental Media Live</li>
</ul>
</li>
</ul>
<h2><strong>AWS SDK for Java v2</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Updated service endpoint metadata.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon EMR Containers</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>EMR on EKS now allows running Spark SQL using the newly introduced Spark SQL Job Driver in the Start Job Run API</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Elastic Compute Cloud</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release adds support to send VPC Flow Logs to kinesis-data-firehose as new destination type</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Lookout for Metrics</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Release dimension value filtering feature to allow customers to define dimension filters for including only a subset of their dataset to be used by LookoutMetrics.</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/0c8422ebc6449e1e691656d7291da77d6011649d"><code>0c8422e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/aws/aws-sdk-java-v2/issues/2142">#2142</a> from aws/staging/bea6ab8b-b330-4cc1-8ca8-94bfb1689861</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/0f4f98929600e72eb967627794e35e96771e2afe"><code>0f4f989</code></a> Release 2.17.271. Updated CHANGELOG.md, README.md and all pom.xml.</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/8d63789be9dfd364fefd00b5854f06c485e2b180"><code>8d63789</code></a> Updated endpoints.json.</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/6351b993b38a7d07ce10173aa5aaac81ca2ea975"><code>6351b99</code></a> Amazon Lex Model Building V2 Update: This release is for supporting Composite...</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/583f5018daa6c105350297b3c071bcbe76e2e940"><code>583f501</code></a> Amazon Redshift Update: This release updates documentation for AQUA features ...</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/3404db0de6a9f1dba7f021f6d23850f17d61b284"><code>3404db0</code></a> AWS Performance Insights Update: Increases the maximum values of two RDS Perf...</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/7ae99600853441be20fd3a1ebce4c62f197c66aa"><code>7ae9960</code></a> AWS CloudTrail Update: This release adds CloudTrail getChannel and listChanne...</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/b0edee8185b2a07916326fc6c32eef02ad1698c5"><code>b0edee8</code></a> Amazon Lex Runtime V2 Update: This release is for supporting Composite Slot T...</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/e377518b08ccc2cd9cc32be74345064b2fadee64"><code>e377518</code></a> Update LaunchChangelog.md (<a href="https://github-redirect.dependabot.com/aws/aws-sdk-java-v2/issues/3417">#3417</a>)</li>
<li><a href="https://github.com/aws/aws-sdk-java-v2/commit/3e1f08ad9562e3203f904fe6f51f7fb1d2878953"><code>3e1f08a</code></a> Update to next snapshot version: 2.17.271-SNAPSHOT</li>
<li>Additional commits viewable in <a href="https://github.com/aws/aws-sdk-java-v2/compare/2.17.269...2.17.271">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=software.amazon.awssdk:bom&package-manager=maven&previous-version=2.17.269&new-version=2.17.271)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting ``@dependabot` rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

10325: deps(maven): bump version.micrometer from 1.9.3 to 1.9.4 r=npepinpe a=dependabot[bot]

Bumps `version.micrometer` from 1.9.3 to 1.9.4.
Updates `micrometer-core` from 1.9.3 to 1.9.4
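As a sketch of what this bump amounts to: since both artifacts are driven by the shared `version.micrometer` property, the change is likely a one-line edit to that property in the project's `pom.xml` (the exact location of the property is an assumption about this repository's build layout):

```xml
<!-- Hypothetical excerpt from the root pom.xml: bumping the shared
     property moves both micrometer-core and micrometer-registry-prometheus
     from 1.9.3 to 1.9.4 in one place. -->
<properties>
  <version.micrometer>1.9.4</version.micrometer>
</properties>
```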
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/micrometer-metrics/micrometer/releases">micrometer-core's releases</a>.</em></p>
<blockquote>
<h2>1.9.4</h2>
<h2>:star: New Features</h2>
<ul>
<li>HTTP server instrumentation TCK <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/pull/3379">#3379</a></li>
</ul>
<h2>:lady_beetle: Bug Fixes</h2>
<ul>
<li>system.cpu.usage missing with OpenJ9 0.33.0 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3349">#3349</a></li>
<li>Uri tag replaced with REDIRECTION on all HTTP redirect responses with Jersey server <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3327">#3327</a></li>
</ul>
<h2>:hammer: Dependency Upgrades</h2>
<ul>
<li>Upgrade to signalfx-java 1.0.23 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3407">#3407</a></li>
<li>Upgrade to aws-java-sdk-cloudwatch 1.12.300 and software.amazon.awssdk:cloudwatch 2.17.271 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3406">#3406</a></li>
<li>Upgrade to Reactor 2020.0.22 and netty 4.1.81 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3405">#3405</a></li>
<li>Upgrade to Test Retry Gradle Plugin 1.4.1 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/pull/3380">#3380</a></li>
<li>Bump com.gradle.enterprise from 3.10.3 to 3.11.1 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/pull/3353">#3353</a></li>
</ul>
<h2>:heart: Contributors</h2>
<p>We'd like to thank all the contributors who worked on this release!</p>
<ul>
<li><a href="https://github.com/izeye"><code>@izeye</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/aa5be1ef19281aa83df19d4242803e5e2206640c"><code>aa5be1e</code></a> Remove conditional check for disabling japicmp in otlp</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/134dca6c09d3fba691c68ccd0d3fb9a4ca6cea2a"><code>134dca6</code></a> Merge branch '1.8.x' into 1.9.x</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/bd470ce080153c474fde9fb5d559bcdc56589f48"><code>bd470ce</code></a> HTTP server instrumentation TCK (<a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3379">#3379</a>)</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/277c8dd1acac0a91010505ca194c7514bf304395"><code>277c8dd</code></a> Merge branch '1.8.x' into 1.9.x</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/f89e67c83db62f436d48e69db82ce61c0c527e9c"><code>f89e67c</code></a> Upgrade to signalfx-java 1.0.23</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/60412c4b838e96ae2c7b2e4dd35b323fcb9ce508"><code>60412c4</code></a> Upgrade to aws-java-sdk-cloudwatch 1.12.300 and software.amazon.awssdk:cloudw...</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/c18a194c4292d3e79206f89ac519dbcad5e33db8"><code>c18a194</code></a> Upgrade to Reactor 2020.0.22 and netty 4.1.81</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/20c423caf98fef3763af88df2811026a2d8dd92a"><code>20c423c</code></a> Enable Gradle's stable configuration cache feature flag (<a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3390">#3390</a>)</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/2a497ab93539101abd5499c841fed5b7891ac86b"><code>2a497ab</code></a> japicmp for 1.9.x branch</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/71bad060f23170259ece454304a5b9b8448372a9"><code>71bad06</code></a> Merge branch '1.8.x' into 1.9.x</li>
<li>Additional commits viewable in <a href="https://github.com/micrometer-metrics/micrometer/compare/v1.9.3...v1.9.4">compare view</a></li>
</ul>
</details>
<br />

Updates `micrometer-registry-prometheus` from 1.9.3 to 1.9.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/micrometer-metrics/micrometer/releases">micrometer-registry-prometheus's releases</a>.</em></p>
<blockquote>
<h2>1.9.4</h2>
<h2>:star: New Features</h2>
<ul>
<li>HTTP server instrumentation TCK <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/pull/3379">#3379</a></li>
</ul>
<h2>:lady_beetle: Bug Fixes</h2>
<ul>
<li>system.cpu.usage missing with OpenJ9 0.33.0 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3349">#3349</a></li>
<li>Uri tag replaced with REDIRECTION on all HTTP redirect responses with Jersey server <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3327">#3327</a></li>
</ul>
<h2>:hammer: Dependency Upgrades</h2>
<ul>
<li>Upgrade to signalfx-java 1.0.23 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3407">#3407</a></li>
<li>Upgrade to aws-java-sdk-cloudwatch 1.12.300 and software.amazon.awssdk:cloudwatch 2.17.271 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3406">#3406</a></li>
<li>Upgrade to Reactor 2020.0.22 and netty 4.1.81 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3405">#3405</a></li>
<li>Upgrade to Test Retry Gradle Plugin 1.4.1 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/pull/3380">#3380</a></li>
<li>Bump com.gradle.enterprise from 3.10.3 to 3.11.1 <a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/pull/3353">#3353</a></li>
</ul>
<h2>:heart: Contributors</h2>
<p>We'd like to thank all the contributors who worked on this release!</p>
<ul>
<li><a href="https://github.com/izeye"><code>@izeye</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/aa5be1ef19281aa83df19d4242803e5e2206640c"><code>aa5be1e</code></a> Remove conditional check for disabling japicmp in otlp</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/134dca6c09d3fba691c68ccd0d3fb9a4ca6cea2a"><code>134dca6</code></a> Merge branch '1.8.x' into 1.9.x</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/bd470ce080153c474fde9fb5d559bcdc56589f48"><code>bd470ce</code></a> HTTP server instrumentation TCK (<a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3379">#3379</a>)</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/277c8dd1acac0a91010505ca194c7514bf304395"><code>277c8dd</code></a> Merge branch '1.8.x' into 1.9.x</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/f89e67c83db62f436d48e69db82ce61c0c527e9c"><code>f89e67c</code></a> Upgrade to signalfx-java 1.0.23</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/60412c4b838e96ae2c7b2e4dd35b323fcb9ce508"><code>60412c4</code></a> Upgrade to aws-java-sdk-cloudwatch 1.12.300 and software.amazon.awssdk:cloudw...</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/c18a194c4292d3e79206f89ac519dbcad5e33db8"><code>c18a194</code></a> Upgrade to Reactor 2020.0.22 and netty 4.1.81</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/20c423caf98fef3763af88df2811026a2d8dd92a"><code>20c423c</code></a> Enable Gradle's stable configuration cache feature flag (<a href="https://github-redirect.dependabot.com/micrometer-metrics/micrometer/issues/3390">#3390</a>)</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/2a497ab93539101abd5499c841fed5b7891ac86b"><code>2a497ab</code></a> japicmp for 1.9.x branch</li>
<li><a href="https://github.com/micrometer-metrics/micrometer/commit/71bad060f23170259ece454304a5b9b8448372a9"><code>71bad06</code></a> Merge branch '1.8.x' into 1.9.x</li>
<li>Additional commits viewable in <a href="https://github.com/micrometer-metrics/micrometer/compare/v1.9.3...v1.9.4">compare view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

10326: deps(maven): bump aws-java-sdk-core from 1.12.298 to 1.12.300 r=npepinpe a=dependabot[bot]

Bumps [aws-java-sdk-core](https://github.com/aws/aws-sdk-java) from 1.12.298 to 1.12.300.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/aws/aws-sdk-java/blob/master/CHANGELOG.md">aws-java-sdk-core's changelog</a>.</em></p>
<blockquote>
<h1><strong>1.12.300</strong> <strong>2022-09-09</strong></h1>
<h2><strong>AWS CloudTrail</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release adds CloudTrail getChannel and listChannels APIs to allow customer to view the ServiceLinkedChannel configurations.</li>
</ul>
</li>
</ul>
<h2><strong>AWS Performance Insights</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Increases the maximum values of two RDS Performance Insights APIs. The maximum value of the Limit parameter of DimensionGroup is 25. The MaxResult maximum is now 25 for the following APIs: DescribeDimensionKeys, GetResourceMetrics, ListAvailableResourceDimensions, and ListAvailableResourceMetrics.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Lex Model Building V2</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release is for supporting Composite Slot Type feature in AWS Lex V2. Composite Slot Type will help developer to logically group coherent slots and maintain their inter-relationships in runtime conversation.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Lex Runtime V2</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release is for supporting Composite Slot Type feature in AWS Lex V2. Composite Slot Type will help developer to logically group coherent slots and maintain their inter-relationships in runtime conversation.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Redshift</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release updates documentation for AQUA features and other description updates.</li>
</ul>
</li>
</ul>
<h1><strong>1.12.299</strong> <strong>2022-09-08</strong></h1>
<h2><strong>AWS Elemental MediaLive</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This change exposes API settings which allow Dolby Atmos and Dolby Vision to be used when running a channel using Elemental Media Live</li>
</ul>
</li>
</ul>
<h2><strong>AWS SDK for Java</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Adding support for me-central-1 region</li>
</ul>
</li>
</ul>
<h2><strong>Amazon EMR Containers</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>EMR on EKS now allows running Spark SQL using the newly introduced Spark SQL Job Driver in the Start Job Run API</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Elastic Compute Cloud</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release adds support to send VPC Flow Logs to kinesis-data-firehose as new destination type</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Lookout for Metrics</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Release dimension value filtering feature to allow customers to define dimension filters for including only a subset of their dataset to be used by LookoutMetrics.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon Route 53</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>Amazon Route 53 now supports the Middle East (UAE) Region (me-central-1) for latency records, geoproximity records, and private DNS for Amazon VPCs in that region.</li>
</ul>
</li>
</ul>
<h2><strong>Amazon SageMaker Service</strong></h2>
<ul>
<li>
<h3>Features</h3>
<ul>
<li>This release adds Mode to AutoMLJobConfig.</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/aws/aws-sdk-java/commit/874a8771641b0d825e5f2fb6cd806680f22028e6"><code>874a877</code></a> AWS SDK for Java 1.12.300</li>
<li><a href="https://github.com/aws/aws-sdk-java/commit/48bc7cfbdc9b0806cee3b4a00d72924186dbe70d"><code>48bc7cf</code></a> Update GitHub version number to 1.12.300-SNAPSHOT</li>
<li><a href="https://github.com/aws/aws-sdk-java/commit/b08cda01a176eed97fd6c7823c35747736f95f25"><code>b08cda0</code></a> AWS SDK for Java 1.12.299</li>
<li><a href="https://github.com/aws/aws-sdk-java/commit/ecfdc1f5a9d9e9984bb13cecd3e88ca401640e91"><code>ecfdc1f</code></a> Update GitHub version number to 1.12.299-SNAPSHOT</li>
<li>See full diff in <a href="https://github.com/aws/aws-sdk-java/compare/1.12.298...1.12.300">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.amazonaws:aws-java-sdk-core&package-manager=maven&previous-version=1.12.298&new-version=1.12.300)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

Co-authored-by: Nicolas Pepin-Perreault <nicolas.pepin-perreault@camunda.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
@Zelldon Zelldon added the version:8.1.0 Marks an issue as being completely or in parts released in 8.1.0 label Oct 4, 2022