ZeebePartition caught in a loop transition to inactive after a dead partition #9924

Closed
deepthidevaki opened this issue Jul 28, 2022 · 2 comments · Fixed by #10776
Assignees
deepthidevaki
Labels
kind/bug: Categorizes an issue or PR as a bug
severity/low: Marks a bug as having little to no noticeable impact for the user
version:8.1.3: Marks an issue as being completely or in parts released in 8.1.3
version:8.2.0-alpha1: Marks an issue as being completely or in parts released in 8.2.0-alpha1
version:8.2.0: Marks an issue as being completely or in parts released in 8.2.0

Comments

@deepthidevaki
Contributor

Describe the bug

Due to #9115, the partition is marked as dead. Since this is an unrecoverable error, it transitions to inactive. The transition to inactive somehow triggers the failure listener again, and a new transition is requested.
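For illustration, a minimal sketch of the suspected loop, assuming hypothetical class and method names that do not correspond to the actual Zeebe implementation:

```java
// Hypothetical sketch; names do not match the actual Zeebe classes.
interface PartitionTransitionControl {
  void toInactive();
}

final class PartitionFailureListener {
  private final PartitionTransitionControl transitions;

  PartitionFailureListener(final PartitionTransitionControl transitions) {
    this.transitions = transitions;
  }

  // Called whenever a partition component reports an unrecoverable failure.
  void onUnrecoverableFailure() {
    // The partition is marked as dead and a transition to INACTIVE is requested.
    // If the transition itself notifies the failure listener again (for example
    // because the new role re-registers the listener against the still-dead
    // component), this method fires once more and INACTIVE is requested in a loop.
    transitions.toInactive();
  }
}
```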

See the following repeated log

2022-07-28 16:20:27.724 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:21:27.844 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:22:27.866 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:23:28.048 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:24:28.352 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:25:28.613 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:26:28.634 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:27:28.731 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:28:29.242 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:29:30.363 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:30:30.373 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:31:30.375 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:32:30.903 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:33:31.575 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:34:32.325 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:35:32.414 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:36:32.803 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:37:33.645 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:38:34.538 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:39:34.984 CEST
Transition to INACTIVE on term 407 requested.
2022-07-28 16:40:35.415 CEST
Transition to INACTIVE on term 407 requested.

To Reproduce

Trigger #9115

Expected behavior

  • Mark the partition as dead once
  • Transition to inactive once and stop
@deepthidevaki deepthidevaki added kind/bug Categorizes an issue or PR as a bug severity/low Marks a bug as having little to no noticeable impact for the user labels Jul 28, 2022
@deepthidevaki deepthidevaki self-assigned this Oct 14, 2022
@deepthidevaki
Contributor Author

After an initial investigation, it looks like this is caused by transitions initiated by raft as part of cluster configuration operations.


When we force raft to transition to inactive, that change is not committed to the configuration, so the other nodes still have the member marked as active. This can cause it to transition back to active when it receives a new update from the leader.
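A rough sketch of the suspected interaction, assuming hypothetical types (the real raft implementation differs): the forced INACTIVE transition is local only, so the next configuration update from the leader, which still lists the member as ACTIVE, flips it back.

```java
// Hypothetical sketch of the suspected interaction; not the actual raft code.
enum MemberType { ACTIVE, INACTIVE }

final class LocalMemberState {
  private MemberType type = MemberType.ACTIVE;

  // Forced locally after an unrecoverable error. The change is never committed
  // to the cluster configuration, so other members still see this node as ACTIVE.
  void forceInactive() {
    type = MemberType.INACTIVE;
  }

  // On the next configuration update from the leader, the committed configuration
  // (which still lists the member as ACTIVE) overrides the local state, and the
  // node transitions back to an active role, restarting the failure loop.
  void onConfigurationFromLeader(final MemberType committedType) {
    type = committedType;
  }

  MemberType type() {
    return type;
  }
}
```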

@deepthidevaki
Contributor Author

Actually, I am not able to reproduce this exact behavior in 8.2.0-SNAPSHOT. After inserting a deliberate ZeebeDbInconsistentException in the StreamProcessor (see the sketch after the list below), what I observe is the following:

1. StreamProcessor in the leader fails with the unrecoverable exception -> detected as dead
2. Transition to inactive
3. Transition to follower (but health status is dead)
4. This node becomes leader again (because the new leader also fails -> transition to inactive)
5. StreamProcessor fails again
6. Transition to inactive
7. Transition to follower
8. Becomes leader again
9. StreamProcessor fails again -> remains the leader
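For context, the reproduction was done by making the StreamProcessor throw an unrecoverable exception on every record. A minimal injection sketch, assuming a hypothetical processor class; only the exception type and message are taken from the logs below:

```java
import io.camunda.zeebe.db.ZeebeDbInconsistentException;

// Hypothetical processor used only to force the unrecoverable-error path.
final class AlwaysFailingProcessor {
  void process(final Object record) {
    // ZeebeDbInconsistentException is treated as unrecoverable, so the health
    // monitor marks the StreamProcessor, and with it the partition, as DEAD.
    throw new ZeebeDbInconsistentException("Always fail");
  }
}
```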

2022-10-20 14:17:04.939 CEST
zeebe
Actor Broker-0-StreamProcessor-1 failed in phase STARTED.
2022-10-20 14:17:04.944 CEST
zeebe
Broker-0-StreamProcessor-1 failed, marking it as dead: Broker-0-StreamProcessor-1{status=DEAD, issue=HealthIssue[message=null, throwable=io.camunda.zeebe.db.ZeebeDbInconsistentException: Always fail, cause=null]}
 
2022-10-20 14:17:04.948 CEST
zeebe
Transition to INACTIVE on term 1 requested. 

2022-10-20 14:17:04.949 CEST
zeebe
RaftServer{raft-partition-partition-1} - Transitioning to INACTIVE
2022-10-20 14:17:04.949 CEST
zeebe
Prepare transition from LEADER on term 1 to INACTIVE

2022-10-20 14:17:04.952 CEST
zeebe
Transition to INACTIVE on term 1 requested.
2022-10-20 14:17:04.953 CEST
zeebe
Partition role transitioning from LEADER to INACTIVE in term 1
2022-10-20 14:17:04.953 CEST
zeebe
Received cancel signal for transition to INACTIVE on term 1

2022-10-20 14:17:04.978 CEST
zeebe
Transition to INACTIVE on term 1 completed

2022-10-20 14:17:09.784 CEST
zeebe
RaftServer{raft-partition-partition-1} - Found leader 1
2022-10-20 14:17:09.785 CEST
zeebe
RaftClusterContext committing - type = ACTIVE
2022-10-20 14:17:09.785 CEST
zeebe
RaftServer{raft-partition-partition-1} - Transitioning to FOLLOWER
2022-10-20 14:17:09.786 CEST
zeebe
Transition to FOLLOWER on term 2 requested.
2022-10-20 14:17:09.786 CEST
zeebe
Partition role transitioning from INACTIVE to FOLLOWER in term 2
2022-10-20 14:17:09.787 CEST
zeebe
Prepare transition from INACTIVE on term 1 to FOLLOWER

2022-10-20 14:17:09.925 CEST
zeebe
Transition to FOLLOWER on term 2 completed

2022-10-20 14:17:12.784 CEST
zeebe
RaftServer{raft-partition-partition-1} - Transitioning to CANDIDATE

2022-10-20 14:17:12.814 CEST
zeebe
Transition to LEADER on term 3 requested.
2022-10-20 14:17:12.889 CEST
zeebe
Transition to LEADER on term 3 completed

2022-10-20 14:17:12.940 CEST
zeebe
Actor Broker-0-StreamProcessor-1 failed in phase STARTED.

2022-10-20 14:17:12.944 CEST
zeebe
Transition to INACTIVE on term 3 requested.
2022-10-20 14:17:14.418 CEST
zeebe
Transition to INACTIVE on term 3 completed

2022-10-20 14:17:17.844 CEST
zeebe
RaftServer{raft-partition-partition-1} - Found leader 1
2022-10-20 14:17:17.844 CEST
zeebe
RaftClusterContext committing - type = ACTIVE
2022-10-20 14:17:17.845 CEST
zeebe
RaftServer{raft-partition-partition-1} - Transitioning to FOLLOWER
2022-10-20 14:17:17.845 CEST
zeebe
Transition to FOLLOWER on term 4 requested.


2022-10-20 14:17:20.390 CEST
zeebe
Transition to LEADER on term 5 requested.

2022-10-20 14:17:20.529 CEST
zeebe
Actor Broker-0-StreamProcessor-1 failed in phase STARTED.

The expected behavior is that once the partition is inactive because it is dead, it does not come back to active.

zeebe-bors-camunda bot added a commit that referenced this issue Oct 24, 2022
10776: Stop raft server when going inactive due to unrecoverable errors r=deepthidevaki a=deepthidevaki

## Description

Previously, the partition was only transitioning to inactive, but in the configuration the member is still marked as active. As a result, the member transitions back to active when it gets a new message from the leader. We cannot change the configuration and mark this member as inactive, because that would mean changing the quorum. What we really require is that this partition is "dead" (at least temporarily) so that it does not become leader again. We also don't want it to become a follower, because that can lead to partial functionality which causes problems. For example, in the follower role raft replicates events, but the StreamProcessor and snapshotting do not work because of the error, so the logs cannot be compacted. This eventually fills up the disk and affects other, possibly healthy, partitions.

To fix this, this PR stops the raft server instead of only transitioning to inactive. The replication factor and quorum remain the same, but this node cannot become leader again until the member is restarted.
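A minimal sketch of the described approach (hypothetical names, not the actual PR diff): on an unrecoverable error the whole raft server is stopped rather than only requesting an INACTIVE role transition, so no later message from the leader can move the member back into an active role.

```java
// Hypothetical sketch of the fix described above; not the actual PR diff.
interface RaftServerControl {
  void stop();

  void goInactive();
}

final class UnrecoverableFailureHandler {
  private final RaftServerControl raftServer;

  UnrecoverableFailureHandler(final RaftServerControl raftServer) {
    this.raftServer = raftServer;
  }

  void onUnrecoverableFailure() {
    // Before: raftServer.goInactive() only changed the local role; the committed
    // configuration still listed the member as ACTIVE, so the leader's next
    // message moved it back into an active role.
    // After: stop the raft server entirely. Quorum and replication factor stay
    // the same, but this member cannot take any role again until it is restarted.
    raftServer.stop();
  }
}
```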

## Related issues

closes #9924




Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Deepthi Devaki Akkoorath <deepthidevaki@gmail.com>
zeebe-bors-camunda bot added a commit that referenced this issue Oct 24, 2022
10798: [Backport stable/8.0] Stop raft server when going inactive due to unrecoverable errors r=deepthidevaki a=backport-action

# Description
Backport of #10776 to `stable/8.0`.

closes #9924

Co-authored-by: Deepthi Devaki Akkoorath <deepthidevaki@gmail.com>
zeebe-bors-camunda bot added a commit that referenced this issue Oct 24, 2022
10799: [Backport stable/8.1] Stop raft server when going inactive due to unrecoverable errors r=deepthidevaki a=backport-action

# Description
Backport of #10776 to `stable/8.1`.

closes #9924

Co-authored-by: Deepthi Devaki Akkoorath <deepthidevaki@gmail.com>
@korthout korthout added version:8.2.0-alpha1 Marks an issue as being completely or in parts released in 8.2.0-alpha1 release/8.0.8 version:8.1.3 Marks an issue as being completely or in parts released in 8.1.3 labels Nov 1, 2022
@npepinpe npepinpe added the version:8.2.0 Marks an issue as being completely or in parts released in 8.2.0 label Apr 5, 2023