OSDOCS-3903: known issue BZ2100323, update bug fix (New Jira Ticket) #48796
Conversation
Force-pushed from 160f67f to 6c03d86.
@@ -2457,6 +2438,38 @@ spec:

* The pipeline metrics API does not support the required `pipelinerun/taskrun` histogram values from {rh-openstack} 1.6 and beyond. Consequently, the *Metrics* tab in the *Pipeline* -> *Details* page is removed instead of displaying incorrect values. There is currently no workaround for this issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2074767[*BZ#2074767*])

* In {product-title} {product-version}, pod security admission runs globally with privileged enforcement and restricted audit logging and API warnings. Versions of the `opm` binary released earlier than {product-title} {product-version} do not work with the restricted profile setting. As a result, an error displays if you install a catalog source that was built with an `opm` binary released earlier than {product-title} {product-version}.
The note that "pod security admission runs globally with privileged enforcement and restricted audit logging and API warnings" comes from the "Pod security admission" release note above.
Yes, and you can find it from:
```shell
mac:~ jianzhang$ jq "" $(oc extract cm/config -n openshift-kube-apiserver --confirm) | jq '.admission.pluginConfig.PodSecurity'
{
  "configuration": {
    "apiVersion": "pod-security.admission.config.k8s.io/v1beta1",
    "defaults": {
      "audit": "restricted",
      "audit-version": "latest",
      "enforce": "privileged",
      "enforce-version": "latest",
      "warn": "restricted",
      "warn-version": "latest"
    },
    "exemptions": {
      "usernames": [
        "system:serviceaccount:openshift-infra:build-controller"
      ]
    },
    "kind": "PodSecurityConfiguration"
  }
}
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
But why do we still mention `opm` here? As I said on Slack, no matter whether the catalog source was created by the new or the old `opm`, both will encounter the PSA issue, because of:

- registry-server (the catalog source pod's container; whether SQL-based or file-based, both have this issue)
- unpack job (nothing to do with the CatalogSource, but the payload)
Force-pushed from 6c03d86 to 5e1dfae.
This looks great. I think that's an excellent way to present the workaround for this known issue. Nice job!
/lgtm
this is great, thanks for consolidating all the loose ends @michaelryanpeter
@jianzhangbjz PTAL
@@ -1780,7 +1761,7 @@ With this update, the CMO now properly propagates the external labels that you c

* Previously, pod failures were artificially extending the validity period of certificates causing them to incorrectly rotate. With this update, the certificate validity period is correctly determined and the certificates are correctly rotated. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2020484[*BZ#2020484*])

- * In {product-title} {product-version} the default cluster-wide pod security admission policy is set to `baseline` for all namespaces and the default warning level is set to `restricted`. Before this update, Operator Lifecycle Manager displayed pod security admission warnings in the `operator-marketplace` namespace. With this fix, reducing the warning level to `baseline` resolves the issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2088541[*BZ#2088541*])
+ * In {product-title} {product-version} the default cluster-wide pod security admission policy is set to `privileged` for all namespaces and the default warning level is set to `restricted`. Before this update, Operator Lifecycle Manager displayed pod security admission warnings in the `operator-marketplace` namespace. With this fix, reducing the warning level to `baseline` resolves the issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2088541[*BZ#2088541*])
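For context, the "warning level" change this note describes surfaces as PSA labels on the namespace. A sketch of what the relevant labels look like after the fix, based on the `oc get ns openshift-marketplace` output quoted in this thread (the exact label set may vary by release):

```yaml
# Sketch: PSA labels on the openshift-marketplace namespace after the fix.
# Values taken from cluster output quoted in this thread; verify on your cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-marketplace
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
```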
> Before this update,

How should this be understood? What does "this update" mean? The PSA? If yes, it should be "after this update".

> warnings in the `operator-marketplace` namespace.

And, it should be `openshift-marketplace`, not `operator-marketplace`.
I will change the namespace to `openshift-marketplace` when I update the PR.
"Before this update" refers to BZ#2088541. This is just a bug fix note for the doc text supplied in the BZ, though I have cleaned it up to conform with our style guide. I noticed that the doc text incorrectly said that `baseline` was the default, so I was fixing it here.
"Before this update" is referring to the time before the bug was fixed. "With this fix" is referring to the time after the BZ was implemented.
.Example error
[source,terminal]
----
Error: open ./db-xxxx: permission denied
----
Sorry, I think this error has nothing to do with the PSA; it's related to bug https://bugzilla.redhat.com/show_bug.cgi?id=2100323, or am I missing something? Thanks!
Sorry for the confusion. I am trying to supply the known issue and workaround for BZ#2100323. The BZ is linked on line 2471.
This is just a summary and rewording of the doc text in the BZ. It is not meant to do anything other than provide a more customer-friendly summary of the content of the doc text in the advanced field of the BZ.
@michaelryanpeter the info is not accurate, and I think we should link here to the release notes that explain what was changed in OCP 4.11 and why the warning is raised now.
For transparency, I have created a new Jira ticket to track this work: OSDOCS-3903
> In {product-title} {product-version}, pod security admission runs globally with privileged enforcement and restricted audit logging and API warnings.

@michaelryanpeter @anik120 @jianzhangbjz the above text has no relation to the issue. Btw, all are still "privileged", and that will only change in 4.12. From 4.12, OpenShift system namespaces (which are the ones here) will be enforced as restricted.
Error: open ./db-xxxx: permission denied
----
+
Workaround: Cluster administrators can allow the creation of catalog sources built with earlier `opm` versions by setting the pod security admission labels in the namespace metadata files. Apply the `baseline` label to the `warn`, `audit`, and `enforce` modes and set the `scc.podSecurityLabelSync` label to `false`, as shown in the following example:
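A sketch of the namespace metadata the workaround above describes, with a placeholder namespace name (labels as stated in the workaround text; not the exact example from the diff):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: "<namespace_name>"
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/enforce: baseline
```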
If you want to create a new catalog source, it is not recommended to use the OpenShift system namespaces.
If the customer creates a CatalogSource in their own namespace, they will encounter bug https://bugzilla.redhat.com/show_bug.cgi?id=2088541; the workaround is to add labels to their namespace:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: baseline <1>
    pod-security.kubernetes.io/warn: baseline <2>
    pod-security.kubernetes.io/enforce: baseline <3>
  name: "<namespace_name>"
```
The workaround here is about the error `Error: open ./db-xxxx: permission denied`.
Hi @camilamacedo86, thanks! But why do we need this workaround? Based on https://bugzilla.redhat.com/show_bug.cgi?id=2100323#c4, it has already been fixed.
This index image `quay.io/olmqe/etcd-index:v1` was created by using the old `opm`, but I don't encounter this `Error: open ./db-xxxx: permission denied` error when installing it on the latest OCP 4.11 cluster. As follows:
```shell
MacBook-Pro:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-08-04-081314   True        False         3h30m   Cluster version is 4.11.0-0.nightly-2022-08-04-081314
MacBook-Pro:~ jianzhang$ cat cs-test.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: test-operators
  namespace: openshift-marketplace
spec:
  displayName: Jian Operators
  image: quay.io/olmqe/etcd-index:v1
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
MacBook-Pro:~ jianzhang$ oc get catalogsource
NAME                  DISPLAY                TYPE   PUBLISHER      AGE
certified-operators   Certified Operators    grpc   Red Hat        3h46m
community-operators   Community Operators    grpc   Red Hat        3h46m
qe-app-registry       Production Operators   grpc   OpenShift QE   3h24m
redhat-marketplace    Red Hat Marketplace    grpc   Red Hat        3h46m
redhat-operators      Red Hat Operators      grpc   Red Hat        3h46m
test-operators        Jian Operators         grpc   Jian           2m23s
MacBook-Pro:~ jianzhang$ oc get pods
NAME                                                              READY   STATUS      RESTARTS        AGE
1263538589d348af98ee1c1a3af1e63c7d8f8f7b9bf1a043d121dac5c9c92fk   0/1     Completed   0               3h21m
3ef707a2bab80ae871eaa876bd55216b08d6a4980ebb5b8ed5cd9c123d7rpqf   0/1     Completed   0               3h21m
certified-operators-wnwzw                                         1/1     Running     0               3h20m
community-operators-5tr9l                                         1/1     Running     0               122m
marketplace-operator-5b84d4d799-4scrc                             1/1     Running     1 (3h39m ago)   3h50m
qe-app-registry-kl8gq                                             1/1     Running     0               3h22m
redhat-marketplace-4bgs9                                          1/1     Running     0               3h43m
redhat-operators-mc46k                                            1/1     Running     0               3h43m
test-operators-dl4xf                                              1/1     Running     0               12s
MacBook-Pro:~ jianzhang$ oc rsh test-operators-dl4xf
/ # ps -elf | cat
PID   USER     TIME  COMMAND
    1 root      0:00 /bin/opm registry serve --database /database/index.db
   78 root      0:00 /bin/sh
   84 root      0:00 ps -elf
   85 root      0:00 cat
/ # exit
```
```shell
MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    capability.openshift.io/name: marketplace
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c17,c9
    openshift.io/sa.scc.supplemental-groups: 1000290000/10000
    openshift.io/sa.scc.uid-range: 1000290000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-08-08T23:14:30Z"
  labels:
    kubernetes.io/metadata.name: openshift-marketplace
    olm.operatorgroup.uid/a93b3ad3-dc83-4801-908e-65b1de607ded: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
  name: openshift-marketplace
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: d9e8b196-691f-4e8e-92c8-c83338f46eec
  resourceVersion: "8735"
  uid: ecc126e9-1d9e-43bd-8c40-7cb81c566497
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
```
I also installed it in a `restricted` project, but still did not encounter this `Error: open ./db-xxxx: permission denied` error. So I'm confused why we treat `Error: open ./db-xxxx: permission denied` as a known issue; it has already been fixed.
```shell
MacBook-Pro:~ jianzhang$ oc get catalogsource -n test
NAME             DISPLAY          TYPE   PUBLISHER   AGE
test-operators   Jian Operators   grpc   Jian        14s
MacBook-Pro:~ jianzhang$ oc get pods -n test
NAME                   READY   STATUS    RESTARTS   AGE
test-operators-k9wsc   1/1     Running   0          21s
MacBook-Pro:~ jianzhang$ oc get ns test -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/sa.scc.mcs: s0:c27,c14
    openshift.io/sa.scc.supplemental-groups: 1000730000/10000
    openshift.io/sa.scc.uid-range: 1000730000/10000
  creationTimestamp: "2022-08-09T03:21:26Z"
  labels:
    kubernetes.io/metadata.name: test
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.24
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.24
  name: test
  resourceVersion: "104920"
  uid: b76b2aec-0d90-4b70-acaa-f762c78add67
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
```
You are right @jianzhangbjz.
This case can NOT be sorted out by the PSA label sync = true.
I just tested it out, and OCP matches the pod/image to restricted-v2, so OCP is unable to check that the image requires scale permissions, and it fails in one of the impacted case scenarios.
You can ignore my comment; we need to say that the workaround is just to enforce the namespace as privileged.
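A sketch of the "enforce the namespace as privileged" workaround mentioned above (placeholder namespace name; label keys follow the PSA labels shown elsewhere in this thread):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: "<namespace_name>"
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/enforce: privileged
```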
The above description is from the PSA release note, but my test shows:
```shell
MacBook-Pro:~ jianzhang$ cat ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.24
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.24
    pod-security.kubernetes.io/enforce: restricted
  name: debug
MacBook-Pro:~ jianzhang$ oc create -f ns.yaml
namespace/debug created
MacBook-Pro:~ jianzhang$ cat cs-test.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: test-operators
  namespace: debug
spec:
  displayName: Jian Operators
  image: quay.io/olmqe/etcd-index:v1
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
MacBook-Pro:~ jianzhang$ oc create -f cs-test.yaml
catalogsource.operators.coreos.com/test-operators created
MacBook-Pro:~ jianzhang$ oc get catalogsource -n debug
NAME             DISPLAY          TYPE   PUBLISHER   AGE
test-operators   Jian Operators   grpc   Jian        30s
MacBook-Pro:~ jianzhang$ oc get pods -n debug
No resources found in debug namespace.
MacBook-Pro:~ jianzhang$ oc get catalogsource -n debug test-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-09T07:19:10Z"
  generation: 1
  name: test-operators
  namespace: debug
  resourceVersion: "193097"
  uid: 71087856-5b61-4997-bdeb-c188570ff617
spec:
  displayName: Jian Operators
  image: quay.io/olmqe/etcd-index:v1
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  message: 'couldn''t ensure registry server - error ensuring pod: : error creating
    new pod: test-operators-: pods "test-operators-w2l62" is forbidden: violates PodSecurity
    "restricted:latest": allowPrivilegeEscalation != false (container "registry-server"
    must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities
    (container "registry-server" must set securityContext.capabilities.drop=["ALL"]),
    runAsNonRoot != true (pod or container "registry-server" must set securityContext.runAsNonRoot=true),
    seccompProfile (pod or container "registry-server" must set securityContext.seccompProfile.type
    to "RuntimeDefault" or "Localhost")'
  reason: RegistryServerError
```

As you can see, no CatalogSource pod was generated, and I did not encounter the described error.
And, I used the latest file-based index image:

```shell
MacBook-Pro:~ jianzhang$ cat cs-test.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: jian-operators
  namespace: debug
spec:
  displayName: Jian Operators
  image: registry.redhat.io/redhat/redhat-operator-index:v4.11
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
MacBook-Pro:~ jianzhang$ oc get catalogsource -n debug
NAME             DISPLAY          TYPE   PUBLISHER   AGE
jian-operators   Jian Operators   grpc   Jian        16s
test-operators   Jian Operators   grpc   Jian        10m
MacBook-Pro:~ jianzhang$ oc get pods -n debug
No resources found in debug namespace.
MacBook-Pro:~ jianzhang$ oc get catalogsource -n debug jian-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-09T07:29:37Z"
  generation: 1
  name: jian-operators
  namespace: debug
  resourceVersion: "197911"
  uid: d5d2b309-3fd6-4bce-a7bf-ca9aa2a8618b
spec:
  displayName: Jian Operators
  image: registry.redhat.io/redhat/redhat-operator-index:v4.11
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  message: 'couldn''t ensure registry server - error ensuring pod: : error creating
    new pod: jian-operators-: pods "jian-operators-kpwks" is forbidden: violates PodSecurity
    "restricted:latest": allowPrivilegeEscalation != false (container "registry-server"
    must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities
    (container "registry-server" must set securityContext.capabilities.drop=["ALL"]),
    runAsNonRoot != true (pod or container "registry-server" must set securityContext.runAsNonRoot=true),
    seccompProfile (pod or container "registry-server" must set securityContext.seccompProfile.type
    to "RuntimeDefault" or "Localhost")'
  reason: RegistryServerError
```
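Each violation listed in the status messages above maps to a `securityContext` field. A pod spec that would satisfy the `restricted` profile looks roughly like this (assembled from the error text as an illustration, not a tested OLM change; the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: registry-server-example   # hypothetical pod name
spec:
  securityContext:
    runAsNonRoot: true            # addresses "runAsNonRoot != true"
    seccompProfile:
      type: RuntimeDefault        # addresses the seccompProfile violation
  containers:
  - name: registry-server
    image: quay.io/olmqe/etcd-index:v1
    securityContext:
      allowPrivilegeEscalation: false   # addresses "allowPrivilegeEscalation != false"
      capabilities:
        drop: ["ALL"]             # addresses "unrestricted capabilities"
```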
Yes, I understand. But, if customers install the CatalogSource in their (not …
Hi @camilamacedo86 @anik120 I met the …
closed in favor of #50940
Version(s): 4.11 release notes
Issue: New jira ticket: OSDOCS-3903
Old ticket, for reference: OSDOCS-3774
Relates to a known issue and workaround for BZ#2100323
Link to docs preview (requires VPN):
Additional information:
See #47490, #48575, and #48318 for comments and previous discussion.