index rollover cronjob fails on openshift-logging operator #859
Comments
A possible solution is to replace the command, or maybe the entire line, with:
Is there any update on this? We updated the operator to the latest version, but this issue is still there.
Hi @tucsolo, thanks for reporting this.
The issue has been migrated to JIRA: https://issues.redhat.com/browse/LOG-2644
/close
@xperimental: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Just to know: I tried to open an issue on JIRA as someone else suggested, but there wasn't any way to do it.
As the documentation suggests, we installed the 5.3.5-20 OpenShift Logging and Elasticsearch operators on our 4.9.0-0.okd-2022-02-12-140851 OKD cluster. Unfortunately, during a routine `oc get pods` check we noticed a failing pod: elasticsearch-im-app-xxx, from the elasticsearch-im-app cronjob. During the delete-then-rollover script execution, the delete step completes fine, but the rollover then gets stuck on some random index:
OK Process:
Faulty process:
Of course, we noticed that
Now I've noticed that the problem is in lines 346-347 of the index-management scripts: we have indexes named like app-openshift-something-012345, and the cut command does not produce "app-openshift-something" and "012345" but rather "app" and "openshift", so the counter never advances to 012346.
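The failure mode described above can be reproduced in a few lines of shell. This is a hypothetical sketch, not the actual index-management script: the variable names are illustrative, and it assumes the script splits the index name on `-` with `cut` and takes fixed fields, which breaks as soon as the index prefix itself contains hyphens. Splitting on the last hyphen instead (here via bash parameter expansion) handles multi-hyphen prefixes:

```shell
#!/usr/bin/env bash
# Illustration of the parsing bug with a multi-hyphen index name.
index="app-openshift-something-012345"

# Buggy approach: fixed cut fields grab the wrong pieces.
name_buggy=$(echo "$index" | cut -d'-' -f1)   # "app", not "app-openshift-something"
seq_buggy=$(echo "$index" | cut -d'-' -f2)    # "openshift", not "012345"

# Safer approach: split on the LAST hyphen only. The trailing field
# is the sequence number; everything before it is the index prefix.
seq_fixed="${index##*-}"                      # "012345"
name_fixed="${index%-*}"                      # "app-openshift-something"

echo "buggy: $name_buggy / $seq_buggy"
echo "fixed: $name_fixed / $seq_fixed"
```

The same last-field split could also be done with `rev | cut -d'-' -f1 | rev` if the script must stay `cut`-based, but parameter expansion avoids the extra subshells.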
What should I do? Do I have to notify someone else? In the OKD GitHub they told me to open an issue at https://issues.redhat.com/projects/LOG/ but I can't actually open an issue there.