[QUESTION] Repetitive warnings and errors in a new longhorn setup #6257
Comments
cc @PhanLe1010

The warning is fine. Can you provide a support bundle?
@derekbit Thank you for your reply. Here is the support bundle.
These volumes, such as

Do you have outdated configurations or VolumeSnapshot resources for taking snapshots of these volumes?
Heya, I don't mean to steal this issue, but the first warning might be a broader problem: since upgrading from v1.4.2 to v1.5.0 I'm getting a lot of them, for what feels like all of my volumes (52 total; 38 attached). Since the number of log lines seems to spike every 4 hours, I assume they're related to my 4-hourly snapshot job. Here's a short excerpt from the longhorn-manager logs:
...you get the idea. This is the job's definition:

```yaml
apiVersion: longhorn.io/v1beta1
kind: RecurringJob
metadata:
  name: 4h-snapshot
  namespace: longhorn-system
spec:
  concurrency: 1
  cron: 0 0/4 * * ?
  groups:
  - default
  retain: 12
  task: snapshot
```

It's meant to run on all volumes unless specified otherwise, which works. From digging through the code a bit, it looks like the

Is this something Longhorn should patch automatically? Unless I've misconfigured something, it looks to me like anyone upgrading to Longhorn v1.5.0 (and, per the OP's description, anyone freshly installing it) will get this message on recurring job runs. Let me know if you need a support bundle from me; 75 MB is too big to just attach to a GitHub issue >_<
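For anyone double-checking the schedule: the `cron: 0 0/4 * * ?` expression above uses a `start/step` hour field, which fires every 4 hours starting at midnight. A minimal sketch of how such a step field expands (assuming standard `start/step` cron semantics; `expand_step` is a hypothetical helper, not Longhorn code):

```python
# Sketch: expand a cron "start/step" field into the values it matches.
# Assumes conventional cron step semantics; not taken from Longhorn's parser.
def expand_step(field, max_val):
    """Return the list of values a cron field like '0/4' matches, given the field's range size."""
    if "/" in field:
        start, step = field.split("/")
        return list(range(int(start), max_val, int(step)))
    return [int(field)]

# The hour field "0/4" over a 24-hour range:
print(expand_step("0/4", 24))  # [0, 4, 8, 12, 16, 20]
```

So the job runs at 00:00, 04:00, 08:00, 12:00, 16:00, and 20:00, which matches the 4-hour spikes described above.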
Using the trace or debug log level might be better, to avoid flooding the logs with these messages.
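Until the log level is adjusted, one way to make the flood readable is to filter the manager logs down to warnings and errors before inspecting them. A generic sketch (it assumes logfmt-style lines with a `level=` field, which is how the excerpts above appear; this is not a Longhorn utility):

```python
# Sketch: keep only logfmt lines whose "level=" field is in `keep`.
# The "level=" field name is an assumption about the log format.
def filter_levels(lines, keep=("warning", "error")):
    out = []
    for line in lines:
        if any(f"level={lvl}" in line for lvl in keep):
            out.append(line)
    return out

logs = [
    'time="..." level=info msg="synced volume"',
    'time="..." level=warning msg="volume not found"',
    'time="..." level=error msg="failed to create snapshot"',
]
print(filter_levels(logs))  # keeps the warning and error lines only
```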
@c3y1huang Please help with this. It needs to get into 1.5.1.
Pre Ready-For-Testing Checklist
Removing the v1.3 and v1.4 backport labels because the log message was introduced in v1.5.
Verified on master-head 20230815.

Result: Passed. After installing via Helm and via manifests, I checked the following items:
I've set up a completely new k8s cluster with Longhorn 1.5.0.
There is nothing special running on that cluster, but I get multiple warnings like:

and repetitive errors like this:

I don't understand the warning or the error. Obviously the volume does not exist, but why is Longhorn trying to create a snapshot at all, and why on a volume that does not exist?
I suspect both are the result of a misconfiguration.
My `values.yaml` file:

Environment