
Notice: /Stage[main]/Gluster::Vardir/File[/var/lib/puppet/tmp/gluster/{vrrp,uuid,brick}]/ensure: removed #38

Open
aamerik opened this issue Mar 16, 2015 · 5 comments


@aamerik

aamerik commented Mar 16, 2015

Hi! Thank you for this awesome module. I'm having an issue on every agent run where the following 3 temporary files get removed:

Notice: /Stage[main]/Gluster::Vardir/File[/var/lib/puppet/tmp/gluster/vrrp]/ensure: removed
Notice: /Stage[main]/Gluster::Vardir/File[/var/lib/puppet/tmp/gluster/uuid]/ensure: removed
Notice: /Stage[main]/Gluster::Vardir/File[/var/lib/puppet/tmp/gluster/brick]/ensure: removed

This doesn't appear to cause any functionality issues, but it causes my Puppet reporting to log a config change on every run.

Thank you

@purpleidea
Owner


FYI: this is a legit bug.

Tags += Confirmed.
Priority = Low.

It's a bit low on my priority list because I've got some more pressing things to deal with. Sorry about that. If anyone wants to take a stab, I'm happy to review. It turns out to be a bit tricky, but it's not dangerous at all in its current state.

Cheers,
James

@dw-thomast

We have the same problem with our monitoring.
Each Puppet run exits with exit code 2 instead of 0 (with --detailed-exitcodes, 2 means the run succeeded but changed resources), so our monitoring thinks there are modifications on these servers.
I have no idea how to fix this problem, so I can't send you a pull request.

@purpleidea
Owner

@dw-thomast One reason I probably didn't notice this bug earlier is that @aamerik has a slightly unorthodox config. If you have a standard config using gluster::simple, it should probably not be an issue...

So @dw-thomast, in the meantime, if you want to post your config and/or try an alternate one, perhaps that would help. Unfortunately I'm not actively writing patches for non-critical issues at this time, but I will happily review/merge help otherwise. I get that this is important to you; I just don't have the cycles ATM.

@ekohl

ekohl commented Oct 8, 2015

I have the same issue. The config we have:

class { '::gluster::simple':
  replica   => 2,
  volume    => [$volume],
  shorewall => false,
  again     => false
}

I'm assuming this is because we do have a replica set up, but no VRRP (nor a VIP) configured by the module. We do have a keepalived instance running, but that's managed by a different module. I suspect the solution would be to add a parameter like keepalived_manage.

A workaround is to manually define the files yourself so at least your monitoring will be happy; a sketch follows below.
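
A minimal sketch of that workaround, assuming the removals come from the vardir's recursive purge of unmanaged files, that the path matches your agent's vardir, and that the three entries are directories (adjust if they are plain files on your nodes):

file { ['/var/lib/puppet/tmp/gluster/vrrp',
        '/var/lib/puppet/tmp/gluster/uuid',
        '/var/lib/puppet/tmp/gluster/brick']:
  # Assumption: these are directories; use ensure => file if not.
  # Once declared, the recursive purge no longer treats them as
  # unmanaged, so runs converge without reporting a change.
  ensure => directory,
}

Note this may conflict with configs where the module already manages these paths, so only declare the ones being purged on your nodes.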

@purpleidea
Owner

@ekohl Sorry you're experiencing this... With the exact config you posted, I don't expect that issue, but anything is possible, I guess. Try confirming that vrrp => false is set in the ::simple class, as in the sketch below.
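
A sketch of that check, under the assumption (implied by the comment above) that gluster::simple exposes a vrrp parameter; setting it explicitly rules out the module expecting VRRP/VIP resources:

class { '::gluster::simple':
  replica   => 2,
  volume    => [$volume],
  shorewall => false,
  again     => false,
  # Assumption: vrrp is a gluster::simple parameter; set it
  # explicitly rather than relying on the default.
  vrrp      => false,
}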
