
Systemd start for glusterd failed! #200

Open
HTechHQ opened this issue Jul 6, 2019 · 1 comment

Comments


HTechHQ commented Jul 6, 2019

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.8.2
  • Ruby: ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]
  • Distribution: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
  • Module version: 5.0.0

How to reproduce (e.g Puppet code you use)

I am basically running the 02_simple example, just adapted to my node names.
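
For reference, this is roughly what that manifest looks like (a minimal sketch; the node names and brick paths are placeholders, and the parameters follow the module's documented interface only approximately, not my exact manifest):

# Sketch adapted from the 02_simple example; node names and brick
# paths are hypothetical placeholders.
node /swarm-node-0[12]/ {
  class { 'gluster':
    server => true,
    client => true,
  }

  gluster::volume { 'gvol0':
    replica => 2,
    bricks  => [
      'swarm-node-01:/export/gvol0/brick',
      'swarm-node-02:/export/gvol0/brick',
    ],
    options => ['nfs.disable: true'],
  }
}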

What are you seeing

Error: Systemd start for glusterd failed!
journalctl log for glusterd:
-- No entries --

Error: /Stage[main]/Gluster::Service/Service[glusterd]/ensure: change from stopped to running failed: Systemd start for glusterd failed!
journalctl log for glusterd:
-- No entries --

What behaviour did you expect instead

I would expect the example to pass and the file system to come up.
Because of the uninformative error message, I don't know what else to try. Please help me out if I missed something, or let me know if you need more information for debugging.
Firewall settings etc. all seem to be fine. I can execute gluster commands manually and create a volume by hand.

Output log

Error: Systemd start for glusterd failed!
journalctl log for glusterd:
-- No entries --

Error: /Stage[main]/Gluster::Service/Service[glusterd]/ensure: change from stopped to running failed: Systemd start for glusterd failed!
journalctl log for glusterd:
-- No entries --

Notice: /Stage[main]/Main/Node[__node_regexp__swarm-node-01-9]/Gluster::Volume[gvol0]/Exec[gluster create volume gvol0]: Dependency Service[glusterd] has failures: true
Warning: /Stage[main]/Main/Node[__node_regexp__swarm-node-01-9]/Gluster::Volume[gvol0]/Exec[gluster create volume gvol0]: Skipping because of failed dependencies
Notice: /Stage[main]/Main/Node[__node_regexp__swarm-node-01-9]/Gluster::Volume[gvol0]/Gluster::Volume::Option[gvol0:nfs.disable]/Exec[gluster option gvol0 nfs.disable true]: Dependency Service[glusterd] has failures: true
Warning: /Stage[main]/Main/Node[__node_regexp__swarm-node-01-9]/Gluster::Volume[gvol0]/Gluster::Volume::Option[gvol0:nfs.disable]/Exec[gluster option gvol0 nfs.disable true]: Skipping because of failed dependencies
Notice: /Stage[main]/Main/Node[__node_regexp__swarm-node-01-9]/Gluster::Volume[gvol0]/Exec[gluster start volume gvol0]: Dependency Service[glusterd] has failures: true
Warning: /Stage[main]/Main/Node[__node_regexp__swarm-node-01-9]/Gluster::Volume[gvol0]/Exec[gluster start volume gvol0]: Skipping because of failed dependencies

Any additional information you'd like to impart


maxadamo commented Dec 18, 2023

@HTechHQ is this still happening?
As I see it, Gluster cannot run those commands if the service is down, so it makes sense to me to have such a dependency. How could you tell Puppet to ignore the status of the service?
We could implement a try/except mechanism, or something similar, but would it make sense to let Puppet complete and leave your service down?
I think it's better to see that Puppet is failing (from your monitoring, or from Puppetboard, for example). The other way around, you would believe that everything is fine.
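
For context, the ordering that produces those "Skipping because of failed dependencies" warnings looks roughly like this (a sketch only; the resource titles come from the log above, and the exact require wiring inside the module may differ):

# Sketch of the dependency chain: the volume Execs require the glusterd
# service, so when the service fails to start the Execs are skipped.
service { 'glusterd':
  ensure => running,
  enable => true,
}

exec { 'gluster create volume gvol0':
  command => 'gluster volume create gvol0 ...',  # brick arguments elided
  path    => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
  require => Service['glusterd'],
}

Dropping that require would let the run complete, but a volume could then silently fail to be created.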

But please correct me if I am misunderstanding your request.
