nfs::mount ensure: mounted does not work #45
Specifying ensure: mounted causes the following log. Specifying ensure: present does not generate the log and creates /mnt/package, but the device is not mounted (as expected).

Comments
Looks like your NFS implementation does not like the commands given. Try running with
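(The flag itself was lost in extraction; since the next comment refers to debug output, it was presumably Puppet's debug mode. A hypothetical reconstruction:)

```sh
# Hypothetical reconstruction: run the agent once with debugging
# so the exact mount commands and their errors are logged.
puppet agent --test --debug
```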
This installation is hosted on Oracle Linux Server release 6.3, kernel 2.6.32-279.11.1.el6.x86_64, on both the master and the agent. The result of the debug run is shown below.
Hi, are there any updated findings on this issue?
It seems you have an incorrect mount option, though the entry in Hiera looks fine to me. I'm unable to duplicate the issue.
Try not specifying any options and see if it mounts; then add one option at a time, re-run Puppet, and repeat.
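The same bisection works by hand with mount(8) before involving Puppet at all; a minimal sketch (the server name, export path, and options here are placeholders, not values from this report):

```sh
# Start with no options at all and confirm the export mounts:
mount -t nfs somenas.example.org:/ifs/mount /mnt/test
umount /mnt/test

# Then add one option per run until one of them triggers the failure:
mount -t nfs -o soft somenas.example.org:/ifs/mount /mnt/test
umount /mnt/test
mount -t nfs -o soft,vers=4 somenas.example.org:/ifs/mount /mnt/test
```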
We have a similar issue with OEL 5 and 6: mount and umount work great, but it fails when Puppet runs 'mount -o remount /mount'. This seemed to start when we moved from a NetApp to an Isilon backend.

Error: /Stage[main]/Puppet_nfs::Main/Mount[/mount]: Failed to call refresh: Execution of '/bin/mount -o remount /mount' returned 32: mount.nfs: an incorrect mount option was specified
Error: /Stage[main]/Puppet_nfs::Main/Mount[/mount]: Execution of '/bin/mount -o remount /mount' returned 32: mount.nfs: an incorrect mount option was specified

[root@tstpupt902 ~]# mount -o remount /mount

fstab entry (about as basic as you can get):
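For what it's worth, mount(8) documents exit status 32 as a generic "mount failure", which matches the refresh error above; a quick way to confirm outside of Puppet:

```sh
# Reproduce the remount by hand and inspect the exit status;
# per mount(8), status 32 means "mount failure".
mount -o remount /mount
echo $?
```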
Hmm, it appears that the fstab entry is incorrect, though I'm stumped. Any ideas?
If it were incorrect, it wouldn't mount at all (which it does); it's the 'remount' that complains. We're still trying to figure out what's up.
So, we determined that it was indeed the Isilon backend causing our issue. An initial mount goes to one Isilon load-balanced head, but when a subsequent 'mount -o remount' is requested, that request is balanced to a different head, which has no idea the mount already exists and rejects the remount.
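One way to see which head is currently serving a mount is the addr= option the kernel records for it; a quick check (the /mount path is taken from the report above):

```sh
# The kernel records the server address it actually negotiated with;
# compare it against what DNS currently returns for the NAS name.
grep ' /mount ' /proc/mounts
```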
@ipoppo what type of nfs server are you using? |
@helperton is this something you fix entirely on the nfs server or are there special options that are needed for the nfs client? Either way, I'd be happy to document all of this info along with a fix in the README to help others. |
We have no fix yet. Here are my troubleshooting steps, though.

Since Isilon load-balances the NFS requests that come to it, the initial mount goes to one head, but the 'remount' command hits a different host, which doesn't seem to know about the original mount that existed on the head which originally serviced the request. If I go into /etc/hosts and hard-code an IP for a specific head (e.g. 10.0.0.61 somenas.yourcomp.org somenas) so it only ever talks to that head, then a 'remount' works just fine.

Here's how I found that it was going elsewhere on a remount: I ran an strace on both a plain mount and a mount with -o remount added, and saw this.

Here's the mount:

mount("somenas.somecomp.org:/ifs/mount", "/mount", "nfs", MS_RDONLY|MS_NOSUID, "soft,vers=4,addr=10.0.0.53,clientaddr=10.1.0.71") = 0

Here's the mount with -o remount added:

mount("somenas.somecomp.org:/ifs/mount", "/mount", 0x7f8abd3651bb, MS_RDONLY|MS_NOSUID|MS_REMOUNT, "soft,addr=10.0.0.61") = -1 EINVAL (Invalid argument)

You can see it going to a different 'addr': the first request goes to 10.0.0.53 and the second goes to 10.0.0.61. I guess the Isilon heads don't share state info, or at least don't share it quickly enough for the other head to know about this mount before the remount happens (which it does almost immediately with the Puppet mount provider).
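For reference, syscall traces like the ones above can be captured as follows (a sketch; the mount point comes from the report, and the entry must already be in fstab for the bare mount call to work):

```sh
# Trace only the mount(2) syscall for the initial mount:
strace -f -e trace=mount mount /mount

# ...and again for the remount, while the filesystem is still mounted:
strace -f -e trace=mount mount -o remount /mount
```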
A possible fix on the Puppet side (inside the mount provider) would be for it to first look up the 'addr' of the host the mount is currently connected to, and then, when it issues the 'remount', pass 'addr=(the same IP it's already connected to)'. This way, the mount command never does a fresh DNS lookup.
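Done by hand, the proposed behaviour would look roughly like the sketch below. This is an illustration of the idea, not provider code, and whether mount.nfs actually honours a caller-supplied addr= on remount is part of the hypothesis; the /mount path is from the report.

```sh
# Read the server IP the kernel already negotiated for this mount;
# field 4 of /proc/mounts holds the comma-separated mount options.
addr=$(awk '$2 == "/mount" {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
        if (opts[i] ~ /^addr=/) { sub(/^addr=/, "", opts[i]); print opts[i] }
}' /proc/mounts)

# Pin the remount to that same head so no fresh DNS lookup can
# redirect it to a different Isilon node.
mount -o "remount,addr=${addr}" /mount
```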
@helperton The mount provider[1] does not do any resolving of a potential hostname and thus cannot cache that value, which I believe would lead to a system that was not idempotent. It is sloppy that Isilon resolves to a different address each time instead of hiding that and routing the request for you. Anyhow, I think the best way to tackle this is to use a fixed name with an entry in /etc/hosts. You can use https://github.com/ghoneycutt/puppet-module-hosts for this. A simple approach would be to identify nodes that share an NFS server and then use Hiera to ensure the mount and the host entry.

[1] - https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/mount.rb
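In shell terms, the suggested workaround is the same pin @helperton described earlier (the IP and names are the placeholder values from that comment); under Puppet, the linked hosts module would manage this entry instead:

```sh
# Pin the NAS name to one specific head so the initial mount and any
# later remount always resolve to the same address:
echo '10.0.0.61 somenas.yourcomp.org somenas' >> /etc/hosts
```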
Going to close this, as the issue is with the NFS server implementation and not the nfs module, and we have a workaround.
I never said the mount provider did any resolving of hostnames. The OS command 'mount' does that. My point is that the mount provider could read /proc/mounts and look for the current value of 'addr=' for the mount in question prior to issuing the 'remount', then provide that 'addr=' with the remount so the OS mount command doesn't need to do a fresh DNS lookup.
@helperton @ipoppo Thanks for your help on this!! |
@helperton If you want to go that route, I would open a bug with Puppet Labs on the mount provider and see if they are receptive to that implementation. If you do, please add me as a watcher, as I'm curious :) |
We're an Enterprise customer, so I'm not sure you can track the issue, but here it is: |