
Forced remount because options changed when no options changed (2014.7 regression) #18630

Closed · nvx opened this issue Dec 2, 2014 · 82 comments
Labels: Bug, P2, Platform, Regression, severity-medium, stale, State-Module

nvx (Contributor) commented Dec 2, 2014

Running a highstate on minions with some NFS mounts results in the mount being remounted every time. This did not occur under 2014.1.

----------
[INFO    ] Completed state [/nfsmnt] at time 17:21:20.145018
[INFO    ] Running state [/nfsmnt] at time 17:21:20.146726
[INFO    ] Executing state mount.mounted for /nfsmnt
[INFO    ] Executing command 'mount -l' in directory '/root'
[INFO    ] Executing command 'mount -l' in directory '/root'
[INFO    ] Executing command 'mount -o rw,tcp,bg,hard,intr,remount -t nfshost:/nfsmnt /nfsmnt ' in directory '/root'
[INFO    ] {'umount': 'Forced remount because options changed'}
[INFO    ] Completed state [/nfsmnt] at time 17:21:20.267764
...
          ID: /nfsmnt
    Function: mount.mounted
      Result: True
     Comment:
     Started: 10:04:16.078806
    Duration: 68.802 ms
     Changes:
              ----------
              umount:
                  Forced remount because options changed

Running mount -l shows the following:

...
nfshost:/nfsmnt on /nfsmnt type nfs (rw,remount,tcp,bg,hard,intr,addr=x.x.x.x)

I can only assume it's breaking due to the addr option (which appears to be filled in automatically by the OS; it was never manually specified as a mount option) or the ordering.

The mount.mounted state looks as follows:

/nfsmnt:
  mount.mounted:
    - device: nfshost:/nfsmnt
    - fstype: nfs
    - opts: rw,tcp,bg,hard,intr
# salt-call --versions-report
           Salt: 2014.7.0
         Python: 2.6.8 (unknown, Nov  7 2012, 14:47:45)
         Jinja2: 2.5.5
       M2Crypto: 0.21.1
 msgpack-python: 0.1.12
   msgpack-pure: Not Installed
       pycrypto: 2.3
        libnacl: Not Installed
         PyYAML: 3.08
          ioflo: Not Installed
          PyZMQ: 14.3.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: Not Installed
@rallytime added the Bug, severity-low, and Regression labels on Dec 2, 2014
@rallytime added this to the Approved milestone on Dec 2, 2014
rallytime (Contributor) commented:

@nvx Thank you for this very helpful bug report! We'll check this out.

garethgreenaway (Contributor) commented:

This might be related to #18474

fbretel (Contributor) commented Dec 9, 2014

@garethgreenaway I don't think so: applying the patch manually didn't fix the issue for me. Besides, I believe the option checking should not fail in the first place; as @nvx said, no option changed.
This issue arose for me when upgrading salt-2014.1.10-4.el5.noarch -> salt-2014.7.0-3.el5.noarch, if that helps.

garethgreenaway (Contributor) commented:

As a guess, I would agree with @nvx: it's remounting because the addr option appears in the list of mounted file systems but not in the mount options specified in the state. There are a few options that we flag as "hidden"; we may have to do the same for the addr option.
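
To illustrate the idea, here is a minimal sketch of a comparison that skips such runtime-only options. The option names in the ignore set below are assumptions for illustration, not Salt's actual mount_invisible_options list, and this is not Salt's code.

# Hypothetical sketch: compare requested mount options against the options
# reported by the system, ignoring options the kernel adds or hides on its own.
# The contents of MOUNT_INVISIBLE_OPTIONS are an assumption, not Salt's list.
MOUNT_INVISIBLE_OPTIONS = {
    "addr",     # filled in automatically for network mounts
    "bg",       # consumed by the userspace NFS mount helper, never shown by the kernel
    "retry",
    "_netdev",
}

def options_changed(requested, active):
    """Return the requested options that are missing from the active mount.

    Both arguments are lists of strings such as 'rw' or 'nfsvers=4'.
    Options whose name is in MOUNT_INVISIBLE_OPTIONS are skipped.
    """
    active_set = set(active)
    changed = []
    for opt in requested:
        name = opt.split("=", 1)[0]
        if name in MOUNT_INVISIBLE_OPTIONS:
            continue
        if opt not in active_set:
            changed.append(opt)
    return changed

# Example: 'addr=...' shows up only on the active side and is ignored,
# and 'bg' is requested but invisible, so nothing is reported as changed.
print(options_changed(
    ["rw", "tcp", "bg", "hard", "intr"],
    ["rw", "tcp", "hard", "intr", "addr=10.0.0.1"],
))  # -> []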

kevinbowman-ta commented:

Not sure if "me too"s are useful, but I'm experiencing this as well, and also with an NFS mount. My SLS entry looks like this:

/egtnas:
  file.directory:
    - makedirs: True
  mount.mounted:
    - device: 10.0.7.48:/
    - fstype: nfs
    - mkmnt: True
    - opts: rw,proto=tcp,port=2049

and mount -l includes this line:

10.0.7.48:/ on /egtnas type nfs (rw,proto=tcp,port=2049)

but when I run salt-call -l debug state.highstate then I see this in the log output:

[INFO    ] Executing state mount.mounted for /egtnas
[INFO    ] Executing command 'mount -l' in directory '/root'
[DEBUG   ] stdout: /dev/xvda1 on / type ext4 (rw) [cloudimg-rootfs]
<... snip ...>
10.0.7.48:/ on /egtnas type nfs (rw,proto=tcp,port=2049)
[INFO    ] Executing command 'mount -l' in directory '/root'
[DEBUG   ] stdout: /dev/xvda1 on / type ext4 (rw) [cloudimg-rootfs]
<... snip ...>
10.0.7.48:/ on /egtnas type nfs (rw,proto=tcp,port=2049)
[INFO    ] Executing command 'mount -o rw,proto=tcp,port=2049,remount -t nfs 10.0.7.48:/ /egtnas ' in directory '/root'
[INFO    ] {'umount': 'Forced remount because options changed'}
[INFO    ] Completed state [/egtnas] at time 11:10:16.359984

and then the output includes this:

          ID: /egtnas
    Function: mount.mounted
      Result: True
     Comment:
     Started: 11:10:16.335117
    Duration: 24.867 ms
     Changes:
              ----------
              umount:
                  Forced remount because options changed

I'm running version 2014.7.0+ds-2trusty1 from the PPA:

           Salt: 2014.7.0
         Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
         Jinja2: 2.7.2
       M2Crypto: 0.21.1
 msgpack-python: 0.3.0
   msgpack-pure: Not Installed
       pycrypto: 2.6.1
        libnacl: Not Installed
         PyYAML: 3.10
          ioflo: Not Installed
          PyZMQ: 14.0.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: 0.9.1

garethgreenaway (Contributor) commented:

Planning to take a look at this today, hoping that some of the fixes I put in, which were merged this morning, addressed this. Thanks for the reports.

eliasp (Contributor) commented Dec 30, 2014

The fix provided by @garethgreenaway in #18978 changed the behaviour, but didn't fix the situation for me.
I'm now using salt/states/mount.py from 2014.7 as of ec90619 and get this result from my NFS mount states:

----------
          ID: library-storage-mount
    Function: mount.mounted
        Name: /media/remotefs/library
      Result: None
     Comment: Remount would be forced because options (bg) changed
     Started: 23:35:40.916259
    Duration: 73.019 ms
     Changes:   

garethgreenaway (Contributor) commented:

@eliasp Can you include your state?

eliasp (Contributor) commented Dec 30, 2014

@garethgreenaway Sure, sorry…

{{ share }}-storage-mount:
    mount.mounted:
        - name: {{ pillar['storage']['mountroot'] }}/{{ share }}
        - device: {{ host }}:/{{ share }}
        - fstype: nfs
        - mkmnt: True
        - opts:
            - defaults
            - bg
            - soft
            - intr
            - timeo=5
            - retrans=5
            - actimeo=10
            - retry=5
        - require:
            - pkg: nfs-common

The current mount already includes the bg option that state.highstate test=True wants to change:

$ mount | ack-grep -i library
134.2.xx.xx:/library on /media/remotefs/library type nfs (rw,bg,soft,intr,timeo=5,retrans=5,actimeo=10,retry=5)

The corresponding entry in /proc/self/mountinfo doesn't contain the bg option:

53 22 0:37 / /media/remotefs/library rw,relatime - nfs 134.2.xx.xx:/library rw,vers=3,rsize=32768,wsize=32768,namlen=255,acregmin=10,acregmax=10,acdirmin=10,acdirmax=10,soft,proto=tcp,timeo=5,retrans=5,sec=sys,mountaddr=134.2.xx.xx,mountvers=3,mountport=33308,mountproto=udp,local_lock=none,addr=134.2.xx.xx
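
As an aside, options like bg are handled by the userspace NFS mount helper and never reach the kernel, which is why they don't appear in /proc/self/mountinfo. A rough sketch of reading both the per-mount and superblock option fields from that file (not Salt's implementation; it just follows the field layout documented in proc(5)) looks like this:

# Rough sketch: split each /proc/self/mountinfo line into per-mount options
# and superblock options. Field layout per proc(5):
# id parent major:minor root mount_point mount_options [optional fields] - fstype source super_options
def mountinfo_options(path="/proc/self/mountinfo"):
    mounts = {}
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            sep = fields.index("-")                 # separates the two halves
            mount_point = fields[4]
            opts = fields[5].split(",")             # per-mount options
            superopts = fields[sep + 3].split(",")  # superblock options
            mounts[mount_point] = {"opts": opts, "superopts": superopts}
    return mounts

if __name__ == "__main__":
    info = mountinfo_options()
    # Inspect a hypothetical NFS mount point:
    print(info.get("/media/remotefs/library"))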

Besides that, I stumbled upon a traceback while working on this: if the device/resource is currently busy, the mount state raises a traceback instead of handling it more gracefully.
I'll file a separate issue for this.

eliasp (Contributor) commented Dec 31, 2014

Another related case I found, this time with a CIFS mount:
State:

elite-mount:
    mount.mounted:
        - name: {{ pillar['linux']['remotefspath'] }}/{{ name }}_share
        - device: //{{ data.address }}/{{ data.share }}
        - fstype: cifs
        - mkmnt: True
        - opts:
            - username={{ data.user }}
            - password={{ data.password }}
            - _netdev
            - soft

state.highstate test=True:

          ID: elite-mount
    Function: mount.mounted
        Name: /media/remotefs/elite_share
      Result: None
     Comment: Remount would be forced because options (password=plain-text-password-redacted) changed
     Started: 01:49:46.970902
    Duration: 86.95 ms
     Changes:   

Entry in /proc/self/mountinfo:

45 22 0:35 / /media/remotefs/elite_share rw,relatime - cifs //192.168.1.2/data_mirror rw,vers=1.0,cache=strict,username=backupuser,domain=8B9865J,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.2,file_mode=0755,dir_mode=0755,nounix,serverino,rsize=61440,wsize=65536,actimeo=1

So there are two issues here (see the sketch below):

  1. It trips over the password option, which is not present in /proc/self/mountinfo.
  2. It shows the value of password in plain text (redacted above as plain-text-password-redacted).
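
A rough sketch of how a comparison could both skip credential-style options (they are never reported back by the kernel) and redact their values in any output. The option names in SENSITIVE_OPTS are an assumption for illustration; this is not Salt's code.

# Hypothetical sketch: skip credential options when comparing against /proc data,
# and redact their values whenever mount options are echoed back in state output.
SENSITIVE_OPTS = {"password", "credentials"}   # assumed list

def opt_name(opt):
    return opt.partition("=")[0]

def redact(opts):
    """Replace values of sensitive options before they are printed anywhere."""
    return [
        "{0}=XXX-REDACTED-XXX".format(opt_name(o))
        if opt_name(o) in SENSITIVE_OPTS and "=" in o else o
        for o in opts
    ]

def changed_opts(requested, active):
    """Options that genuinely differ; credential options are never compared."""
    return [
        o for o in requested
        if opt_name(o) not in SENSITIVE_OPTS and o not in active
    ]

requested = ["username=backupuser", "password=s3cret", "soft"]
active = ["rw", "soft", "username=backupuser"]
print(changed_opts(requested, active))  # -> [] (no forced remount)
print(redact(requested))                # password value no longer shown in plain text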

eliasp (Contributor) commented Dec 31, 2014

… and another option which causes mount.mounted to stumble:
Forced remount because options (soft) changed

This is caused by the CIFS state described in my previous comment.

garethgreenaway (Contributor) commented:

Is soft a valid flag for cifs?

eliasp (Contributor) commented Dec 31, 2014

Yes, soft is a valid flag which is used by default, so my usage of it in the state is actually redundant but not wrong.


eliasp (Contributor) commented Jan 6, 2015

Created #19369 to address some additional issues I ran into in my setup. With #19369 applied everything works for me so far.

rallytime (Contributor) commented:

Thanks for following up everyone! @nvx with the two pull requests applied, is this issue fixed for you as well?

@rallytime added the fixed-pls-verify label on Jan 6, 2015
RobertFach (Contributor) commented:

I can confirm that #19369 helps with this problem... I have the same problem with NFS mounts that have the "bg" option set. The problem is that these options are not reflected in the proc filesystem.

In which release could I see the fix?

eliasp (Contributor) commented Jan 7, 2015

In which release could I see the fix?

This will be in 2014.7.1 and 2015.2.0.

If you want to test it now, place salt/states/mount.py from the 2014.7 branch into your states repository/directory, in a new subdirectory named _states.

Then run salt your-minion saltutil.sync_all to distribute it to your minion(s) where you'd want to test it.
See also Dynamic Module Distribution for more details on how to do this.

Don't forget to remove _states/mount.py once you've updated your minions to the next release.

nvx (Contributor, Author) commented Jan 12, 2015

Hmm, I updated states/mount.py from the 2014.7 branch and now I get this error:

----------
          ID: /nfsmnt
    Function: mount.mounted
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/usr/lib/python2.6/site-packages/salt/state.py", line 1533, in call
                  **cdata['kwargs'])
                File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
                  self.gen.throw(type, value, traceback)
                File "/usr/lib/python2.6/site-packages/salt/utils/context.py", line 41, in func_globals_inject
                  yield
                File "/usr/lib/python2.6/site-packages/salt/state.py", line 1533, in call
                  **cdata['kwargs'])
                File "/var/cache/salt/minion/extmods/states/mount.py", line 191, in mounted
                  if opt not in active[real_name]['opts'] and opt not in active[real_name]['superopts'] and opt not in mount_invisible_options:
              KeyError: 'superopts'
     Started: 14:43:01.999878
    Duration: 40.286 ms
     Changes:

I tried updating modules/mount.py as well but it didn't help.

/nfsmnt:
  mount.mounted:
    - device: nfshost:/nfsmnt
    - fstype: nfs
    - opts: rw,tcp,bg,hard,intr

This could be an issue in 2014.7 unrelated to the fix for this issue, though. Is there a known-good revision that has this issue fixed that I can test to confirm the fix?

garethgreenaway (Contributor) commented:

@nvx What OS? Distribution? Kernel version?

nvx (Contributor, Author) commented Jan 12, 2015

Same as described in the initial post.

RHEL5 x64

Linux boxen 2.6.18-400.1.1.el5 #1 SMP Sun Dec 14 06:01:17 EST 2014 x86_64 x86_64 x86_64 GNU/Linux
           Salt: 2014.7.0
         Python: 2.6.8 (unknown, Nov  7 2012, 14:47:45)
         Jinja2: 2.5.5
       M2Crypto: 0.21.1
 msgpack-python: 0.1.12
   msgpack-pure: Not Installed
       pycrypto: 2.3
        libnacl: Not Installed
         PyYAML: 3.08
          ioflo: Not Installed
          PyZMQ: 14.3.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: Not Installed

garethgreenaway (Contributor) commented:

Thanks. 2.6.18 obviously doesn't have the superopts bits in the proc file. Will take a look and submit a fix.
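
A defensive lookup along these lines (a sketch of the general idea, not the actual patch) would avoid the KeyError on kernels whose /proc data has no superopts field:

# Sketch only: treat a missing 'superopts' entry (older kernels such as 2.6.18)
# as an empty list instead of raising KeyError.
def opt_is_active(opt, active_entry, invisible_options):
    opts = active_entry.get("opts", [])
    superopts = active_entry.get("superopts", [])   # may be absent on old kernels
    return opt in opts or opt in superopts or opt in invisible_options

entry_without_superopts = {"opts": ["rw", "tcp", "hard", "intr"]}
print(opt_is_active("bg", entry_without_superopts, {"bg", "addr", "retry"}))  # True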

garethgreenaway (Contributor) commented:

Was able to duplicate this in a Docker instance. Couple of questions: are you running it inside a Docker instance? What happens when you run the blkid command on the machine in question?

garethgreenaway (Contributor) commented:

@pkruithof Perfect! Exactly what I was seeing as well. The vers option is showing up in the mounted file system as vers=4.0, but when specified in the Salt state it's vers=4; Salt sees the difference and forces a remount. Looks like a scenario we need to account for; I'll look at a fix later today.
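
One way to handle that particular case (a sketch of the general idea, not the eventual fix) is to normalize numeric option values before comparing them:

# Sketch: treat 'vers=4' and 'vers=4.0' (or 'nfsvers=4' / 'nfsvers=4.0') as equal
# by normalizing numeric values before comparison. Not the actual Salt fix.
def normalize(opt):
    name, sep, value = opt.partition("=")
    if sep and value.replace(".", "", 1).isdigit():
        value = "{0:g}".format(float(value))   # '4' and '4.0' both become '4'
    return "{0}={1}".format(name, value) if sep else name

def same_option(a, b):
    return normalize(a) == normalize(b)

print(same_option("vers=4", "vers=4.0"))        # True
print(same_option("nfsvers=4", "nfsvers=4.1"))  # False: a real difference

In practice the kernel may also negotiate 4.1 when 4 is requested, so a complete fix needs more than numeric normalization.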

robnagler pushed a commit to radiasoft/salt-conf that referenced this issue Apr 20, 2016
robnagler pushed a commit to radiasoft/salt-conf that referenced this issue Apr 20, 2016
pkruithof (Contributor) commented:

@garethgreenaway any progress on this by any chance?

bergei commented Sep 6, 2016

I see this in 2015.8.11

/srv/salt-images:
  mount.mounted:
    - device: {{ salt['pillar.get']('nfsmount') }}
    - fstype: nfs
    - opts: nfsvers=3,rsize=32768,wsize=32768,noatime,nodiratime
    - dump: 0
    - pass_num: 0
    - persist: True
    - mkmnt: True

          ID: /srv/salt-images
    Function: mount.mounted
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 14:03:06.516272
    Duration: 139.938 ms
     Changes:   
              ----------
              umount:
                  Forced unmount and mount because options (nfsvers=3) changed

Da-Juan (Contributor) commented Oct 20, 2016

Same in 2016.3.3:

# Mount CIFS share
backups_mount:
  mount.mounted:
    - name: {{ backup_directory }}
    - device: //my.server.com/backups
    - fstype: cifs
    - opts: vers=3.0,credentials=/etc/backups.cifs,uid=900,gid=34,file_mode=0660,dir_mode=0770
    - require:
      - file: {{ backup_directory }}
      - file: /etc/backups.cifs
----------
          ID: /etc/backups.cifs
    Function: file.managed
      Result: True
     Comment: File /etc/backups.cifs is in the correct state
     Started: 14:31:52.839528
    Duration: 22.704 ms
     Changes:   
----------
          ID: backups_mount
    Function: mount.mounted
        Name: /srv/backups
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 14:31:52.864405
    Duration: 93.889 ms
     Changes:   
              ----------
              umount:
                  Forced remount because options (credentials=/etc/backups.cifs) changed

jodok (Contributor) commented Jan 8, 2017

I had this issue as well, but specifying the opts as a list helped:

    - opts:
      - noatime
      - nobarrier

cheald (Contributor) commented Feb 6, 2017

I'm having the same issue on 2016.11.2 with the size option on a tmpfs mount:

  mount.mounted:
    - device: tmpfs
    - fstype: tmpfs
    - opts:
      - rw
      - size=256M
      - noatime
    - mkmnt: true

          ID: /var/lib/varnish
    Function: mount.mounted
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 02:05:46.434034
    Duration: 20.417 ms
     Changes:
              ----------
              umount:
                  Forced remount because options (size=256M) changed

corywright (Contributor) commented:

This is causing problems for us as well.

My state:

{% for export in pillar.get('nfs_exports',[]) %}
/mnt/{{ export }}:
  mount.mounted:
    - device: 10.10.10.25:/var/nfs/{{ export }}
    - fstype: nfs
    - opts: auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800
    - persist: True
    - mkmnt: True
    - require:
      - pkg: nfs-common
{% endfor %}

Output:

      ID: /mnt/report_export
Function: mount.mounted
  Result: True
 Comment: Target was already mounted. Updated the entry in the fstab.
 Started: 20:50:48.478451
Duration: 105.511 ms
 Changes:   
          ----------
          persist:
              update
          umount:
              Forced unmount and mount because options (nfsvers=4) changed

I tried removing the nfsvers=4 option from the state to see if that helped, but on the next highstate it complained about a different option:

      ID: /mnt/report_export
Function: mount.mounted
  Result: True
 Comment: Target was already mounted. Updated the entry in the fstab.
 Started: 20:51:51.493903
Duration: 65.774 ms
 Changes:   
          ----------
          persist:
              update
          umount:
              Forced unmount and mount because options (nolock) changed

This issue seems to have been forgotten.

garethgreenaway (Contributor) commented Feb 9, 2017

@corywright Compare the options in your state with the options that are listed in /proc/mounts. Mount options are a royal pain, since what is passed when the mount command is issued can differ from what actually ends up being used. I've seen nfsvers=4 translate into nfsvers=4.0 or similar. And I wonder if nolock is one that ends up being hidden from the /proc/mounts file.
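
For anyone wanting to check this quickly, here is a small one-off helper (not part of Salt; the mount point and option string are just taken from the example above) that diffs the options in a state against what /proc/mounts currently reports:

# One-off helper: show which of the options from the state are absent from
# what /proc/mounts reports for a given mount point.
def proc_mount_opts(mount_point, path="/proc/mounts"):
    with open(path) as fh:
        for line in fh:
            device, mnt, fstype, opts = line.split()[:4]
            if mnt == mount_point:
                return opts.split(",")
    return []

state_opts = "auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800".split(",")
active_opts = proc_mount_opts("/mnt/report_export")  # hypothetical mount point
missing = [o for o in state_opts if o not in active_opts]
print("options Salt would consider changed:", missing)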

corywright (Contributor) commented:

@garethgreenaway Thanks Gareth. I can see the differences there.

Is there a solution? Or should the mount.mounted state with - persist: True be avoided for NFS volumes?

It seems like the documentation currently directs users to implement states that can be problematic in production (unexpectedly unmounting busy NFS volumes during highstates).

shallot commented Mar 21, 2017

It stands to reason that a forced unmount and mount because options changed in the case of "vers=4" vs. "vers=4.0" is counter-intuitive. If mount(8) accepts "4" as an alias for "4.0" there, so should the Salt mount module. I realize it will be a minor pain to maintain an endless list of these aliases, but such is life.

garethgreenaway (Contributor) commented:

@shallot The Salt module and state module do accept "4" as the value for nfsvers; the issue is on the system side, outside of Salt. That "4" can be translated to 4.0, 4.1, etc., so the next time the state runs the values are different and the volume is remounted.

shallot commented Mar 21, 2017

@garethgreenaway Sure, but when the system being modified doesn't actually see the difference between two invocations of such a Salt state, it doesn't make sense to remount.

I can fathom a situation where someone applies such a salt state with the actual intention of having the mount upgraded to whatever is the latest 4.x version, but it seems too far-fetched to be the default.

When a nifty upgrade feature implicitly risks downtime or user data loss, its default should be conservative and require the user to make that choice explicitly.

dguitarbite commented:

Long read! But I can confirm that I have this issue with nfsvers=4.1, which complains and usually fails to remount because my workloads are using the shares!

ysagon commented Jun 21, 2017

Long read too, and I have the issue with the "ac" option of NFS on CentOS 6.9.

kshcherban commented:

Same issue with EFS (i.e. nfsvers=4.1), Salt 2016.11.7.

davidegiunchidiennea commented Nov 7, 2017

I got the same problem. I've resolved it by replacing:

nfs-montato:
  mount.mounted:
    - name: /var/www
    - device: "my.hostname:/data"
    - opts: "rw,rsize=32768,wsize=32768,hard,tcp,nfsvers=3,timeo=3,retrans=10"
    - fstype: nfs

with:

nfs-montato:
  mount.mounted:
    - name: /var/www
    - device: "my.hostname:/data"
    - opts: "rw,rsize=32768,wsize=32768,hard,tcp,vers=3,timeo=3,retrans=10"
    - fstype: nfs

Now the remount is not forced every time, and it works.
That's because if I mount a share with the option nfsvers=3 and then run the command "mount", I can see that parameter shown as vers=3, not "nfsvers"!
Looking at the NFS manpage:

 vers=n         This option is an alternative to the nfsvers option.  It is included for compatibility with other operating systems

So, using "vers" instead of "nfsvers" is a good workaround.
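
If renaming the option in every state isn't practical, the same idea could be applied at comparison time by treating the two spellings as aliases. This is only a sketch under that assumption, not Salt's code, and the alias table is hypothetical:

# Sketch: map alias spellings onto one canonical name before comparing,
# so 'nfsvers=3' in the state matches 'vers=3' reported by the system.
OPTION_ALIASES = {"nfsvers": "vers"}   # assumed alias table

def canonical(opt):
    name, sep, value = opt.partition("=")
    return OPTION_ALIASES.get(name, name) + sep + value

state_opts = ["rw", "hard", "tcp", "nfsvers=3", "timeo=3", "retrans=10"]
active_opts = ["rw", "hard", "tcp", "vers=3", "timeo=3", "retrans=10"]
canonical_active = [canonical(o) for o in active_opts]
missing = [o for o in state_opts if canonical(o) not in canonical_active]
print(missing)  # -> [] (no forced remount)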

dguitarbite commented:

@davidegiunchidiennea That sounds like the solution!

We could actually say that this is a good solution and document this somewhere.

matze502 commented Feb 23, 2018

I've got the same problem here with the noauto option (version 2016.11.4).

          ID: /tmp/install
    Function: mount.mounted
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 13:59:50.574278
    Duration: 83.513 ms
     Changes:
              ----------
              umount:
                  Forced unmount and mount because options (noauto) changed

State is:

/tmp/install:
  mount.mounted:
    - device: hostname:/share
    - fstype: nfs
    - opts: noauto,ro,vers=3
    - dump: 3
    - pass_num: 0
    - persist: True
    - mkmnt: True

cathode911 commented Apr 15, 2018

Same bug here with user_xattr on an ext4 filesystem.
State:

lvm-lv-srv-mount:
  mount.mounted:
    - name: /srv
    - device: /dev/mapper/sys-srv
    - fstype: ext4
    - opts: noatime,nodev,user_xattr
    - dump: 0
    - pass_num: 2
    - persist: True
    - mkmnt: True

Output:

          ID: lvm-lv-srv-mount
    Function: mount.mounted
        Name: /srv
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 21:53:42.155559
    Duration: 99.332 ms
     Changes:
              ----------
              umount:
                  Forced remount because options (user_xattr) changed

stale bot commented Jul 29, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

dimitrijepetrovic commented:

Just an update: I can still see this issue.

MartinEmrich commented:

Same here (Salt 3004):

Forced remount because options (wsize=1048576) changed

yet wsize was 1048576 all along, and is also reported as such by /proc/mounts and /etc/mtab.

metaphys commented:

  • I also have the issue with the "noauto" parameter (obviously it does not appear in /proc/mounts, as it is not per se a mount option but merely information telling the OS not to mount the partition at boot time).
  • I also have it with the btrfs user=blahblah option (for a subvolume), as it appears in the output of the mount command but not in the /proc/mounts file.

debian 11 / salt 3004.2

CodingMinds commented:

Created a new bug report with a proposal for a solution (at least for noauto): #65865
