
NFS mounts are remounted at every execution #57520

Closed · nohaj opened this issue Jun 2, 2020 · 11 comments · Fixed by #57968
Labels: Bug (broken, incorrect, or confusing behavior) · Magnesium (Mg release, after Na, prior to Al) · Platform (relates to OS, containers, platform-based utilities like FS, system-based apps) · severity-medium (3rd level, incorrect or bad functionality, confusing, and lacks a workaround)

nohaj commented Jun 2, 2020

Hello,

I'm trying to mount some NFS shares on my servers, and it seems that I'm facing the same problem as the unresolved issues #15289 and #53688.

[root@myserver ~]# salt-call --versions-report
Salt Version:
           Salt: 3000.2

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: Not Installed
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.6.2
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.5 (default, Apr  2 2020, 13:16:51)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.5.3
            ZMQ: 4.1.4

System Versions:
           dist: centos 7.8.2003 Core
         locale: UTF-8
        machine: x86_64
        release: 3.10.0-1127.8.2.el7.x86_64
         system: Linux
        version: CentOS Linux 7.8.2003 Core
The state being applied:

/mnt/backup:
  mount.mounted:
    - device: myserver:/nfs/backup
    - fstype: nfs
    - options: "vers=4.1,hard,proto=tcp,sec=sys"
    - mkmnt: true
    - persist: true

On the first execution, the folder is created and the mount succeeds.

Then, for every subsequent execution, I get:

     ID: /mnt/backup
    Function: mount.mounted
      Result: True
     Comment: Target was successfully mounted. Entry already exists in the fstab.
     Started: 14:18:59.361976
    Duration: 322.251 ms
     Changes:   
              ----------
              mount:
                  True
              umount:
                  Forced unmount because devices don't match. Wanted: myserver:/nfs/backup, current: myserver:/nfs/backup, /root/myserver:/nfs/backup

The same happens in Python:

>>> os.path.realpath('myserver:/nfs/backup')
'/root/myserver:/nfs/backup'
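
For context: os.path.realpath treats any string that does not start with "/" as a path relative to the current working directory, so the device spec gets "/root" prepended when salt-call runs from /root. A guard along these lines might avoid it; this is just a sketch assuming any device spec containing a colon is a remote (non-path) device, and the helper is hypothetical, not the actual code in salt/states/mount.py:

import os

def resolve_device(device):
    """Resolve symlinks for local device paths only (hypothetical helper).

    Remote specs such as NFS "host:/path" are not local paths, so
    os.path.realpath would treat them as relative and prepend the
    current working directory to them.
    """
    if ":" in device or not device.startswith("/"):
        return device  # leave remote/non-path device specs untouched
    return os.path.realpath(device)

print(resolve_device("myserver:/nfs/backup"))  # -> myserver:/nfs/backup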

Am I doing something wrong?

Regards,

Johan

@nohaj nohaj added the Bug (broken, incorrect, or confusing behavior) label Jun 2, 2020
@dmurphy18 dmurphy18 added the info-needed (waiting for more info) label Jun 2, 2020
dmurphy18 (Contributor) commented:

@nohaj Can you upgrade to Salt 3000.3? A number of issues were fixed in runners etc. in that release, which might have affected NFS mounts too. Then rerun your tests.
Thanks

@dmurphy18 dmurphy18 added this to the Blocked milestone Jun 2, 2020
nohaj (Author) commented Jun 3, 2020

Hello,

Just updated to 3000.3; same issue =(

@dmurphy18 dmurphy18 removed this from the Blocked milestone Jun 8, 2020
@dmurphy18 dmurphy18 removed the info-needed (waiting for more info) label Jun 8, 2020
dmurphy18 (Contributor) commented:

This might be related to #39292 causing issues, since the fstab entry is not quite the same.

@sagetherage sagetherage added the Platform (relates to OS, containers, platform-based utilities like FS, system-based apps) and severity-medium (3rd level, incorrect or bad functionality, confusing, and lacks a workaround) labels Jun 15, 2020
@sagetherage sagetherage added this to the Approved milestone Jun 15, 2020
nohaj (Author) commented Jun 18, 2020

OK, so today I added another NFS share on some servers, but using NFS version 4.0 (a different NFS server), and I don't see this issue with that mount.

So I switched the other mounts to version 4.0 and the issue is gone.

- options: "vers=4.1,hard,proto=tcp,sec=sys"
+ options: "vers=4.0,hard,proto=tcp,sec=sys"

Does that help?

piterpunk commented:

The mount.mounted state doesn't expect the parameter "options"; it should be "opts", as stated in the documentation. I ran a few tests with Salt 3001 using the following state:

/srv/mirrors:
  mount.mounted:
    - device: marvin:/srv/www/htdocs/mirrors
    - fstype: nfs
    - opts: "vers=4.1,hard,proto=tcp"
    - mkmnt: true
    - persist: true

On the first execution it works as expected:

local:
----------
          ID: /srv/mirrors
    Function: mount.mounted
      Result: True
     Comment: Target was successfully mounted. Added new entry to the fstab.
     Started: 04:39:04.325111
    Duration: 310.876 ms
     Changes:   
              ----------                                                                                                            
              mount:
                  True
              persist:
                  new

Summary for local                                                                                                                   
------------                                                                                                                        
Succeeded: 1 (changed=1)

The filesystem is mounted and the fstab entry is created. But if I apply the state a second time, the fstab is updated:

local:
----------
          ID: /srv/mirrors
    Function: mount.mounted
      Result: True
     Comment: Target was already mounted. Updated the entry in the fstab.
     Started: 04:40:14.435902
    Duration: 77.426 ms
     Changes:   
              ----------                                                                                                            
              persist:
                  update

Summary for local                                                                                                                   
------------                                                                                                                        
Succeeded: 1 (changed=1)

Now, the third time it works as expected, with no changes:

local:
----------
          ID: /srv/mirrors
    Function: mount.mounted
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 04:40:49.145368
    Duration: 74.772 ms
     Changes:   

Summary for local
------------
Succeeded: 1

But if I unmount /srv/mirrors and apply the state again, the fstab is updated again:

local:
----------
          ID: /srv/mirrors
    Function: mount.mounted
      Result: True
     Comment: Target was successfully mounted. Updated the entry in the fstab.
     Started: 04:42:31.015303
    Duration: 304.953 ms
     Changes:   
              ----------
              mount:
                  True
              persist:
                  update

Summary for local
------------
Succeeded: 1 (changed=1)

And the fstab updating cycle restarts.

This happens because when the filesystem is unmounted, the generated fstab entry uses the options in the same order as in the sls file, but when the filesystem is mounted, the generated entry uses the options in alphabetical order. So the two entries don't match, and the fstab keeps being updated depending on whether the state is applied with the filesystem mounted or not.
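
A minimal sketch of an order-insensitive option comparison, assuming the options are held as comma-separated strings; this only illustrates the idea and is not the actual patch:

def opts_match(wanted, current):
    """Compare two mount option strings, ignoring option order (illustrative only)."""
    return set(wanted.split(",")) == set(current.split(","))

# The sls order and the alphabetical order describe the same mount:
assert opts_match("vers=4.1,hard,proto=tcp", "hard,proto=tcp,vers=4.1")
assert not opts_match("vers=4.1,hard", "vers=4.0,hard")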

While I didn't get the same message as the original post, this seems to be a related problem, and it keeps the fstab being updated even when the correct entry is present.

I have a fix for this issue, but I don't know if I should include it in PR #57669, which already modifies states/mount.py, or create a new one. Which would be the better approach?

nohaj (Author) commented Jul 15, 2020

> The mount.mounted state doesn't expect the parameter "options"; it should be "opts", as stated in the documentation.

You're right. The "options" parameter comes from a formula, but inside the formula it's indeed "opts" =)

piterpunk commented:

> You're right. The "options" parameter comes from a formula, but inside the formula it's indeed "opts" =)

Can you share this formula so I can test the whole workflow?

nohaj (Author) commented Jul 15, 2020

I use this one: https://github.com/saltstack-formulas/nfs-formula

piterpunk commented:

@nohaj, you probably have something custom in there.

The linked nfs-formula also doesn't use "options"; it expects "opts" in the calling state or "mount_opts" in the pillar.

Can you apply the state using salt-call with "-l debug" or "-l trace" and post the output here?

nohaj (Author) commented Jul 16, 2020

Sorry about that @piterpunk, I am actually using "opts" and not "options". I don't know why I wrote that in my initial post.

And you are right, the third time nothing happens anymore, so it's not as bad as I thought.

piterpunk commented:

@nohaj nice! So the bug is exactly the one I fixed. Thanks!

@sagetherage sagetherage added the Magnesium (Mg release, after Na, prior to Al) label Jul 22, 2020
@sagetherage sagetherage modified the milestones: Approved, Magnesium Jul 22, 2020