
[BUG] mount.fstab_present does not mount the filesystem #60464

Closed
cmonty14 opened this issue Jun 30, 2021 · 9 comments
Labels: Bug, cannot-reproduce, Duplicate, severity-medium, State-Module
Milestone: Approved

Comments

@cmonty14

Description
I want to mount a filesystem using a related entry in /etc/fstab.
However, the filesystem is not mounted although the entry in /etc/fstab is created correctly.
This issue is related to #57560, but the fix from PR #57669 does not seem to work.

Setup

umount_mnt-backup_lve_db:
  service.dead:
    - name: mnt-backup_lve_db.automount
  mount.unmounted:
    - name: /mnt/backup_lve_db

/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY:
  file.managed:
    -   source: salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY
    -   makedirs: True
    -   user: root
    -   group: root
    -   mode: 644

{% set device = salt['cmd.run']('blkid -o device -t LABEL=backup-db') %}

/mnt/backup_lve_db:
  file.directory:
    - name: /mnt/backup_lve_db
    - user: root
    - group: root
    - mode: 755
  mount.fstab_present:
    - name: {{ device }}
    - fs_file: /mnt/backup_lve_db
    - fs_vfstype: xfs
    - fs_mntops:
      - noauto
      - x-systemd.automount
    - mount_by: uuid
    - mount: True

Steps to Reproduce the behavior
Run the state with an empty but formatted disk.
The output reports no error:

salt 'vlcdbts4.devsys.net.sap' state.apply manager_org_1.stc-lve-automnt-backupdb
vlcdbts4.devsys.net.sap:
----------
          ID: umount_mnt-backup_lve_db
    Function: service.dead
        Name: mnt-backup_lve_db.automount
      Result: True
     Comment: The service mnt-backup_lve_db.automount is already dead
     Started: 10:07:55.812412
    Duration: 353.253 ms
     Changes:
----------
          ID: umount_mnt-backup_lve_db
    Function: mount.unmounted
        Name: /mnt/backup_lve_db
      Result: True
     Comment: Target was already unmounted
     Started: 10:07:56.167519
    Duration: 33.91 ms
     Changes:
----------
          ID: /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY
    Function: file.managed
      Result: True
     Comment: File /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY is in the correct state
     Started: 10:07:56.204708
    Duration: 19.565 ms
     Changes:
----------
          ID: /mnt/backup_lve_db
    Function: file.directory
      Result: True
     Comment: The directory /mnt/backup_lve_db is in the correct state
     Started: 10:07:56.224422
    Duration: 1.125 ms
     Changes:
----------
          ID: /mnt/backup_lve_db
    Function: mount.fstab_present
        Name: /dev/sdc
      Result: True
     Comment: /mnt/backup_lve_db entry added in /etc/fstab.
     Started: 10:07:56.225681
    Duration: 10.242 ms
     Changes:
              ----------
              persist:
                  new

Summary for vlcdbts4.devsys.net.sap
------------
Succeeded: 5 (changed=1)
Failed:    0
------------
Total states run:     5
Total run time: 418.095 ms

But the device is not mounted.
See attached screenshot for details.
[Screenshot: 2021-06-30_10-23-41]
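
For reference, the mount status can be double-checked on the minion with standard tools (a minimal check using the mount point from the state above; these commands are added for illustration and are not part of the original report):

findmnt /mnt/backup_lve_db        # prints nothing and exits non-zero if not mounted
grep backup_lve_db /etc/fstab     # confirms the fstab entry itself was written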

Expected behavior
If the state runs successfully, the device must be mounted.

Versions Report
Both minion and master use the same OS and Salt version.

vlcspsumasrv:~ # salt --versions-report
Salt Version:
          Salt: 3002.2

Dependency Versions:
cffi: 1.13.2
cherrypy: unknown
dateutil: 2.7.3
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
Jinja2: 2.10.1
libgit2: 0.28.4
M2Crypto: 0.35.2
Mako: Not Installed
msgpack: 0.5.6
msgpack-pure: Not Installed
mysql-python: Not Installed
pycparser: 2.17
pycrypto: 3.9.0
pycryptodome: Not Installed
pygit2: 0.28.2
Python: 3.6.13 (default, Mar 10 2021, 18:30:35) [GCC]
python-gnupg: Not Installed
PyYAML: 5.3.1
PyZMQ: 17.0.0
smmap: Not Installed
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.2.3

System Versions:
dist: sles 15.2 n/a
locale: UTF-8
machine: x86_64
release: 5.3.18-24.67-default
system: Linux
version: SLES 15.2 n/a


cmonty14 added the Bug and needs-triage labels on Jun 30, 2021

@OrangeDog
Contributor

https://github.com/saltstack/salt/blob/v3002.2/salt/states/mount.py#L1274
The code in that PR will only mount if the entry is being added; it won't do anything if the entry is already in fstab but not mounted.
However, the output said "entry added in", so it should then have tried the mount. I can't see how it could have skipped that.
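
Until that is clarified, a possible manual workaround (a suggestion, untested here; the automount unit name comes from the state above) is to trigger the mount explicitly once the fstab entry exists, since noauto means nothing will mount it at boot either:

# mount(8) resolves the device and options from the freshly written fstab entry
mount /mnt/backup_lve_db
# or, given x-systemd.automount in fs_mntops, regenerate units and start the automount
systemctl daemon-reload
systemctl start mnt-backup_lve_db.automount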

Do note that 3002.2 has critical security vulnerabilities and you should upgrade (and conduct an intrusion audit) immediately.

OrangeDog added the severity-medium and State-Module labels and removed needs-triage on Jun 30, 2021
OrangeDog added this to the Approved milestone on Jun 30, 2021
@cmonty14
Author

Do note that 3002.2 has critical security vulnerabilities and you should upgrade (and conduct an intrusion audit) immediately.

Unfortunately, the distribution I use here does not provide a newer Salt version.
However, could you please advise how to perform an intrusion audit?

@OrangeDog
Contributor

Then you need to do it yourself, via the salt repos or pip or whatever.
You need to check whether you had mitigations for the (multiple) issues, then determine whether any were exploited and whether you have any malware on your system.

@piterpunk

Hi @cmonty14,

I just ran your states here and they seem to work OK:

# salt-call -l debug state.apply issue_60464
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: marvin.mylab
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Override  __grains__: <module 'salt.loaded.int.log_handlers.sentry_mod' from '/usr/lib/python3.9/site-packages/salt/log/handlers/sentry_mod.py'>
[DEBUG   ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Override  __utils__: <module 'salt.loaded.int.grains.zfs' from '/usr/lib/python3.9/site-packages/salt/grains/zfs.py'>
[DEBUG   ] Unable to resolve address fe80::dd72:61ee:744d:c2ee: [Errno 1] Unknown host
[DEBUG   ] Unable to resolve address fe80::221:85ff:fe51:5c07: [Errno 1] Unknown host
[DEBUG   ] Elapsed time getting FQDNs: 0.09605860710144043 seconds
[DEBUG   ] Loading static grains from /etc/salt/grains
[DEBUG   ] LazyLoaded zfs.is_supported
[DEBUG   ] Connecting to master. Attempt 1 of 1
[DEBUG   ] "marvin.mylab" Not an IP address? Assuming it is a hostname.
[DEBUG   ] Master URI: tcp://192.168.0.42:4506
[DEBUG   ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506')
[DEBUG   ] Generated random reconnect delay between '1000ms' and '11000ms' (9312)
[DEBUG   ] Setting zmq_reconnect_ivl to '9312ms'
[DEBUG   ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506', 'clear')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://192.168.0.42:4506
[DEBUG   ] Trying to connect to: tcp://192.168.0.42:4506
[DEBUG   ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] salt.crypt._get_key_with_evict: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Connecting the Minion to the Master publish port, using the URI: tcp://192.168.0.42:4505
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Determining pillar cache
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506', 'aes')
[DEBUG   ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://192.168.0.42:4506
[DEBUG   ] Trying to connect to: tcp://192.168.0.42:4506
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] LazyLoaded state.apply
[DEBUG   ] LazyLoaded direct_call.execute
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] Override  __grains__: <module 'salt.loaded.int.module.grains' from '/usr/lib/python3.9/site-packages/salt/modules/grains.py'>
[DEBUG   ] LazyLoaded grains.get
[DEBUG   ] LazyLoaded config.get
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506', 'aes')
[DEBUG   ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://192.168.0.42:4506
[DEBUG   ] Trying to connect to: tcp://192.168.0.42:4506
[DEBUG   ] Gathering pillar data for state run
[DEBUG   ] Finished gathering pillar data for state run
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] In saltenv 'base', looking at rel_path 'issue_60464.sls' to resolve 'salt://issue_60464.sls'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/issue_60464.sls' to resolve 'salt://issue_60464.sls'
[DEBUG   ] Fetching file from saltenv 'base', ** attempting ** 'salt://issue_60464.sls'
[DEBUG   ] No dest file found
[INFO    ] Fetching file from saltenv 'base', ** done ** 'issue_60464.sls'
[DEBUG   ] compile template: /var/cache/salt/minion/files/base/issue_60464.sls
[DEBUG   ] Jinja search path: ['/var/cache/salt/minion/files/base']
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506', 'aes')
[DEBUG   ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://192.168.0.42:4506
[DEBUG   ] Trying to connect to: tcp://192.168.0.42:4506
[DEBUG   ] LazyLoaded cmd.run
[INFO    ] Executing command 'blkid' in directory '/root'
[DEBUG   ] stdout: /dev/mapper/testvg-backuplv
[DEBUG   ] output: /dev/mapper/testvg-backuplv
[PROFILE ] Time (in seconds) to render '/var/cache/salt/minion/files/base/issue_60464.sls' using 'jinja' renderer: 0.37203145027160645
[DEBUG   ] Rendered data from file: /var/cache/salt/minion/files/base/issue_60464.sls:
umount_mnt-backup_lve_db:
  service.dead:
    - name: mnt-backup_lve_db.automount
  mount.unmounted:
    - name: /mnt/backup_lve_db

/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY:
  file.managed:
    -   source: salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY
    -   makedirs: True
    -   user: root
    -   group: root
    -   mode: 644



/mnt/backup_lve_db:
  file.directory:
    - name: /mnt/backup_lve_db
    - user: root
    - group: root
    - mode: 755
  mount.fstab_present:
    - name: /dev/mapper/testvg-backuplv
    - fs_file: /mnt/backup_lve_db
    - fs_vfstype: xfs
    - fs_mntops:
      - noauto
      - x-systemd.automount
    - mount_by: uuid
    - mount: True

[DEBUG   ] Results of YAML rendering: 
OrderedDict([('umount_mnt-backup_lve_db', OrderedDict([('service.dead', [OrderedDict([('name', 'mnt-backup_lve_db.automount')])]), ('mount.unmounted', [OrderedDict([('name', '/mnt/backup_lve_db')])])])), ('/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY', OrderedDict([('file.managed', [OrderedDict([('source', 'salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY')]), OrderedDict([('makedirs', True)]), OrderedDict([('user', 'root')]), OrderedDict([('group', 'root')]), OrderedDict([('mode', 644)])])])), ('/mnt/backup_lve_db', OrderedDict([('file.directory', [OrderedDict([('name', '/mnt/backup_lve_db')]), OrderedDict([('user', 'root')]), OrderedDict([('group', 'root')]), OrderedDict([('mode', 755)])]), ('mount.fstab_present', [OrderedDict([('name', '/dev/mapper/testvg-backuplv')]), OrderedDict([('fs_file', '/mnt/backup_lve_db')]), OrderedDict([('fs_vfstype', 'xfs')]), OrderedDict([('fs_mntops', ['noauto', 'x-systemd.automount'])]), OrderedDict([('mount_by', 'uuid')]), OrderedDict([('mount', True)])])]))])
[PROFILE ] Time (in seconds) to render '/var/cache/salt/minion/files/base/issue_60464.sls' using 'yaml' renderer: 0.01394343376159668
[DEBUG   ] LazyLoaded config.option
[DEBUG   ] LazyLoaded systemd.booted
[DEBUG   ] LazyLoaded service.start
[DEBUG   ] LazyLoaded service.dead
[INFO    ] Running state [mnt-backup_lve_db.automount] at time 01:24:39.930715
[INFO    ] Executing state service.dead for [mnt-backup_lve_db.automount]
[INFO    ] The named service mnt-backup_lve_db.automount is not available
[INFO    ] Completed state [mnt-backup_lve_db.automount] at time 01:24:39.957291 (duration_in_ms=26.579)
[DEBUG   ] LazyLoaded mount.unmounted
[INFO    ] Running state [/mnt/backup_lve_db] at time 01:24:39.966691
[INFO    ] Executing state mount.unmounted for [/mnt/backup_lve_db]
[DEBUG   ] LazyLoaded mount.active
[INFO    ] Executing command 'mount' in directory '/root'
[DEBUG   ] stdout: proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=32768k,mode=755)
devtmpfs on /dev type devtmpfs (rw,relatime,size=758384k,nr_inodes=189596,mode=755)
/dev/mapper/marvinvg02-root on / type ext4 (rw,relatime)
proc on /proc type proc (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
cgroup_root on /sys/fs/cgroup type tmpfs (rw,relatime,size=8192k,mode=755)
cpuset on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cpu on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cpuacct on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
blkio on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
memory on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
devices on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
freezer on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
net_cls on /sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
perf_event on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
net_prio on /sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
pids on /sys/fs/cgroup/pids type cgroup (rw,relatime,pids)
/dev/sda1 on /boot type ext4 (rw,relatime)
/dev/mapper/marvinvg02-opt on /opt type ext4 (rw,relatime)
/dev/mapper/marvinvg02-tmp on /tmp type ext4 (rw,relatime)
/dev/mapper/marvinvg02-usr on /usr type ext4 (rw,relatime)
/dev/mapper/marvinvg02-var on /var type ext4 (rw,relatime)
/dev/mapper/marvinvg03-home on /home type ext4 (rw,relatime)
tmpfs on /var/run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=32768k,mode=755)
cgroup on /sys/fs/cgroup/elogind type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/elogind/elogind-cgroups-agent,name=elogind)
nfsd on /proc/fs/nfs type nfsd (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/mapper/marvinvg03-srv on /srv type ext4 (rw,relatime)
[DEBUG   ] LazyLoaded disk.blkid
[INFO    ] Executing command blkid in directory '/root'
[DEBUG   ] stdout: /dev/sda1: UUID="408a9d01-2e39-427b-b8d1-660288a45d47" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="58440142-01"
/dev/sda2: UUID="dEcZUL-YOQZ-YxnV-GetQ-lQhc-C4B7-bReRz0" TYPE="LVM2_member" PARTUUID="58440142-02"
/dev/sda3: UUID="JnOiOx-DYTw-UXEP-S9FY-nZOU-YPke-V1E10l" TYPE="LVM2_member" PARTUUID="58440142-03"
/dev/sda5: UUID="FhAL9P-a02Q-f36K-dbqE-Zpg0-erwu-eaxHsu" TYPE="LVM2_member" PARTUUID="58440142-05"
/dev/sda6: UUID="5eUqdY-R9YD-vq0y-FbdS-lddh-np3a-Eg75Tv" TYPE="LVM2_member" PARTUUID="58440142-06"
/dev/sda7: UUID="Y3cFSR-UOUW-EJ88-SzMg-7RrD-rvvA-thsX1r" TYPE="LVM2_member" PARTUUID="58440142-07"
/dev/sda8: UUID="woPQTu-Dl0H-kCWT-Ap18-1hAq-R6B5-uQ4KPp" TYPE="LVM2_member" PARTUUID="58440142-08"
/dev/sda9: UUID="DSH6gb-XIf2-1LXH-pI1g-X4eE-p63f-PKaAnZ" TYPE="LVM2_member" PARTUUID="58440142-09"
/dev/sda10: UUID="BZdvph-zwA4-PUNs-2Au0-VB3k-P5ce-oteFLC" TYPE="LVM2_member" PARTUUID="58440142-0a"
/dev/sda11: UUID="UY7sWV-HBC5-Dlz0-D3xu-bJKA-3rA1-7ZWGZc" TYPE="LVM2_member" PARTUUID="58440142-0b"
/dev/sda12: UUID="0MVDrt-8wyh-MKLe-Imok-qh9v-qEJZ-qdu5wE" TYPE="LVM2_member" PARTUUID="58440142-0c"
/dev/sda13: UUID="3OX4kQ-AunF-hFoe-sWlX-0yn5-Jt9Y-qA7Kqx" TYPE="LVM2_member" PARTUUID="58440142-0d"
/dev/sda14: UUID="XyuPeU-M1VY-LbHz-LmzM-OxsL-OQVp-mjGKIN" TYPE="LVM2_member" PARTUUID="58440142-0e"
/dev/sda15: UUID="GjnFNn-Ua03-L7AT-pCcP-o4bZ-TWBf-viEexB" TYPE="LVM2_member" PARTUUID="58440142-0f"
/dev/sda16: UUID="nC3ZEZ-ahyt-lPpJ-8B0T-k47u-xKeu-lLeolT" TYPE="LVM2_member" PARTUUID="58440142-10"
/dev/sda17: UUID="0yrLRq-g7UJ-mhkG-4rY7-ifXx-Rwjh-616Bje" TYPE="LVM2_member" PARTUUID="58440142-11"
/dev/mapper/marvinvg02-root: UUID="85b659a9-a191-495f-884d-ee2012c2962b" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/marvinvg02-var: UUID="5bf635c8-e5cc-4b81-b3e3-1dcdf2baca4a" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/marvinvg02-usr: UUID="e12a6b64-3b27-4043-a251-b0cd14219d9a" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/marvinvg02-tmp: UUID="ff9b0513-6539-4b3f-afa0-dd04d86d0aa0" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/marvinvg02-opt: UUID="740c1106-71d0-4000-be21-6e9fb2f0075d" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/marvinvg02-swap: UUID="c6d88417-3db0-4acb-93b9-a8caf6ee25d6" TYPE="swap"
/dev/mapper/marvinvg03-home: UUID="2c9b747b-23a2-4eb6-a51b-3e48ab6b0fd0" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/marvinvg03-srv: UUID="d95bd725-e7b1-40b5-aadf-d594684b1f3f" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/testvg-backuplv: LABEL="backup-db" UUID="ded8ed70-ea20-4708-8227-bdb3202cd33e" BLOCK_SIZE="512" TYPE="xfs"
[INFO    ] Target was already unmounted
[INFO    ] Completed state [/mnt/backup_lve_db] at time 01:24:40.159716 (duration_in_ms=193.023)
[DEBUG   ] LazyLoaded file.managed
[INFO    ] Running state [/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY] at time 01:24:40.178350
[INFO    ] Executing state file.managed for [/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY]
[DEBUG   ] LazyLoaded file.user_to_uid
[DEBUG   ] LazyLoaded cp.hash_file
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506', 'aes')
[DEBUG   ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://192.168.0.42:4506
[DEBUG   ] Trying to connect to: tcp://192.168.0.42:4506
[DEBUG   ] In saltenv 'base', looking at rel_path 'manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY' to resolve 'salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY' to resolve 'salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY'
[INFO    ] File /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY is in the correct state
[INFO    ] Completed state [/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY] at time 01:24:40.503763 (duration_in_ms=325.407)
[INFO    ] Running state [/mnt/backup_lve_db] at time 01:24:40.506205
[INFO    ] Executing state file.directory for [/mnt/backup_lve_db]
[INFO    ] The directory /mnt/backup_lve_db is in the correct state
[INFO    ] Completed state [/mnt/backup_lve_db] at time 01:24:40.520627 (duration_in_ms=14.418)
[INFO    ] Running state [/dev/mapper/testvg-backuplv] at time 01:24:40.522785
[INFO    ] Executing state mount.fstab_present for [/dev/mapper/testvg-backuplv]
[INFO    ] Executing command blkid in directory '/root'
[DEBUG   ] stdout: /dev/mapper/testvg-backuplv: LABEL="backup-db" UUID="ded8ed70-ea20-4708-8227-bdb3202cd33e" BLOCK_SIZE="512" TYPE="xfs"
[INFO    ] Executing command 'mount' in directory '/root'
[INFO    ] {'persist': 'new'}
[INFO    ] Completed state [/dev/mapper/testvg-backuplv] at time 01:24:40.669565 (duration_in_ms=146.775)
[DEBUG   ] File /var/cache/salt/minion/accumulator/3030695528 does not exist, no need to cleanup
[DEBUG   ] LazyLoaded state.check_result
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506', 'aes')
[DEBUG   ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'marvin.mylab', 'tcp://192.168.0.42:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://192.168.0.42:4506
[DEBUG   ] Trying to connect to: tcp://192.168.0.42:4506
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] LazyLoaded highstate.output
[DEBUG   ] Using pkg_resources to load entry points
[DEBUG   ] LazyLoaded nested.output
local:
----------
          ID: umount_mnt-backup_lve_db
    Function: service.dead
        Name: mnt-backup_lve_db.automount
      Result: True
     Comment: The named service mnt-backup_lve_db.automount is not available
     Started: 01:24:39.930712
    Duration: 26.579 ms
     Changes:   
----------
          ID: umount_mnt-backup_lve_db
    Function: mount.unmounted
        Name: /mnt/backup_lve_db
      Result: True
     Comment: Target was already unmounted
     Started: 01:24:39.966693
    Duration: 193.023 ms
     Changes:   
----------
          ID: /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY
    Function: file.managed
      Result: True
     Comment: File /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY is in the correct state
     Started: 01:24:40.178356
    Duration: 325.407 ms
     Changes:   
----------
          ID: /mnt/backup_lve_db
    Function: file.directory
      Result: True
     Comment: The directory /mnt/backup_lve_db is in the correct state
     Started: 01:24:40.506209
    Duration: 14.418 ms
     Changes:   
----------
          ID: /mnt/backup_lve_db
    Function: mount.fstab_present
        Name: /dev/mapper/testvg-backuplv
      Result: True
     Comment: /mnt/backup_lve_db entry added in /etc/fstab.
              Mounted /dev/mapper/testvg-backuplv on /mnt/backup_lve_db                                                             
     Started: 01:24:40.522790
    Duration: 146.775 ms
     Changes:   
              ----------                                                                                                            
              persist:
                  new

Summary for local                                                                                                                   
------------                                                                                                                        
Succeeded: 5 (changed=1)
Failed:    0
------------
Total states run:     5                                                                                                             
Total run time: 706.202 ms
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Closing AsyncZeroMQReqChannel instance

But I am not in the same environment as yours. Can you remove the fstab entry and re-run your state using salt-call on the minion with -l debug?
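
That is, run this directly on the minion (state path taken from the original report):

salt-call -l debug state.apply manager_org_1.stc-lve-automnt-backupdb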

Also, can you confirm the Salt version running on the minion? The command salt 'vlcdbts4.devsys.net.sap' grains.get saltversion will show it.

@cmonty14
Author

cmonty14 commented Jul 1, 2021

salt 'vlcdbts4.devsys.net.sap' grains.get saltversion
vlcdbts4.devsys.net.sap:
    3000
vlcdbts4:~ # salt-call -l debug state.apply manager_org_1.stc-lve-automnt-backupdb
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/susemanager.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/susemanager.conf
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: vlcdbts4.devsys.net.sap
[DEBUG   ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/susemanager.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/susemanager.conf
[DEBUG   ] Checking if minion is running in the public cloud
[DEBUG   ] Requesting URL http://169.254.169.254/metadata/instance/compute/?api-version=2017-08-01&format=text using GET method
[DEBUG   ] Requesting URL http://169.254.169.254/computeMetadata/v1/instance/ using GET method
[DEBUG   ] Using backend: tornado
[DEBUG   ] Requesting URL http://169.254.169.254/latest/meta-data/ using GET method
[DEBUG   ] Using backend: tornado
[DEBUG   ] Using backend: tornado
[DEBUG   ] Response Status Code: 599
[DEBUG   ] Response Status Code: 599
[DEBUG   ] Response Status Code: 599
[DEBUG   ] Error while parsing IPv4 address: ::1
[DEBUG   ] Expected 4 octets in '::1'
[DEBUG   ] Error while parsing IPv4 address: fe80::f816:3eff:fe8e:2c75
[DEBUG   ] Expected 4 octets in 'fe80::f816:3eff:fe8e:2c75'
[DEBUG   ] Trying lscpu to get CPU socket count
[DEBUG   ] LazyLoaded zfs.is_supported
[DEBUG   ] Connecting to master. Attempt 1 of 1
[DEBUG   ] Error while parsing IPv4 address: vlcspsumasrv.devsys.net.sap
[DEBUG   ] Only decimal digits permitted in 'vlcspsumasrv' in 'vlcspsumasrv.devsys.net.sap'
[DEBUG   ] Error while parsing IPv6 address: vlcspsumasrv.devsys.net.sap
[DEBUG   ] At least 3 parts expected in 'vlcspsumasrv.devsys.net.sap'
[DEBUG   ] "vlcspsumasrv.devsys.net.sap" Not an IP address? Assuming it is a hostname.
[DEBUG   ] Master URI: tcp://10.237.81.20:4506
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506')
[DEBUG   ] Generated random reconnect delay between '1000ms' and '11000ms' (1402)
[DEBUG   ] Setting zmq_reconnect_ivl to '1402ms'
[DEBUG   ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506', 'clear')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.237.81.20:4506
[DEBUG   ] Trying to connect to: tcp://10.237.81.20:4506
[DEBUG   ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] salt.crypt._get_key_with_evict: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Connecting the Minion to the Master publish port, using the URI: tcp://10.237.81.20:4505
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Determining pillar cache
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.237.81.20:4506
[DEBUG   ] Trying to connect to: tcp://10.237.81.20:4506
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded state.apply
[DEBUG   ] LazyLoaded direct_call.execute
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] LazyLoaded grains.get
[DEBUG   ] LazyLoaded config.get
[DEBUG   ] key: test, ret: _|-
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.237.81.20:4506
[DEBUG   ] Trying to connect to: tcp://10.237.81.20:4506
[DEBUG   ] Gathering pillar data for state run
[DEBUG   ] Finished gathering pillar data for state run
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] Could not find file 'salt://manager_org_1/stc-lve-automnt-backupdb.sls' in saltenv 'base'
[DEBUG   ] In saltenv 'base', looking at rel_path 'manager_org_1/stc-lve-automnt-backupdb/init.sls' to resolve 'salt://manager_org_1/stc-lve-automnt-backupdb/init.sls'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/init.sls' to resolve 'salt://manager_org_1/stc-lve-automnt-backupdb/init.sls'
[DEBUG   ] compile template: /var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/init.sls
[DEBUG   ] Jinja search path: [u'/var/cache/salt/minion/files/base']
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.237.81.20:4506
[DEBUG   ] Trying to connect to: tcp://10.237.81.20:4506
[DEBUG   ] LazyLoaded cmd.run
[INFO    ] Executing command 'blkid' in directory '/root'
[DEBUG   ] stdout: /dev/sdc
[DEBUG   ] output: /dev/sdc
[PROFILE ] Time (in seconds) to render '/var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/init.sls' using 'jinja' renderer: 0.0763800144196
[DEBUG   ] Rendered data from file: /var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/init.sls:


umount_mnt-backup_lve_db:
  service.dead:
    - name: mnt-backup_lve_db.automount
  mount.unmounted:
    - name: /mnt/backup_lve_db

/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY:
  file.managed:
    -   source: salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY
    -   makedirs: True
    -   user: root
    -   group: root
    -   mode: 644

stc-lve-automount-backupdb_append_/etc/fstab:
  file.append:
    - name: /etc/fstab
    - text:
      - "## Backup DB device"

/mnt/backup_lve_db:
  file.directory:
    - user: root
    - group: root
    - mode: 755

stc-lve-automount-backupdb_mountpoint_/etc/fstab:
  mount.fstab_present:
    - name: /dev/sdc
    - fs_file: /mnt/backup_lve_db
    - fs_vfstype: xfs
    - fs_mntops:
      - noauto
      - x-systemd.automount
    - mount_by: uuid
    - mount: True
[DEBUG   ] Results of YAML rendering:
OrderedDict([(u'umount_mnt-backup_lve_db', OrderedDict([(u'service.dead', [OrderedDict([(u'name', u'mnt-backup_lve_db.automount')])]), (u'mount.unmounted', [OrderedDict([(u'name', u'/mnt/backup_
lve_db')])])])), (u'/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY', OrderedDict([(u'file.managed', [OrderedDict([(u'source', u'salt://manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/D
O_NOT_DELETE_THIS_DIRECTORY')]), OrderedDict([(u'makedirs', True)]), OrderedDict([(u'user', u'root')]), OrderedDict([(u'group', u'root')]), OrderedDict([(u'mode', 644)])])])), (u'stc-lve-automou
nt-backupdb_append_/etc/fstab', OrderedDict([(u'file.append', [OrderedDict([(u'name', u'/etc/fstab')]), OrderedDict([(u'text', [u'## Backup DB device'])])])])), (u'/mnt/backup_lve_db', OrderedDi
ct([(u'file.directory', [OrderedDict([(u'user', u'root')]), OrderedDict([(u'group', u'root')]), OrderedDict([(u'mode', 755)])])])), (u'stc-lve-automount-backupdb_mountpoint_/etc/fstab', OrderedD
ict([(u'mount.fstab_present', [OrderedDict([(u'name', u'/dev/sdc')]), OrderedDict([(u'fs_file', u'/mnt/backup_lve_db')]), OrderedDict([(u'fs_vfstype', u'xfs')]), OrderedDict([(u'fs_mntops', [u'n
oauto', u'x-systemd.automount'])]), OrderedDict([(u'mount_by', u'uuid')]), OrderedDict([(u'mount', True)])])]))])
[PROFILE ] Time (in seconds) to render '/var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/init.sls' using 'yaml' renderer: 0.00188589096069
[DEBUG   ] LazyLoaded config.option
[ERROR   ] Failed to import module images, this is due most likely to a syntax error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1658, in _load_module
    mod = imp.load_module(mod_namespace, fn_, fpath, desc)
  File "/var/cache/salt/minion/extmods/modules/images.py", line 196
    scheme, _, path, *_ = urllib.parse.urlparse(url)
                     ^
SyntaxError: invalid syntax
[DEBUG   ] LazyLoaded state.sls
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/susemanager.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/susemanager.conf
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: vlcdbts4.devsys.net.sap
[DEBUG   ] Could not LazyLoad boto3.assign_funcs: 'boto3.assign_funcs' is not available.
[DEBUG   ] Error loading module.boto3_elasticsearch: __init__ failed
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1721, in _load_module
    module_init(self.opts)
  File "/usr/lib/python2.7/site-packages/salt/modules/boto3_elasticsearch.py", line 92, in __init__                                                                                      [155/551]
    __utils__['boto3.assign_funcs'](__name__, 'es')
  File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1269, in __getitem__
    func = super(LazyLoader, self).__getitem__(item)
  File "/usr/lib/python2.7/site-packages/salt/utils/lazy.py", line 111, in __getitem__
    raise KeyError(key)
KeyError: u'boto3.assign_funcs'
[DEBUG   ] key: ifttt.secret_key, ret: _|-
[DEBUG   ] key: ifttt:secret_key, ret: _|-
[DEBUG   ] key: pushbullet.api_key, ret: _|-
[DEBUG   ] key: pushbullet:api_key, ret: _|-
[DEBUG   ] key: victorops.api_key, ret: _|-
[DEBUG   ] key: victorops:api_key, ret: _|-
[DEBUG   ] DSC: Only available on Windows systems
[DEBUG   ] Module PSGet: Only available on Windows systems
[DEBUG   ] LazyLoaded service.dead
[INFO    ] Running state [mnt-backup_lve_db.automount] at time 11:18:23.579828
[INFO    ] Executing state service.dead for [mnt-backup_lve_db.automount]
[INFO    ] Executing command systemctl in directory '/root'
[DEBUG   ] stdout: * mnt-backup_lve_db.automount
   Loaded: loaded
   Active: inactive (dead)
    Where: /mnt/backup_lve_db
[DEBUG   ] retcode: 3
[INFO    ] Executing command systemctl in directory '/root'
[DEBUG   ] stdout: inactive
[DEBUG   ] retcode: 3
[INFO    ] Executing command systemctl in directory '/root'
[DEBUG   ] stdout: Failed to get unit file state for mnt-backup_lve_db.automount: No such file or directory
[DEBUG   ] retcode: 1
[DEBUG   ] sysvinit script 'ma' found, but systemd unit 'ma.service' already exists
[DEBUG   ] sysvinit script 'rscd' found, but systemd unit 'rscd.service' already exists
[INFO    ] The service mnt-backup_lve_db.automount is already dead
[INFO    ] Completed state [mnt-backup_lve_db.automount] at time 11:18:23.621362 (duration_in_ms=41.533)
[DEBUG   ] LazyLoaded mount.unmounted
[INFO    ] Running state [/mnt/backup_lve_db] at time 11:18:23.643730
[INFO    ] Executing state mount.unmounted for [/mnt/backup_lve_db]
[INFO    ] Executing command 'mount' in directory '/root'
[DEBUG   ] stdout: sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=32965576k,nr_inodes=8241394,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,size=49459200k)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
/dev/sda1 on / type ext4 (rw,relatime,data=ordered) [ROOT]
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14038)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
/dev/sdb on /hana type xfs (rw,relatime,attr2,inode64,noquota) [hana]
/etc/mount.map on /net type autofs (rw,relatime,fd=-1,pgrp=7573,timeout=30,minproto=5,maxproto=5,indirect,pipe_ino=-1)
tmpfs on /run/user/1001 type tmpfs (rw,nosuid,nodev,relatime,size=6594652k,mode=700,uid=1001,gid=79)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
unixhomes:/unixhome2 on /net/sapmnt.HOME type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.17.228.136,mountvers=3,mountport=
635,mountproto=udp,local_lock=none,addr=10.17.228.136,_netdev)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
/dev/rbd0 on /backup_rbd_lve_db type xfs (rw,relatime,attr2,inode64,sunit=8192,swidth=8192,noquota) [LVE-BCKP-DB]
/dev/rbd1 on /backup_rbd_lve_os type xfs (rw,relatime,attr2,inode64,sunit=8192,swidth=8192,noquota) [LVE-BCKP-OS]
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=6594652k,mode=700)
derotvi0119:/dlm_autodelete7days on /net/sapmnt.dlm.autodelete7days type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.1.1
87,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.70.1.187,_netdev)
tmpfs on /run/user/58037 type tmpfs (rw,nosuid,nodev,relatime,size=6594652k,mode=700,uid=58037,gid=17)
ls0110:/sapmnt/av on /sapmnt/av type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.21.146.81,mountvers=3,mountport=60818,
mountproto=udp,local_lock=none,addr=10.21.146.81,_netdev)
imtoolspub:/sapmnt/imtoolspub01/a/tools on /sapmnt/imtoolspub type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.21.146.9
1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.21.146.91,_netdev)
lxadmin:/vol/applsw_linux/q_linuxadmin on /sapmnt/linuxadmin type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.0.90,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.70.0.90,_netdev)
derotvi0119:/dlm_production on /sapmnt/production type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.1.187,mountvers=3,mou
ntport=635,mountproto=udp,local_lock=none,addr=10.70.1.187,_netdev)
derotvi0305:/derotvi0305a_newdb/q_newdb on /net/sapmnt.production.newdb type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70
.0.97,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.70.0.97,_netdev)
derotvi0119:/dlm_autodelete30days on /net/sapmnt.dlm.autodelete30days type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.1
.187,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.70.1.187,_netdev)
tmpfs on /run/user/163 type tmpfs (rw,nosuid,nodev,relatime,size=6594652k,mode=700,uid=163,gid=1003)
systemd-1 on /sapmnt/dlm type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=102131519)
systemd-1 on /net/sapmnt.dlm.lve type autofs (rw,relatime,fd=64,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=102133019)
derotvi0119:/derotvi0119b_DLM/q_files on /sapmnt/dlm type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.1.187,mountvers=3,
mountport=635,mountproto=udp,local_lock=none,addr=10.70.1.187)
derotvi0119:/dlm_lve on /net/sapmnt.dlm.lve type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.1.187,mountvers=3,mountport
=635,mountproto=udp,local_lock=none,addr=10.70.1.187)
tmpfs on /run/user/2038783 type tmpfs (rw,nosuid,nodev,relatime,size=6594652k,mode=700,uid=2038783,gid=17)
[INFO    ] Executing command blkid in directory '/root'
[DEBUG   ] stdout: /dev/sda1: LABEL="ROOT" UUID="ba1d90d8-6b56-4671-bed7-04a77ca6f8eb" TYPE="ext4" PARTUUID="9fb825fb-01"
/dev/rbd0: LABEL="LVE-BCKP-DB" UUID="a3a9d2b4-0544-4303-8680-3230f393bfe9" TYPE="xfs"
/dev/sdb: LABEL="hana" UUID="7e79a320-d1ec-4f4d-b34c-588a7227c0d9" TYPE="xfs"
/dev/rbd1: LABEL="LVE-BCKP-OS" UUID="a5521a37-53f0-4aad-9364-a90a21836b46" TYPE="xfs"
/dev/sdc: LABEL="backup-db" UUID="c8a17328-fab2-466a-a888-569d61b722f9" TYPE="xfs"
[INFO    ] Target was already unmounted
[INFO    ] Completed state [/mnt/backup_lve_db] at time 11:18:23.676800 (duration_in_ms=33.07)
[DEBUG   ] LazyLoaded file.managed
[INFO    ] Running state [/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY] at time 11:18:23.715754
[INFO    ] Executing state file.managed for [/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY]
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.237.81.20:4506
[DEBUG   ] Trying to connect to: tcp://10.237.81.20:4506
[DEBUG   ] In saltenv 'base', looking at rel_path 'manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY' to resolve 'salt://manager_org_1/stc-lve-automnt-backupd
b/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY'
[DEBUG   ] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY' to resolve 'salt://
manager_org_1/stc-lve-automnt-backupdb/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY'
[INFO    ] File /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY is in the correct state
[INFO    ] Completed state [/mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY] at time 11:18:23.741314 (duration_in_ms=25.56)
[INFO    ] Running state [/etc/fstab] at time 11:18:23.741651
[INFO    ] Executing state file.append for [/etc/fstab]
[INFO    ] File changed:
---

+++

@@ -17,3 +17,4 @@

 ## Enter customized entries below
 lxadmin:/vol/applsw_linux/q_linuxadmin         /sapmnt/linuxadmin      nfs     noauto,x-systemd.automount,x-systemd.mount-timeout=30,_netdev   0 0
 /swapfile              none    swap    defaults        0 0
+## Backup DB device
[INFO    ] Completed state [/etc/fstab] at time 11:18:23.745244 (duration_in_ms=3.593)
[INFO    ] Running state [/mnt/backup_lve_db] at time 11:18:23.745497
[INFO    ] Executing state file.directory for [/mnt/backup_lve_db]
[INFO    ] The directory /mnt/backup_lve_db is in the correct state
[INFO    ] Completed state [/mnt/backup_lve_db] at time 11:18:23.746868 (duration_in_ms=1.371)
[INFO    ] Running state [/dev/sdc] at time 11:18:23.747127
[INFO    ] Executing state mount.fstab_present for [/dev/sdc]
[INFO    ] Executing command blkid in directory '/root'
[DEBUG   ] stdout: /dev/sdc: LABEL="backup-db" UUID="c8a17328-fab2-466a-a888-569d61b722f9" TYPE="xfs"
[INFO    ] {u'persist': u'new'}
[INFO    ] Completed state [/dev/sdc] at time 11:18:23.759429 (duration_in_ms=12.302)
[DEBUG   ] File /var/cache/salt/minion/accumulator/139726607382992 does not exist, no need to cleanup
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'vlcdbts4.devsys.net.sap', u'tcp://10.237.81.20:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.237.81.20:4506
[DEBUG   ] Trying to connect to: tcp://10.237.81.20:4506
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] LazyLoaded highstate.output
[DEBUG   ] LazyLoaded nested.output
[DEBUG   ] LazyLoaded nested.output
local:
----------
          ID: umount_mnt-backup_lve_db
    Function: service.dead
        Name: mnt-backup_lve_db.automount
      Result: True
     Comment: The service mnt-backup_lve_db.automount is already dead
     Started: 11:18:23.579829
    Duration: 41.533 ms
     Changes:
----------
          ID: umount_mnt-backup_lve_db
    Function: mount.unmounted
        Name: /mnt/backup_lve_db
      Result: True
     Comment: Target was already unmounted
     Started: 11:18:23.643730
    Duration: 33.07 ms
     Changes:
----------
          ID: /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY
    Function: file.managed
      Result: True
     Comment: File /mnt/backup_lve_db/DO_NOT_DELETE_THIS_DIRECTORY is in the correct state
     Started: 11:18:23.715754
    Duration: 25.56 ms
     Changes:
----------
          ID: stc-lve-automount-backupdb_append_/etc/fstab
    Function: file.append
        Name: /etc/fstab
      Result: True
     Comment: Appended 1 lines
     Started: 11:18:23.741651
    Duration: 3.593 ms
     Changes:
              ----------
              diff:
                  ---

                  +++

                  @@ -17,3 +17,4 @@

                   ## Enter customized entries below
                   lxadmin:/vol/applsw_linux/q_linuxadmin               /sapmnt/linuxadmin      nfs     noauto,x-systemd.automount,x-systemd.mount-timeout=30,_netdev   0 0
                   /swapfile            none    swap    defaults        0 0
                  +## Backup DB device
----------
          ID: /mnt/backup_lve_db
    Function: file.directory
      Result: True
     Comment: The directory /mnt/backup_lve_db is in the correct state
     Started: 11:18:23.745497
    Duration: 1.371 ms
     Changes:
----------
          ID: stc-lve-automount-backupdb_mountpoint_/etc/fstab
    Function: mount.fstab_present
        Name: /dev/sdc
      Result: True
     Comment: /mnt/backup_lve_db entry added in /etc/fstab.
     Started: 11:18:23.747127
    Duration: 12.302 ms
     Changes:
              ----------
              persist:
                  new

Summary for local
------------
Succeeded: 6 (changed=2)
Failed:    0
------------
Total states run:     6
Total run time: 117.429 ms
[DEBUG   ] Closing AsyncZeroMQReqChannel instance

I will now upgrade the Salt minion and verify whether this solves the issue.

@cmonty14
Author

cmonty14 commented Jul 1, 2021

After installing salt-minion-3000-46.142.2.x86_64.rpm, the issue is still reproducible.

@OrangeDog
Contributor

The linked issue was fixed in 3002, which is later than 3000.
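
The running minion version can be compared against that (same command as earlier in the thread):

salt 'vlcdbts4.devsys.net.sap' grains.get saltversion
# returns 3000 above, while the mount-on-add fix from PR #57669 first shipped in 3002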

OrangeDog added the cannot-reproduce and Duplicate labels on Jul 1, 2021
@cmonty14
Author

cmonty14 commented Jul 1, 2021

The linked issue was fixed in 3002, which is later than 3000.

Fair enough.
I will reopen this issue if I can still reproduce it after upgrading to Salt 3002.
