
[BUG] Unable to execute cron-style Jobs on CentOS7 / Python3 #57649

Open
bit-punk opened this issue Jun 12, 2020 · 8 comments
Labels
Bug (broken, incorrect, or confusing behavior) · python3 (regarding Python 3 support) · severity-medium (3rd level, incorrect or bad functionality, confusing and lacks a workaround)
Milestone: Approved
Comments

@bit-punk

Description
Currently I intend to have our minions execute a highstate job at 4:15 am, declaring the job with cron-like syntax in the pillar:

schedule:
  highstate-base:
    function: state.highstate
    cron: '15 4 * * *'

Even though I follow the official configuration example, I cannot get the job to execute. In the minion's log I get the following error:

2020-06-12 09:11:38,708 [salt.utils.schedule:1088][ERROR   ][4294] Missing python-croniter. Ignoring job highstate-base.
2020-06-12 09:11:39,677 [salt.utils.schedule:1088][ERROR   ][4294] Missing python-croniter. Ignoring job highstate-base.
2020-06-12 09:11:40,677 [salt.utils.schedule:1088][ERROR   ][4294] Missing python-croniter. Ignoring job highstate-base.

I have tried to solve the issue by running pip3 install croniter, but this did not help.
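
A quick diagnostic sketch (assuming the 2019.2 Py3 RPMs run under the system /usr/bin/python3, which may not hold on every install):

pip3 show croniter                                    # where did pip3 put the package?
/usr/bin/python3 -c "import croniter; print('ok')"    # can the system python3 import it?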

Steps to Reproduce the behavior

  1. Install a fresh VM using CentOS 7 (CentOS-7-x86_64-Minimal-1908.iso)
  2. Install the salt-master and the salt-minion from RPMs (2019.2 / Py3)
  3. Configure and accept the minion so it contacts the salt-master on localhost
  4. Create a new schedule configuration file ("/etc/salt/minion.d/basehighstate.conf") with a cron-style schedule time like the one shown above (see the sketch after this list)
  5. Restart the minion
  6. Looking at the minion's log, you should see the ERROR messages indicating that croniter is not available
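
A minimal sketch of the configuration file from step 4 (the tee approach is an assumption; the schedule block mirrors the pillar example above):

sudo tee /etc/salt/minion.d/basehighstate.conf <<'EOF'
schedule:
  highstate-base:
    function: state.highstate
    cron: '15 4 * * *'
EOF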

Expected behavior
The scheduled job should be registered and executed at 04:15 according to the cron expression, without errors in the minion's log.


Versions Report

salt --versions-report
Salt Version:
           Salt: 2019.2.5
 
Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: Not Installed
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.8.1
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.33.0
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: Not Installed
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 3.6.8 (default, Apr  2 2020, 13:34:55)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.4.2
            ZMQ: 4.1.4
 
System Versions:
           dist: centos 7.7.1908 Core
         locale: UTF-8
        machine: x86_64
        release: 3.10.0-1062.el7.x86_64
         system: Linux
        version: CentOS Linux 7.7.1908 Core
@frogunder (Contributor)

@bit-punk Thank you for reporting this issue; I am seeing the same thing.

Thanks.

@garethgreenaway (Contributor)

@bit-punk Apologies for the delay on this one. I wasn't able to reproduce it. Can you provide the output from pip3 freeze? Thanks!
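
For the relevant line, something like this would do:

pip3 freeze | grep -i croniter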

@haykhovsepyan

I have the same issue on my side; python3-croniter is already installed:

# dpkg -l | grep croniter 
ii  python3-croniter                      0.3.29-2ubuntu1
# pip3 install croniter
Requirement already satisfied: croniter in /usr/lib/python3/dist-packages (0.3.29)

And I get the same error:

salt-minion[728]: [ERROR ] Missing python-croniter. Ignoring job os_update.
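
For what it's worth, a rough way to check whether the interpreter behind the running minion can actually import the installed package (a diagnostic sketch, assuming a Linux host with /proc and pgrep available):

# Resolve the interpreter of the running salt-minion process (reading /proc/<pid>/exe needs root)
MINION_PY="$(sudo readlink -f /proc/"$(pgrep -o -f salt-minion)"/exe)"
# Try the import from that exact interpreter
"$MINION_PY" -c "import croniter; print(croniter.__version__)"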

Any update?

@ShadowMonster commented Jan 28, 2023

Same here, but on Ubuntu 22.04:
Jan 28 12:48:56 host salt-minion[56675]: [ERROR ] Missing python-croniter. Ignoring job highstate_conformity.

@enricotagliani

It was the same for me too. In my case I fixed it by restarting the salt-minion service after installing croniter with pip.
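
A minimal sketch of that workaround (assuming a systemd-managed minion and that pip3 targets the interpreter the minion runs under):

sudo pip3 install croniter
sudo systemctl restart salt-minion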

@japtain-cack commented Jun 5, 2023

According to this documentation, croniter must be installed on the minion, so I'm providing data from the minion's perspective. However, the master looks very similar, and I can provide that info if needed.

sudo salt-call pip.list
local:
    ----------
    ...

    contextvars:
        2.4
    croniter:
        1.3.15
    cryptography:
        39.0.2

   ...
sudo salt-call --versions-report
Salt Version:
          Salt: 3006.1

Python Version:
        Python: 3.10.11 (main, May  5 2023, 02:31:54) [GCC 11.2.0]

Dependency Versions:
          cffi: 1.14.6
      cherrypy: 18.6.1
      dateutil: 2.8.1
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 3.1.2
       libgit2: Not Installed
  looseversion: 1.0.2
      M2Crypto: Not Installed
          Mako: Not Installed
       msgpack: 1.0.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     packaging: 22.0
     pycparser: 2.21
      pycrypto: Not Installed
  pycryptodome: 3.9.8
        pygit2: Not Installed
  python-gnupg: 0.4.8
        PyYAML: 5.4.1
         PyZMQ: 23.2.0
        relenv: 0.12.3
         smmap: Not Installed
       timelib: 0.2.4
       Tornado: 4.5.3
           ZMQ: 4.3.4

System Versions:
          dist: amzn 2
        locale: utf-8
       machine: x86_64
       release: 4.14.314-237.533.amzn2.x86_64
        system: Linux
       version: Amazon Linux 2

Croniter is installed with this state:

include:
  - apps.pip3

croniter_install:
  pip.installed:
    - name: croniter
    - require:
      - pkg: pip3_install


The company I work for uses Salt in all our environments. We have been experiencing issues with schedules for over a year, with every Salt version up to 3006.x, and have tried many times to get Salt to detect croniter. I was holding out and waiting for onedir to be fully rolled out, as I was hoping the virtual environment would solve the issues.

The info above comes from a fresh install on a new VM. We have even tried installing croniter using both the package manager (yum) and pip. This is happening across our entire infrastructure, on every VM.

I prefer not to install duplicate packages on the system with a mixture of pip, yum/apt, and the Salt onedir virtualenv. That makes a mess, causes package conflicts, and is a nightmare to troubleshoot when something goes wrong. All packages should be installed via pip, in onedir, period, full stop.

We have corporate policies that every host must be rebuilt, from the ground up, every 30 days. So the VM above, which is an application box (minion), and the salt-master are all brand-new VMs with a fresh install of Salt, croniter, etc.

It should also be noted that if you install pip packages on the master with sudo salt-pip install foo, the package will be installed as root, not as the salt user. You will get permission issues all over, on all dependencies for that package, and must then fix permissions manually or Salt will trip over itself. This can be avoided by modifying the command slightly: sudo -u salt salt-pip install foo.

From the minion, the command sudo salt-call pip.install foo works fine, and the proper permissions are used for the salt user when the package gets installed.
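
A condensed sketch of the above, using croniter as the package in question (assumes the 3006 onedir packages and systemd-managed services):

# On the master: install into the onedir environment as the salt user to avoid root-owned files
sudo -u salt salt-pip install croniter
sudo systemctl restart salt-master

# On the minion: pip.install targets the onedir environment with the right permissions
sudo salt-call pip.install croniter
sudo systemctl restart salt-minion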

We recently upgraded from 3005.1 to 3006.1. By the way, the milestone for 3006.1 said TBD, and the repo pulled 3005 out from under us. We were not able to properly plan a migration, since the milestone never said anything but TBD for the time frame. We are past all that now, but surprises like that don't help us justify our reasons to continue using SaltStack.

We also had issues with the bootstrap install script. It doesn't seem to be consistent: despite supplying a version, like bootstrap.sh stable 3005.1, it seems to do whatever it wants. Some boxes gave us 3005.1 and some installed 3006.1, regardless of the version supplied. I had to use a manual install method to get consistent version pinning. Just FYI.

@japtain-cack commented Jun 14, 2023

Until we get traction on this and the scheduler becomes less flaky, we have decided to use the anti-pattern of setting up cron jobs on each host. I should also note that, due to the issues we've had with Salt, management is having us consider using another product.

@Adam-Zvolanek

Have there been any updates on croniter not being installed, or not being detected, on salt-minions? Or is there an alternative method for executing Salt states on minions on a schedule, apart from setting up native (to the minion) cron jobs?
