[salt.utils.gitfs ][CRITICAL] Invalid gitfs configuration parameter 'saltenv' in remote git+ssh://git@ourgitserver/ourgitrepo.git. #42082

Closed
stamak opened this issue Jul 3, 2017 · 13 comments
Labels: info-needed (waiting for more info), ZD (the issue is related to a Zendesk customer support ticket)
Milestone: Blocked

@stamak

stamak commented Jul 3, 2017

Description of Issue/Question

From time to time, the salt-master logs a critical error:

2017-07-03 13:23:11,091 [salt.utils.gitfs ][CRITICAL][16966] Invalid gitfs configuration parameter 'saltenv' in remote git+ssh://git@ourgitserver/ourgitrepo.git. Valid parameters are: base, mountpoint, root, ssl_verify, name. See the GitFS Walkthrough in the Salt documentation for further information.

Setup

saltmaster config

fileserver_backend:
  - git
gather_job_timeout: 11
gitfs_provider: gitpython
gitfs_remotes:
  - git+ssh://git@ourgitserver/ourgitrepo.git:
    - saltenv:
      - our-dev:
        - ref: 'our-dev'
gitfs_root: states
gitfs_saltenv_whitelist:
  - 'our-dev'
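
For comparison, a hedged sketch of the same remote expressed with only the per-remote parameters the error message lists as valid (base, mountpoint, root, ssl_verify, name), i.e. the set a pre-2016.11 master understands. The values are illustrative and this is not an exact equivalent of the per-saltenv mapping above:

# sketch only: per-remote options limited to the pre-2016.11 set
gitfs_remotes:
  - git+ssh://git@ourgitserver/ourgitrepo.git:
    - base: our-dev     # assumed: serve the 'our-dev' branch as the base environment
    - root: states      # assumed: serve files from the 'states' subdirectory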

Steps to Reproduce Issue

Logs

2017-07-03 13:23:11,091 [salt.utils.gitfs ][CRITICAL][16966] Invalid gitfs configuration parameter 'saltenv' in remote git+ssh://git@ourgitserver/ourgitrepo.git. Valid parameters are: base, mountpoint, root, ssl_verify, name. See the GitFS Walkthrough in the Salt documentation for further information.

Versions Report

salt-master --versions-report                                                                      
Salt Version:
           Salt: 2016.11.5

Dependency Versions:
           cffi: 1.10.0
       cherrypy: 4.0.0
       dateutil: 1.5
      docker-py: Not Installed
          gitdb: 0.6.4
      gitpython: 1.0.1
          ioflo: Not Installed
         Jinja2: 2.9.6
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: 1.2.3
      pycparser: 2.17
       pycrypto: 2.6.1
   pycryptodome: 3.4.3
         pygit2: Not Installed
         Python: 2.7.5 (default, Nov  6 2016, 00:28:07)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: 0.9.0
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4


System Versions:
           dist: centos 7.2.1511 Core
        machine: x86_64
        release: 3.10.0-327.28.3.el7.x86_64
         system: Linux
        version: CentOS Linux 7.2.1511 Core
gtmanfred added the info-needed (waiting for more info) label on Jul 3, 2017
gtmanfred added this to the Blocked milestone on Jul 3, 2017
@gtmanfred
Contributor

I have been unable to reproduce this.

I am going to leave my master running through the holiday tomorrow and see what happens.

Do you have any other tips for reproducing this? Any commands that cause it to happen?

I also tried salt-run fileserver.update a bunch of times and couldn't reproduce it.

Also, @terminalmage, have you seen this at all?

Thanks,
Daniel

@terminalmage
Contributor

@stamak Can you try to stop the salt-master service and see if you have any dangling salt-master processes that are still running (ps aux | grep salt-master)?

The error message you're seeing can only happen on releases before 2016.11.0, so my best guess is that when Salt was upgraded from a pre-2016.11.0 release to its current version, some salt-master processes from the old release kept running after the service was restarted.
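
A minimal sketch of that check, assuming the master runs on CentOS 7 with systemd managing the service (as the versions report suggests):

systemctl stop salt-master
ps aux | grep '[s]alt-master'   # the bracket keeps the grep itself out of the output
# anything still listed here is a dangling process left over from before the restart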

@stamak
Author

stamak commented Jul 6, 2017

Interestingly, it disappeared after a service restart one day ago.

@terminalmage I have just stopped salt-master and made sure there are no other salt-master processes running.

I propose we wait some time and see whether it happens again.

Thanks, everyone.

@terminalmage
Contributor

I doubt it will. The only explanation for this is that the daemon was running an older version and was upgraded without being restarted.

rickh563 added the ZD (Zendesk customer support ticket) label on Jul 6, 2017
@rickh563

rickh563 commented Jul 6, 2017

ZD-1576

@stamak
Author

stamak commented Jul 31, 2017

BTW, it appears from time to time.

What would you recommend doing if I face it next time?

@terminalmage
Contributor

It doesn't make any sense that this would ever happen in 2016.11.x. How did you install Salt on your master? Literally the only explanation is that files from an older release are present.
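
A hedged way to check for leftovers from an older release on an RPM-based install (the package names and the site-packages path are assumptions based on the versions report):

rpm -qa 'salt*'                                        # which Salt packages/versions are installed
rpm -V salt salt-master                                # flag files that differ from what the packages shipped
ls /usr/lib/python2.7/site-packages/salt/utils/*.pyc   # stale byte-compiled files can linger across upgrades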

@samodid
Contributor

samodid commented Aug 15, 2017

We install Salt from the CentOS repo. We have a config template in our states, from which we generate the salt-master configuration.

@meggarr

meggarr commented Aug 24, 2017

I saw this issue when using

salt-run cache.clear_git_lock gitfs type=update
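
For context, the two runner calls mentioned in this thread, back to back (a sketch; the lock-clearing call is the one that triggered the message here):

salt-run cache.clear_git_lock gitfs type=update   # clear stale gitfs update locks
salt-run fileserver.update                        # then refresh the fileserver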

@stamak
Author

stamak commented Sep 5, 2017

/var/log/salt/master:2017-09-05 06:48:41,958 [salt.utils.gitfs ][CRITICAL][348] Invalid gitfs configuration parameter 'saltenv' in remote git+ssh://git@ourgitserver/ourgitrepo.git. Valid parameters are: base, mountpoint, root, ssl_verify, name. See the GitFS Walkthrough in the Salt documentation for further information.

Error is still present :(

@terminalmage
Contributor

#43458 should fix this. The key was in @meggarr's comment. This wasn't being generated by the normal fileserver update process, but by incorrect calls to init_remotes() in the cache runner.

@sathieu
Contributor

sathieu commented Nov 29, 2018

Shouldn't this bug be closed?

@terminalmage
Contributor

Yep! Closing.
