
Salt-Cloud fails to create VMware Snapshot from orchestrator on second run. #50323

Open
rbthomp opened this issue Oct 30, 2018 · 8 comments
Labels: Bug (broken, incorrect, or confusing behavior) · Salt-Cloud · severity-medium (3rd level: incorrect or bad functionality, confusing and lacks a workaround)

rbthomp commented Oct 30, 2018

Description of Issue/Question

I have an orchestration job that creates a VMware snapshot when called by a reactor. In many instances multiple events come in, firing off multiple orchestration jobs to create VMware snapshots. The first time a flood of events comes in, the VMware snapshots are created without issue, but any flood of events that comes in after that fails to create snapshots. The strange thing is that if I call salt-cloud directly I can create the snapshots without issue, even while the orchestration job fails to create them. I can work around this by restarting the salt-master, after which snapshots are created for the first flood of events, but subsequent events fail again.

It looks like salt-cloud fails authentication after the first set of received events. This is not a VMware permissions issue: the first orchestration jobs that run succeed, and if I restart the salt-master it works again for the first set of events.
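For illustration only, here is a minimal, hypothetical Python sketch of the failure mode this resembles. None of these names are Salt's or pyVmomi's actual internals; it simulates a process-wide cached connection that is handed out again after the server has expired its session.

```python
# Hypothetical sketch of the suspected failure mode: a module-level cached
# session that is never re-validated before reuse. All names are illustrative.

class SessionExpired(Exception):
    pass

class FakeVCenter:
    """Stands in for vCenter; sessions can be invalidated server-side."""
    def __init__(self):
        self.valid_sessions = set()
        self.counter = 0

    def login(self):
        self.counter += 1
        sid = "session-{}".format(self.counter)
        self.valid_sessions.add(sid)
        return sid

    def call(self, sid):
        if sid not in self.valid_sessions:
            raise SessionExpired("The session is not authenticated.")
        return "ok"

_CACHED_SID = None  # survives across runner invocations in one master process

def get_service_instance(server):
    global _CACHED_SID
    if _CACHED_SID is None:   # logs in only once per process...
        _CACHED_SID = server.login()
    return _CACHED_SID        # ...and returns the id even after expiry

vc = FakeVCenter()
print(vc.call(get_service_instance(vc)))  # first flood of events: ok
vc.valid_sessions.clear()                 # vCenter times the session out
try:
    vc.call(get_service_instance(vc))     # later floods reuse the stale id
except SessionExpired as exc:
    print(exc)                            # The session is not authenticated.
```

Restarting the master would clear such a cache, so a fresh login succeeds, which matches the behavior described above.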

Setup

  • Set up a salt-cloud environment with a VMware provider.

_reactor/test.sls

run_test_orch:
  runner.state.orchestrate:
    - args:
      - mods: orch.test
      - pillar:
          server: {{ data['id'] }}

orch/test.sls

{% set server = salt['pillar.get']('server') %}

create_snapshot:
  salt.runner:
    - name: cloud.action
    - func: create_snapshot
    - instance: {{ server.split('.')[0] }}
    - snapshot_name: test snapshot
    - description: test snapshot
    - memdump: False

  • Create events on the event bus that fire off the orchestration job.
  • You can do this by creating a test event state.

test/test.sls

test_orch:
  event.send:
    - data:
        status: 'test'

Logs

Are sessions from a previous job not getting cleaned up correctly? msg = "The object 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686' has already been deleted or has not been completely created",

[INFO    ] Completed state [cloud.action] at time 16:32:26.203653 (duration_in_ms=7438.608)
[DEBUG   ] File /var/cache/salt/master/accumulator/140633135747344 does not exist, no need to cleanup
[DEBUG   ] LazyLoaded state.check_result
[DEBUG   ] LazyLoaded state.check_result
[DEBUG   ] LazyLoaded local_cache.prep_jid
[DEBUG   ] Gathering reactors for tag salt/run/20181030163222358153/ret
[ERROR   ] (vmodl.fault.ManagedObjectNotFound) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = "The object 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686' has already been deleted or has not been completely created",
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   obj = 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686'
}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/utils/vmware.py", line 848, in get_content
    content = service_instance.content.propertyCollector.RetrieveContents([filter_spec])
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 580, in <lambda>
    self.f(*(self.args + (obj,) + args), **kwargs)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 386, in _InvokeMethod
    return self._stub.InvokeMethod(self, info, args)
  File "/usr/lib/python2.7/site-packages/pyVmomi/SoapAdapter.py", line 1370, in InvokeMethod
    raise obj # pylint: disable-msg=E0702
vmodl.fault.ManagedObjectNotFound: (vmodl.fault.ManagedObjectNotFound) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = "The object 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686' has already been deleted or has not been completely created",
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   obj = 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686'
}
[DEBUG   ] Failed to execute 'vmware.list_nodes_min()' while querying for running nodes: The object 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686' has already been deleted or has not been completely created
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 2393, in run_parallel_map_providers_query
    cloud.clouds[data['fun']]()
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 1617, in list_nodes_min
    vm_list = salt.utils.vmware.get_mors_with_properties(_get_si(), vim.VirtualMachine, vm_properties)
  File "/usr/lib/python2.7/site-packages/salt/utils/vmware.py", line 947, in get_mors_with_properties
    content = get_content(*content_args, **content_kwargs)
  File "/usr/lib/python2.7/site-packages/salt/utils/vmware.py", line 859, in get_content
    raise salt.exceptions.VMwareRuntimeError(exc.msg)
[DEBUG   ] Rendered data from file: /var/cache/salt/master/files/base/orch/test.sls:


create_snapshot:
  salt.runner:
    - name: cloud.action
    - func: create_snapshot
    - instance: Hostname
    - snapshot_name: test snapshot
    - description: test snapshot
    - memdump: False

[DEBUG   ] Results of YAML rendering:
OrderedDict([(u'create_snapshot', OrderedDict([(u'salt.runner', [OrderedDict([(u'name', u'cloud.action')]), OrderedDict([(u'func', u'create_snapshot')]), OrderedDict([(u'instance', u'Hostname')]), OrderedDict([(u'snapshot_name', u'test snapshot')]), OrderedDict([(u'description', u'test snapshot')]), OrderedDict([(u'memdump', False)])])]))])
[PROFILE ] Time (in seconds) to render '/var/cache/salt/master/files/base/orch/test.sls' using 'yaml' renderer: 0.00243091583252
[DEBUG   ] LazyLoaded config.option
[DEBUG   ] LazyLoaded salt.runner
[INFO    ] Running state [cloud.action] at time 17:11:11.695028
[INFO    ] Executing state salt.runner for [cloud.action]
[DEBUG   ] Unable to fire args event due to missing __orchestration_jid__
[DEBUG   ] LazyLoaded saltutil.runner
[DEBUG   ] LazyLoaded cloud.action
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/afcu.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/afcu.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/beacons.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/beacons.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/salt_events.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/salt_events.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/susemanager-mine.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/susemanager-mine.conf
[DEBUG   ] Changed git to gitfs in minion opts' fileserver_backend list
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt-master.somedomain.local
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Loading static grains from /etc/salt/grains
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = salt/run/20181030171112123958/new; data = {u'fun': u'runner.cloud.action', u'fun_args': [{u'instance': u'Hostname', u'memdump': False, u'snapshot_name': u'test snapshot', u'description': u'test snapshot', u'func': u'create_snapshot'}], u'jid': u'20181030171112123958', u'user': u'UNKNOWN', u'_stamp': '2018-10-30T23:11:12.198675'}
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Gathering reactors for tag salt/run/20181030171112123958/new
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Changed git to gitfs in master opts' fileserver_backend list
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt-master.somedomain.local
[DEBUG   ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG   ] Including configuration from '/etc/salt/cloud.providers.d/vmware.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.providers.d/vmware.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad vmware.optimize_providers: 'vmware.optimize_providers' is not available.
[DEBUG   ] The 'vmware' cloud driver is unable to be optimized.
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[ERROR   ] (vim.fault.NotAuthenticated) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = 'The session is not authenticated.',
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   object = 'vim.Folder:group-d1',
   privilegeId = 'System.View'
}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/utils/vmware.py", line 803, in get_content
    container_ref, [obj_type], True)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 580, in <lambda>
    self.f(*(self.args + (obj,) + args), **kwargs)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 386, in _InvokeMethod
    return self._stub.InvokeMethod(self, info, args)
  File "/usr/lib/python2.7/site-packages/pyVmomi/SoapAdapter.py", line 1370, in InvokeMethod
    raise obj # pylint: disable-msg=E0702
vim.fault.NotAuthenticated: (vim.fault.NotAuthenticated) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = 'The session is not authenticated.',
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   object = 'vim.Folder:group-d1',
   privilegeId = 'System.View'
}
[DEBUG   ] Failed to execute 'vmware.list_nodes_min()' while querying for running nodes: Not enough permissions. Required privilege: System.View
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 2393, in run_parallel_map_providers_query
    cloud.clouds[data['fun']]()
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 1617, in list_nodes_min
    vm_list = salt.utils.vmware.get_mors_with_properties(_get_si(), vim.VirtualMachine, vm_properties)
  File "/usr/lib/python2.7/site-packages/salt/utils/vmware.py", line 947, in get_mors_with_properties
    content = get_content(*content_args, **content_kwargs)
  File "/usr/lib/python2.7/site-packages/salt/utils/vmware.py", line 808, in get_content
    '{}'.format(exc.privilegeId))
VMwareApiError: Not enough permissions. Required privilege: System.View
[... list of systems provided from vSphere removed ...]

[DEBUG   ] LazyLoaded local_cache.prep_jid
[DEBUG   ] Adding minions for job 20181030171112123958: []
[DEBUG   ] Sending event: tag = salt/run/20181030171112123958/ret; data = {u'fun_args': [{u'instance': u'Hostname', u'memdump': False, u'snapshot_name': u'test snapshot', u'description': u'test snapshot', u'func': u'create_snapshot'}], u'jid': u'20181030171112123958', u'return': {u'Not Found': [u'Hostname'], u'Not Actioned/Not Running': [u'Hostname']}, u'success': True, u'_stamp': '2018-10-30T23:11:14.064183', u'user': u'UNKNOWN', u'fun': u'runner.cloud.action'}
[INFO    ] Runner completed: 20181030171112123958
[INFO    ] {u'return': {u'Not Found': [u'Hostname'], u'Not Actioned/Not Running': [u'Hostname']}}
[INFO    ] Completed state [cloud.action] at time 17:11:14.065592 (duration_in_ms=2370.565)
[DEBUG   ] File /var/cache/salt/master/accumulator/140632157446800 does not exist, no need to cleanup
[DEBUG   ] Gathering reactors for tag salt/run/20181030171112123958/ret
[DEBUG   ] LazyLoaded state.check_result
[DEBUG   ] LazyLoaded state.check_result
[DEBUG   ] LazyLoaded local_cache.prep_jid
[DEBUG   ] Adding minions for job 20181030171111082818: []
[DEBUG   ] Sending event: tag = salt/run/20181030171111082818/ret; data = {u'fun_args': [{u'pillar': OrderedDict([(u'server', u'Hostname.afcucorp.test')]), u'mods': u'orch.test'}], u'jid': u'20181030171111082818', u'return': {u'outputter': u'highstate', u'data': {u'salt-master.somedomain.local': {u'salt_|-create_snapshot_|-cloud.action_|-runner': {u'comment': u"Runner function 'cloud.action' executed.", u'name': u'cloud.action', u'__orchestration__': True, u'start_time': '17:11:11.695027', u'result': True, u'duration': 2370.565, u'__run_num__': 0, u'__jid__': u'20181030171112123958', u'__sls__': u'orch.test', u'changes': {u'return': {u'Not Found': [u'Hostname'], u'Not Actioned/Not Running': [u'Hostname']}}, u'__id__': u'create_snapshot'}}}, u'retcode': 0}, u'success': True, u'_stamp': '2018-10-30T23:11:14.108159', u'user': u'Reactor', u'fun': u'runner.state.orchestrate'}
[DEBUG   ] LazyLoaded highstate.output
[DEBUG   ] LazyLoaded nested.output
salt-master.somedomain.local:
----------
          ID: create_snapshot
    Function: salt.runner
        Name: cloud.action
      Result: True
     Comment: Runner function 'cloud.action' executed.
     Started: 17:11:11.695027
    Duration: 2370.565 ms
     Changes:
              ----------
              return:
                  ----------
                  Not Actioned/Not Running:
                      - Hostname
                  Not Found:
                      - Hostname

Summary for salt-master.somedomain.local
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   2.371 s
[INFO    ] Runner completed: 20181030171111082818

Versions Report

Salt Version:
           Salt: 2018.3.3

Dependency Versions:
           cffi: 1.6.0
       cherrypy: Not Installed
       dateutil: 1.5
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: 0.26.3
        libnacl: Not Installed
       M2Crypto: 0.28.2
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: 0.26.4
         Python: 2.7.5 (default, Jul 13 2018, 13:06:57)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4

System Versions:
           dist: centos 7.5.1804 Core
         locale: UTF-8
        machine: x86_64
        release: 3.10.0-862.14.4.el7.x86_64
         system: Linux
        version: CentOS Linux 7.5.1804 Core

Ch3LL commented Oct 31, 2018

was this previously working on an older version?

also ping @saltstack/team-cloud any ideas here?

@Ch3LL Ch3LL added the Pending-Discussion The issue or pull request needs more discussion before it can be closed or merged label Oct 31, 2018
@Ch3LL Ch3LL added this to the Blocked milestone Oct 31, 2018

rbthomp commented Oct 31, 2018

@Ch3LL I'm not sure I just implemented this on 2018.3.3.

@IAC-Automation

I'm having the same issue on the second run of a salt-cloud VM creation, that is, if the second run comes long enough after the first for the VMware session to time out. Forgive the poor formatting; system info, salt-master versions, and the traceback are below:

Salt Version:
           Salt: 2019.2.2

Dependency Versions:
           cffi: 1.11.5
       cherrypy: 5.6.0
       dateutil: 2.6.1
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.10.1
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.33.0
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.6.1
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: Not Installed
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 3.6.8 (default, Oct 7 2019, 17:58:22)
   python-gnupg: Not Installed
         PyYAML: 3.12
          PyZMQ: 17.0.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.5.2
            ZMQ: 4.3.1

System Versions:
           dist: centos 8.0.1905 Core
         locale: UTF-8
        machine: x86_64
        release: 4.18.0-80.11.2.el8_0.x86_64
         system: Linux
        version: CentOS Linux 8.0.1905 Core

Exception occurred in runner cloud.create: Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/salt/utils/vmware.py", line 800, in get_content
    container_ref, [obj_type], True)
  File "/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py", line 706, in <lambda>
    self.f(*(self.args + (obj,) + args), **kwargs)
  File "/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py", line 512, in _InvokeMethod
    return self._stub.InvokeMethod(self, info, args)
  File "/usr/local/lib/python3.6/site-packages/pyVmomi/SoapAdapter.py", line 1397, in InvokeMethod
    raise obj # pylint: disable-msg=E0702
pyVmomi.VmomiSupport.vim.fault.NotAuthenticated: (vim.fault.NotAuthenticated) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = 'The session is not authenticated.',
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   object = 'vim.Folder:group-d1',
   privilegeId = 'System.View'
}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/salt/client/mixins.py", line 381, in low
    data['return'] = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/salt/runners/cloud.py", line 189, in create
    info = client.create(provider, instances, **salt.utils.args.clean_kwargs(**kwargs))
  File "/usr/lib/python3.6/site-packages/salt/cloud/__init__.py", line 422, in create
    mapper.create(vm_))
  File "/usr/lib/python3.6/site-packages/salt/cloud/__init__.py", line 1253, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python3.6/site-packages/salt/cloud/clouds/vmware.py", line 2579, in create
    container_ref=container_ref
  File "/usr/lib/python3.6/site-packages/salt/utils/vmware.py", line 899, in get_mor_by_property
    object_list = get_mors_with_properties(service_instance, object_type, property_list=[property_name], container_ref=container_ref)
  File "/usr/lib/python3.6/site-packages/salt/utils/vmware.py", line 944, in get_mors_with_properties
    content = get_content(*content_args, **content_kwargs)
  File "/usr/lib/python3.6/site-packages/salt/utils/vmware.py", line 805, in get_content
    '{}'.format(exc.privilegeId))
salt.exceptions.VMwareApiError: Not enough permissions. Required privilege: System.View

NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"


stale bot commented Jan 19, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

@stale stale bot added the stale label Jan 19, 2020
@sagetherage

not stale


stale bot commented Jan 22, 2020

Thank you for updating this issue. It is no longer marked as stale.

@stale stale bot removed the stale label Jan 22, 2020
@sagetherage sagetherage added Bug broken, incorrect, or confusing behavior team-cloud labels Jan 24, 2020

Ch3LL commented Feb 6, 2020

As you point out with this error: msg = "The object 'vim.view.ContainerView:session[52cc35f2-f7df-209b-fd4f-1574410b47ec]52ac5f07-30d7-6d44-ad51-95f635eb8686' has already been deleted or has not been completely created", it looks like for some reason we aren't keeping the same authenticated session. We will need to get this fixed.
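A common way to guard a cached connection against this (a sketch under assumed names, not the fix that actually landed in Salt) is to probe the session before every reuse and re-authenticate when the probe fails. With pyVmomi, a cheap liveness probe is often an inexpensive call such as `si.CurrentTime()` wrapped in a try/except for `vim.fault.NotAuthenticated`. The simulation below uses stand-in classes:

```python
# Illustrative sketch: validate a cached session before reuse and re-login
# transparently when it has expired. All names here are hypothetical.

class NotAuthenticated(Exception):
    pass

class Backend:
    """Minimal stand-in for a vCenter endpoint."""
    def __init__(self):
        self.live = set()
        self.n = 0

    def login(self):
        self.n += 1
        self.live.add(self.n)
        return self.n

    def probe(self, sid):  # analogous to calling si.CurrentTime()
        if sid not in self.live:
            raise NotAuthenticated()

    def work(self, sid):
        self.probe(sid)
        return "snapshot-created"

_cache = {}

def get_session(backend):
    sid = _cache.get(id(backend))
    if sid is not None:
        try:
            backend.probe(sid)  # cheap liveness check before reuse
            return sid
        except NotAuthenticated:
            pass                # stale: fall through and re-login
    sid = backend.login()
    _cache[id(backend)] = sid
    return sid

b = Backend()
s1 = get_session(b)
assert b.work(s1) == "snapshot-created"
b.live.clear()                  # session expired server-side
s2 = get_session(b)             # re-authenticates instead of failing
assert s2 != s1 and b.work(s2) == "snapshot-created"
```

The probe adds one round-trip per reuse; drivers that consider that too expensive sometimes instead catch the auth fault at the call site and retry once after reconnecting.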

@Ch3LL Ch3LL added severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around P4 Priority 4 and removed Pending-Discussion The issue or pull request needs more discussion before it can be closed or merged labels Feb 6, 2020
@Ch3LL Ch3LL modified the milestones: Blocked, Approved Feb 6, 2020
@sagetherage sagetherage removed the P4 Priority 4 label Jun 3, 2020

Dejv56 commented Jan 10, 2022

I am still experiencing the same "The session is not authenticated" error when using salt.utils.vmware.get_service_instance to create the instance object in a custom proxy module. I am running v3002.2; I tried upgrading the utils module to the master branch, but the problem persists.

The proxy minion returns data for several minutes or hours without a hitch, then suddenly I get the error, and the only way to fix it is to restart the proxy minion. Then it works again, for some time.

It's odd that software acquired by VMware doesn't provide the tools to manage vSphere/ESX properly. The Salt Extension module for SaltStack is nice, but it's still in development. Even when finished, the REST API is very limited in comparison to the SOAP API; for example, managing snapshots is nonexistent.
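Until the driver re-validates sessions itself, one stopgap (hypothetical, not an official Salt API) is to wrap your own calls in a retry that rebuilds the service instance once when an authentication fault surfaces. A self-contained simulation of that pattern:

```python
# Sketch of a retry-once-on-auth-fault wrapper. The exception class and
# reconnect() callback are stand-ins for vim.fault.NotAuthenticated and
# a real re-call of get_service_instance().
import functools

class NotAuthenticated(Exception):
    pass

def retry_on_auth_fault(reconnect):
    """Retry a call once after rebuilding the connection on auth failure."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(conn, *args, **kwargs):
            try:
                return fn(conn, *args, **kwargs)
            except NotAuthenticated:
                fresh = reconnect()  # e.g. re-run get_service_instance()
                return fn(fresh, *args, **kwargs)
        return wrapper
    return deco

# Tiny demo: the first connection is dead; reconnect() hands back a live one.
state = {"dead": True}

def reconnect():
    state["dead"] = False
    return "live-conn"

@retry_on_auth_fault(reconnect)
def list_vms(conn):
    if state["dead"]:
        raise NotAuthenticated("The session is not authenticated.")
    return ["vm1", "vm2"]

print(list_vms("stale-conn"))  # → ['vm1', 'vm2']
```

This only retries once, so a genuinely broken login still fails loudly instead of looping.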
