
salt-cloud (digitalocean provider-specific?): multiple providers with the same personal_access_token gives errors when deleting a VM #33439

Closed
jf opened this issue May 23, 2016 · 8 comments
Labels
Bug, P4 (Priority 4), RIoT, Salt-Cloud, severity-medium, stale
Comments

jf commented May 23, 2016

Description of Issue/Question

(Note: I haven't tested whether this affects other providers as well; I will try to do so tomorrow.)

The Digital Ocean example in the docs (https://docs.saltstack.com/en/latest/topics/cloud/config.html#digitalocean) specifies the location key in the cloud provider config rather than in the VM profile config. To cover multiple locations, one is tempted to set up multiple providers whose only difference is location. This works, and creating VMs causes no problems, but when you try to delete a VM, salt-cloud erroneously sees the one VM as being connected to 2 providers (the key here is that both providers have the same personal_access_token):

The following virtual machines are set to be destroyed:
  do-sg:
    digital_ocean:
      vm1
  do-ny:
    digital_ocean:
      vm1

Proceed? [N/y] 

Setup

cloud.providers.d/do-provider.conf:

do-sg:
  driver: digital_ocean
  personal_access_token: REUSE_SAME_TOKEN
  ssh_key_file: /some/key/file
  ssh_key_names: somekeyname
  location: Singapore 1

do-ny:
  driver: digital_ocean
  personal_access_token: REUSE_SAME_TOKEN
  ssh_key_file: /some/key/file
  ssh_key_names: somekeyname
  location: New York 1

cloud.profiles.d/do-profiles.conf:

do-sg-ubuntu:
  provider: do-sg
  image: 14.04.4 x64
  size: 2gb

Steps to Reproduce Issue

salt-cloud -p do-sg-ubuntu vm1; salt-cloud -d vm1

If the token for do-ny is changed so that the two providers use different tokens, salt-cloud -d vm1 deletes the VM properly without any error messages.
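A possible workaround (untested on my side, and assuming the digital_ocean driver also honors location when it is set in the profile rather than in the provider) would be to keep a single provider entry and move location into each profile:

do:
  driver: digital_ocean
  personal_access_token: SINGLE_TOKEN
  ssh_key_file: /some/key/file
  ssh_key_names: somekeyname

do-sg-ubuntu:
  provider: do             # hypothetical single provider defined above
  image: 14.04.4 x64
  size: 2gb
  location: Singapore 1    # location now lives in the profile

do-ny-ubuntu:
  provider: do
  image: 14.04.4 x64
  size: 2gb
  location: New York 1

That way only one personal_access_token is in play, so the destroy lookup has nothing to collide on.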

Versions Report

Salt Version:
           Salt: 2015.8.10

Dependency Versions:
         Jinja2: 2.7.2
       M2Crypto: Not Installed
           Mako: 0.9.1
         PyYAML: 3.10
          PyZMQ: 14.0.1
         Python: 2.7.6 (default, Jun 22 2015, 17:58:13)
           RAET: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.4
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 1.5
          gitdb: 0.5.4
      gitpython: 0.3.2 RC1
          ioflo: Not Installed
        libgit2: Not Installed
        libnacl: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.3.0
   mysql-python: 1.2.3
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
   python-gnupg: Not Installed
          smmap: 0.8.2
        timelib: Not Installed

System Versions:
           dist: Ubuntu 14.04 trusty
        machine: x86_64
        release: 3.13.0-86-generic
         system: Ubuntu 14.04 trusty
Ch3LL commented May 25, 2016

@jf would you mind posting the error you are seeing when attempting to delete the VM?

Ch3LL added the info-needed label on May 25, 2016
Ch3LL modified the milestones: Approved, Blocked on May 25, 2016
jf commented May 25, 2016

hey @Ch3LL! It's actually there in my first "quote". I'll reproduce it here again for clarity:

The following virtual machines are set to be destroyed:
  do-sg:
    digital_ocean:
      vm1
  do-ny:
    digital_ocean:
      vm1

Proceed? [N/y] 

"vm1" is detected as being in 2 different places when it really is only in 1.

jf commented May 25, 2016

oh sorry, perhaps you meant the actual error. Hang on... I'll need to create, then delete

jf commented May 25, 2016

ok, here it is:

The following virtual machines are set to be destroyed:
  do-sg:
    digital_ocean:
      vm1
  do-ny:
    digital_ocean:
      vm1

Proceed? [N/y] y
... proceeding
[INFO    ] Destroying in non-parallel mode.
[INFO    ] Starting new HTTPS connection (1): api.digitalocean.com
[INFO    ] Starting new HTTPS connection (1): api.digitalocean.com
[INFO    ] Starting new HTTPS connection (1): api.digitalocean.com
[INFO    ] Starting new HTTPS connection (1): api.digitalocean.com
Error: There was an error destroying machines: An error occurred while querying DigitalOcean. HTTP Code: 422  Error: u'{"id":"unprocessable_entity","message":"Droplet already has a pending event."}'

jf commented May 26, 2016

@Ch3LL just to add on: on the cloud provider's side it may look like everything has been cleaned up (since the droplet is gone). However, the VM still lingers on in salt: salt \* still attempts to contact the deleted VM, and salt-key still lists its key. (NOTE: this is still happening with the 2016.3.0 release)
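Until that's fixed, the leftover key can presumably be cleaned up by hand on the master, e.g.:

salt-key -L       # confirm the destroyed VM is still listed
salt-key -d vm1   # delete its stale key

(That only addresses the stale key, not the underlying duplicate-provider matching.)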

Ch3LL commented Jun 14, 2016

@jf thanks for all of the additional investigation work. Looks like we need to add the ability to handle multiple providers with the same key on digital ocean.

@Ch3LL Ch3LL added Bug broken, incorrect, or confusing behavior P4 Priority 4 severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around Salt-Cloud RIoT Relates to integration with cloud providers, hypervisors, API-based services, etc. and removed info-needed waiting for more info labels Jun 14, 2016
@Ch3LL Ch3LL modified the milestones: Approved, Blocked Jun 14, 2016
jf commented Jun 15, 2016

No prob, @Ch3LL. Thanks for getting back to me!

stale bot commented May 25, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

stale bot added the stale label on May 25, 2018
stale bot closed this as completed on Jun 1, 2018