
salt-master salt-cloud not acting idempotent #34687

Closed
hnagri opened this issue Jul 15, 2016 · 4 comments
Labels
Bug broken, incorrect, or confusing behavior P3 Priority 3 RIoT Relates to integration with cloud providers, hypervisors, API-based services, etc. Salt-Cloud severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around stale
Milestone: Approved
Comments


hnagri commented Jul 15, 2016

http://stackoverflow.com/questions/38390216/salt-master-salt-cloud-not-acting-idempotent
I am trying to test salt-cloud saltify to deploy/install salt-minions on target machines.

I created three vagrant machines and named them master, minion-01, and minion-02.

All three machines were identical:

root@master:/home/vagrant# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.4 LTS
Release:    14.04
Codename:   trusty

Then, on master, I followed http://repo.saltstack.com/#ubuntu to install salt-master (manually, of course).

Then on master I added these three files.

In /etc/salt/cloud.providers.d:

root@master:/etc/salt/cloud.providers.d# cat bare_metal.conf 
my-saltify-config:
  minion:
    master: 192.168.33.10
  driver: saltify

In /etc/salt/cloud.profiles.d:

root@master:/etc/salt/cloud.profiles.d# cat saltify.conf 
make_salty:
  provider: my-saltify-config
  script_args: git v2016.3.1

And /etc/salt/saltify-map:

root@master:/etc/salt# cat saltify-map 
make_salty:
  - minion-01:
      ssh_host: 192.168.33.11
      ssh_username: vagrant
      password: vagrant
  - minion-02:
      ssh_host: 192.168.33.12
      ssh_username: vagrant
      password: vagrant

Then on master I ran salt-cloud -m /etc/salt/saltify-map. It was very slow, but it ran without errors, and the keys of both minion-01 and minion-02 were accepted by the salt master.

I could do this:

root@master:/home/vagrant# salt 'minion*' test.ping
minion-01:
    True
minion-02:
    True

and this:

root@master:/home/vagrant# salt-key 
Accepted Keys:
minion-01
minion-02
Denied Keys:
Unaccepted Keys:
Rejected Keys:

The Problem:

Now, when I executed salt-cloud -m /etc/salt/saltify-map again, salt-cloud re-ran the whole deployment from scratch, and afterwards I had this:


root@master:/home/vagrant# salt 'minion*' test.ping
minion-02:
    Minion did not return. [No response]
minion-01:
    Minion did not return. [No response]

and this:

root@master:/etc/salt# salt-key 
Accepted Keys:
minion-01
minion-02
Denied Keys:
minion-01
minion-02
Unaccepted Keys:
Rejected Keys:

In short, salt-cloud is not acting idempotently.

What am I doing wrong?

The second problem: although the first run of salt-cloud -m /etc/salt/saltify-map installs salt-minion and gets the keys of minion-01 and minion-02 accepted on the salt master, the minion machines end up with all of these installed alongside salt-minion:

root@minion-02:/home/vagrant# salt
salt         salt-call    salt-cp      salt-master  salt-proxy   salt-ssh     salt-unity
salt-api     salt-cloud   salt-key     salt-minion  salt-run     salt-syndic 

How do I make sure that only salt-minion gets installed?
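For context, `script_args: git v2016.3.1` in the profile above tells salt-bootstrap to install Salt from a git checkout of the source tree, and a source install ships every entry point (salt-master, salt-cloud, salt-api, and so on). A possible alternative is sketched below; it assumes the salt-bootstrap "stable <version>" install type, which installs from the SaltStack package repositories and, by default, installs only the salt-minion package (a master is only installed when bootstrap is passed -M):

```yaml
# Hypothetical variant of /etc/salt/cloud.profiles.d/saltify.conf.
# "stable 2016.3.1" asks salt-bootstrap to install the packaged
# salt-minion from the SaltStack repos instead of building the whole
# Salt source tree from git.
make_salty:
  provider: my-saltify-config
  script_args: stable 2016.3.1
```

This is a sketch of the bootstrap behavior, not a confirmed fix for the reporter's setup.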

Thanks.

PS:


root@master:/etc/salt# salt-master --version
salt-master 2016.3.1 (Boron)
rallytime (Contributor) commented Jul 15, 2016
@hnagri Hm - I wonder if this is a bug in the saltify driver itself, rather than all of salt-cloud. I am not able to reproduce this with other drivers like linode or ec2. The map call correctly exits with warning messages stating that the map file minions already exist:

# salt-cloud -m /etc/salt/map
[WARNING ] 'rally-2' already exists, removing from the create map.
[WARNING ] 'rally-1' already exists, removing from the create map.
rally-1:
    ----------
    Message:
        Already running.
rally-2:
    ----------
    Message:
        Already running.

I tested this at the HEAD of the 2015.8 branch, the HEAD of the 2016.3 branch, as well as the 2016.3.1 release. All results were the same.

ping @techhat

@rallytime rallytime added Bug broken, incorrect, or confusing behavior severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around Salt-Cloud P3 Priority 3 RIoT Relates to integration with cloud providers, hypervisors, API-based services, etc. labels Jul 15, 2016
@rallytime rallytime added this to the Approved milestone Jul 15, 2016
jfoboss (Contributor) commented Apr 7, 2017

Hi! Is there any progress with this bug? Will it be fixed in the next release? If not, in which version will it be?

I'm trying to deploy salt-minions with saltify in my environment and got the same error... :(

stale bot commented Sep 29, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

@stale stale bot added the stale label Sep 29, 2018
@stale stale bot closed this as completed Oct 6, 2018
rcmoutinho commented:
You need to define this extra line in your cloud.providers file:

force_minion_config: true

This seems to be an improvement added in Salt 2018.3.0:
https://docs.saltproject.io/en/latest/topics/cloud/misc.html#force-minion-config

The force_minion_config option requests the bootstrap process to overwrite an existing minion configuration file and public/private key files. Default: False

This might be important for drivers (such as saltify) which are expected to take over a connection from a former salt master.
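Putting that suggestion together with the provider file from the original report, the change would look something like this (a sketch only; the surrounding keys are copied from the reporter's bare_metal.conf):

```yaml
# /etc/salt/cloud.providers.d/bare_metal.conf
my-saltify-config:
  minion:
    master: 192.168.33.10
  driver: saltify
  # Overwrite any existing minion config and key files on re-deploy,
  # so a repeated salt-cloud run does not leave the minion holding a
  # key the master has already denied.
  force_minion_config: true
```

Note this makes re-runs overwrite state rather than skip already-deployed hosts, so it addresses the denied-key symptom rather than making the map run a no-op.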
