This repository has been archived by the owner on Dec 9, 2020. It is now read-only.

Advanced Openshift Install (Ansible) using Vagrant (multi machine) #36

Merged
27 commits merged into openshift:master on Oct 24, 2016

Conversation

petenorth (Contributor)

As suggested by Jason DeTiberus, I'm creating this pull request.

This project demonstrates an OpenShift Enterprise 3.3 / OpenShift Container Platform 3.3 advanced installation using Ansible.

On my machine the entire process (excluding the creation of the initial RHEL 7.2 Vagrant box) takes around 40 minutes.

This creates a single-master, two-node setup.

My laptop has 16 GiB of RAM; roughly 7 GiB is in use after installation.

Intel® Core™ i7-3520M CPU @ 2.90GHz × 4

Note: this isn't targeting the 'oc cluster up' use case or the CDK use case. It is targeting users who need to become familiar with the OpenShift advanced installation process using Ansible.

@detiber (Contributor) commented Oct 13, 2016

@abutcher @tbielawa ptal, I would like to see if we can provide some suggestions for consolidating the two Vagrantfiles and also providing support for libvirt.
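
For context, a minimal sketch of what provider blocks in a consolidated Vagrantfile might look like once libvirt is supported (not code from this PR; the memory/CPU values are illustrative only, assuming the vagrant-libvirt plugin is installed):

config.vm.provider :virtualbox do |vb|
  vb.memory = 2048   # illustrative sizing, not the PR's values
  vb.cpus   = 2
end

config.vm.provider :libvirt do |lv|
  lv.memory = 2048   # same illustrative sizing for the libvirt provider
  lv.cpus   = 2
end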

@detiber (Contributor) commented Oct 13, 2016

@sdodson ptal as well

@petenorth (Contributor, Author)

@detiber I'll look at consolidating and providing support for libvirt.

@detiber (Contributor) commented Oct 13, 2016

@petenorth thanks, the existing Vagrantfile in openshift-ansible should help out with libvirt support.

@detiber (Contributor) left a comment

This is really starting to shape up well. I would suggest using the Ansible Local provisioner, which would avoid Ansible issues on the host machine: https://www.vagrantup.com/docs/provisioning/ansible_local.html
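
A minimal sketch of the suggested switch to the Ansible Local provisioner (the playbook path is illustrative, not the PR's actual layout); Vagrant installs Ansible inside the guest, so nothing extra is required on the host:

config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbook.yml"   # illustrative path
  ansible.install  = true             # have Vagrant install Ansible in the guest
  ansible.verbose  = true
end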

VAGRANTFILE_API_VERSION = '2'

# Validate required plugins
REQUIRED_PLUGINS = %w(vagrant-registration vagrant-hostmanager landrush)

Contributor:

You probably want to limit vagrant-registration to just enterprise deployments, assuming Origin deployments would be CentOS/Fedora only.
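
A sketch of how that could look, keeping the plugin check in the Vagrantfile (an illustration, not the PR's final code):

REQUIRED_PLUGINS = %w(vagrant-hostmanager landrush)
REQUIRED_PLUGINS << 'vagrant-registration' if ENV['DEPLOYMENT_TYPE'] == 'enterprise'

missing = REQUIRED_PLUGINS.reject { |plugin| Vagrant.has_plugin?(plugin) }
unless missing.empty?
  raise "Missing required plugins: #{missing.join(', ')}. Install with 'vagrant plugin install <name>'."
end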

Contributor (Author):

Yes, agreed (it was on my list!)

box_name = 'rhel/7.2'
else
box_name = 'centos/7'
end

Contributor:

We should probably have an option for Fedora here as well, maybe a combination of DEPLOYMENT_TYPE and HOST_OS, where 'enterprise' is only valid for RHEL and origin could be any of the above.
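
A sketch of the suggested combination (the ORIGIN_OS variable matches the env var added later in this PR; the Fedora box name is illustrative):

if ENV['DEPLOYMENT_TYPE'] == 'enterprise'
  box_name = 'rhel/7.2'
elsif ENV['ORIGIN_OS'] == 'fedora'
  box_name = 'fedora/24-cloud-base'   # illustrative Fedora box
else
  box_name = 'centos/7'
end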

# We must repair /etc/hosts

sudo sed -ri 's/127\.0\.0\.1\s.*/127.0.0.1 localhost localhost.localdomain/' /etc/hosts
SCRIPT

Contributor:

This looks static; any reason not to have it as just a file alongside the Vagrantfile?

Also, is there an issue with the SSH key injection in Vagrant? We shouldn't need the root password.


if ENV['DEPLOYMENT_TYPE'] == 'enterprise'
config.vm.provision "shell", inline: "echo \"setting up Red Hat subscriptions\""
config.vm.provision "shell", path: "repos.sh"

Contributor:

I'm not sure there is a guarantee that this step runs prior to the admin1 provisioning.
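
For reference, with the default non-parallel provisioning Vagrant brings machines up in definition order, so one way to address the ordering concern is to define the machine that drives the Ansible run last. A sketch with hypothetical machine names around the admin1 box (bodies elided):

config.vm.define "node1"                  # provisioned first
config.vm.define "node2"
config.vm.define "admin1" do |admin|      # provisioned last; runs the playbook
  admin.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"     # illustrative path
  end
end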

@petenorth (Contributor, Author)

Jason,

I'm fairly close to getting the Ansible Local provisioner working, but there seem to be some differences in behaviour between the Origin install and the Enterprise install with respect to the --limit option.

The Ansible provisioner seems to pass the --limit option regardless of whether it is specified.

For Origin the only value that works for me is --limit="*"; --limit="all" doesn't work.

For Enterprise, neither --limit="*" nor --limit="all" works.

The ansible version that gets installed is different:

Origin:

ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides

Enterprise:

ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides

Any ideas?

Peter.

@petenorth (Contributor, Author)

Update on the --limit question: I think I've got it. You need to specify an empty string, i.e. --limit="".

@detiber (Contributor) commented Oct 17, 2016

@petenorth are you specifying the limit as a command line flag, or as an option within the vagrant config?

@petenorth (Contributor, Author)

@detiber the Ansible Local provisioner sets it to --limit="all" if you don't specify it.
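
In Vagrantfile terms, that corresponds to setting the provisioner's limit option explicitly; a sketch (the playbook path is illustrative):

config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.limit    = ""    # override the provisioner's default of "all"
end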

@petenorth (Contributor, Author)

Moved to the Ansible Local provisioner.
Using the vagrant user and the Vagrant private keys (as suggested in the Ansible Local Vagrant documentation).

Outstanding:

  1. required plugin checks.
  2. libvirt provider.
  3. clean up the Vagrantfile (set the deployment type as a variable once rather than using ENV[] everywhere; see the sketch after this list).
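
A sketch of item 3, reading the environment once near the top of the Vagrantfile and reusing the result (the variable names are assumptions):

deployment_type = ENV.fetch('DEPLOYMENT_TYPE', 'origin')
enterprise      = (deployment_type == 'enterprise')

box_name = enterprise ? 'rhel/7.2' : 'centos/7'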

@petenorth (Contributor, Author)

Some cleanup of the Vagrantfile is complete.
Required plugins fixed.

Outstanding:

libvirt provider.

@petenorth (Contributor, Author) commented Oct 18, 2016

@detiber I may be seeing some instability with the Ansible Local install. If confirmed, the main difference between what I had originally and the current state of this pull request would be the use of the vagrant user for the install and/or the use of --limit="".

I think this was due to not confirming that the router and the integrated Docker registry were up and running.

@detiber (Contributor) commented Oct 18, 2016

@petenorth can you elaborate on the instability you are seeing?

@petenorth (Contributor, Author)

I have only seen it once; the messages I was getting were that the client etcd was not configured correctly (this is from memory), and I noticed that the router and registry had not finished deploying.

A fresh run and making sure that the registry and router were running correctly seemed to fix the issue.

@detiber (Contributor) commented Oct 19, 2016

@petenorth working up a few suggestions, PR incoming after I finish some testing (with libvirt support as well).

@petenorth (Contributor, Author)

@detiber OK, thanks.

detiber and others added 9 commits October 19, 2016 17:26
- remove script files and convert to inline scripts
- add libvirt support
- remove static inventory and use ansible_local provisioner config
- add ORIGIN_OS env var to be able to choose Fedora (untested)
- change vagrant-cachier scope to machine to avoid locking issues around the
  cache
- updated sync folder to use the default /vagrant (was causing issues with
  ansible_local provisioner integration)
- Vagrant Ansible provisioner does not handle nested hashes; change them to JSON strings instead (see the sketch after this list)
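
A sketch of the JSON-string workaround from the last commit message above (the openshift_node_labels value is illustrative, not the PR's exact variables):

require 'json'

config.vm.provision "ansible_local" do |ansible|
  ansible.playbook   = "playbook.yml"
  ansible.extra_vars = {
    openshift_node_labels: { 'region' => 'infra', 'zone' => 'default' }.to_json
  }
end
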
@detiber (Contributor) commented Oct 24, 2016

@petenorth Just started some more testing. I'm not seeing the same issue you are with the hosts file not being updated properly. It seems to be working correctly (at least for CentOS7/Ansible 2.1).

One thing that I am noticing is that if I do not set VAGRANT_LOG=debug, then I am hitting weird deadlock issues on provisioning with libvirt. The issue seems to go away once I set logging to debug, though.

@detiber (Contributor) commented Oct 24, 2016

@petenorth My previous test run has completed and I see a deployed router and registry. I haven't tested any further, though.

@detiber (Contributor) commented Oct 24, 2016

@petenorth I'm going to go ahead and merge this, and any further issues can be sorted out through additional issues/PRs.

detiber merged commit 8c52fb6 into openshift:master on Oct 24, 2016
jaywryan pushed a commit to jaywryan/openshift-ansible-contrib that referenced this pull request on Jul 3, 2018: Advanced Openshift Install (Ansible) using Vagrant (multi machine)