Advanced Openshift Install (Ansible) using Vagrant (multi machine) #36
Conversation
@sdodson ptal as well
@detiber I'll look at consolidating and providing support for libvirt.
@petenorth thanks, the existing Vagrantfile in openshift-ansible should help out with libvirt support.
This is really starting to shape up well. I would suggest using the Ansible Local provisioner, which would allow you to avoid Ansible issues on the host machine: https://www.vagrantup.com/docs/provisioning/ansible_local.html
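A minimal sketch of that suggestion (the playbook path is an assumption for illustration):

```ruby
# Run Ansible inside the guest instead of on the host.
config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbook.yml"  # hypothetical playbook path
  ansible.install  = true            # have Vagrant install Ansible inside the guest
end
```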
```ruby
VAGRANTFILE_API_VERSION = '2'

# Validate required plugins
REQUIRED_PLUGINS = %w(vagrant-registration vagrant-hostmanager landrush)
```
probably want to limit vagrant-registration to just enterprise deployments, assuming origin deployments would be CentOS/Fedora only.
Yes agreed (was on my list!)
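One way that suggestion could look, as a hypothetical sketch (variable names assumed, not from the actual Vagrantfile):

```ruby
# Only require vagrant-registration for enterprise deployments, since origin
# deployments would be CentOS/Fedora only and need no RHEL subscription.
deployment_type = ENV.fetch('DEPLOYMENT_TYPE', 'origin')

required_plugins = %w(vagrant-hostmanager landrush)
required_plugins << 'vagrant-registration' if deployment_type == 'enterprise'
```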
```ruby
  box_name = 'rhel/7.2'
else
  box_name = 'centos/7'
end
```
Should probably have an option for Fedora here as well, maybe a combination of DEPLOYMENT_TYPE and HOST_OS, where 'enterprise' is only valid for RHEL and origin could be any of the above.
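A hypothetical sketch of that combination (the `HOST_OS` variable and the Fedora box name are assumptions for illustration):

```ruby
# Combine DEPLOYMENT_TYPE with a HOST_OS choice: 'enterprise' is only valid
# for RHEL, while origin may run on CentOS or Fedora.
deployment_type = ENV.fetch('DEPLOYMENT_TYPE', 'origin')
host_os         = ENV.fetch('HOST_OS', 'centos')

if deployment_type == 'enterprise'
  box_name = 'rhel/7.2'
elsif host_os == 'fedora'
  box_name = 'fedora/24-cloud-base'  # assumed box name
else
  box_name = 'centos/7'
end
```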
```shell
# We must repair /etc/hosts
sudo sed -ri 's/127\.0\.0\.1\s.*/127.0.0.1 localhost localhost.localdomain/' /etc/hosts
SCRIPT
```
This looks static, any reason not to have this as just a file alongside the Vagrantfile?
Also, is there an issue with the ssh key injection with Vagrant? We shouldn't need the root password.
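For illustration, this is the effect of the sed repair, presumably needed because Vagrant appends the machine's hostname to the loopback line. The sample hosts content below is hypothetical:

```ruby
# A Vagrant-mangled /etc/hosts maps the machine's FQDN to 127.0.0.1;
# the repair resets the loopback line without touching other entries.
hosts = "127.0.0.1 admin1.example.com admin1 localhost\n" \
        "192.168.50.20 node1.example.com node1\n"

repaired = hosts.gsub(/^127\.0\.0\.1\s.*$/,
                      '127.0.0.1 localhost localhost.localdomain')
```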
```ruby
if ENV['DEPLOYMENT_TYPE'] == 'enterprise'
  config.vm.provision "shell", inline: "echo \"setting up Red Hat subscriptions\""
  config.vm.provision "shell", path: "repos.sh"
```
I'm not sure there is a guarantee that this step is run prior to the admin1 provisioning.
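One hedged option for guaranteeing that ordering (machine name and paths are illustrative): Vagrant runs provisioners defined on the outer config before any machine-specific provisioners, so hoisting the repo setup to the top level would ensure it runs first on every machine:

```ruby
Vagrant.configure('2') do |config|
  # Outer provisioners run before machine-specific ones on each machine.
  if ENV['DEPLOYMENT_TYPE'] == 'enterprise'
    config.vm.provision "shell", path: "repos.sh"
  end

  config.vm.define "admin1" do |admin1|
    # admin1-specific provisioning runs only after repos.sh has completed
  end
end
```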
Jason, I'm fairly close to getting the ansible local provisioner working, but there seem to be some differences in behaviour between the origin install and enterprise with respect to the --limit option. The Ansible provisioner seems to use the --limit option regardless of whether it is specified. For Origin the only value that works for me is --limit="*"; --limit="all" doesn't work. For Enterprise, neither --limit="*" nor --limit="all" works. The Ansible version that gets installed also differs: Origin gets ansible 2.1.1.0, Enterprise gets ansible 2.2.0. Any ideas? Peter.
Update on the --limit question: I think I've got it. You need to specify an empty string, i.e. --limit=""
@petenorth are you specifying the limit as a command line flag, or as an option within the vagrant config?
@detiber the ansible local provisioner sets it to --limit="all" if you don't specify it.
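In Vagrantfile terms, the workaround described above would look something like this (the playbook path is an assumption):

```ruby
config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbook.yml"  # hypothetical path
  ansible.limit    = ""              # empty string suppresses the implicit --limit="all"
end
```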
Moved to ansible local provisioner. Outstanding:

Some cleanup of Vagrantfile complete. Outstanding: libvirt provider.
@detiber I may be seeing some instability with the ansible local install. If confirmed, the main difference between what I had originally and the current state of this pull request would be the use of the vagrant user for the install and/or the use of --limit="". Update: I think this was due to not confirming that the router and integrated docker registry were up and running.
@petenorth can you elaborate on the instability you are seeing?
I have only seen it once; the messages I was getting were that the etcd client was not configured correctly (this is from memory), and I noticed that the router and registry had not finished deploying. A fresh run, making sure that the registry and router were running correctly, seemed to fix the issue.
@petenorth working up a few suggestions, PR incoming after I finish some testing (with libvirt support as well).
@detiber OK, thanks.
- remove script files and convert to inline scripts
- add libvirt support
- remove static inventory and use ansible_local provisioner config
- add ORIGIN_OS env var to be able to choose Fedora (untested)
- change vagrant-cachier scope to machine to avoid locking issues around the cache
- updated sync folder to use the default /vagrant (was causing issues with ansible_local provisioner integration)
- Vagrant Ansible provisioner does not handle nested hashes; convert them to JSON strings instead
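A hedged sketch of that workaround (the variable name is hypothetical): serialize any nested values to JSON strings before handing them to the provisioner's extra_vars, since Ansible can parse JSON-valued variables:

```ruby
require 'json'

# Nested extra_vars confuse the Vagrant Ansible provisioner, so flatten
# nested hashes into JSON strings that Ansible parses on the other side.
nested_vars = {
  'openshift_node_labels' => { 'region' => 'infra', 'zone' => 'default' }
}

extra_vars = nested_vars.each_with_object({}) do |(key, value), acc|
  acc[key] = value.is_a?(Hash) ? JSON.generate(value) : value
end
```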
@petenorth Just started some more testing. I'm not seeing the same issue you are with the hosts file not being updated properly; it seems to be working correctly (at least for CentOS7/Ansible 2.1). One thing that I am noticing is that if I do not set VAGRANT_LOG=debug, then I am hitting weird deadlock issues on provisioning with libvirt. The issue seems to go away once I set logging to debug, though.
@petenorth My previous test run has completed and I see a deployed router and registry. I haven't tested any further, though.
@petenorth I'm going to go ahead and merge this, and any further issues can be sorted out through additional issues/PRs.
As suggested by Jason DeTiberus, creating this pull request.
This project demonstrates an OpenShift Enterprise 3.3 / OpenShift Container Platform 3.3 advanced installation using Ansible.
On my machine the entire process (excluding the creation of the initial RHEL 7.2 vagrant box) takes around 40 mins.
This is to create a single master / two node set up.
My laptop has 16 GiB of RAM; about 7 GiB is used after installation.
Intel® Core™ i7-3520M CPU @ 2.90GHz × 4
Note: this isn't targeting the 'oc cluster up' use case or the CDK use case. It is targeting users who need to become familiar with the OpenShift advanced installation process using Ansible.