Merge pull request #36 from petenorth/master
Advanced Openshift Install (Ansible) using Vagrant (multi machine)
detiber committed Oct 24, 2016
2 parents 3fdfa7f + 0698efa commit 8c52fb6
Showing 8 changed files with 391 additions and 0 deletions.
111 changes: 111 additions & 0 deletions vagrant/README.md
@@ -0,0 +1,111 @@
Overview
--------

This is a Vagrant-based project that demonstrates an advanced install of OpenShift Origin (latest) or OpenShift Container Platform 3.3, i.e. one driven by an Ansible playbook.

The documentation for the installation process can be found at

https://docs.openshift.org/latest/welcome/index.html

or

https://docs.openshift.com/container-platform/3.3/install_config/install/planning.html


Pre-requisites
--------------

* (If intending to install OpenShift Container Platform) a Red Hat account is required so that the VMs can be registered via subscription-manager.
* Vagrant installed (I run with 1.7.4, which is a bit old)
* VirtualBox installed (I run with 5.0.14, which is also a bit old)

Install the following Vagrant plugins (example commands follow the list):

* landrush (1.1.2)
* vagrant-hostmanager (1.8.5)
* (If intending to install OpenShift Container Platform) vagrant-registration (found within the Red Hat CDK 2.2)
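
A minimal sketch of installing the plugins (plugin names as listed above; the vagrant-registration plugin can alternatively be taken from the Red Hat CDK 2.2):

```
# DNS and /etc/hosts management plugins
vagrant plugin install landrush
vagrant plugin install vagrant-hostmanager

# Only needed for an OpenShift Container Platform install
vagrant plugin install vagrant-registration
```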

The OpenShift Container Platform install requires importing a RHEL 7.2 box; the easiest way to do this is to use the Packer tool from HashiCorp. The steps are described at

https://stomp.colorado.edu/blog/blog/2015/12/24/on-building-red-hat-enterprise-linux-vagrant-boxes/

The ISO image that the Vagrant box is created from should be the 'RHEL 7.2 Binary DVD' image on the Red Hat downloads site. The box name I have used in the Vagrantfile is 'rhel/7.2'.
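
Once Packer has produced a box file, it can be imported under that name; a sketch, assuming a hypothetical output file called rhel-7.2.box:

```
# Import the Packer-built box under the name the Vagrantfile expects
vagrant box add rhel/7.2 ./rhel-7.2.box

# Confirm it is available
vagrant box list
```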

When installing OpenShift Container Platform the Vagrantfile assumes a Red Hat employee subscription ('Employee SKU'). If you aren't a Red Hat employee then simply hard code the pool ID of a subscription that gives you access to the OpenShift Container Platform RPMs (this could be a 30 day trial subscription).
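
The Vagrantfile also reads the pool from the RHSM_POOL environment variable (defaulting to 'Employee SKU'), so an alternative to editing the file is:

```
# Use a specific subscription pool instead of the default 'Employee SKU'
export RHSM_POOL='<your pool ID>'
```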

Installation
------------

git clone https://github.com/openshift/openshift-ansible-contrib.git
cd openshift-ansible-contrib/vagrant

then for an Origin install

vagrant up

or for an OpenShift Container Platform install

export DEPLOYMENT_TYPE=openshift-enterprise
vagrant up

(You will be prompted for your Red Hat account details and for the host's sudo password during this process.)

then for either install carry on with

vagrant ssh admin1
su - (when prompted, the password is 'redhat')
/vagrant/deploy.sh (when prompted, respond with 'yes'; the password for the remote machines is 'redhat')

An Ansible playbook will start (this is OpenShift installing); it uses the etc_ansible_hosts file of the git repo, copied to /etc/ansible/hosts. If installing OpenShift Container Platform then (via the DEPLOYMENT_TYPE environment variable) the variable 'deployment_type' in /etc/ansible/hosts is set to 'openshift-enterprise'.

The hosts file creates an install with one master and two nodes. The NFS share gets created on admin1.

The /etc/ansible/hosts file makes use of the 'openshift_ip' property to force the use of the eth1 network interface, which uses the 192.168.50.x IP addresses of the Vagrant private network.

Once the install is complete, and after confirming that the docker-registry pod is up and running:

Log on to https://master1.example.com:8443 as admin/admin123 and create a project named 'test', then

ssh to master1:

ssh master1
oc login -u=system:admin
oc annotate namespace test openshift.io/node-selector='region=primary' --overwrite

On the host machine (the following assumes RHEL/CentOS; other OSes may differ), first verify that the contents of /etc/dnsmasq.d/vagrant-landrush give

server=/example.com/127.0.0.1#10053

then update the DNS entries thus:

vagrant landrush set apps.example.com 192.168.50.20
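
To confirm the entry was recorded, the records landrush is serving can be listed (a quick check using the plugin's ls subcommand):

```
vagrant landrush ls
```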

In the web console create a PHP app and wait for the deployment to complete. Navigate to the overview page for the test project and click on the link for the service, i.e.

cakephp-example-test.apps.example.com

What has just been demonstrated? The new app is deployed into a project with a node selector that requires the region label to be 'primary', which means the app gets deployed to either node1 or node2. The landrush wildcard DNS entry for apps.example.com points to master1, which is where the router is running, so being able to render the home page of the app means that the OpenShift SDN is working properly with Vagrant.
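
One way to see where the router, registry and application pods actually landed (a sketch; run as system:admin on master1):

```
# The router and docker-registry run in the default namespace
oc get pods -o wide -n default

# The PHP app should be scheduled on node1 or node2
oc get pods -o wide -n test
```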

Notes
-----

The landrush plugin creates a small DNS server so that the guest VMs can resolve each other's hostnames and the host can resolve the guest VMs' hostnames. The landrush DNS server listens on 127.0.0.1 on port 10053. It uses a dnsmasq process to redirect DNS traffic to landrush. If this isn't working, verify that:

cat /etc/dnsmasq.d/vagrant-landrush

gives

server=/example.com/127.0.0.1#10053

and that /etc/resolv.conf has an entry

# Added by landrush, a vagrant plugin
nameserver 127.0.0.1
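
A quick end-to-end check is to query the landrush server directly on its port and then through the normal resolver path (a sketch; assumes dig is installed on the host):

```
# Query landrush directly, bypassing dnsmasq
dig +short master1.example.com @127.0.0.1 -p 10053

# Query via the normal resolver path; this should return the same 192.168.50.x address
dig +short master1.example.com
```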

195 changes: 195 additions & 0 deletions vagrant/Vagrantfile
@@ -0,0 +1,195 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'socket'

hostname = Socket.gethostname
localmachineip = IPSocket.getaddress(Socket.gethostname)
puts %Q{This machine has the IP '#{localmachineip}' and host name '#{hostname}'}

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = '2'

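# Deployment knobs, all overridable from the environment:
#   DEPLOYMENT_TYPE - 'origin' (default) or 'openshift-enterprise'
#   ORIGIN_OS       - 'centos' (default); any other value selects the Fedora box
#   RHSM_POOL       - subscription pool to attach for enterprise installs (default 'Employee SKU')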
deployment_type = ENV['DEPLOYMENT_TYPE'] || 'origin'
origin_os = ENV['ORIGIN_OS'] || 'centos'
rhsm_pool = ENV['RHSM_POOL'] || 'Employee SKU'

if deployment_type == 'openshift-enterprise'
REQUIRED_PLUGINS = %w(vagrant-registration vagrant-hostmanager landrush)
else
REQUIRED_PLUGINS = %w(vagrant-hostmanager landrush)
end

errors = []

def message(name)
"#{name} plugin is not installed, run `vagrant plugin install #{name}` to install it."
end
# Validate and collect error message if plugin is not installed
REQUIRED_PLUGINS.each { |plugin| errors << message(plugin) unless Vagrant.has_plugin?(plugin) }
unless errors.empty?
msg = errors.size > 1 ? "Errors: \n* #{errors.join("\n* ")}" : "Error: #{errors.first}"
fail Vagrant::Errors::VagrantError.new, msg
end

if deployment_type == 'openshift-enterprise'
box_name = 'rhel/7.2'
elsif origin_os == 'centos'
box_name = 'centos/7'
else
box_name = 'fedora/24-cloud-base'
end

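# Private network addressing: master1 gets .20, node1 .21, node2 .22, admin1 .23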
NETWORK_BASE = '192.168.50'
INTEGRATION_START_SEGMENT = 20

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

# if Vagrant.has_plugin?("vagrant-cachier")
# config.cache.scope = :machine
# end

config.hostmanager.enabled = true
config.hostmanager.manage_host = true
config.hostmanager.ignore_private_ip = false
# config.hostmanager.include_offline = true

config.landrush.enabled = true
config.landrush.tld = 'example.com'
config.landrush.guest_redirect_dns = false

if deployment_type == 'openshift-enterprise'
# vagrant-registration
if ENV.has_key?('SUB_USERNAME') && ENV.has_key?('SUB_PASSWORD')
config.registration.username = ENV['SUB_USERNAME']
config.registration.password = ENV['SUB_PASSWORD']
end

# Proxy Information from environment
config.registration.proxy = PROXY = (ENV['PROXY'] || '')
config.registration.proxyUser = PROXY_USER = (ENV['PROXY_USER'] || '')
config.registration.proxyPassword = PROXY_PASSWORD = (ENV['PROXY_PASSWORD'] || '')
config.registration.auto_attach = true
end

config.vm.provider "virtualbox" do |v, override|
#v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant","1"]
v.memory = 1024
v.cpus = 1
override.vm.box = box_name
provider_name = 'virtualbox'
end

config.vm.provider "libvirt" do |libvirt, override|
libvirt.cpus = 1
libvirt.memory = 1024
libvirt.driver = 'kvm'
override.vm.box = box_name
provider_name = 'libvirt'
end

config.vm.synced_folder '.', '/home/vagrant/sync', disabled: true
config.vm.synced_folder ".", "/vagrant", type: "rsync"
config.vm.synced_folder ".vagrant", "/vagrant_hidden", type: "rsync"

config.vm.define "master1" do |master1|
master1.vm.network :private_network, ip: "#{NETWORK_BASE}.#{INTEGRATION_START_SEGMENT}"
master1.vm.hostname = "master1.example.com"
end

config.vm.define "node1" do |node1|
node1.vm.network :private_network, ip: "#{NETWORK_BASE}.#{INTEGRATION_START_SEGMENT + 1}"
node1.vm.hostname = "node1.example.com"
end

config.vm.define "node2" do |node2|
node2.vm.network :private_network, ip: "#{NETWORK_BASE}.#{INTEGRATION_START_SEGMENT + 2}"
node2.vm.hostname = "node2.example.com"
end

config.vm.define "admin1" do |admin1|
admin1.vm.network :private_network, ip: "#{NETWORK_BASE}.#{INTEGRATION_START_SEGMENT + 3}"
admin1.vm.hostname = "admin1.example.com"

if deployment_type == 'openshift-enterprise'
config_playbook = "/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml"
else
config_playbook = "/home/vagrant/openshift-ansible/playbooks/byo/config.yml"
end

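# Inventory handed to the ansible_local provisioner; the OSEv3 group and its
# children mirror the layout expected by the openshift-ansible byo playbooks.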
ansible_groups = {
OSEv3: ["master1", "node1", "node2"],
'OSEv3:children': ["masters", "nodes", "etcd", "nfs"],
'OSEv3:vars': {
ansible_become: true,
ansible_ssh_user: 'vagrant',
deployment_type: deployment_type,
openshift_master_identity_providers: "[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]",
openshift_master_htpasswd_users: "{'admin': '$2y$11$jJioXC3WgyRq.FVy1vqtfuywDwEZp18d9Kkqb4MgFVzlgCGQNwy36'}",
openshift_master_default_subdomain: 'apps.example.com',
osm_default_node_selector: 'region=primary',
openshift_hosted_registry_selector: 'region=infra',
openshift_hosted_registry_replicas: 1,
openshift_hosted_registry_storage_kind: 'nfs',
openshift_hosted_registry_storage_access_modes: ['ReadWriteMany'],
openshift_hosted_registry_storage_host: 'admin1.example.com',
openshift_hosted_registry_storage_nfs_directory: '/srv/nfs',
openshift_hosted_registry_storage_volume_name: 'registry',
openshift_hosted_registry_storage_volume_size: '2Gi',
rhsm_user: "#{ENV.fetch('SUB_USERNAME', '')}",
rhsm_password: "#{ENV.fetch('SUB_PASSWORD', '')}",
rhsm_pool: rhsm_pool,
},
etcd: ["master1"],
nfs: ["admin1"],
masters: ["master1"],
nodes: ["master1", "node1", "node2"],
}

ansible_host_vars = {
master1: {
openshift_ip: '192.168.50.20',
openshift_node_labels: "\"{'region': 'infra', 'zone': 'default'}\"",
openshift_schedulable: true,
ansible_host: '192.168.50.20',
ansible_ssh_private_key_file: "/home/vagrant/.ssh/master1.key"
},
node1: {
openshift_ip: '192.168.50.21',
openshift_node_labels: "\"{'region': 'primary', 'zone': 'east'}\"",
openshift_schedulable: true,
ansible_host: '192.168.50.21',
ansible_ssh_private_key_file: "/home/vagrant/.ssh/node1.key"
},
node2: {
openshift_ip: '192.168.50.22',
openshift_node_labels: "\"{'region': 'primary', 'zone': 'west'}\"",
openshift_schedulable: true,
ansible_host: '192.168.50.22',
ansible_ssh_private_key_file: "/home/vagrant/.ssh/node2.key"
},
admin1: {
ansible_connection: 'local',
deployment_type: deployment_type
}
}

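# Two provisioning passes: install.yaml bootstraps the hosts (ssh keys, and
# subscription/repos for enterprise), then the byo config playbook performs
# the actual OpenShift install.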
admin1.vm.provision :ansible_local do |ansible|
ansible.verbose = true
ansible.install = true
ansible.limit = 'OSEv3:localhost'
ansible.playbook = 'install.yaml'
ansible.groups = ansible_groups
ansible.host_vars = ansible_host_vars
end

admin1.vm.provision :ansible_local do |ansible|
ansible.verbose = true
ansible.install = false
ansible.limit = "OSEv3:localhost"
ansible.playbook = config_playbook
ansible.groups = ansible_groups
ansible.host_vars = ansible_host_vars
end
end
end
7 changes: 7 additions & 0 deletions vagrant/ansible.cfg
@@ -0,0 +1,7 @@
[defaults]
host_key_checking = no
retry_files_enabled = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
pipelining = True
38 changes: 38 additions & 0 deletions vagrant/install.yaml
@@ -0,0 +1,38 @@
---
- name: Configure ssh keys
  hosts: localhost
  tasks:
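    # Vagrant keeps each machine's private key under .vagrant/machines, which is
    # rsynced into the guest at /vagrant_hidden; symlink the keys into ~/.ssh so
    # the ansible_ssh_private_key_file paths set in the Vagrantfile resolve.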
    - command: find /vagrant_hidden/machines -name private_key
      register: private_keys

    - file:
        src: "{{ item }}"
        dest: "/home/vagrant/.ssh/{{ item | regex_replace('^.*/machines/([^/]*)/.*', '\\1') }}.key"
        state: link
      with_items: "{{ private_keys.stdout_lines }}"


- name: Host bootstrapping
  hosts: all
  roles:
    - role: rhsm-subscription
      when: "{{ deployment_type == 'openshift-enterprise' }}"
    - role: rhsm-repos
      when: "{{ deployment_type == 'openshift-enterprise' }}"
  tasks:
    # Vagrant's "change host name" capability for Fedora/EL
    # maps hostname to loopback, conflicting with hostmanager.
    # We must repair /etc/hosts
    - replace:
        dest: /etc/hosts
        regexp: '^(127\.0\.0\.1\s*)\S*\.example\.com (.*)'
        replace: '\1\2'

- name: Bootstrap the OpenShift installer on admin1
  hosts: admin1
  tasks:
    - include: tasks/install_bootstrap_origin.yaml
      when: "{{ deployment_type == 'origin' }}"

    - include: tasks/install_bootstrap_enterprise.yaml
      when: "{{ deployment_type == 'openshift-enterprise' }}"
1 change: 1 addition & 0 deletions vagrant/roles/rhsm-repos
1 change: 1 addition & 0 deletions vagrant/roles/rhsm-subscription
4 changes: 4 additions & 0 deletions vagrant/tasks/install_bootstrap_enterprise.yaml
@@ -0,0 +1,4 @@
---
- package:
    name: atomic-openshift-utils
    state: present
