adding coreos install media #14

Merged
merged 19 commits into from Nov 24, 2015
6 changes: 3 additions & 3 deletions .gitignore
@@ -1,3 +1,3 @@
dev-tools/.vagrant
dev-tools/config/monorail_rack.cfg
dev-tools/bin/pxe*
example/.vagrant
example/config/monorail_rack.cfg
example/bin/pxe*
Contributor

oh good catch!

3 changes: 3 additions & 0 deletions .gitmodules
@@ -19,3 +19,6 @@
[submodule "on-tasks"]
path = on-tasks
url = https://github.com/RackHD/on-tasks.git
[submodule "on-imagebuilder"]
path = on-imagebuilder
url = https://github.com/RackHD/on-imagebuilder.git
184 changes: 125 additions & 59 deletions example/README.md
@@ -1,114 +1,180 @@
## DOCUMENTATION

The monorail_rack setup script is an easy "one button push" script to deploy a virtual rack within virtualbox to emulate a monorail server and some number of virtualbox PXE-booting clients. The environment is tied together using a virtual network called closednet set to our default subnet of 172.31.128.x for servicing DHCP and TFTP to the PXE clients.
The monorail_rack setup script is an easy "one button push" script to deploy
a 'virtual rack' using virtualbox. This emulates a RackHD server and some number
of virtual servers - using virtualbox PXE-booting VMs. Private virtual networks
simulate the connections between servers that would otherwise be on a switch
in a rack.

## PRE-REQS
The virtual network `closednet` is set to our default subnet of 172.31.128.x
to connect DHCP and TFTP from RackHD to the PXE clients.
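
The Vagrantfile change later in this diff is what wires up that network:

    target.vm.network "private_network", ip: "172.31.128.1", virtualbox__intnet: "closednet"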

We expect the latest version of GIT, Vagrant, and Ansible installed onto the host system.
## PRE-REQS / SCRIPT EXPECTATIONS

We expect static files to be located in the correct path from the parent directory of dev-tools:
We expect the latest versions of git, Vagrant, and Ansible to be installed on
your system in order to use this script.

i.e.
~/<repos directory>/RackHD/on-http/static/http/common/
We also rely on this project's structure of submodules to link the source
into the VM (through vagrant). The ansible roles are written to expect the
source to be autoloaded on the virtual machine with directory mappings
configured in the Vagrantfile:

Our static files can be built locally using the tools found here:
https://github.com/RackHD/on-imagebuilder
for example:

## SET UP INSTRUCTIONS
~/<repos directory>/RackHD/on-http/static/http/common/

The static files that RackHD uses can be built locally using the tools found in
the on-imagebuilder repository (https://github.com/RackHD/on-imagebuilder),
and this script will download the latest built versions of that open source
repository's outputs, which are stored on bintray.

Clone RackHD repo to your local git directory.
## SET UP INSTRUCTIONS

i.e.
~/<repos directory>/RackHD/
Clone RackHD repo to your local git directory.

$ git clone https://github.com/RackHD/RackHD
$ cd RackHD

Within the example directory, create config and run the setup command:

$ cd ~/<repos directory>/RackHD/example/config/
Change into the `example` directory, create the config, and run the setup command:

$ pushd example/config/
$ cp ./monorail_rack.cfg.example ./monorail_rack.cfg
$ popd

Edits can be made to this new file to adjust the number of pxe clients created.
Please see below for more information on the configuration file.

$ cd ~/<repos directory>/RackHD/example/bin/
Edits can be made to this new file to adjust the number of pxe clients created.

$ pushd bin/
$ ./monorail_rack

Copy local basic static files to common directory:
Now ssh into the RackHD server and start the services:

$ cp ~/<static files directory>/* ~/<repos directory>/RackHD/on-http/static/http/common/
$ vagrant ssh
$ sudo nf start
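
You can also start a subset of the services by name, as the previous version of this README noted (assuming those service names still apply):

    $ sudo nf start [graph,http,dhcp,tftp,syslog]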

Now ssh into the monorail server:
## TESTING

$ vagrant ssh dev
Once you've started the services, the RackHD API will be available on your local
machine through port 9090. For example, you should be able to view the RackHD
API documentation that's set up with the service at http://localhost:9090/docs.

Bring up all monorail services:
You can also interact with the APIs using curl from the command line of your
local machine.

$ sudo nf start
or $ sudo nf start [graph,http,dhcp,tftp,syslog]
To view the list of nodes that have been discovered:
$ curl http://localhost:9090/api/1.1/nodes | python -m json.tool

Now that the services are running we can begin powering on pxe clients and watch them boot.
View the list of catalogs logged into RackHD:
$ curl http://localhost:9090/api/1.1/catalogs | python -m json.tool

(both of these should result in empty lists in a brand new installation)
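
For example, on a brand new installation you should see an empty JSON list:

    $ curl http://localhost:9090/api/1.1/nodes
    []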

Provision an existing monorail server:
### Install a default workflow for Virtualbox VMs and a SKU definition

$ vagrant provision
This example includes a workflow that we'll use when we identify a "virtualbox"
SKU with RackHD. This workflow sets up no-op out-of-band management settings
for the demo and triggers an installation of CoreOS as the default flow to run
once the "virtualbox" SKU has been identified. We'll load it into our library
of workflows:

## CONFIGURATION FILE
cd ~/src/rackhd/example
# make sure you're in the example directory to reference the sample JSON correctly

```
# monorail_rack.cfg
# used to customize default deployment
# edit $pxe_count to change how many virtualbox PXE-booting clients are created when running
# the monorail_rack setup script.
curl -H "Content-Type: application/json" \
-X PUT --data @samples/virtualbox_install_coreos.json \
http://localhost:9090/api/1.1/workflows

# deployment variables
pxe_count=1
```
To enable that workflow, we also need to add a SKU definition that includes
the option of another workflow to run once the SKU has been identified. This
takes advantage of the `Graph.SKU.Discovery` workflow, which will attempt to
identify a SKU and run another workflow if specified.

Changing the number of $pxe_count within the running configuration script will affect how many headless pxe clients are created when running the monorail_rack setup script.
cd ~/src/rackhd/example
# make sure you're in the example directory to reference the sample JSON correctly

Please note, an example configuration file is provided and you must copy that file to a new file with the same name excluding the .example extension.
curl -H "Content-Type: application/json" \
-X POST --data @samples/virtualbox_sku.json \
http://localhost:9090/api/1.1/skus

View the current SKU definitions:

## ENVIRONMENT BREAKDOWN
$ curl http://localhost:9090/api/1.1/skus | python -m json.tool
[
    {
        "createdAt": "2015-11-21T00:46:04.068Z",
        "discoveryGraphName": "Graph.DefaultVirtualBox.InstallCoreOS",
        "discoveryGraphOptions": {},
        "id": "564fbecc1dee9e7d2f1d33ca",
        "name": "Noop OBM settings for VirtualBox nodes",
        "rules": [
            {
                "equals": "VirtualBox",
                "path": "dmi.System Information.Product Name"
            }
        ],
        "updatedAt": "2015-11-21T00:46:04.068Z"
    }
]

Remove an existing monorail server:
## HACKING THESE SCRIPTS

$ vagrant destroy
If you're hacking on this script or the ansible roles to change the
functionality, you can shortcut some of this process by just invoking
`vagrant provision` to use ansible to update the VM that's already been created.
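
From the `example` directory (where the Vagrantfile lives):

    $ vagrant provision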

Please note all pxe clients must be removed by hand currently.

### CHANGE NODE VERSION

## CHANGE NODE VERSION
Currently this example uses `n` (https://github.com/tj/n) to install node
version `0.10.40`. You can change what version of node is used by default by
logging into the Vagrant instance and using the `n` command:

Currently the monorail server is built with Node v0.10.40 but this can be changed.
vagrant ssh
sudo ~/n/bin/n <version>
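
Running `n` with no version argument brings up its interactive menu for
switching among installed versions, as the previous README noted:

    sudo ~/n/bin/n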

Install additional Node versions

$ sudo ~/n/bin/n <version>
### CONFIGURATION FILE

Use n's menu system to change running Node version
```
# monorail_rack.cfg
# used to customize default deployment
# edit $pxe_count to change how many virtualbox PXE-booting clients are created when running
# the monorail_rack setup script.

$ sudo ~/n/bin/n
# deployment variables
pxe_count=1
```

## CHANGE CODE VERSION USED
Changing the value of `pxe_count` in the configuration file will affect how
many headless PXE clients are created when running the monorail_rack
setup script.
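
For example, to create three PXE clients instead of one (the value here is just an illustration):

```
# monorail_rack.cfg
pxe_count=3
```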

To check out a different commit than what is referenced by git submodule, edit the vagrant file (RackHD/example/Vagrantfile) to specify the branch variable for the ansible provisioner.
Please note, an example configuration file is provided; you must copy that
file to a new file with the same name, excluding the .example extension.

### CHANGE WHAT BRANCH IS USED

To check out a different commit than what is referenced by the git submodules,
edit the vagrant file (RackHD/example/Vagrantfile) to specify the `branch`
variable for the ansible provisioner. A commented-out line exists in
`Vagrantfile` that you can enable and edit.

```
# If you wish to use a specific commit, include the variable below.
ansible.extra_vars = { branch: "master" }
```

## TESTING
## ENVIRONMENT BREAKDOWN

Test that a node was discovered from the monorail server:
The monorail_rack script doesn't currently have the capability to shut down or
remove anything. To get rid of the RackHD server you can use:

$ vagrant destroy

$ curl localhost:8080/api/1.1/nodes | python -m json.tool
Any PXE client VMs you created will need to be removed by hand.
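
A sketch of removing one by hand with `vboxmanage`, assuming the default `pxe-1` name that the setup script generates:

    $ vboxmanage unregistervm pxe-1 --delete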

Check that cataloging has happened:
## RUNNING THE WEB UI

$ mongo pxe --eval 'db.catalogs.count()'
We are experimenting with single-page web UI applications within the repository
`on-web-ui` (https://github.com/rackhd/on-web-ui). That repository includes a
README and is set up to host the UI externally to RackHD. Follow the
README instructions in that repository to run the application; you can
change the settings while it runs to point at this instance of RackHD at
`http://localhost:9090/`.
16 changes: 14 additions & 2 deletions example/Vagrantfile
@@ -15,10 +15,13 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
target.vm.provision "ansible" do |ansible|
ansible.playbook = "dev.yml"

# If you wish to use a specific commit, include the variable below.
# If you wish to use a specific branch, enable the variable below
# and the repos role will check out code on that branch across
# the RackHD git repositories.
# ansible.extra_vars = { branch: "master" }

# if the playbook seems hung try uncommenting below to debug
# if the playbook seems hung, try uncommenting below to enable
# debugging level output
# ansible.verbose = "vvv"
end

@@ -28,8 +31,17 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
end

# Create a public network, which is generally matched to a bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# target.vm.network :public_network

target.vm.network "private_network", ip: "172.31.128.1", virtualbox__intnet: "closednet"
target.vm.network "forwarded_port", guest: 8080, host: 9090

# If true, then any SSH connections made will enable agent forwarding.
# Default value: false
target.ssh.forward_agent = true

end
end
35 changes: 16 additions & 19 deletions example/bin/monorail_rack
@@ -6,8 +6,8 @@
##################
# INCLUDE CONFIG #
##################

source ../config/monorail_rack.cfg
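# resolve the script's own directory so the config is found regardless of where it's invoked from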
SCRIPT_DIR=$(cd $(dirname $0) && pwd)
source $SCRIPT_DIR/../config/monorail_rack.cfg


############
@@ -46,13 +46,6 @@ done
echo "I'll set up monorail server now..."
vagrant up dev

# This is a placeholder for different yml calls for the above todo~
# if [ $getCommonFiles eq "yes" ]
# then
# vagrant up dev1
# fi


######################
# DEPLOY PXE CLIENTS #
######################
@@ -61,16 +54,20 @@ if [ $pxe_count ]
then
for (( i=1; i <= $pxe_count; i++ ))
do
echo "deploying pxe: $i"
vmName="pxe-$i"

vboxmanage createvm --name $vmName --register;
vboxmanage createhd --filename $vmName --size 8192;
VBoxManage storagectl $vmName --name "SATA Controller" --add sata --controller IntelAHCI
VBoxManage storageattach $vmName --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium $vmName.vdi
vboxmanage modifyvm $vmName --ostype Ubuntu --boot1 net --memory 350;
vboxmanage modifyvm $vmName --nic1 intnet --intnet1 closednet --nicpromisc1 allow-all;
vboxmanage modifyvm $vmName --nictype1 82540EM --macaddress1 auto;

if [[ ! -e $vmName.vdi ]]; then # check to see if PXE vm already exists
echo "deploying pxe: $i"
vboxmanage createvm --name $vmName --register;
vboxmanage createhd --filename $vmName --size 8192;
vboxmanage storagectl $vmName --name "SATA Controller" --add sata --controller IntelAHCI
vboxmanage storageattach $vmName --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium $vmName.vdi
vboxmanage modifyvm $vmName --ostype Ubuntu --boot1 net --memory 768;
Contributor

Should we just bump memory to 1024?

Member Author

I tried with both, and was able to get away with 768MB, so I thought I'd try and keep it as small as possible from a quick/dev setup perspective

Contributor

k

vboxmanage modifyvm $vmName --nic1 intnet --intnet1 closednet --nicpromisc1 allow-all;
vboxmanage modifyvm $vmName --nictype1 82540EM --macaddress1 auto;
fi
done
fi

echo "starting the services"
echo "The RackHD documentation will be available shortly at http://localhost:9090/docs"
vagrant ssh dev -c "sudo nf start"
4 changes: 2 additions & 2 deletions example/roles/node/tasks/main.yml
@@ -12,8 +12,8 @@
- name: Install node
shell: /home/vagrant/n/bin/n {{ item }}
with_items:
- 4.1.1
- 0.12.7
# - 4.1.1
# - 0.12.7
- 0.10.40
sudo: yes
when: download_n|changed