
This page details how Crowbar can be installed and run within VirtualBox.

You might want to have a look at https://github.com/iteh/crowbar-virtualbox

Note: currently this setup has been successfully run with one VM for Crowbar and a second VM hosting the deployment of all OpenStack services. Work on a multi-node install is ongoing.

Note: If your client VM comes up with an IP on the 192.168.56.0/24 network, you'll need to disable the DHCP server that VirtualBox set up on your host-only network - it responds faster than the Crowbar DHCP server, so the client never finishes its configuration and is never discovered. Remove it with something like:

VBoxManage dhcpserver remove --netname HostInterfaceNetworking-vboxnet0
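Before removing it, you can confirm the stray DHCP server is actually present by listing the DHCP servers VirtualBox knows about (the network name shown matches VirtualBox's default host-only naming; adjust if yours differs):

```shell
# List all VirtualBox DHCP servers; look for an entry whose NetworkName
# is HostInterfaceNetworking-vboxnet0
VBoxManage list dhcpservers
```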

The Main Prerequisites

Configure Three Host-Only Networks in VirtualBox

VirtualBox provides an extensive command-line interface (VBoxManage) that can be used to configure every aspect of the installation. To configure the three required host-only networks, enter the following (can be put into a script or run from the shell prompt):

NOTE: IT IS ASSUMED THAT vboxnet0, vboxnet1, AND vboxnet2 DO NOT EXIST ON YOUR MACHINE ALREADY. IF THEY ALREADY EXIST, THIS WILL FAIL. CURRENTLY, DUE TO BUGS IN VIRTUALBOX, THERE IS NO GOOD WAY AROUND THIS.
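You can check up front whether any host-only interfaces already exist; if so, remove them before running the script below (a sketch, assuming nothing else on your machine depends on those interfaces):

```shell
# Show existing host-only interfaces; the setup script assumes none exist
VBoxManage list hostonlyifs
# If vboxnet0 (or vboxnet1/vboxnet2) is already listed, remove it first:
VBoxManage hostonlyif remove vboxnet0
```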

#!/bin/bash
# Create the three host-only interfaces; VirtualBox names them
# vboxnet0, vboxnet1, vboxnet2 in creation order
VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif create
# Configure first virtual host-only 192.168.124.1/24 network (used for communication between Crowbar and Crowbar-provisioned nodes)
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.124.1 --netmask 255.255.255.0
# Configure second virtual host-only 192.168.122.1/24 network (used for communication between nodes provisioned for use with OpenStack)
VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.122.1 --netmask 255.255.255.0
# Configure third virtual host-only 192.168.126.1/24 network (used for communication to OpenStack instances)
VBoxManage hostonlyif ipconfig vboxnet2 --ip 192.168.126.1 --netmask 255.255.255.0

Create a new VM for the crowbar_admin Server

To create and configure the crowbar_admin server VM hardware (can be put into a script or run from the shell prompt):

#!/bin/bash
# Create and Register the crowbar_admin VM with VirtualBox
VBoxManage createvm --register --name 'crowbar_admin'

# Configure the crowbar_admin VM Storage Controllers
VBoxManage storagectl 'crowbar_admin' --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on
VBoxManage storagectl 'crowbar_admin' --name "SATA Controller" --add sata --controller IntelAHCI --sataportcount 1 --hostiocache off

# Create an empty Fixed-size 8GB Hard Drive VDI
VBoxManage createhd --filename ~/VirtualBox\ VMs/crowbar_admin/crowbar_admin.vdi --size 8192 --variant Fixed

# Configure the settings for the crowbar_admin VM
VBoxManage modifyvm 'crowbar_admin' --nic1 hostonly --hostonlyadapter1 vboxnet0 --nictype1 Am79C973 --cableconnected1 on --nic2 nat --nictype2 Am79C973 --cableconnected2 on --memory 4096 --ostype 'Ubuntu_64' --ioapic on --rtcuseutc on --cpus 1 --pae off --boot1 floppy --boot2 dvd --boot3 disk --chipset piix3 --vram 16

# Attach the bootable Openstack Crowbar ISO to the crowbar_admin VM
VBoxManage storageattach 'crowbar_admin' --storagectl "IDE Controller" --device 0 --port 0 --type dvddrive --medium ~/openstack111014.iso

# Attach the 8GB hard drive to the crowbar_admin VM
VBoxManage storageattach 'crowbar_admin' --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/crowbar_admin/crowbar_admin.vdi

# Start the VM!
VBoxManage startvm crowbar_admin

Install Crowbar

Steps taken from https://github.com/dellcloudedge/crowbar/wiki/Install-crowbar

  1. Boot using the ISO or physical DVD and it will set up Ubuntu 10.10 and stage Crowbar for install
  2. Log in as crowbar/crowbar (Note: for v1 and pre 9/1/11 ISOs, the username/password is openstack/openstack)
  3. Switch to root access

    sudo -i
    
  4. Change to the scripts directory (the installer script is located in this directory):

    cd /tftpboot/ubuntu_dvd/extra
    
  5. Run the installer script:

    # The FQDN can be set to whatever you want your admin node to have, but this default is fine.
    ./install admin.crowbar.org
    
  6. Reboot

    sudo reboot now
    

Once the VM has rebooted, a web interface should be available and accessible from the host at http://192.168.124.10:3000.
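From the host, you can verify the web interface is actually answering before moving on (a quick check using standard curl options):

```shell
# Print just the HTTP status code from the Crowbar UI; a 200 (or a 3xx
# redirect) means the Rails app has finished starting
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.124.10:3000/
```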

Fix Included UEC Image

You do not need to do this if you have built Crowbar from source or if you used the openstack111111.iso. The included UEC pre-built image that comes with the Crowbar ISO is not complete or usable. A proper UEC image must be copied into the Crowbar Admin node before setting up the 'nova' barclamp.

  1. SSH into the Crowbar Admin node

    ssh crowbar@192.168.124.10
    
  2. Bring up the NAT interface (eth1) to enable communication with the external host LAN

    sudo dhclient eth1
    
  3. Download the appropriate UEC image:

    sudo curl -o /tftpboot/ubuntu_dvd/extra/files/ami/ubuntu-11.04-server-uec-amd64.tar.gz http://uec-images.ubuntu.com/releases/natty/release/ubuntu-11.04-server-cloudimg-amd64.tar.gz
    
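A truncated download is easy to miss, so it may be worth verifying the tarball is intact before applying the nova barclamp (a sketch; the path matches the curl command above):

```shell
# Listing the archive contents will fail loudly if the download was cut short
sudo tar tzf /tftpboot/ubuntu_dvd/extra/files/ami/ubuntu-11.04-server-uec-amd64.tar.gz > /dev/null \
  && echo "image archive OK"
```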

Create a new VM for use as a Client Node

This node is used to test provisioning from the crowbar_admin server. To create and configure the crowbar_client VM hardware (can be put into a script or run from the shell prompt):

#!/bin/bash

# Create and Register the crowbar_client VM with VirtualBox
VBoxManage createvm --register --name 'crowbar_client'

# Configure the crowbar_client VM Storage Controllers
VBoxManage storagectl 'crowbar_client' --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on
VBoxManage storagectl 'crowbar_client' --name "SATA Controller" --add sata --controller IntelAHCI --sataportcount 1 --hostiocache off

# Create an empty Fixed-size 8GB Hard Drive VDI
VBoxManage createhd --filename ~/VirtualBox\ VMs/crowbar_client/crowbar_client.vdi --size 8192 --variant Fixed

# Configure the settings for the crowbar_client VM
VBoxManage modifyvm 'crowbar_client' --nic1 hostonly --hostonlyadapter1 vboxnet0 --nictype1 Am79C973 --cableconnected1 on
VBoxManage modifyvm 'crowbar_client' --nic2 hostonly --hostonlyadapter2 vboxnet1 --nictype2 82545EM --cableconnected2 on
VBoxManage modifyvm 'crowbar_client' --memory 2048 --ostype 'Ubuntu_64' --ioapic on --rtcuseutc on --cpus 1 --pae off --boot1 net --boot2 disk --chipset piix3 --vram 16

# Attach an empty DVD drive to the crowbar_client VM (clients boot via PXE, so no ISO is needed)
VBoxManage storageattach 'crowbar_client' --storagectl "IDE Controller" --device 0 --port 0 --type dvddrive --medium emptydrive

# Attach the 8GB hard drive to the crowbar_client VM
VBoxManage storageattach 'crowbar_client' --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/crowbar_client/crowbar_client.vdi

# Start the VM!
VBoxManage startvm crowbar_client

Create a new VM for use as an Openstack Client Node

This creates a numbered VM with three dynamically allocated disks (they expand as needed): /dev/sda, a small system disk, plus /dev/sdb and /dev/sdc, larger disks that the swift barclamp will automatically allocate.

#!/bin/bash
if [ -z "$1" ]; then
    echo "pass a number as \$1 (storage nodes are numbered, i.e. 1 and 2)"
    exit 2
fi
name="crowbar_swift_$1"

# Create and Register the VM with VirtualBox
VBoxManage createvm --register --name "$name"

# Configure the VM Storage Controllers
VBoxManage storagectl "$name" --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on
VBoxManage storagectl "$name" --name "SATA Controller" --add sata --controller IntelAHCI --sataportcount 16 --hostiocache off

# Create hard disks
VBoxManage createhd --filename ~/VirtualBox\ VMs/"$name"/"$name-main.vdi" --size 5000
VBoxManage createhd --filename ~/VirtualBox\ VMs/"$name"/"$name-stor0.vdi" --size 9000
VBoxManage createhd --filename ~/VirtualBox\ VMs/"$name"/"$name-stor1.vdi" --size 9000

# Configure the settings for the VM
VBoxManage modifyvm "$name" --nic1 hostonly --hostonlyadapter1 vboxnet0 --nictype1 Am79C973 --cableconnected1 on
VBoxManage modifyvm "$name" --nic2 hostonly --hostonlyadapter2 vboxnet1 --nictype2 82545EM --cableconnected2 on
VBoxManage modifyvm "$name" --memory 2048 --ostype 'Ubuntu_64' --ioapic on --rtcuseutc on --cpus 1 --pae off --boot1 net --boot2 disk --chipset piix3 --vram 16

# Attach an empty DVD drive to the VM (clients boot via PXE, so no ISO is needed)
VBoxManage storageattach "$name" --storagectl "IDE Controller" --device 0 --port 0 --type dvddrive --medium emptydrive

# Attach the hard drives to the VM
VBoxManage storageattach "$name" --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/"$name"/"$name-main.vdi"
VBoxManage storageattach "$name" --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium ~/VirtualBox\ VMs/"$name"/"$name-stor0.vdi"
VBoxManage storageattach "$name" --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium ~/VirtualBox\ VMs/"$name"/"$name-stor1.vdi"

# Start the VM!
VBoxManage startvm "$name"
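Saved to a file, the script above can then be invoked once per storage node (the filename mk_swift_vm.sh here is only an example):

```shell
chmod +x mk_swift_vm.sh
./mk_swift_vm.sh 1   # creates and starts crowbar_swift_1
./mk_swift_vm.sh 2   # creates and starts crowbar_swift_2
```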

Getting OpenStack Installed

Following the process outlined at http://www.youtube.com/watch?v=5r0Dh-r3XZg

  1. Turn on one or more VMs that will act as hosts for Crowbar to provision (as described above).

    • Wait for the VM to show up in the Crowbar Admin web UI with a blinking yellow icon indicating Sledgehammer is waiting for instructions (state discovered: the VM is ready to be allocated)
    • The VM will display something like the following on the console when it has reached a state ready for allocation:

      BMC_ROUTER=
      BMC_ADDRESS=192.168.124.163
      BMC_NETMASK=255.255.255.0
      HOSTNAME=d08-00-27-cf-03-f6.crowbar.org
      NODE_STATE=false
      
  2. SSH into the crowbar_admin Server (the password is crowbar):

    ssh crowbar@192.168.124.10
    
  3. When nodes are discovered, they must be allocated. The following process can be used to allocate nodes:

    1. Get a list of discovered nodes:

      sudo /opt/dell/barclamps/crowbar/bin/crowbar_machines -U crowbar -P crowbar list
      
    2. Allocate the node(s):

      sudo /opt/dell/barclamps/crowbar/bin/crowbar_machines -U crowbar -P crowbar allocate <<node_name>>
      
    3. The client node will begin the allocation which includes:

      • Initial Chef run
      • Reboot
      • Install of base system via PXE & preseed
      • Reboot into newly installed system (login prompt)
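When several nodes come up at once, steps 1 and 2 can be combined into a loop. This is a sketch: it assumes crowbar_machines list prints one node name per line and that the admin node's name starts with "admin." - check your own list output before relying on the filter.

```shell
#!/bin/bash
CROWBAR=/opt/dell/barclamps/crowbar/bin/crowbar_machines
# Allocate every discovered node except the admin node itself
for node in $(sudo "$CROWBAR" -U crowbar -P crowbar list | grep -v '^admin\.'); do
    sudo "$CROWBAR" -U crowbar -P crowbar allocate "$node"
done
```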

OpenStack Installation Order

Order matters when provisioning OpenStack pieces on the various host nodes. The proper order is as follows:

MySQL

Create a new Proposal for MySQL:

  1. Within the MySQL Proposal setup under the Deployment section, drag the VM hostname from Available Nodes into the mysql-server box.
  2. Click Apply Proposal to initiate setting up the MySQL service on the selected host.

Glance

Create a new Proposal for Glance:

  1. Within the Glance Proposal setup under the Deployment section, drag the VM hostname from Available Nodes into the glance-server box.
  2. Click Apply Proposal to initiate setting up the Glance service on the selected host.

Nova

Create a new Proposal for Nova:

  1. Within the Nova Proposal setup under the Attributes section:
    • When using VirtualBox to run the OpenStack environment, set the libvirt_type to qemu.
  2. Drag the hostname from Available Nodes to nova-multi-controller for the host which will operate as the controller node for the OpenStack cloud.
  3. Drag the hostname from Available Nodes to nova-multi-compute for the host(s) which will operate as the compute nodes (where instances are actually run).
    • In a single node OpenStack setup, the same host will be in both the nova-multi-controller & nova-multi-compute boxes.
  4. Click Apply Proposal to initiate setting up the Nova service on the selected host(s).

Nova Dashboard

Create a new Proposal for Nova Dashboard:

  1. Within the Nova Dashboard Proposal setup under the Deployment section, drag the hostname from Available Nodes to nova_dashboard-server for the host that will serve the Dashboard for the OpenStack cloud.
  2. Click Apply Proposal to initiate setting up the Nova Dashboard service on the selected host.

Keystone

Create a new Proposal for Keystone:

  1. Within the Keystone Proposal setup under the Deployment section, drag the hostname from Available Nodes to keystone-server for the host that will service Keystone for the OpenStack cloud.
  2. Click Apply Proposal to initiate setting up the Keystone service on the selected host.

OpenStack Dashboard Login

To access the OpenStack Dashboard, go to the System Nodes page and click the node on which you installed nova-dashboard. From there you'll see a series of links available on that node; click the one labeled Nova Dashboard. The default login for the OpenStack Dashboard is:

  • Username: admin
  • Password: secrete

Getting Project Zipfile

By default a nova user is created. To get the credentials zipfile, perform the following process:

  1. SSH into the nova controller node created by Crowbar:

    ssh crowbar@192.168.124.xxx
    
  2. Issue the following command to get the credentials for the 'nova' user:

    sudo nova-manage project zipfile --project demo --user nova
    
  3. A file named nova.zip will be deposited in the crowbar user home directory

  4. Exit out of the nova controller node & scp the file to your desktop

    scp crowbar@192.168.124.xxx:~/nova.zip ~/
    

Troubleshooting

  • If the Crowbar web interface suddenly throws an error, restart Apache by running the following command, then try to reload the page:

    /etc/init.d/apache2 restart
    

Dev Mode

Putting the Crowbar service in Dev Mode activates the full functionality of the Bulk Edit feature which is still experimental.

  • To enable Dev mode:

    /opt/dell/bin/dev_mode.sh
    
  • To disable Dev mode:

    1. Kill the running Dev mode process by pressing Ctrl-C
    2. Restart Apache:

      service apache2 restart
      
    3. Start Crowbar service (it will likely take a minute)

      service crowbar start
      
