FAQ CS
CSOSKQ1: Can CBTOOL's OpenStack cloud adapter be used to instantiate VMs on Rackspace?
CSOSKA1: Yes. While installing the dependencies on the CBTOOL Orchestrator node (see how to do it here), install the Python novaclient with the Rackspace authentication extensions with `sudo pip install rackspace-novaclient` (if you already installed the "regular" python-novaclient, just use `sudo pip install --upgrade rackspace-novaclient`).
The OpenStack (OSK) section of your configuration file should look like this:
```
[USER-DEFINED : CLOUDOPTION_TESTOPENSTACK]
OSK_ACCESS = https://identity.api.rackspacecloud.com/v2.0/
OSK_CREDENTIALS = <YOUR_RACKSPACE_USER>-<YOUR_RACKSPACE_API_KEY>-<YOUR_RACKSPACE_USER>
OSK_SECURITY_GROUPS = default
OSK_INITIAL_VMCS = DFW:sut
OSK_LOGIN = cbuser
```
The possible values for the `OSK_INITIAL_VMCS` attribute are "DFW:sut", "ORD:sut", or both. The `RACKSPACE_API_KEY` can be obtained from the "Rackspace Control Panel": go to the upper right corner and click on the drop-down menu under your username.
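Purely as an illustration, a filled-in section might look like the following (the username and API key are hypothetical placeholders, and listing both regions as a comma-separated pair in `OSK_INITIAL_VMCS` is an assumption based on the attribute accepting both values):
```
[USER-DEFINED : CLOUDOPTION_TESTOPENSTACK]
OSK_ACCESS = https://identity.api.rackspacecloud.com/v2.0/
OSK_CREDENTIALS = jdoe-0123456789abcdef-jdoe
OSK_SECURITY_GROUPS = default
OSK_INITIAL_VMCS = DFW:sut,ORD:sut
OSK_LOGIN = cbuser
```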
CSOSKQ2: Can I use CBTOOL with OpenStack's Nova "Fake Driver"?
CSOSKA2: Yes. When using the OpenStack Nova Fake Driver, CBTOOL needs to be instructed to skip the execution of any steps beyond step 5 (as shown in the Deployment Detailed Timeline). After all, there will be no actual VM to connect to, and thus the check for boot completion, the upload of a copy of the code, and the generic and application startup processes cannot be performed. To this end, execute the following commands:
```
cldalter vm_defaults check_boot_complete wait_for_0
cldalter vm_defaults transfer_files False
cldalter vm_defaults run_generic_scripts False
cldalter ai_defaults run_application_scripts False
cldalter ai_defaults dont_start_load_manager true
```
With these commands in place, CBTOOL will basically issue a provisioning request to nova-api, wait until the instance state is "ACTIVE", and then consider the provisioning done.
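For those driving CBTOOL programmatically, the same settings can presumably be applied through the API. The sketch below assumes the `APIClient` shipped with CBTOOL (under `lib/api/api_service_client.py`), a hypothetical daemon address, and that the `cldalter` API call mirrors the CLI verb (cloud name, object, attribute, value); verify the exact signature against your checkout:
```python
# Minimal sketch, assuming APIClient is importable from the CBTOOL tree and
# that cldalter(cloud_name, object, attribute, value) mirrors the CLI verb.
from lib.api.api_service_client import APIClient

api = APIClient("http://127.0.0.1:7070")  # hypothetical API daemon address

cloud_name = "TESTOPENSTACK"  # hypothetical cloud name
for obj, attr, value in [
    ("vm_defaults", "check_boot_complete", "wait_for_0"),
    ("vm_defaults", "transfer_files", "False"),
    ("vm_defaults", "run_generic_scripts", "False"),
    ("ai_defaults", "run_application_scripts", "False"),
    ("ai_defaults", "dont_start_load_manager", "true"),
]:
    api.cldalter(cloud_name, obj, attr, value)
```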
CSOSKQ3: When using CBTOOL with the OpenStack Cloud Adapter, can I instruct it to create each new VM or Virtual Application on its own tenant/network/subnet/router?
CSOSKA3: CBTOOL has the ability to execute generic scripts at specific points during the VM attachment (e.g., before the provision request is issued to the cloud, or after the VM is reported as "started" by the cloud). A small example script, which creates a new Keystone tenant, a new pubkey pair, a new security group, and a Neutron network, subnet, and router, was made available under the "scenarios/util" directory.
To use it with VMs (i.e., each VM on its own tenant/network/subnet/router), issue the command `cldalter vm_defaults execute_script_name=/home/cbuser/cbtool/scenarios/scripts/osk_multitenant.sh` on the CLI. After this, each new VM that is attached with a command like `vmattach tinyvm staging=execute_provision_originated` will execute the aforementioned script. In the case of the API, just add the parameter "pause_step" to the call (e.g., `api.vmattach(<CLOUDNAME>, "hadoopslave", pause_step = "execute_provision_originated")`).
To use it with Virtual Applications (i.e., ALL VMs belonging to a VApp on its own tenant/network/subnet/router), issue the command `cldalter ai_defaults execute_script_name=/home/cbuser/cloudbench/scenarios/scripts/osk_multitenant.sh` on the CLI. After this, each new VApp that is attached with a command like `aiattach nullworkload staging=execute_provision_originated` will execute the aforementioned script. In the case of the API, just add the parameter "pause_step" to the call (e.g., `api.appattach(<CLOUDNAME>, "iperf", pause_step = "execute_provision_originated")`).
Please note that the commands listed in the script will be executed from the Orchestrator node, and thus require all relevant OpenStack CLI clients (e.g., openstack, nova, and neutron) to be present there.
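To give a concrete idea of what such a hook does, below is a minimal, hypothetical Python sketch of the kind of per-VM setup the shipped script performs (the real `osk_multitenant.sh` is a shell script; the names and CIDR below are made up, and the openstack CLI must be installed and authenticated on the Orchestrator node):
```python
#!/usr/bin/env python
# Hypothetical sketch of a per-VM multitenancy hook: creates a project,
# network, subnet, and router through the openstack CLI.
import subprocess
import sys

def osk(*args):
    """Run a single openstack CLI command, echoing it first."""
    cmd = ["openstack"] + list(args)
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# A made-up per-VM name; the real script derives names from CBTOOL metadata.
name = sys.argv[1] if len(sys.argv) > 1 else "cb-example"

osk("project", "create", name)
osk("network", "create", "--project", name, name + "-net")
osk("subnet", "create", "--network", name + "-net",
    "--subnet-range", "10.1.0.0/24", name + "-subnet")
osk("router", "create", name + "-router")
osk("router", "add", "subnet", name + "-router", name + "-subnet")
```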
CSOSKQ4: Can I instruct CBTOOL to create instances which will "boot from (Cinder) volume"?
CSOSKA4: Yes. Just add the following section to your private configuration file (the `CLOUD_VV` value is the volume size, in GB):
```
[VM_DEFAULTS]
CLOUD_VV = 10
BOOT_FROM_VOLUME = $True
```
Similarly to what was discussed here, it is possible to have only specific VM roles within a Virtual Application boot from a virtual volume:
```
[AI_TEMPLATES : CASSANDRA_YCSB]
YCSB_CLOUD_VV = 10
YCSB_BOOT_FROM_VOLUME = $False
SEED_CLOUD_VV = 10
SEED_BOOT_FROM_VOLUME = $True
```
IMPORTANT: keep in mind that this parameter can be changed in any of the 3 ways discussed here and here (i.e., by changing the private configuration file, by issuing the CLI/API `typealter <vapp type> <parameter> <value>`, or by dynamically overriding the parameters during an `aiattach` CLI/API call).
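As a hypothetical illustration of the second and third options: `typealter cassandra_ycsb seed_cloud_vv 20` would change the seed role's volume size for every subsequently attached VApp of that type, while `aiattach cassandra_ycsb default default none none none seed_cloud_vv=20` would override it for a single attachment only.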
CSOSKQ5: I am running CBTOOL against an OpenStack cloud that does not have a Cinder and/or Neutron endpoint. How can I tell CBTOOL not to try to contact these endpoint URLs?
CSOSKA5: Just add the following section to your private configuration file:
```
[VM_DEFAULTS : OSK_CLOUDCONFIG]
USE_CINDERCLIENT = $False
USE_NEUTRONCLIENT = $False
```
CSOSKQ6: Do you support Keystone V3?
CSOSKA6: Yes, it has been supported since commit id 6bc23dbd1f208ae4659b5d70ef1cae31f3381bcf (Oct 3rd, 2017).
CSOSKQ7: Why are there two different cloud adapters for OpenStack in CBTOOL?
CSOSKA7: We slowly started "porting" our "native" Cloud Adapters to Libcloud-based ones, in order to reduce our code maintenance requirements. This is particularly relevant for the "native" OpenStack cloud adapter, which had to be changed several times in order to keep compatibility with the different versions of OpenStack. While the transition is not fully complete, we keep the two adapters for OpenStack, the "native" one (identified by the cloud model OSK) and the "Libcloud" one (identified by the cloud model OS), active and tested.
CSPDMQ1: My Docker/Swarm cluster is configured in such a way that each newly created Docker instance gets an IP address that is directly reachable by nodes outside the cluster. I have no need to make the instance reachable through a port (by default, port 22 on the instance) exposed on the host. How can I tell CBTOOL to stop exposing the ports?
CSPDMA1: By default, CBTOOL maps port 22 on each instance to a port on the host, using the value of the parameter `PORTS_BASE` on `[VM_DEFAULTS : PDM_CLOUDCONFIG]` (by default 10,000) to calculate the port number: `VM_NUMBER + PORTS_BASE`. In case your CBTOOL Orchestrator node can directly access each instance through an IP address that is valid/reachable even outside the Docker/Swarm cluster (in addition to several CNI plugins, even a simple tool such as pipework can accomplish this), you just need to add the following section to your private configuration file:
```
[VM_DEFAULTS : PDM_CLOUDCONFIG]
PORTS_BASE = $False
```
IMPORTANT: keep in mind that this parameter can be changed in any of the 3 ways discussed here and here (i.e., by changing the private configuration file, by issuing the CLI/API `cldalter <vapp type> <parameter> <value>`, or by dynamically overriding the parameters during an `aiattach` or `vmattach` CLI/API call).
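As an illustration of the default mapping described above: with `PORTS_BASE` at its default of 10,000, an instance whose VM_NUMBER is 42 has its port 22 reachable on host port 10042 (e.g., `ssh -p 10042 cbuser@<SWARM_HOST>`, where the host name and login are hypothetical placeholders).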
CSPDMQ2: How can I expose additional ports from the instance through the host?
CSPDMA2: By default, CBTOOL maps port 22 on each instance to a port on the host, using the value of the parameter `PORTS_BASE` on `[VM_DEFAULTS : PDM_CLOUDCONFIG]` (by default 10,000) to calculate the port number: `VM_NUMBER + PORTS_BASE`. If, in addition to that, you wish to have other ports exposed through the host, you can do so by adding a series of comma-separated port numbers in the following section of your private configuration file:
```
[VM_DEFAULTS : PDM_CLOUDCONFIG]
EXTRA_PORTS = 80,8080
```
These ports are mapped using the value of the parameter `EXTRA_PORTS_BASE` on `[VM_DEFAULTS : PDM_CLOUDCONFIG]` (by default 60,000) to calculate the port number: `VM_NUMBER + PORT_POSITION + EXTRA_PORTS_BASE`.
IMPORTANT: keep in mind that this parameter can be changed in any of the 3 ways discussed here and here (i.e., by changing the private configuration file, by issuing the CLI/API `cldalter <vapp type> <parameter> <value>`, or by dynamically overriding the parameters during an `aiattach` or `vmattach` CLI/API call).
For instance, the CLI command `aiattach open_daytrader default default none none none geronimo_extra_ports=8080` will deploy the next AI with port 8080 on the instance with the role "geronimo" exposed through the host's port `VM_NUMBER + 1 + 60000`.
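For clarity, here is a minimal Python sketch of the host-port arithmetic described above, using the default bases; the VM_NUMBER of 42 is an illustrative assumption, and the 1-based PORT_POSITION follows the example above:
```python
# Sketch of the host-port arithmetic described above, with the default
# PORTS_BASE (10000) and EXTRA_PORTS_BASE (60000). VM_NUMBER 42 is made up.
PORTS_BASE = 10000
EXTRA_PORTS_BASE = 60000

def ssh_host_port(vm_number):
    # Port 22 on the instance is exposed on the host at this port.
    return vm_number + PORTS_BASE

def extra_host_port(vm_number, port_position):
    # port_position is the 1-based position of the port in EXTRA_PORTS.
    return vm_number + port_position + EXTRA_PORTS_BASE

print(ssh_host_port(42))        # 10042
print(extra_host_port(42, 1))   # 60043 (e.g., port 8080 from the example above)
```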
CSPDMQ3: How can I expose additional, custom devices from the host to each Docker instance (such as GPUs)?
CSPDMA3: Additional devices can be exposed by adding the following section to your private configuration file:
```
[VM_DEFAULTS : PDM_CLOUDCONFIG]
EXTRA_DEVICE = /dev/nvidia0:/dev/nvidia0:rwm,/dev/nvidiactl:/dev/nvidiactl:rwm,/dev/nvidia-uvm:/dev/nvidia-uvm:rwm
```
IMPORTANT: keep in mind that this parameter can be changed in any of the 3 ways discussed here and here (i.e., by changing the private configuration file, by issuing the CLI/API `cldalter <vapp type> <parameter> <value>`, or by dynamically overriding the parameters during an `aiattach` or `vmattach` CLI/API call).
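Each comma-separated entry in `EXTRA_DEVICE` follows the same `<host path>:<container path>:<cgroup permissions>` triplet used by Docker's `--device` option (with `rwm` meaning read, write, and mknod).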
CSPDMQ4: I see that, by default, images are being pulled directly from the `ibmcb` account on Docker Hub, and they are all Ubuntu-based. How can I change this?
CSPDMA4: By changing the parameter `IMAGE_PREFIX` on `[VM_DEFAULTS : PDM_CLOUDCONFIG]`. By default, this parameter is set to `ibmcb/ubuntu_`, but this can be changed by adding the following section to your private configuration file:
```
[VM_DEFAULTS : PDM_CLOUDCONFIG]
IMAGE_PREFIX = myowndockerepo/centos_
```
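With this (hypothetical) setting in place, CBTOOL would presumably pull role images prefixed with `myowndockerepo/centos_` instead of `ibmcb/ubuntu_`; the corresponding repository and images must already exist and be reachable from the Docker hosts.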
CSNOPQ1: Can I use CBTOOL to deploy and run Virtual Applications on bare-metal nodes, or even pre-deployed, already running VMs?
CSNOPA1: Yes, in a limited fashion. Please take a look at the NO Operation cloud adapter.