### General variables.
# Project name.
# This will be used for files and folder names.
PROJECT="coti"
# Help message
HELP="
Before running the script, make sure that conf is updated and that the steps in\n
the script itself (at the end) suit your needs.\n
./run will start its work and it expects access to internal resources.\n
There is no reason to think it will work outside the internal network.\n
\n
Parameters:\n
-h, --help: Print this message.\n
--clean: Cleans the system of _any_ virtual resource.\n
--sp: System prepare, making sure the host and resources are ready.\n
--uc: Install the undercloud.\n
--op: Overcloud preparations.\n
--oc: Overcloud deployment.\n
--bnr: Backup and restore; this will back up the undercloud machine and\n
restore it onto a new undercloud. It needs a running deployment.\n
--test: Run tests on a deployment.\n
--full: All of the above minus backup and restore.\n
\n
For a full run:\n
./run --sp\n
./run --uc\n
./run --op\n
./run --oc\n
./run --bnr\n
./run --test\n
Or ./run --full\n
\n
You can also run the script with run-time parameters, for instance:\n
OS_VER=11 ./run --full\n
To do a full run on OpenStack 11.\n
"
# The script does its work in this directory.
# It is deleted and recreated on each run.
WORK_DIR=${WORK_DIR:-"/tmp/$PROJECT"}
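# With the default PROJECT above this resolves to /tmp/coti.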
# Starting location, used to resolve paths.
CWD=$(pwd)
# Shortcut to the tripleo heat templates.
THT="/usr/share/openstack-tripleo-heat-templates"
# Shortcut to libvirt's image folder.
VIRT_IMG="/var/lib/libvirt/images"
# Log file
LOG_FILE=${LOG_FILE:-"$(pwd)/logs/$PROJECT-$(date +%s).log"}
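# Resolves to something like <current dir>/logs/coti-<epoch seconds>.log,
# so each run gets its own log file.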
# Selinux state on the undercloud.
UNDER_SEL=${UNDER_SEL:-"permissive"}
# Selinux state on the overcloud.
OVER_SEL=${OVER_SEL:-"permissive"}
# How images are accessed by the libguestfs-based virt-* commands.
LIBGUESTFS_BACKEND="direct"
# Base path to EPEL
EPEL="https://dl.fedoraproject.org/pub/epel/7/x86_64"
# Message to display during long operations.
LONG=${LONG:-"(This step takes time.)"}
# RPM location of latest rhos-release
LATEST_RR=${LATEST_RR:-"http://rhos-release.virt.bos.redhat.com/repos/rhos-release/rhos-release-latest.noarch.rpm"}
# Version to install and deploy.
OS_VER=${OS_VER:-12}
PUDDLE_VER=${PUDDLE_VER:-"latest"}
# Argument to pass to rhos-release.
RR_CMD=${RR_CMD:-"$OS_VER"}
# Whether we obtain predefined images or create our own.
# prepare_puddle_images - create our own.
# edit_predefined_images - edit images from RPM.
# predefined_images - use untouched images from RPM.
OBTAIN_IMAGES=${OBTAIN_IMAGES:-"edit_predefined_images"}
# DNS to be used.
# The default here is to use the host's configuration.
DNS="${DNS:-$(grep nameserver /etc/resolv.conf | head -n 1 | cut -d " " -f 2)}"
# Host default nic.
NIC=$(ip route get $DNS | grep dev | awk '{print $5}')
# Host IP.
HOST_IP=$(ifconfig $NIC | grep "inet " | awk '{print $2}')
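# Hedged fallback (an illustrative addition, not from the original conf): on a
# directly connected route "ip route get" prints no "via <gw>" hop, so field 5
# above is not the device name. Re-derive NIC position-independently if the
# parsed value is not a real interface.
if ! ip link show "$NIC" > /dev/null 2>&1; then
    NIC=$(ip route get "$DNS" | sed -n 's/.* dev \([^ ]*\).*/\1/p' | head -n 1)
    HOST_IP=$(ifconfig "$NIC" | grep "inet " | awk '{print $2}')
fi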
# Host root password.
HOST_PASS="12345678"
# virt-customize arguments.
CUST_ARGS="-m 8192 --smp 6 -q"
# SSH arguments.
SSH="/usr/bin/ssh -qtt -o StrictHostKeyChecking=no"
USE_TELEMETRY=false
### File server location.
### This is where the script gets things like guest images,
### and also where it keeps modified ones.
# Server to download images from.
FILE_SERVER=${FILE_SERVER:-"http://ikook.tlv.redhat.com"}
# Server's domain.
SRV_DOMAIN=${FILE_SERVER##*://}
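# For example, the "##*://" expansion strips the URL scheme:
# "http://ikook.tlv.redhat.com" -> "ikook.tlv.redhat.com".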
# RHEL guest image.
RHEL_GUEST=${GUEST_DIR:-"$FILE_SERVER/gen_images/cloud/rhel-guest-image-7.4-142.x86_64.qcow2"}
#RHEL_GUEST=${GUEST_DIR:-"$FILE_SERVER/gen_images/cloud/rhel-guest-image-latest.qcow2"}
GUEST_FILE=$(basename $RHEL_GUEST)
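# With the default RHEL_GUEST above this is "rhel-guest-image-7.4-142.x86_64.qcow2".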
# Director image RPMs path.
OC_IMAGES=${OC_IMAGES:-"$FILE_SERVER/rpms/rhos$OS_VER/rhosp-director-images-$OS_VER-latest.rpm"}
OC_IPA=${OC_IPA:-"$FILE_SERVER/rpms/rhos$OS_VER/rhosp-director-images-$OS_VER-ipa-latest.rpm"}
# Path to the folder that contains all the repos
AUTO_PATH=${AUTO_PATH:-"$FILE_SERVER/auto"}
# Guest image time zone.
# The default is taken from the host's.
GUEST_TZ=${GUEST_TZ:-$(timedatectl | grep "Time zone" | awk '{print $3}')}
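# e.g. a timedatectl line "Time zone: Asia/Jerusalem (IST, +0200)" yields
# "Asia/Jerusalem" here (the third whitespace-separated field).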
# How and where to upload images.
UPLOAD_URL=${UPLOAD_URL:-"${SRV_DOMAIN}:/home/ftp/auto"}
UPLOAD_DIR=${UPLOAD_URL##*:}
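# e.g. "ikook.tlv.redhat.com:/home/ftp/auto" -> "/home/ftp/auto"
# (the "##*:" expansion drops everything up to the last ":").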
UPLOAD_USER=${UPLOAD_USER:-"rhos-qe"}
UPLOAD_PASS=${UPLOAD_PASS:-"qum5net"}
# Nodes and undercloud's root password.
ROOT_PASS=${ROOT_PASS:-"12345678"}
# NTP server to be used.
# Default is taken from the host.
NTP="${NTP:-$(grep "^server\|iburst" /etc/ntp.conf | cut -d " " -f 2 | head -n 1)}"
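# e.g. a "server clock1.example.com iburst" line (hypothetical hostname) in
# /etc/ntp.conf would set NTP to "clock1.example.com".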
# Backup and restore. Starting with Liberty, a backup and restore mechanism for
# the undercloud node was introduced. If set to true here, undercloud-0 will be
# backed up, destroyed, and undercloud-1 will be installed with all the settings
# and data.
BCK_RES=${BCK_RES:-true}
### Nodes and per node settings.
# Node names.
# Nodes will be _defined_ in this order.
# Keep undercloud first!
NODES=(
"undercloud"
"controller"
"compute"
"ceph"
)
# Network setup per node.
# Always keep provisioning network first and the external network last.
NETWORKS=(
"ControlPlane"
"StorageIP"
"StorageMGT"
"Internal"
"Tenant"
"External"
)
# External network IP range. This is needed to attach an IP for accessing the
# nodes externally, bypassing some limitations caused by network isolation.
# There is a pair of ranges, one for internal access and one for external.
# The numbers here represent the final (rightmost) octet of the address.
# '1' is reserved for the gateway.
DHCP_IN_START=2
DHCP_IN_END=120
DHCP_OUT_START=121
DHCP_OUT_END=250
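# Illustration (assuming, hypothetically, a 10.0.0.0/24 external network with
# the gateway at 10.0.0.1): the internal range would be 10.0.0.2-10.0.0.120 and
# the external range 10.0.0.121-10.0.0.250.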
# Node numbers and settings.
# !! The base name needs to be the same as the node names in the NODES array.
# NAM - Name of the node.
# FLV - Flavor to be used for the node.
# NUM - How many nodes of that type.
# RAM - RAM in MB.
# SWP - Swap in MB.
# CPU - How many vCPUs.
# DUM - Extra (dummy) nodes of the same type.
# DSK - Disk size in GB. Nodes can work with 12, the undercloud might need 22.
# OSD - Ceph specific, the size of the Ceph OSD disk.
# undercloud nodes
undercloud_NAM=${undercloud_NAM:-"undercloud"}
undercloud_NUM=${undercloud_NUM:-1}
undercloud_RAM=${undercloud_RAM:-16384}
undercloud_SWP=${undercloud_SWP:-1024}
undercloud_CPU=${undercloud_CPU:-4}
undercloud_DUM=${undercloud_DUM:-1}
undercloud_DSK=${undercloud_DSK:-22}
# controller nodes
controller_NAM=${controller_NAM:-"control"}
controller_FLV=${controller_FLV:-"control"}
controller_NUM=${controller_NUM:-3}
controller_RAM=${controller_RAM:-16384}
controller_SWP=${controller_SWP:-1024}
controller_CPU=${controller_CPU:-4}
controller_DUM=${controller_DUM:-0}
controller_DSK=${controller_DSK:-17}
# compute nodes
compute_NAM=${compute_NAM:-"compute"}
compute_FLV=${compute_FLV:-"compute"}
compute_NUM=${compute_NUM:-3}
compute_RAM=${compute_RAM:-16384}
compute_SWP=${compute_SWP:-1024}
compute_CPU=${compute_CPU:-4}
compute_DUM=${compute_DUM:-0}
compute_DSK=${compute_DSK:-17}
# ceph nodes
ceph_NAM=${ceph_NAM:-"ceph"}
ceph_FLV=${ceph_FLV:-"ceph-storage"}
ceph_NUM=${ceph_NUM:-3}
ceph_RAM=${ceph_RAM:-8192}
ceph_SWP=${ceph_SWP:-1024}
ceph_CPU=${ceph_CPU:-2}
ceph_DUM=${ceph_DUM:-0}
ceph_DSK=${ceph_DSK:-17}
ceph_OSD=${ceph_OSD:-10}
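# A minimal, hypothetical sketch (not part of the original conf) of how a run
# script could consume the per-node settings above via bash indirect expansion:
#
#   for node in "${NODES[@]}"; do
#       num="${node}_NUM"; ram="${node}_RAM"; cpu="${node}_CPU"
#       echo "Would define ${!num} ${node} node(s): ${!ram} MB RAM, ${!cpu} vCPUs"
#   done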