This repository has been archived by the owner on Sep 23, 2020. It is now read-only.
==============================================================================
See the documentation here: http://...
==============================================================================
NOTE: This README and the online documentation discuss *our use* of
cloudinit.d: we use it in a particular pattern with particular tools (for
example Chef, which is not required). If you want to do something
differently, there is usually a way to make it happen.
==============================================================================
I. Quick guide for the impatient:
Install epumgmt (dashi branch) into a virtualenv. cloudinitd will be installed
as a dependency.
Export the following environment variables into your shell:
# Credentials for Nimbus Context Broker
# The default is the broker at FutureGrid hotel. Use your Cumulus creds.
export CTXBROKER_KEY=`cat ~/.secrets/CTXBROKER_KEY`
export CTXBROKER_SECRET=`cat ~/.secrets/CTXBROKER_SECRET`
# Credentials for EC2
# The provisioner uses these to start worker nodes on EC2 in some situations
export AWS_ACCESS_KEY_ID=`cat ~/.secrets/AWS_ACCESS_KEY_ID`
export AWS_SECRET_ACCESS_KEY=`cat ~/.secrets/AWS_SECRET_ACCESS_KEY`
# Credentials for cloudinit.d itself
# cloudinit.d uses these to start the base nodes
export CLOUDINITD_IAAS_ACCESS_KEY="$AWS_ACCESS_KEY_ID"
export CLOUDINITD_IAAS_SECRET_KEY="$AWS_SECRET_ACCESS_KEY"
# Credentials for RabbitMQ
# You make these up
export RABBITMQ_USERNAME="easterbunny"
export RABBITMQ_PASSWORD=`uuidgen`
Run:
RUN_NAME="my_run_name"
cloudinitd boot main.conf -v -v -v -l debug -x -n $RUN_NAME
Inspect:
epumgmt -a status -n $RUN_NAME
==============================================================================
II. For launch plan authors: conventions
There are three layers of value substitutions to understand.
1. The "deps.conf" files (and "deps-common.conf") contain key/value pairs.
There are two kinds of values. Examples:
1A. Literal
epu_git_repo: https://github.com/ooici/epu.git
1B. Variable
rabbitmq_host: ${basenode.hostname}
In the literal kind, you have a straight string value.
In the variable kind, you are telling cloudinit.d that some service
provides a dynamic value for the launch (in this example, a service
called "basenode" provides "hostname": when the key "rabbitmq_host"
is needed later, cloudinit.d supplies the hostname of wherever the
"svc-basenode" service ended up running).
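Putting the two kinds together, a deps file fragment might look like this
(the values are illustrative, not taken from a real plan):

```conf
# Literal: used exactly as written
epu_git_repo: https://github.com/ooici/epu.git

# Variable: resolved at launch time from another service's attributes
# (here, the hostname of the node run by the "svc-basenode" service)
rabbitmq_host: ${basenode.hostname}
```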
2. Then there are the json files.
These are configuration files for the chef-solo runs on the VM instances
that get started. These files are more complicated than simple key/value
pairs, but the same idea is present: some values are literal, others are
obtained via substitution.
Any substitution here comes from the *deps files*. For example, if you list
"${rabbitmq_host}", the value will come from the dep file containing that
key. For each service you can explicitly list which deps files are "in play"
for that substitution.
For every cloudinit.d launch, temporary files are created with all of the
substitutions enacted. These files are what get transferred to the VM and
serve as input to the boot-time contextualization program: in our case this
is chef-solo.
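The mechanics are ordinary template substitution. The following is not
cloudinit.d's actual implementation, just a minimal Python sketch of the idea:
resolved deps values (the hostname below is made up) are filled into a json
template, and the rendered text is what gets shipped to the VM for chef-solo:

```python
from string import Template

# Hypothetical deps values, as they would look after cloudinit.d has
# resolved ${basenode.hostname} to a real host
deps = {"rabbitmq_host": "ec2-1-2-3-4.compute-1.amazonaws.com",
        "rabbitmq_username": "easterbunny"}

# A fragment of a bootconf json template before substitution
template = Template('{"rabbitmq": {"host": "${rabbitmq_host}", '
                    '"username": "${rabbitmq_username}"}}')

# The substituted text is the temporary file transferred to the VM
rendered = template.substitute(deps)
print(rendered)
```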
3. The third and final layer of substitution is in the chef recipes themselves.
These recipes make references to variables in the json files. These json
files are sent to the node as literal configuration files. You can always
debug a chef recipe by looking at the configuration file that is given to
chef-solo and finding the exact string value that was in play.
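For illustration only (the attribute names "rabbitmq" and "host" and the file
paths here are hypothetical, not from our recipes), a recipe fragment that
reads a value from the json configuration looks like:

```ruby
# chef-solo exposes the json file's values as node attributes;
# the keys below are invented for this example
template "/opt/app/messaging.conf" do
  source "messaging.conf.erb"
  variables(:broker_host => node[:rabbitmq][:host])
end
```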
==============================================================================
III. For launch plan authors: chef json files
Rules for the bootconf json files when using the main recipe "X", which is
what we use most of the time.
* appretrieve:retrieve_method
This can have the value 'archive' or 'git'.
When it is 'archive', the file configured at "appretrieve:archive_url" is
retrieved over HTTP and assumed to be a tar.gz archive.
When it is 'git', the following configurations are used:
* appretrieve:git_repo
* appretrieve:git_branch
* appretrieve:git_commit
Note that those are the controls for the "thing installed".
All subsequent dependency resolution happens via the dependency lists that
come as part of that installation -- by way of the server listed in the
"appinstall:package_repo" configuration.
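For example, a git-based retrieval section might look like this (the
repository, branch, and commit values are illustrative):

```json
{
  "appretrieve": {
    "retrieve_method": "git",
    "git_repo": "https://github.com/ooici/epu.git",
    "git_branch": "master",
    "git_commit": "HEAD"
  }
}
```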
* appinstall:package_repo
The "thing installed" has a dependency list and this package repository
configuration is what is used during the installation process to resolve
the dependencies.
* appinstall:install_method
This can have the following values:
* py_venv_setup
Create a new virtualenv, install using "python setup.py install"
* py_venv_buildout
Create a new virtualenv, install using "bootstrap.py" and "bin/buildout"
* Future: more options for "burned" setups.
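Sketch of an install section (the package_repo URL is a placeholder, not a
real server):

```json
{
  "appinstall": {
    "install_method": "py_venv_setup",
    "package_repo": "http://packages.example.com/repo"
  }
}
```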
* apprun:run_method
This can have the following values:
* sh
The old default, create a shell script for each service listed in the
"services" section in the json file. Then start that shell script (unless
the service is also listed in the "do_not_start" section, for an example
see the provisioner.json file).
* supervised
The new default, each service listed in the "services" section in the json
file is watched by a supervisor process. This will monitor the unix process
and communicate failures off of the machine.
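Sketch of a run section using the supervised method, with one service also
excluded from automatic start. The exact layout of the "services" and
"do_not_start" sections is assumed here; see the provisioner.json file in
the plan for the authoritative shape:

```json
{
  "apprun": {
    "run_method": "supervised"
  },
  "services": ["provisioner"],
  "do_not_start": ["provisioner"]
}
```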
==============================================================================