Merge branch 'develop'
Attila Farkas committed Jan 11, 2019
2 parents d95a23f + ace5bfb commit affc0c1
Showing 4 changed files with 45 additions and 8 deletions.
16 changes: 13 additions & 3 deletions user_documentation/rst/application_description.rst
@@ -51,6 +51,9 @@ description:
host:
properties:
...
outputs:
ports:
value: { get_attribute: [ YOUR_KUBERNETES_APP, port ]}

policies:
- scalability:
@@ -87,6 +90,9 @@ Under the node_templates section you can define one or more apps to create a Kub
type: tosca.artifacts.Deployment.Image.Container.Docker
file: YOUR_DOCKER_IMAGE
repository: docker_hub
outputs:
ports:
value: { get_attribute: [ YOUR_KUBERNETES_APP, port ]}

The fields under the **properties** section of the Kubernetes app are derived from a docker-compose file and converted using Kompose. You can find additional information about the properties in the `docker compose documentation <https://docs.docker.com/compose/compose-file/#service-configuration-reference>`_ and see what `Kompose supports here <http://kompose.io/conversion/>`_. The syntax of the property values is currently the same as in a docker-compose
file. The Compose properties will be translated into Kubernetes specs on deployment.
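
A minimal sketch of such a service description (the node type name, entrypoint and port values are assumptions for illustration):

::

    YOUR_KUBERNETES_APP:
      type: tosca.nodes.MiCADO.Container.Application.Docker
      properties:
        ports:
          - "8080:8080"            # docker-compose style port mapping
        entrypoint: /bin/my-app    # overrides the image entrypoint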
@@ -103,8 +109,9 @@ Under the **properties** section of an app (see **YOUR_KUBERNETES_APP**) you can
* **labels**: map of metadata like Docker labels and/or Kubernetes instructions (see NOTE).

*NOTE*

* **labels** can also be used to pass instructions to Kubernetes (full list: http://kompose.io/user-guide/#labels)
* **kompose.service.type: 'nodeport'** will make the container accessible at *<worker_node_ip>:port*, where the port can be found on the Kubernetes Dashboard under *Discovery and load balancing > Services > my_app > Internal endpoints*
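
For example, a minimal sketch of passing such an instruction via **labels** (only the ``kompose.service.type`` label is taken from the note above):

::

    properties:
      labels:
        kompose.service.type: 'nodeport'   # expose the service on a worker node port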

Under the **artifacts** section you can define the Docker image for the
Kubernetes app. Three fields must be defined:
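
A sketch mirroring the artifacts block shown earlier in this section (the ``image`` key name is an assumption; the three fields, type, file and repository, match that example):

::

    artifacts:
      image:
        type: tosca.artifacts.Deployment.Image.Container.Docker
        file: YOUR_DOCKER_IMAGE
        repository: docker_hub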
@@ -117,6 +124,9 @@ Kubernetes networking is inherently different to the approach taken by Docker. T

Since every pod gets its own IP, which any pod can use by default to communicate with any other pod, there is no network to define explicitly. If **ports** is defined in the definition above, pods can reach each other over CoreDNS via their hostname (container name).
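
As an illustration, a hedged sketch with two hypothetical apps (the names, node type and port are assumptions): since ``my-db`` defines **ports**, ``my-web`` can reach it simply by the hostname ``my-db``:

::

    node_templates:
      my-db:
        type: tosca.nodes.MiCADO.Container.Application.Docker
        properties:
          ports:
            - "5432"
      my-web:
        type: tosca.nodes.MiCADO.Container.Application.Docker
        properties:
          entrypoint: /bin/my-web --db-host=my-db   # hostname resolved via CoreDNS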

Under the **outputs** section (this key is **NOT** nested within *node_templates*)
you can define outputs to retrieve from Kubernetes via the adaptor. Currently, only port information is obtainable.

Specification of the Virtual Machine
====================================

@@ -324,7 +334,7 @@ The **properties** subsection defines the scaling policy itself. For monitoring

The subsections have the following roles:

* **sources** supports the dynamic attachment of an external exporter by specifying a list of exporter endpoints (see example above). Each item found under this subsection is configured under Prometheus to start collecting the information provided/exported by the exporters. Once done, the values of the parameters provided by the exporters become available. **NEW** MiCADO now supports Kubernetes service discovery: to define such a source, simply pass the name of the app as defined in TOSCA and do not specify any port number (see the sketch after this list).
* **constants** subsection is used to predefine fixed parameters. Values associated with the parameters can be referenced by the scaling rule as variables (see ``LOWER_THRESHOLD`` above) or referenced in any other section as Jinja2 variables (see ``MYEXPR`` above).
* **queries** contains the list of Prometheus query expressions to be executed, together with their associated variable names (see ``THELOAD`` above)
* **alerts** subsection enables the utilisation of the alerting system of Prometheus. Each alert defined here is registered under Prometheus, and a fired alert is represented by a variable of the same name set to True during the evaluation of the scaling rule (see ``myalert`` above).
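
Putting the subsections together, a minimal sketch of such a **properties** block (only the names ``LOWER_THRESHOLD``, ``MYEXPR``, ``THELOAD`` and ``myalert`` come from the description above; the endpoint, the query and alert expressions, and the ``scaling_rule``/``m_container_count`` conventions are assumptions for illustration):

::

    properties:
      sources:
        - '192.168.0.1:9100'      # endpoint of an external exporter
        - 'YOUR_KUBERNETES_APP'   # Kubernetes service discovery: app name only, no port
      constants:
        LOWER_THRESHOLD: 25
        MYEXPR: 'rate(http_requests_total[30s])'
      queries:
        THELOAD: 'avg({{ MYEXPR }})'   # constants usable here as Jinja2 variables
      alerts:
        - alert: myalert
          expr: 'avg(rate(http_requests_total[30s])) > 100'
          for: 1m
      scaling_rule: |
        if myalert:
          m_container_count = m_container_count + 1
        if THELOAD < LOWER_THRESHOLD:
          m_container_count = m_container_count - 1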
10 changes: 5 additions & 5 deletions user_documentation/rst/deployment.rst
@@ -68,9 +68,9 @@ Step 1: Download the ansible playbook.

::

   wget https://github.com/micado-scale/ansible-micado/releases/download/v0.7.1/ansible-micado-0.7.1.tar.gz
   tar -zxvf ansible-micado-0.7.1.tar.gz
   cd ansible-micado-0.7.1/

Step 2: Specify cloud credential for instantiating MiCADO workers.
------------------------------------------------------------------
Expand Down Expand Up @@ -134,9 +134,9 @@ Optionally you may use the Ansible Vault mechanism as described in Step 2 to pro
Step 4: Launch an empty cloud VM instance for MiCADO master.
------------------------------------------------------------

This new VM will host the MiCADO core services.

**a)** The default port number for the MiCADO service is ``443``. Optionally, you can modify the port number stored by the variable called ``web_listening_port`` defined in the ansible playbook file called ``micado-master.yml``.
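
For example, a sketch of overriding this variable (the surrounding playbook structure is an assumption; only ``web_listening_port`` and its default of ``443`` are named in this guide):

::

    # micado-master.yml (excerpt)
    vars:
      web_listening_port: 443   # change to serve the MiCADO service on another port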

**b)** Configure the cloud firewall settings to open the following ports on the MiCADO master virtual machine:

11 changes: 11 additions & 0 deletions user_documentation/rst/release_notes.rst
@@ -1,6 +1,17 @@
Release Notes
*************

**v0.7.1 (10 Jan 2019)**

- Fix: Add SKIP back to Dashboard (defaults changed in v1.13.1)
- Fix: URL not found for Kubernetes manifest files
- Fix: Make sure worker node sets hostname correctly
- Fix: Don't update Kubernetes if template not changed
- Fix: Make playbook more idempotent
- Add support for outputs via TOSCA ADT
- Add Kubernetes service discovery support to Prometheus
- Add new demo: nginx (HTTP request scaling)

**v0.7.0 (12 Dec 2018)**

- Introduce Kubernetes as the primary container orchestration engine
16 changes: 16 additions & 0 deletions user_documentation/rst/tutorials.rst
@@ -48,3 +48,19 @@ This application demonstrates a deadline policy using CQueue. CQueue provides a
* Step8c: run ``query-nodes.sh`` to see the details of the Docker nodes hosting your application
* Step9: run ``./5-undeploy-cq-worker-from-micado.sh`` to remove your application from MiCADO when all items are consumed.
* Step10: you can check the job state with ``./cqueue-get-job-status.sh <task_id>`` or the stdout of the container executions with ``./cqueue-get-job-status.sh <task_id>``, using one of the task ID values printed during Step 3.

nginx
========

This application deploys an HTTP server with nginx. The container features a built-in Prometheus exporter for HTTP request metrics. The policy defined for this application scales both nodes and the nginx service up and down based on active HTTP connections. ``wrk`` (``apt-get install wrk`` | https://github.com/wg/wrk) is recommended for HTTP load testing.

**Note:** make sure you have the ``jq`` tool and the ``wrk`` benchmarking app installed, as these are required by the helper scripts. Best results for ``wrk`` are seen on multi-core systems.

* Step1: make a copy of the TOSCA file which is appropriate for your cloud - ``nginx_<your_cloud>.yaml`` - and name it ``nginx.yaml``.
* Step2: fill in the requested fields beginning with ``ADD_YOUR_...``. These will differ depending on which cloud you are using.
* In CloudSigma, for example, the ``libdrive_id``, ``public_key_id`` and ``firewall_policy`` fields must be completed (see the sketch after this list). Without these, CloudSigma does not have enough information to launch your worker nodes. All information is found on the CloudSigma Web UI. ``libdrive_id`` is the long alphanumeric string in the URL when a drive is selected under “Storage/Library”. ``public_key_id`` is under the “Access & Security/Keys Management” menu as **Uuid**. ``firewall_policy`` can be found when selecting a rule defined under the “Networking/Policies” menu. The following ports must be opened for MiCADO workers: *all inbound connections from MiCADO master*.
* Step3: update the parameter file called ``_settings``. You need the IP address of the MiCADO master and should name the deployment by setting the APP_ID. **The application ID cannot contain any underscores ( _ ).** The APP_NAME must match the name given to the application in TOSCA (default: **nginxapp**). You should also change the SSL user/password/port information if they differ from the defaults.
* Step4: run ``1-submit-tosca-nginx.sh`` to create the minimum number of MiCADO worker nodes and to deploy the Kubernetes Deployment including the nginx app defined in the ``nginx.yaml`` TOSCA description.
* Step4a: run ``2-list-apps.sh`` to see currently running applications and their IDs, as well as the ports forwarded to 8080 for accessing the HTTP service.
* Step5: run ``3-generate-traffic.sh`` to generate some HTTP traffic. After thirty seconds or so, you will see the system respond by scaling up containers, and eventually virtual machines, to the maximum specified.
* Step6: run ``4-undeploy-nginx.sh`` to remove the nginx deployment and all the MiCADO worker nodes.
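
Below is a hedged sketch of the CloudSigma fields mentioned in Step 2 (the node name, type and nesting are assumptions for illustration; only the three field names come from the step above):

::

    worker_node:
      type: tosca.nodes.MiCADO.Occopus.CloudSigma.Compute
      properties:
        libdrive_id: ADD_YOUR_ID_HERE     # long alphanumeric string from “Storage/Library”
        public_key_id: ADD_YOUR_ID_HERE   # Uuid under “Access & Security/Keys Management”
        firewall_policy: ADD_YOUR_ID_HERE # rule defined under “Networking/Policies”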
