Commit

fix: revert incorrect find & replace
maystery committed Aug 18, 2020
1 parent 70ddeed commit 6278b88
Showing 3 changed files with 20 additions and 20 deletions.
2 changes: 1 addition & 1 deletion sphinx/source/tutorial-autoscaling-infrastructures.rst
@@ -260,7 +260,7 @@ You can download the example as `tutorial.examples.autoscaling-hadoop <https://r

If you want Occopus to monitor (health_check) your Hadoop Master and it is to be deployed in a different network, make sure you assign public (floating) IP to the Master node.

- #. Optionally, edit the ``nodes/cloud_init_hadoop_devel.yaml`` node descriptor file's "Prometheus rules" section in case you want to implement new scaling rules. The actually implemented rules are working well and can be seen below.
+ #. Optionally, edit the ``nodes/cloud_init_hadoop_master.yaml`` node descriptor file's "Prometheus rules" section in case you want to implement new scaling rules. The actually implemented rules are working well and can be seen below.

- ``{infra_id}`` is a built in Occopus variable and every alert has to implement it in their Labels!

12 changes: 6 additions & 6 deletions sphinx/source/tutorial-bigdata-ai.rst
@@ -105,7 +105,7 @@ You can download the example as `tutorial.examples.hadoop-cluster <https://raw.g
.. code:: bash
List of nodes/ip addresses:
- hadoop-devel:
+ hadoop-master:
192.168.xxx.xxx (3116eaf5-89e7-405f-ab94-9550ba1d0a7c)
hadoop-slave:
192.168.xxx.xxx (23f13bd1-25e7-30a1-c1b4-39c3da15a456)
@@ -230,7 +230,7 @@ You can download the example as `tutorial.examples.spark-cluster <https://raw.gi
.. code:: bash
List of nodes/ip addresses:
- spark-devel:
+ spark-master:
192.168.xxx.xxx (3116eaf5-89e7-405f-ab94-9550ba1d0a7c)
spark-worker:
192.168.xxx.xxx (23f13bd1-25e7-30a1-c1b4-39c3da15a456)
@@ -359,7 +359,7 @@ You can download the example as `tutorial.examples.spark-cluster-with-r <https:/
.. code:: bash
List of nodes/ip addresses:
- spark-devel:
+ spark-master:
192.168.xxx.xxx (3116eaf5-89e7-405f-ab94-9550ba1d0a7c)
spark-worker:
192.168.xxx.xxx (23f13bd1-25e7-30a1-c1b4-39c3da15a456)
@@ -398,7 +398,7 @@ You can download the example as `tutorial.examples.spark-cluster-with-r <https:/
install.packages("sparklyr")
library(sparklyr)
Sys.setenv(SPARK_HOME = '/home/sparkuser/spark')
- sc <- spark_connect(devel = "local")
+ sc <- spark_connect(master = "local")
sdf_len(sc, 5, repartition = 1) %>%
spark_apply(function(e) I(e))
spark_disconnect_all()
@@ -424,7 +424,7 @@ You can download the example as `tutorial.examples.spark-cluster-with-r <https:/
install.packages("sparklyr")
library(sparklyr)
Sys.setenv(SPARK_HOME = '/home/sparkuser/spark')
- sc <- spark_connect(devel = "spark://<SparkMasterIP>:7077")
+ sc <- spark_connect(master = "spark://<SparkMasterIP>:7077")
sdf_len(sc, 5, repartition = 1) %>%
spark_apply(function(e) I(e))
spark_disconnect_all()
@@ -570,7 +570,7 @@ This tutorial sets up a complete Apache Spark infrastructure integrated with HDF
.. code:: bash
List of nodes/ip addresses:
- spark-devel:
+ spark-master:
192.168.xxx.xxx (3116eaf5-89e7-405f-ab94-9550ba1d0a7c)
spark-worker:
192.168.xxx.xxx (23f13bd1-25e7-30a1-c1b4-39c3da15a456)
26 changes: 13 additions & 13 deletions sphinx/source/tutorial-building-clusters.rst
@@ -6,7 +6,7 @@ Building clusters
Docker-Swarm cluster
~~~~~~~~~~~~~~~~~~~~

- This tutorial sets up a complete Docker infrastructure with Swarm, Docker and Consul software components. It contains a devel node and predefined number of worker nodes. The worker nodes receive the ip of the devel node and attach to the devel node to form a cluster. Finally, the docker cluster can be used with any standard tool talking the docker protocol (on port ``2375``).
+ This tutorial sets up a complete Docker infrastructure with Swarm, Docker and Consul software components. It contains a master node and predefined number of worker nodes. The worker nodes receive the ip of the master node and attach to the master node to form a cluster. Finally, the docker cluster can be used with any standard tool talking the docker protocol (on port ``2375``).

**Features**

@@ -53,7 +53,7 @@ The following steps are suggested to be performed:

#. Make sure your authentication information is set correctly in your authentication file. You must set your email and password in the authentication file. Setting authentication information is described :ref:`here <authentication>`.

- #. Load the node definition for ``dockerswarm_devel_node`` and ``dockerswarm_worker_node`` nodes into the database.
+ #. Load the node definition for ``dockerswarm_master_node`` and ``dockerswarm_worker_node`` nodes into the database.

.. important::

@@ -81,21 +81,21 @@ The following steps are suggested to be performed:
.. note::

- It may take a few minutes until the services on the devel node come to live. Please, be patient!
+ It may take a few minutes until the services on the master node come to live. Please, be patient!

#. After successful finish, the node with ``ip address`` and ``node id`` are listed at the end of the logging messages and the identifier of the newly built infrastructure is printed. You can store the identifier of the infrastructure to perform further operations on your infra or alternatively you can query the identifier using the **occopus-maintain** command.

.. code:: bash
List of nodes/ip addresses:
- devel:
+ master:
<ip-address> (dfa5f4f5-7d69-432e-87f9-a37cd6376f7a)
worker:
<ip-address> (cae40ed8-c4f3-49cd-bc73-92a8c027ff2c)
<ip-address> (8e255594-5d9a-4106-920c-62591aabd899)
77cb026b-2f81-46a5-87c5-2adf13e1b2d3
- #. Check the result by submitting docker commands to the docker devel node!
+ #. Check the result by submitting docker commands to the docker master node!
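Since the tutorial states that the cluster talks the plain docker protocol on port ``2375``, a quick remote health check does not even need the Docker CLI. The sketch below probes the Docker Engine HTTP API directly from the Python standard library; the ``<masterip>`` placeholder and the helper names are ours, not from the tutorial.

```python
# Minimal sketch: probe a Swarm master over the Docker Engine HTTP API
# exposed on port 2375 (as described in the tutorial text above).
# <masterip> is a placeholder for the master node's IP address.
import json
import urllib.request

DOCKER_URL = "http://<masterip>:2375"  # assumption: unencrypted Engine API

def engine_url(path):
    """Build an Engine API URL such as /_ping or /info."""
    return f"{DOCKER_URL}/{path.lstrip('/')}"

def ping():
    """GET /_ping returns the literal body 'OK' when the daemon is reachable."""
    with urllib.request.urlopen(engine_url("_ping")) as resp:
        return resp.read().decode()

def cluster_info():
    """GET /info returns daemon and cluster details as JSON."""
    with urllib.request.urlopen(engine_url("info")) as resp:
        return json.load(resp)
```

The equivalent check with the stock CLI would be ``docker -H tcp://<masterip>:2375 info``.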

#. Finally, you may destroy the infrastructure using the infrastructure id returned by ``occopus-build``

@@ -416,18 +416,18 @@ The following steps are suggested to be performed:
List of nodes/ip addresses:
cqueue-worker:
192.168.xxx.xxx (34b07a23-a26a-4a42-a5f4-73966b8ed23f)
- cqueue-devel:
+ cqueue-master:
192.168.xxx.xxx (29b98290-c6f4-4ae7-95ca-b91a9baf2ea8)
db0f0047-f7e6-428e-a10d-3b8f7dbdb4d4
- #. After a successful built, tasks can be sent to the CQueue devel. The framework is built for executing Docker containers with their specific inputs. Also, environment variables and other input parameters can be specified for each container. The CQueue devel receives the tasks via a REST API and the CQueue workers pull the tasks from the CQueue devel and execute them. One worker process one task at a time.
+ #. After a successful built, tasks can be sent to the CQueue master. The framework is built for executing Docker containers with their specific inputs. Also, environment variables and other input parameters can be specified for each container. The CQueue master receives the tasks via a REST API and the CQueue workers pull the tasks from the CQueue master and execute them. One worker process one task at a time.

Push 'hello world' task (available parameters: image string, env []string, cmd []string, container_name string):

.. code:: bash
- curl -H 'Content-Type: application/json' -X POST -d'{"image":"ubuntu", "cmd":["echo", "hello Docker"]}' http://<develip>:8080/task
+ curl -H 'Content-Type: application/json' -X POST -d'{"image":"ubuntu", "cmd":["echo", "hello Docker"]}' http://<masterip>:8080/task
The result should be: ``{"id":"task_324c5ec3-56b0-4ff3-ab5c-66e5e47c30e9"}``
Expand All @@ -437,29 +437,29 @@ The following steps are suggested to be performed:
This id (task_324c5ec3-56b0-4ff3-ab5c-66e5e47c30e9) will be used later, in order to query its status and result.


- #. The worker continuously updates the status (pending, received, started, retry, success, failure) of the task with the task’s ID. After the task is completed, the workers send a notification to the CQueue devel, and this task will be removed from the queue. The status of a task and the result can be queried from the key-value store through the CQueue devel.
+ #. The worker continuously updates the status (pending, received, started, retry, success, failure) of the task with the task’s ID. After the task is completed, the workers send a notification to the CQueue master, and this task will be removed from the queue. The status of a task and the result can be queried from the key-value store through the CQueue master.

Check the result of the push command by querying the ``task_id`` returned by the push command:

.. code:: bash
- curl -X GET http://<develip>:8080/task/$task_id
+ curl -X GET http://<masterip>:8080/task/$task_id
The result should be: ``{"status":"SUCCESS"}``

#. Fetch the result of the push command by querying the ``task_id`` returned by the push command:

.. code:: bash
- curl -X GET http://<develip>:8080/task/$task_id/result
+ curl -X GET http://<masterip>:8080/task/$task_id/result
The result should be: ``hello Docker``

#. Delete the task with the following command:

.. code:: bash
- curl -X DELETE http://<develip>:8080/task/$task_id
+ curl -X DELETE http://<masterip>:8080/task/$task_id
#. For debugging, check the logs of the container at the CQueue worker node.

@@ -475,4 +475,4 @@ The following steps are suggested to be performed:
.. note::

- The CQueue devel and the worker components are written in golang, and they have a shared code-base. The open-source code is available `at GitLab <https://gitlab.com/lpds-public/cqueue/-/tree/devel>`_ .
+ The CQueue master and the worker components are written in golang, and they have a shared code-base. The open-source code is available `at GitLab <https://gitlab.com/lpds-public/cqueue/-/tree/master>`_ .
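Putting the curl steps above together, the full push → status → result → delete lifecycle can be sketched with the Python standard library. The endpoint paths and JSON fields come from the tutorial's curl commands; the helper names and the ``<masterip>`` placeholder are ours.

```python
# Sketch of the CQueue task lifecycle (push, poll status, fetch result,
# delete) against the REST API shown in the tutorial. <masterip> is a
# placeholder for the CQueue master's address.
import json
import urllib.request

MASTER_URL = "http://<masterip>:8080"  # assumption: CQueue master endpoint

def task_payload(image, cmd):
    """Build the JSON body for POST /task (fields shown in the tutorial)."""
    return json.dumps({"image": image, "cmd": cmd})

def task_url(task_id="", suffix=""):
    """Build /task, /task/<id> or /task/<id>/<suffix> URLs."""
    parts = [MASTER_URL, "task"] + [p for p in (task_id, suffix) if p]
    return "/".join(parts)

def push_task(image, cmd):
    """POST a container task; returns the generated task id."""
    req = urllib.request.Request(
        task_url(), data=task_payload(image, cmd).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]

def task_status(task_id):
    """GET /task/<id>; returns a status string such as 'SUCCESS'."""
    with urllib.request.urlopen(task_url(task_id)) as resp:
        return json.load(resp)["status"]

def task_result(task_id):
    """GET /task/<id>/result; returns the container output."""
    with urllib.request.urlopen(task_url(task_id, "result")) as resp:
        return resp.read().decode()

def delete_task(task_id):
    """DELETE /task/<id> to remove the finished task."""
    req = urllib.request.Request(task_url(task_id), method="DELETE")
    urllib.request.urlopen(req).close()
```

A caller would typically ``push_task("ubuntu", ["echo", "hello Docker"])``, poll ``task_status`` until it reports success or failure, then fetch and delete.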
