Commit

Merge 90a3b68 into dae6406
awicenec committed Sep 9, 2020
2 parents dae6406 + 90a3b68 commit 47181ec
Showing 26 changed files with 251 additions and 4,580 deletions.
15 changes: 14 additions & 1 deletion README.rst
@@ -31,7 +31,20 @@ documentation <https://daliuge.readthedocs.io/>`_
Installation
------------

To get the latest stable version of the full package::
Docker
------

The easiest way to get started is to use the docker container installation procedures provided
to build and run the daliuge-runtime and the daliuge-translator. Please refer to
the README files in the subdirectories daliuge-runtime and daliuge-translator, respectively.
Depending on what you want to deploy, you may only need to build and run the runtime or the
translator.
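
For example, building and starting the runtime with the helper scripts shipped in the
daliuge-runtime directory looks roughly like this (a sketch of the typical sequence;
see the README in that directory for details)::

    cd daliuge-runtime
    ./build_base.sh
    ./build_engine.sh
    ./run_engine.sh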


PyPI
----

It is also possible to install the latest stable version of the full package from PyPI::

pip install daliuge
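
If you prefer to keep the installation isolated from your system Python, the same
command can be run inside a virtual environment (a suggested pattern, not a
requirement of the package)::

    python3 -m venv dlg-env
    . dlg-env/bin/activate
    pip install daliuge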

84 changes: 84 additions & 0 deletions daliuge-runtime/README.md
@@ -0,0 +1,84 @@
# Docker containers

We currently build the system in two images:

* *icrar/daliuge-base:latest* includes a CentOS 7 system with a 'dfms' user and all the requirements needed to install the dfms framework.
* *icrar/daliuge-engine:latest* is built on top of the :base image and adds the installation of the DALiuGE framework.

This way we separate the prerequisites of the daliuge engine from the framework installation, which changes more frequently. The idea is to rebuild only the daliuge-engine image when a new version of the framework needs to be deployed, rather than building everything from scratch each time.

Most of the dependencies included in :base do not belong to the DALiuGE framework itself, but rather to its requirements (mainly the spead2 communication protocol). Once the spead2 application (and therefore the dependency of dfms on spead2) is moved out of this repository, we will re-organize these Dockerfiles to provide a base installation of the dfms framework and build further images on top of it, each containing specific applications with their own system installation requirements.

By default the *daliuge-engine* image runs a generic daemon, which can then be used to start the Master Manager, Node Manager or Data Island Manager. This makes it possible to change the actual manager deployment configuration dynamically and adjust it to the requirements of the environment.

## Building the docker images

If you are starting from scratch you need to build both the base and the engine image. This can be done using the shell scripts provided in the same directory as this README file:

```bash
./build_base.sh
```

followed by

```bash
./build_engine.sh
```
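
Once both builds have finished, a quick sanity check (not part of the build scripts) is to confirm that the two images are available locally:

```bash
# List the locally available DALiuGE images
docker images | grep daliuge
```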

## Starting the DALiuGE Engine Daemon

The *icrar/daliuge-engine:latest* image can be started using the *run_engine.sh* script:

```bash
./run_engine.sh
```

This will start the image in interactive mode, which means that the logs from the DALiuGE daemon are displayed on the screen.
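
If you prefer to run the daemon in the background instead, a detached variant of the same command is sketched below; it assumes the same port mappings as *run_engine.sh*:

```bash
# Hedged sketch: same ports as run_engine.sh, but detached (-d) instead of interactive
docker run -d --name daliuge-engine \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 -p 9000:9000 \
    icrar/daliuge-engine:latest

# Follow the daemon logs afterwards
docker logs -f daliuge-engine
```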

## Starting managers

In a typical real-world scenario DALiuGE runs manager services on multiple machines. Each machine participating in a DALiuGE deployment runs at least one of these services. Every worker machine needs to run a Node Manager and, if more than one node participates in a deployment, a Master Manager must be running as well. The Master Manager can run on a separate machine, or on one of the worker machines alongside its Node Manager. For scalability reasons DALiuGE also introduces the concept of Data Islands, which keep the load on the Master Manager under control for very big workflow runs. Data Islands are only really helpful when deploying extremely large physical graphs with tens of millions of nodes; they are not required just because a large number of machines is involved. Starting a Data Island Manager is therefore optional, and it can run on a worker node or on a separate machine.

Here are examples of the commands used to start the managers on localhost, assuming that a container from the icrar/daliuge-engine:latest image is running on localhost under the name daliuge-engine.

### Node manager

```bash
docker exec -ti daliuge-engine dlg nm -v --no-dlm -H 0.0.0.0
```

### Master manager

```bash
docker exec -ti daliuge-engine dlg mm -v -H 0.0.0.0
```
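
### Data Island manager

A Data Island Manager is only needed for very large deployments (see above). The command below is a sketch; it assumes the `dlg dim` sub-command and its `-N` option for listing the participating nodes, so check `dlg dim --help` inside the container for the exact flags.

```bash
docker exec -ti daliuge-engine dlg dim -v -N 0.0.0.0 -H 0.0.0.0
```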

### Starting managers using the RESTful interface

This approach is required if the docker engine is running on a remote host.

```bash
curl -X POST http://localhost:9000/managers/master
curl -X POST http://localhost:9000/managers/node
curl -d '{"nodes": ["0.0.0.0"]}' -H "Content-Type: application/json" -X POST http://localhost:9000/managers/dataisland
```

## Accessing the run-time web interface

To access the session interface open a browser and point it to http://localhost:8000. This allows you to monitor the status of deployment sessions.

## Usage

### Stand alone

The DALiuGE runtime and the DALiuGE translator expose a command line interface. However, using that functionality through docker containers is possible but not very practical. In many cases the user does not even have access to the host machine running the docker engine. For completeness, here is an example of how to call the command line interface:

```bash
docker exec -ti daliuge-engine dlg
```
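
Running `dlg` without arguments lists the available sub-commands; asking a specific sub-command for help is a quick way to discover its options (exact flags may vary between versions):

```bash
# Show the options of the Node Manager sub-command
docker exec -ti daliuge-engine dlg nm --help
```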

It is also possible to interact with both of them directly using the RESTful API, but the easiest way is to use EAGLE.

### EAGLE

The DALiuGE run-time is integrated with the EAGLE (https://github.com/ICRAR/EAGLE) graphical workflow editor. EAGLE has an interface to the DALiuGE translator and through that users can also submit physical graph templates for execution. Please refer to the EAGLE documentation for more details.
4 changes: 4 additions & 0 deletions daliuge-runtime/build_base.sh
@@ -0,0 +1,4 @@
#!/bin/bash

# Go!
docker build -t icrar/daliuge-base:latest -f docker/base/Dockerfile .
4 changes: 4 additions & 0 deletions daliuge-runtime/build_engine.sh
@@ -0,0 +1,4 @@
#!/bin/bash

# Go!
docker build --no-cache -t icrar/daliuge-engine:latest -f docker/exec-engine/Dockerfile .
9 changes: 0 additions & 9 deletions daliuge-runtime/docker/README.md

This file was deleted.

53 changes: 30 additions & 23 deletions daliuge-runtime/docker/base/Dockerfile
@@ -1,26 +1,33 @@
# We need centos 7 if we want gcc 4.8
FROM centos:7
# we are doing a two-stage build to keep the size of
# the final image low.

# Install all required packages
RUN yum -y update && yum -y install \
boost-devel \
boost-python \
boost-system \
gcc-c++ \
git \
openssh-server \
openssh-clients \
python-devel && \
yum clean all && \
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py" && \
python get-pip.py && \
pip install numpy
# First stage build and cleanup
FROM python:3.8-slim
ARG BUILD_ID
LABEL stage=builder
LABEL build=$BUILD_ID
RUN apt-get update && apt-get install -y gcc
RUN pip install numpy

# Add the dfms user and create its private/public key pair
# Also setup the SSH host keys
RUN ssh-keygen -A && \
adduser --uid 1000 dfms
COPY / /daliuge

COPY dfms_docker.pem.pub /home/dfms/.ssh/authorized_keys
RUN chmod 755 /home/dfms/.ssh && \
chown -R dfms.dfms /home/dfms/.ssh
RUN cd /daliuge && \
pip install .


# Second stage build taking what's required from first stage
# FROM alpine
# RUN apk add --update python3 db sqlite
# COPY VERSION /home/ngas/VERSION
# COPY startServer.sh /home/ngas/startServer.sh
# COPY --from=0 /home/ngas/. /home/ngas/.
# COPY --from=0 /usr/bin/. /usr/bin/.
# COPY --from=0 /usr/lib/python3.8/site-packages/. /usr/lib/python3.8/site-packages/.
# COPY --from=0 /NGAS/. /NGAS/.
# RUN sed -i 's/127.0.0.1/0.0.0.0/g' /NGAS/cfg/ngamsServer.conf
# Second stage build taking what's required from first stage
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git
COPY --from=0 /usr/local/lib/python3.8/site-packages/. /usr/local/lib/python3.8/site-packages/.
COPY --from=0 /usr/local/bin/. /usr/local/bin/.
COPY --from=0 /daliuge/. /daliuge/.
16 changes: 0 additions & 16 deletions daliuge-runtime/docker/base/build.sh

This file was deleted.

7 changes: 0 additions & 7 deletions daliuge-runtime/docker/dfms/Dockerfile

This file was deleted.

8 changes: 0 additions & 8 deletions daliuge-runtime/docker/dfms/Dockerfile_incontext

This file was deleted.

14 changes: 0 additions & 14 deletions daliuge-runtime/docker/dfms/build.sh

This file was deleted.

18 changes: 18 additions & 0 deletions daliuge-runtime/docker/exec-engine/Dockerfile
@@ -0,0 +1,18 @@
# We need the base image we build with the other Dockerfile
FROM icrar/daliuge-base:latest

# Get the DFMS sources and install them in the system
RUN git clone https://github.com/ICRAR/daliuge ~/daliuge && \
cd ~/daliuge && \
pip install .

# Second stage build taking what's required from first stage
FROM icrar/daliuge-base:latest
COPY --from=0 /usr/local/lib/python3.8/site-packages/. /usr/local/lib/python3.8/site-packages/.
COPY --from=0 /usr/local/bin/. /usr/local/bin/.
COPY --from=0 /daliuge/. /daliuge/.

EXPOSE 8000
EXPOSE 9000

CMD ["dlg", "daemon", "-vv", "--no-nm"]
17 changes: 17 additions & 0 deletions daliuge-runtime/docker/exec-engine/Dockerfile_incontext
@@ -0,0 +1,17 @@
# We need the base image we build with the other Dockerfile
FROM icrar/daliuge-base:latest

# Get the local DALiuGE sources and install them in the system
COPY / /daliuge
RUN cd ~/daliuge && pip install .

# Second stage build taking what's required from first stage
FROM icrar/daliuge-base:latest
COPY --from=0 /usr/local/lib/python3.8/site-packages/. /usr/local/lib/python3.8/site-packages/.
COPY --from=0 /usr/local/bin/. /usr/local/bin/.
COPY --from=0 /daliuge/. /daliuge/.

EXPOSE 8000
EXPOSE 9000

CMD ["dlg", "daemon", "-vv", "--no-nm"]
1 change: 1 addition & 0 deletions daliuge-runtime/run_engine.sh
@@ -0,0 +1 @@
docker run -ti --name daliuge-engine -p 8000:8000 -p 8001:8001 -p 8002:8002 -p 9000:9000 icrar/daliuge-engine:latest
26 changes: 26 additions & 0 deletions daliuge-translator/README.md
@@ -0,0 +1,26 @@
# Docker containers

We currently build the translator in a single, separate image to enable deployment on machines completely separate from the EAGLE editor and the runtime environment.

## Building the translator image

Just execute the shell script:
```
./build_translator.sh
```

## Starting the DALiuGE Translator Daemon
The *icrar/daliuge-translator:latest* image can be started using the *run_translator.sh* script:

```
./run_translator.sh
```

This will start the image in interactive mode, which means that the logs from the daemon are displayed on the screen; this is useful for debugging sessions. The RESTful interface is mapped to http://localhost:8084 by default and that address can be configured in the EAGLE editor.

## Usage
### Stand alone
The DALiuGE translator is not meant to be used stand-alone, but since it exposes a RESTful interface it can be called using e.g. curl.
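
For example, a plain GET against the mapped port is a quick way to check that the translator service is up (this assumes the default port mapping mentioned above; consult the translator API documentation for the actual translation endpoints and payloads):

```
curl -i http://localhost:8084/
```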

### EAGLE
The DALiuGE translator is integrated with the EAGLE (https://github.com/ICRAR/EAGLE) graphical workflow editor. EAGLE has an interface to the DALiuGE translator; users can submit logical graphs to the translator and retrieve the resulting physical graph templates.
1 change: 1 addition & 0 deletions daliuge-translator/build_translator.sh
@@ -0,0 +1 @@
docker build -t icrar/daliuge-translator:latest -f docker/Dockerfile .
5 changes: 0 additions & 5 deletions daliuge-translator/dlg/dropmake/web/d3.v3.min.js

This file was deleted.
