grycap/toscarizer

TOSCARIZER


Quick-Start

Step 1: Install TOSCARIZER

git clone https://github.com/grycap/toscarizer
cd toscarizer
python3 -m pip install . 

Docker image

A Docker image with TOSCARIZER and the AI-SPRINT Design tools is available: registry.gitlab.polimi.it/ai-sprint/toscarizer/toscarizer

You can use it by setting the path of your local application directory, as in this example:

docker run --rm -v local_app_path:/app \
    -ti registry.gitlab.polimi.it/ai-sprint/toscarizer/toscarizer \
    toscarizer tosca --application_dir /app --base

For the docker operation (Step 3), Docker is used to build and push the application images, so it must be enabled inside the container for the operation to work, as in this example:

docker run --rm -v local_app_path:/app \
    -v /var/run/docker.sock:/var/run/docker.sock  \
    -v $HOME/.docker/config.json:/root/.docker/config.json \
    -ti registry.gitlab.polimi.it/ai-sprint/toscarizer/toscarizer \
    toscarizer docker --application_dir /app
    ...

Step 2: Try --help

toscarizer --help

Step 3: Build and push the container images needed by each component of the application

This step requires Docker to be installed. See how to install it here. If any of the images will run on an ARM platform, support for multi-arch Docker builds must also be installed. See how to configure it here.
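One common way to enable multi-arch builds (an assumption based on standard Docker tooling, not the project's documented procedure) is to register the QEMU emulators and create a Buildx builder:

```shell
# Register QEMU emulators so Docker can build images for foreign
# architectures such as ARM (requires running a privileged container)
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and select a Buildx builder that supports multi-platform builds
docker buildx create --name multiarch --use
```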

First you need to log in to the container registry that you will use in the docker operation:

docker login registry.gitlab.polimi.it

Also, if any of the steps uses NativeCloudFunctions (AWS Lambda), you need to log in to the ECR repository as well, using the aws-cli tool (see how to install it here):

aws ecr get-login-password --region [region] | docker login --username AWS --password-stdin XXXXXXXXXXXX.dkr.ecr.[region].amazonaws.com

Then run the docker operation:

toscarizer docker --registry registry.gitlab.polimi.it \
                  --registry_folder /ai-sprint \
                  --application_dir app

Optionally, the --base_image parameter can be set to define a different base image for the generated images. The default value is registry.gitlab.polimi.it/ai-sprint/toscarizer/ai-sprint-base.
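For instance, to build on top of a custom base image instead of the default one (the image name below is a placeholder):

```shell
# registry.example.com/my-base:latest is a hypothetical base image
toscarizer docker --registry registry.gitlab.polimi.it \
                  --registry_folder /ai-sprint \
                  --application_dir app \
                  --base_image registry.example.com/my-base:latest
```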

Furthermore, if any of the steps uses NativeCloudFunctions (AWS Lambda), you also need to set an existing ECR repository URL:

toscarizer docker --registry registry.gitlab.polimi.it \
                  --registry_folder /ai-sprint \
                  --application_dir app \
                  --ecr XXXXXXXXXXXX.dkr.ecr.[region].amazonaws.com/repo_name

Step 4: Generate the corresponding TOSCA YAML files

The tosca command uses the templates/*.yaml files to generate the TOSCA files for the OSCAR clusters.

Generate the TOSCA IM input files for the base case:

toscarizer tosca --application_dir app --base

Generate the TOSCA IM input files for the optimal case:

toscarizer tosca --application_dir app --optimal

In all cases, if some resources are of type PhysicalAlreadyProvisioned or NativeCloudFunction, an extra file is needed with the information required to connect to these resources (IP and SSH auth data, MinIO service info, InfluxDB info, AWS S3 bucket info). It is expected in the app common_config directory, with the name physical_nodes.yaml. See the following example: the first layer corresponds to a PhysicalAlreadyProvisioned cluster that does not yet have OSCAR installed and will be accessed via SSH to install the required software. The second one corresponds to a PhysicalAlreadyProvisioned cluster where OSCAR has already been installed, so it can be accessed directly. Finally, the last layer corresponds to a NativeCloudFunction, where the user must specify the AWS S3 bucket that will be used to trigger the function.

ComputationalLayers: 
   computationalLayer1:
      number: 1
      Resources: 
         resource1:
            name: RaspPi
            fe_node:
               public_ip: 8.8.8.1
               private_ip: 10.0.1.1
               ssh_user: ubuntu
               ssh_key: |
                  -----BEGIN RSA PRIVATE KEY-----
                  ssh11
                  -----END RSA PRIVATE KEY-----
            wns:
               - private_ip: 10.0.1.1
                 ssh_user: ubuntu
                 ssh_key: |
                  -----BEGIN RSA PRIVATE KEY-----
                  ssh12
                  -----END RSA PRIVATE KEY-----
               - private_ip: 10.0.1.2
                 ssh_user: ubuntu
                 ssh_key: |
                  -----BEGIN RSA PRIVATE KEY-----
                  ssh13
                  -----END RSA PRIVATE KEY-----
            influx:
               token: some_token
               endpoint: http://influx.endpoint.com
   computationalLayer2:
      number: 2
      Resources: 
         resource1:
            name: RaspPi
            minio:
               endpoint: https://minio.endpoint.some
               access_key: user
               secret_key: pass
            oscar:
               name: oscar-test
   computationalLayer3:
      number: 3
      Resources: 
         resource1:
            name: AWS-FaaS
            aws:
               bucket: test1
               region: us-east-1
               access_key: ak
               secret_key: sk

In the OSCAR configuration a set of valid DNS records is assigned to the nodes to enable correct and secure external access to the services. A Route53-managed domain is required to make it work. You can set it with the --domain parameter (otherwise the default im.grycap.net will be used).
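For example, assuming you own a Route53-managed domain named mydomain.example.com (a placeholder):

```shell
# mydomain.example.com must be a domain managed in AWS Route53
toscarizer tosca --application_dir app --base --domain mydomain.example.com
```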

There is also an option to prepare the OSCAR clusters to be elastic: use the --elastic option, setting the maximum number of nodes. This option has a limitation: the WNs of the cluster must all have the same features.

toscarizer tosca --application_dir app --optimal --elastic 10

In the elastic case the IM authentication file is needed. The default location is app/im/auth.dat, or you can set another one using the --im_auth option.

The generated TOSCA files will also include the recipes needed to deploy the AI-SPRINT monitoring system. These recipes require setting the central InfluxDB instance URL (--influxdb_url option) and a valid API token (--influxdb_token option).
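For example, with placeholder values for the InfluxDB endpoint and token:

```shell
# Both the URL and the token below are placeholders
toscarizer tosca --application_dir app --optimal \
                 --influxdb_url https://influx.example.com \
                 --influxdb_token some_api_token
```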

Step 5: Deploy the TOSCA YAML files

To deploy the TOSCA files generated for the base case use:

toscarizer deploy --application_dir app --base

To deploy the TOSCA files generated for the optimal case use:

toscarizer deploy --application_dir app --optimal

In both cases it assumes that the IM authentication file is located at app/im/auth.dat. It will use the EGI IM instance (https://im.egi.eu/im/). The auth file must contain not only the InfrastructureManager and the selected cloud provider credentials, but also some AWS credentials (EC2 type) to manage the DNS domain names used in the OSCAR TOSCA template. If you use the default domain value im.grycap.net, you should contact the authors to get a set of valid credentials; otherwise, you have to add EC2 credentials able to manage the specified domain in AWS Route53.
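A minimal sketch of an auth.dat file, assuming an OpenStack provider is selected; all values are placeholders, and the exact fields depend on your provider (see the IM documentation for the full format):

```
type = InfrastructureManager; username = user; password = pass
id = ost; type = OpenStack; host = https://keystone.example.com:5000; username = user; password = pass; tenant = tenant_name
id = ec2; type = EC2; username = aws_access_key; password = aws_secret_key
```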

You can also specify the URL of another IM endpoint, a specific IM authentication file, and even the set of TOSCA files to deploy:

toscarizer deploy --im_url http://someim.com \
                  --im_auth auth.dat \
                  --tosca_file some_path/tosca1.yaml \
                  --tosca_file some_path/tosca2.yaml

During the deployment process the command will show the IDs of the deployed infrastructures. You can use these IDs to track the status of the deployment: access the IM-Dashboard associated with the IM instance you specified in the deploy command (or the default one), or use the IM-Client.

At the end it will print to standard output a YAML-formatted result with, for each YAML file, the generated infrastructure ID or the error message returned. In case of unconfigured infrastructures it will also return the contextualization log with all the Ansible tasks performed on the infrastructure, to help debug the error.

Step 6: Get infrastructure outputs

To get the TOSCA outputs of the infrastructures generated for the base case use:

toscarizer outputs --application_dir app --base

To get the TOSCA outputs of the infrastructures generated for the optimal case use:

toscarizer outputs --application_dir app --optimal

Step 7: Delete the infrastructures

To delete the infrastructures generated for the base case use:

toscarizer delete --application_dir app --base

To delete the infrastructures generated for the optimal case use:

toscarizer delete --application_dir app --optimal