
IBM Workload Automation

Introduction

Workload Automation is a complete, modern solution for batch and real-time workload management. It enables organizations to gain complete visibility and control over attended or unattended workloads. From a single point of control, it supports multiple platforms and provides advanced integration with enterprise applications including ERP, Business Analytics, File Transfer, Big Data, and Cloud applications.

Docker adoption ensures standardization of your workload scheduling environment and provides an easy way to replicate environments quickly across development, build, test, and production, significantly reducing the time it takes to get from build to production. Install your environment using Docker to improve scalability, portability, and efficiency.

This readme file contains the high-level steps to deploy all of the Workload Automation product components. For more detailed information about configuring a specific component, see the component-specific documentation.

To access the Workload Automation 9.5 documentation, see the following link: Workload Automation 9.5

Accessing the container images

You can access the IBM Workload Automation container images from the Entitled Registry:

  1. Access the entitled registry. Log in to MyIBM Container Software Library with the IBMid and password that are associated with the entitled software.

  2. In the Container software library tile, click View library and then click Copy key to copy the entitlement key to the clipboard.

  3. Execute the following command to log in to the IBM Entitled Registry:

    docker login -u cp -p <your_entitled_key> cp.icr.io
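
  Alternatively, to avoid leaving the entitlement key in your shell history, you can pass it on standard input. This is a minimal sketch, assuming the key has been saved in a local file named entitlement.key (a hypothetical file name):

    cat entitlement.key | docker login -u cp --password-stdin cp.icr.io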
    

The images are as follows:

  • cp.icr.io/cp/ibm-workload-automation-agent-dynamic:10.2.0.01.20231201
  • cp.icr.io/cp/ibm-workload-automation-server:10.2.0.01.20231201
  • cp.icr.io/cp/ibm-workload-automation-console:10.2.0.01.20231201
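
For example, after logging in you can pull one of the images directly; adjust the image name and tag to the component and version you need:

docker pull cp.icr.io/cp/ibm-workload-automation-server:10.2.0.01.20231201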

Other supported tags

  • 10.2.0.01.20231201
  • 10.2.0.00.20230728
  • 10.1.0.04.20231201
  • 10.1.0.03.20230511-amd64
  • 10.1.0.02.20230301
  • 10.1.0.01.20221130
  • 10.1.0.00.20220722
  • 10.1.0.00.20220512
  • 10.1.0.00.20220304
  • 9.5.0.06.20230324
  • 9.5.0.06.20221216
  • 9.5.0.06.20220617
  • 9.5.0.05.20211217

Getting Started

If you want to start the containers via Docker Compose, use the following command to clone the current repository:

git clone https://github.com/WorkloadAutomation/ibm-workload-automation-docker-compose.git

If you do not have git installed in your environment, download the ZIP file from the main page of the repository:

Click on "Code" and select "Download ZIP"

If you want to customize the installation parameters, modify the docker-compose.yml file.

Accept the product licenses by setting the LICENSE parameter to "accept" in the wa.env file located in the container package, as follows:

LICENSE=accept
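
For example, a quick way to set and verify the value from the command line (a minimal sketch using GNU sed, assuming wa.env is in the current directory):

sed -i 's/^LICENSE=.*/LICENSE=accept/' wa.env
grep ^LICENSE wa.env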

In the directory where the docker-compose.yml file is located, you can start the containers by running the following command:

docker-compose up -d

To verify that the containers are started, run the following command:

docker ps 

You can optionally check the container logs using the following command:

docker-compose logs -f <container_name>

Where <container_name> is one of the following: wa-server, wa-console, or wa-agent.
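
For example, to check the overall status of the Compose project and follow the server logs (a minimal sketch):

docker-compose ps
docker-compose logs -f wa-server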

Notes

If your server component uses a time zone different from the default one, then to avoid problems with the FINAL job stream you must update the MAKEPLAN job, specifying the timezone parameter and value within the DOCOMMAND. For example, if you are using the America/Los_Angeles time zone, specify it as follows:

$JOBS

WA_WA-SERVER_XA#MAKEPLAN
DOCOMMAND "TODAY_DATE=`${UNISONHOME}/bin/datecalc today pic YYYYMMDD`; ${UNISONHOME}/MakePlan -to `${UNISONHOME}/bin/datecalc ${TODAY_DATE}070
0 + 1 day + 2 hours pic MM/DD/YYYY^HHTT` timezone America/Los_Angeles"
STREAMLOGON wauser
DESCRIPTION "Added by composer."
TASKTYPE OTHER
SUCCOUTPUTCOND CONDSUCC "(RC=0) OR (RC=4)"
RECOVERY STOP
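
If you need to apply an updated MAKEPLAN definition like the one above, one possible approach is to load it with the composer command line from inside the server container. This is only a sketch: it assumes the definition has been saved locally as makeplan.txt (a hypothetical file name), that <TWS_HOME> is replaced with the path of the tws_env.sh environment script inside the container, and that the last two commands are run in the shell opened by docker exec:

docker cp makeplan.txt wa-server:/tmp/makeplan.txt
docker exec -it -u wauser wa-server /bin/bash
. <TWS_HOME>/tws_env.sh
composer replace /tmp/makeplan.txt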

Custom certificates enhancements

To secure communication between the components, you must install custom certificates, which you generate yourself, in each container.

Generating your own Custom Certificates

To generate the custom certificates on a Windows or UNIX workstation, run the following commands:

openssl genrsa -out ./ca.key 4096
openssl req -x509 -new -nodes -key ./ca.key -subj "/CN=WA_ROOT_CA" -days 3650 -out ./ca.crt -config <openssl_dir>/openssl.cnf
openssl genrsa -des3 -passout pass:<SSL_PASSWORD> -out ./tls.key 4096
openssl req -new -key ./tls.key -passin pass:<SSL_PASSWORD> -out ./tls.csr -config <openssl_dir>/openssl.cnf -subj "/C=<C>/ST=<ST>/L=<L>/O=Global Security/OU=<OU> Department/CN=<COMMON_NAME>"
openssl x509 -req -in ./tls.csr -days 3650 -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out ./tls.crt

After you run the above commands, the following files are created:

  • ca.crt
  • ca.key
  • ca.srl
  • tls.crt
  • tls.csr
  • tls.key

Store these files in a dedicated folder, which you will mount as a volume in your containers.
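
Optionally, before mounting the folder, you can verify that the server certificate chains back to the CA and then copy all the generated files into the dedicated folder (a minimal sketch, using certs as a hypothetical folder name):

openssl verify -CAfile ./ca.crt ./tls.crt
mkdir -p ./certs
cp ./ca.crt ./ca.key ./ca.srl ./tls.crt ./tls.csr ./tls.key ./certs/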

Perform the following changes in the docker-compose.yml file for each component:

  • In the environment section, set the SSL_PASSWORD parameter to the password of the truststore in which the component's certificates are stored.
  • In the volumes section, specify the path to the custom certificates on your workstation.

Upgrade scenario

When upgrading, perform the steps documented above, then set the SSL_KEY_FOLDER variable in the environment section of all the components. The value of this variable is the path to the existing folder on each component that contains the default certificates carried over from the previous release:

SSL_KEY_FOLDER=<path_to_folder_containing_your_default_certificates>

Hybrid scenario

You might want to deploy the server component on the cloud and the console component on-premises. In this case, you must import the custom certificates available in the cloud component into the component deployed on-premises. For example, if the console is on-premises and the server is deployed in a container, import the server certificates into the console truststore as follows:

${WAUI_DIR}/java/jre/bin/keytool -importcert -keystore ${WAUI_DIR}/usr/servers/dwcServer/resources/security/TWSServerTrustFile.jks -storepass <SSL_PASSWORD> -storetype jks -file <ca_server_file> -alias wa_ca_server -noprompt
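
After the import, you can optionally confirm that the CA certificate is present in the truststore by listing the alias (a minimal sketch reusing the same keystore path and password):

${WAUI_DIR}/java/jre/bin/keytool -list -keystore ${WAUI_DIR}/usr/servers/dwcServer/resources/security/TWSServerTrustFile.jks -storepass <SSL_PASSWORD> -storetype jks -alias wa_ca_server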

Supported Docker versions

This image is officially supported on Docker version 19.x or later.

Support for versions earlier than 19.x is provided on a best-effort basis.

Please see the Docker installation documentation for details on how to upgrade your Docker daemon.
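
For example, to check the Docker version installed on your workstation:

docker --version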

Limitations

All product files are owned by the wauser user, so the product runs as wauser and not as root. Do not log in as root to start processes or run commands such as JnextPlan, because doing so can cause issues.

Limited to amd64 and Linux on Z platforms.

Additional Information

For additional information about using IBM Workload Automation, see the online documentation. For technical issues, search for Workload Scheduler or Workload Automation on StackOverflow.

License

The Dockerfile and associated scripts are licensed under the Apache License 2.0. IBM Workload Automation is licensed under the IBM International Program License Agreement. The license for IBM Workload Automation can be found online. Note that this license does not permit further distribution.