This demo shows how to use the OptaPlanner Operator to run OptaPlanner workloads on OpenShift.
Please note that the OptaPlanner Operator used in this demo is experimental. As such, it provides no guarantees in terms of maturity and backward compatibility.
Use an existing OpenShift cluster or try any of the options that best fit your needs.
Warning: The Developer Sandbox for Red Hat OpenShift does not allow installing new operators, thus it is not suitable for running this demo.
Warning: If you pick Red Hat OpenShift Local, increase the available memory to at least 16384 MiB by running crc config set memory 16384 before you start the local cluster.
Next, install the OpenShift CLI (oc) to be able to interact with the OpenShift cluster.
Log in as a user with the cluster-admin role using the oc login command.
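For illustration, a token-based login typically looks like the following; both values are placeholders, and you can copy the exact command from the web console's "Copy login command" action:

oc login --token=<API token> --server=https://api.<cluster domain>:6443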
To set up the demo, run ./demo.sh setup [path to the OptaPlanner distribution].
The script installs the KEDA, ArtemisCloud, and OptaPlanner operators and creates a single Artemis AMQ broker in the OpenShift cluster. It also prints information about the Artemis AMQ broker required in the next steps. You can skip the following instructions and jump right into creating the school-timetabling solver.
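For example, if the OptaPlanner distribution is unpacked in your home directory (the path below is hypothetical):

./demo.sh setup ~/optaplanner-dist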
This section describes a manual setup as an alternative to Setup and deploy automatically by using the demo.sh script.
The OptaPlanner Operator uses ArtemisCloud to create ActiveMQ queues; as the first step, follow the documentation to install the ArtemisCloud operator in your OpenShift instance.
Note: The ArtemisCloud operator by default watches for ActiveMQ broker and queue resources in the namespace it has been installed to.
After the ArtemisCloud operator has been deployed, create a new broker. The ArtemisCloud distribution comes with several activemqartemis example resources you can use as a starting point.
This demo uses the AMQP protocol to send and receive messages.
To enable the AMQP protocol on the broker, add the following acceptor definition to the activemqartemis resource spec:
...
spec:
  acceptors:
    - name: amqp
      connectionsAllowed: 100
      port: 5672
      protocols: amqp
...
This acceptor allows up to 100 concurrent connections via the AMQP protocol.
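Put together, a minimal broker resource with this acceptor might look as follows. This is only a sketch assuming a single-instance deployment and the ex-aao name used later in this demo; verify the apiVersion against your ArtemisCloud operator version:

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 1
  acceptors:
    - name: amqp
      connectionsAllowed: 100
      port: 5672
      protocols: amqp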
KEDA enables dynamic scaling of solver deployments based on the number of incoming requests.
Follow the documentation to install the KEDA operator.
Follow the README to install the OptaPlanner operator.
The Demo App also relies upon the quarkus-openshift extension to deploy to OpenShift. The extension also picks up additional resources defined in the src/main/kubernetes/openshift.yml file; in this case, there is a template that creates the PostgreSQL database to store input problems and solutions.
- change directory to demo-app
- change the current project to demo by running oc project demo
- deploy the demo-app via a Maven command:

mvn clean package -Dopenshift -Dquarkus.openshift.namespace=demo -Dquarkus.openshift.env.vars.amqp-host=<ActiveMQ broker host>
The -Dquarkus.openshift.env.vars.amqp-host property results in an environment variable AMQP_HOST in the running container, which Quarkus converts to the amqp-host property, effectively overriding the amqp-host=localhost in the application.properties file.
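This follows the standard MicroProfile Config mapping that Quarkus implements: the property name is upper-cased and non-alphanumeric characters become underscores. Inside the container, the override therefore arrives as an environment variable like the following (the host value assumes the broker created earlier in this demo):

AMQP_HOST=ex-aao-amqp-0-svc.demo.svc.cluster.local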
Tip: If you encounter the Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed exception due to a self-signed certificate during the build, add the -Dquarkus.kubernetes-client.trust-certs=true property.
To find out the ActiveMQ broker host:
- run oc get service
- find the service bound to the port 5672
- the ActiveMQ broker host is a compound of <service name>.<project>.svc.cluster.local, e.g. ex-aao-amqp-0-svc.demo.svc.cluster.local
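A quick way to spot the right service is to filter the service list by the acceptor port:

# list services and keep only the one exposing the AMQP acceptor port
oc get service | grep 5672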
Now that all the other pieces of the puzzle are in place, it is time to get the school-timetabling service running.
In order for the OptaPlanner Operator to create a deployment of the solver project, you need to push it to a container image registry accessible from your OpenShift instance.
Quarkus comes in handy again, this time with one of the Quarkus container image extensions, which builds a container image locally and pushes it to a container image registry.
Build and push the School Timetabling container image to a registry of your choice:
- change directory to school-timetabling
- run:

mvn clean package -Dopenshift -Dquarkus.container-image.group=<image group> -Dquarkus.container-image.registry=<container registry> -Dquarkus.container-image.username=<container registry username> -Dquarkus.container-image.password=<container registry password>
The container registry in the command above is a repository used to store and access container images (e.g. docker.io), and the image group is an organization or a personal account in that registry.
Tip: You can use quay.io as a container image registry.
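Following the tip, a concrete invocation that pushes to quay.io under an organization named example (matching the image referenced by the Solver resource below; the credentials are placeholders) might look like this:

mvn clean package -Dopenshift \
  -Dquarkus.container-image.registry=quay.io \
  -Dquarkus.container-image.group=example \
  -Dquarkus.container-image.username=<quay.io username> \
  -Dquarkus.container-image.password=<quay.io password>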
The Solver custom resource describes the problem to solve on OpenShift and the infrastructure it requires. In this case, the Solver custom resource might look as follows:
apiVersion: org.optaplanner.solver/v1alpha1
kind: Solver
metadata:
  name: school-timetabling
spec:
  amqBroker:
    host: ex-aao-amqp-0-svc.demo.svc.cluster.local
    port: 5672
    managementHost: ex-aao-hdls-svc.demo.svc.cluster.local
    usernameSecretRef:
      key: AMQ_USER
      name: ex-aao-credentials-secret
    passwordSecretRef:
      key: AMQ_PASSWORD
      name: ex-aao-credentials-secret
  template:
    spec:
      containers:
        - name: school-timetabling
          image: quay.io/example/school-timetabling:latest
  scaling:
    dynamic: true
    replicas: 3
- line 4 - the solver name
- lines 7 and 8 - the ActiveMQ broker host and port accepting AMQP connections
- line 9 - the ActiveMQ broker host providing the management interface
- lines 10 to 15 - a reference to a secret containing a username and password to access the broker
- lines 16 to 20 - the school-timetabling container that will run from the image built and pushed to a registry of your choice
- line 22 - enables dynamic scaling via KEDA
- line 23 - the maximum number of running school-timetabling pods; if dynamic scaling is disabled, this parameter defines a fixed number of pods
To find out the ActiveMQ broker management host:
- run oc get service
- find the service bound to the port 8161
- the ActiveMQ broker management host is a compound of <service name>.<project>.svc.cluster.local, e.g. ex-aao-hdls-svc.demo.svc.cluster.local
The ActiveMQ broker username and password are stored in a secret named <broker resource name>-credentials-secret. Run oc get secret to see the available secrets in the project.
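To read the actual values, you can decode the secret's data fields; the secret name below assumes the ex-aao broker used throughout this demo:

# decode the broker credentials from the secret
oc get secret ex-aao-credentials-secret -o jsonpath='{.data.AMQ_USER}' | base64 -d
oc get secret ex-aao-credentials-secret -o jsonpath='{.data.AMQ_PASSWORD}' | base64 -d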
Create the Solver resource via oc apply -f <file>.
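For example, assuming the resource above was saved to solver.yml (a hypothetical file name):

# create the Solver resource and verify it was accepted
oc apply -f solver.yml
oc get solver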
To see which ActiveMQ queues there are in the demo project, run oc get activemqartemisaddress:
$ oc get activemqartemisaddress
NAME                          AGE
school-timetabling-problem    4s
school-timetabling-solution   4s
Both the school-timetabling-problem and school-timetabling-solution queues have been created.
Check the active pods in the demo project by running the oc get pods command:
$ oc get pod
NAME                                                   READY   STATUS
activemq-artemis-controller-manager-569cdd7f7c-xrfcx   1/1     Running
demo-app-3-t4pkp                                       1/1     Running
ex-aao-ss-0                                            1/1     Running
postgresql-school-timetabling-1-kwdr5                  1/1     Running
There are no running school-timetabling pods, as no request for solving has been submitted yet.
- find out the Demo App address by running oc get route; see the HOST/PORT column of its output (or use the one-liner shown after this list)
- open the address in the browser
- change the number of lessons, if needed, and click the Create & send button
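If you prefer to script this step, the route host can be read directly; the route name demo-app is an assumption based on the deployment name in this demo:

# print only the route's host name (route name is assumed)
oc get route demo-app -o jsonpath='{.spec.host}'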
Check the active pods again:
$ oc get pod
NAME                                                   READY   STATUS
activemq-artemis-controller-manager-569cdd7f7c-xrfcx   1/1     Running
demo-app-3-t4pkp                                       1/1     Running
ex-aao-ss-0                                            1/1     Running
postgresql-school-timetabling-1-kwdr5                  1/1     Running
school-timetabling-cb57fc6bd-hvmc6                     0/1     ContainerCreating
school-timetabling-cb57fc6bd-lhn9d                     0/1     ContainerCreating
school-timetabling-cb57fc6bd-tnlw8                     1/1     Running
After submitting four datasets for solving, there is one running pod and two others starting, as the maximum number of replicas is three.
To run this demo locally without OpenShift or any Kubernetes cluster:
- start the PostgreSQL database and a Kafka broker by running docker-compose up
- run the demo-app by running mvn quarkus:dev in the demo-app directory
- run the school-timetabling by running mvn quarkus:dev in the school-timetabling directory