An example Red Hat JBoss EAP 7 application with a MySQL database. For more information about using this template, see https://github.com/jboss-container-images/jboss-eap-7-openshift-image/blob/eap72/README.adoc.
Templates allow you to define parameters that take on a value. That value is then substituted wherever the parameter is referenced. References can be defined in any text field in the objects list field. Refer to the OpenShift documentation for more information.
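As a sketch, a parameter definition and a reference to it look like the following template fragment (the parameter name `APPLICATION_NAME` and the object shown are illustrative, not taken from this template's source):

```yaml
# Illustrative template fragment: a parameter and a reference to it.
parameters:
- name: APPLICATION_NAME        # hypothetical parameter name
  description: The name for the application.
  value: eap-app                # default used when no value is supplied
objects:
- kind: Service
  apiVersion: v1
  metadata:
    name: ${APPLICATION_NAME}-svc   # ${...} is substituted at processing time
```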
| Variable name | Image Environment Variable | Description | Example value | Required |
|---|---|---|---|---|
| | — | The name for the application. | eap-app | True |
| | — | Custom hostname for https service route. Leave blank for default hostname, e.g.: secure-<application-name>-<project>.<default-domain-suffix> | — | False |
| | — | Git source URI for application | | True |
| | — | Git branch/tag reference | 1.3 | False |
| | — | Path within Git project to build; empty for root project directory. | todolist/todolist-jdbc | False |
| | | Database JNDI name used by application to resolve the datasource, e.g. java:/jboss/datasources/mysql | java:jboss/datasources/TodoListDS | False |
| | | Database name | root | True |
| | | Queue names, separated by commas. These queues will be automatically created when the broker starts. Also, they will be made accessible as JNDI resources in EAP. Note that all queues used by the application must be specified here in order to be created automatically on the remote AMQ broker. | | False |
| | | Topic names, separated by commas. These topics will be automatically created when the broker starts. Also, they will be made accessible as JNDI resources in EAP. Note that all topics used by the application must be specified here in order to be created automatically on the remote AMQ broker. | | False |
| | — | The name of the secret containing the keystore file | eap7-app-secret | True |
| | | The name of the keystore file within the secret | keystore.jks | False |
| | | The type of the keystore file (JKS or JCEKS) | | False |
| | | The name associated with the server certificate | | False |
| | | The password for the keystore and certificate | | False |
| | | Sets xa-pool/min-pool-size for the configured datasource. | | False |
| | | Sets xa-pool/max-pool-size for the configured datasource. | | False |
| | | Sets transaction-isolation for the configured datasource. | | False |
| | | Sets how the table names are stored and compared. | | False |
| | | The maximum permitted number of simultaneous client connections. | | False |
| | | The minimum length of the word to be included in a FULLTEXT index. | | False |
| | | The maximum length of the word to be included in a FULLTEXT index. | | False |
| | | Controls the innodb_use_native_aio setting value if the native AIO is broken. | | False |
| | | AMQ cluster admin password | | True |
| | | Database user name | | True |
| | | Database user password | | True |
| | — | GitHub trigger secret | secret101 | True |
| | — | Generic build trigger secret | secret101 | True |
| | — | Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you have installed the ImageStreams in a different namespace/project. | openshift | True |
| | | The name of the secret containing the keystore file | eap7-app-secret | False |
| | | The name of the keystore file within the secret | jgroups.jceks | False |
| | | The name associated with the server certificate | | False |
| | | The password for the keystore and certificate | | False |
| | | JGroups cluster password | | True |
| | | Controls whether exploded deployment content should be automatically deployed | false | False |
| | — | Maven mirror to use for S2I builds | — | False |
| | — | Maven additional arguments to use for S2I builds | — | False |
| | — | List of directories from which archives will be copied into the deployment folder. If unspecified, all archives in /target will be copied. | — | False |
| | — | The tag to use for the "mysql" image stream. Typically, this aligns with the major.minor version of MySQL. | 5.7 | True |
| | — | Container memory limit | 1Gi | False |
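Instantiating the template with non-default parameter values typically looks like the following sketch. The template file name and parameter names here are illustrative placeholders, since the actual parameter names are not listed in the table above:

```shell
# Illustrative only: the template file name and parameter names are placeholders.
oc process -f eap72-mysql-s2i.json \
  -p APPLICATION_NAME=eap-app \
  -p SOURCE_REPOSITORY_URL=https://github.com/example/todolist.git \
  | oc create -f -
```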
The CLI supports various object types. A list of these object types, as well as their abbreviations, can be found in the OpenShift documentation.
A service is an abstraction which defines a logical set of pods and a policy by which to access them. Refer to the container-engine documentation for more information.
| Service | Port | Name | Description |
|---|---|---|---|
| | 8080 | — | The web server's http port. |
| | 8443 | — | The web server's https port. |
| | 3306 | — | The database server's port. |
| | 8888 | ping | The JGroups ping port for clustering. |
A route is a way to expose a service by giving it an externally-reachable hostname such as www.example.com. A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity from external clients to your applications. Each route consists of a route name, service selector, and (optionally) security configuration. Refer to the OpenShift documentation for more information.

| Service | Security | Hostname |
|---|---|---|
| | TLS passthrough | |
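A passthrough route for the secure service might look like the following sketch (the route name, service name, and hostname are illustrative):

```yaml
# Illustrative passthrough route; names and host are placeholders.
kind: Route
apiVersion: v1
metadata:
  name: secure-eap-app               # hypothetical route name
spec:
  host: secure-eap-app.example.com   # omit to get the default hostname
  to:
    kind: Service
    name: secure-eap-app             # hypothetical service exposing port 8443
  tls:
    termination: passthrough         # TLS is terminated by EAP, not the router
```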
A buildConfig describes a single build definition and a set of triggers for when a new build should be created. A buildConfig is a REST object, which can be used in a POST to the API server to create a new instance. Refer to the OpenShift documentation for more information.

| S2I image | link | Build output | BuildTriggers and Settings |
|---|---|---|---|
| jboss-eap72-openshift:1.0 | | | GitHub, Generic, ImageChange, ConfigChange |
A deployment in OpenShift is a replication controller based on a user-defined template called a deployment configuration. Deployments are created manually or in response to triggered events. Refer to the OpenShift documentation for more information.
A trigger drives the creation of new deployments in response to events, both inside and outside OpenShift. Refer to the OpenShift documentation for more information.
| Deployment | Triggers |
|---|---|
| | ImageChange |
| | ImageChange |
A replication controller ensures that a specified number of pod "replicas" are running at any one time. If there are too many, the replication controller kills some pods. If there are too few, it starts more. Refer to the container-engine documentation for more information.
| Deployment | Replicas |
|---|---|
| | 1 |
| | 1 |
The EAP and MySQL deployments are checked with the following readiness probe commands, respectively:

```
/bin/bash -c /opt/eap/bin/readinessProbe.sh
/bin/sh -i -c MYSQL_PWD="$MYSQL_PASSWORD" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE -e 'SELECT 1'
```
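In a DeploymentConfig, such a probe command is wired into the container spec roughly as follows (a sketch; the container name is illustrative):

```yaml
# Sketch of how an exec readiness probe appears in a container spec.
spec:
  containers:
  - name: eap-app               # hypothetical container name
    readinessProbe:
      exec:
        command:
        - /bin/bash
        - -c
        - /opt/eap/bin/readinessProbe.sh
```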
| Deployments | Name | Port | Protocol |
|---|---|---|---|
| | jolokia | 8778 | |
| | http | 8080 | |
| | https | 8443 | |
| | ping | 8888 | |
| | — | 3306 | |
| Deployment | Variable name | Description | Example value |
|---|---|---|---|
| | | — | |
| | | Database JNDI name used by application to resolve the datasource, e.g. java:/jboss/datasources/mysql | |
| | | Database user name | |
| | | Database user password | |
| | | Database name | |
| | | — | |
| | | Sets xa-pool/min-pool-size for the configured datasource. | |
| | | Sets xa-pool/max-pool-size for the configured datasource. | |
| | | Sets transaction-isolation for the configured datasource. | |
| | | — | dns.DNS_PING |
| | | — | |
| | | — | 8888 |
| | | The name of the keystore file within the secret | |
| | | The name of the keystore file within the secret | |
| | | The name of the keystore file within the secret | |
| | | The name associated with the server certificate | |
| | | The password for the keystore and certificate | |
| | | AMQ cluster admin password | |
| | | Queue names, separated by commas. These queues will be automatically created when the broker starts. Also, they will be made accessible as JNDI resources in EAP. Note that all queues used by the application must be specified here in order to be created automatically on the remote AMQ broker. | |
| | | Topic names, separated by commas. These topics will be automatically created when the broker starts. Also, they will be made accessible as JNDI resources in EAP. Note that all topics used by the application must be specified here in order to be created automatically on the remote AMQ broker. | |
| | | The name of the secret containing the keystore file | |
| | | The name of the keystore file within the secret | |
| | | The name of the keystore file within the secret | |
| | | The name associated with the server certificate | |
| | | The password for the keystore and certificate | |
| | | JGroups cluster password | |
| | | Controls whether exploded deployment content should be automatically deployed | |
| | | — | |
| | | — | |
| | | — | |
| | | — | |
| | | — | |
| | | Sets how the table names are stored and compared. | |
| | | The maximum permitted number of simultaneous client connections. | |
| | | The minimum length of the word to be included in a FULLTEXT index. | |
| | | The maximum length of the word to be included in a FULLTEXT index. | |
| | | Controls the innodb_use_native_aio setting value if the native AIO is broken. | |
Clustering in OpenShift EAP is achieved through one of two discovery mechanisms: KUBE_PING or DNS_PING. This is done by configuring the JGroups protocol stack in standalone-openshift.xml with any of the following mechanisms: <kubernetes.KUBE_PING>, <dns.DNS_PING>, <openshift.KUBE_PING/> or <openshift.DNS_PING/>. The templates are configured to use DNS_PING; however, KUBE_PING is the default used by the image.

The discovery mechanism used is specified by the JGROUPS_PING_PROTOCOL environment variable, which can be set to openshift.DNS_PING, kubernetes.KUBE_PING, dns.DNS_PING or openshift.KUBE_PING. KUBE_PING is the default used by the image if no value is specified for JGROUPS_PING_PROTOCOL, for compatibility with previous releases.

WARN: openshift.DNS_PING and openshift.KUBE_PING are deprecated and may be removed in a future release.
For DNS_PING to work, the following steps must be taken:

- The OPENSHIFT_DNS_PING_SERVICE_NAME environment variable must be set to the name of the ping service for the cluster (see table above). If not set, the server will act as if it is a single-node cluster (a "cluster of one").
- The OPENSHIFT_DNS_PING_SERVICE_PORT environment variable should be set to the port number on which the ping service is exposed (see table above). The DNS_PING protocol will attempt to discern the port from the SRV records if it can; otherwise it will default to 8888.
- A ping service which exposes the ping port must be defined. This service should be "headless" (ClusterIP=None) and must have the following:
  - The port must be named for port discovery to work.
  - It must be annotated with service.alpha.kubernetes.io/tolerate-unready-endpoints set to "true". Omitting this annotation will result in each node forming its own "cluster of one" during startup, then merging its cluster into the other nodes' clusters after startup (as the other nodes are not detected until after they have started).
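The environment variables described in the steps above would be set on the application's container spec roughly as in this sketch (the container name is illustrative; the variable names and the ping service name are taken from this document):

```yaml
# Sketch: DNS_PING discovery configured via container environment variables.
spec:
  containers:
  - name: eap-app                              # hypothetical container name
    env:
    - name: JGROUPS_PING_PROTOCOL
      value: dns.DNS_PING
    - name: OPENSHIFT_DNS_PING_SERVICE_NAME
      value: eap-app-ping                      # the headless ping service
    - name: OPENSHIFT_DNS_PING_SERVICE_PORT
      value: "8888"
```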
```yaml
kind: Service
apiVersion: v1
spec:
  clusterIP: None
  ports:
  - name: ping
    port: 8888
  selector:
    deploymentConfig: eap-app
metadata:
  name: eap-app-ping
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    description: "The JGroups ping port for clustering."
```
For KUBE_PING to work, the following steps must be taken.

For kubernetes.KUBE_PING:

1. The KUBERNETES_NAMESPACE environment variable must be set (see table above). If not set, the server will act as if it is a single-node cluster (a "cluster of one").
2. The KUBERNETES_LABELS environment variable should be set (see table above). If not set, pods outside of your application (albeit in your namespace) will try to join.
For the legacy openshift.KUBE_PING:

1. The OPENSHIFT_KUBE_PING_NAMESPACE environment variable must be set (see table above). If not set, the server will act as if it is a single-node cluster (a "cluster of one").
2. The OPENSHIFT_KUBE_PING_LABELS environment variable should be set (see table above). If not set, pods outside of your application (albeit in your namespace) will try to join.
For both implementations, authorization must be granted to the service account the pod is running under so that it is allowed to access the Kubernetes REST API. This is done on the command line.

Using the default service account in the myproject namespace:

```
oc policy add-role-to-user view system:serviceaccount:myproject:default -n myproject
```

Using the eap-service-account in the myproject namespace:

```
oc policy add-role-to-user view system:serviceaccount:myproject:eap-service-account -n myproject
```