This repository has been archived by the owner on Jul 6, 2022. It is now read-only.

OpenLiberty/archived-guide-cloud-openshift


Deploying microservices to OpenShift 3

Note
This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.

Explore how to deploy microservices to Red Hat OpenShift 3.

What you’ll learn

You will learn how to deploy two microservices in Open Liberty containers to an OpenShift 3 cluster. To learn how to deploy to an OpenShift 4 cluster, see the Deploying microservices to OpenShift 4 by using Kubernetes Operators guide.

There are different cloud-based solutions for running your Kubernetes workloads. With a cloud-based infrastructure, you can focus on developing your microservices without worrying about low-level infrastructure details for deployment. Using a cloud helps you to easily scale and manage your microservices in a high-availability setup.

Kubernetes is an open source container orchestrator that automates many tasks that are involved in deploying, managing, and scaling containerized applications. If you would like to learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.

OpenShift is a Kubernetes-based platform with added functions. It streamlines the DevOps process by providing an intuitive development pipeline. It also provides integration with multiple tools to make the deployment and management of cloud applications easier. To learn more about the different platforms that Red Hat OpenShift offers, check out their official documentation.

The two microservices you will deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the pod’s name in the HTTP header, making replicas easy to distinguish from each other. The inventory microservice adds the properties from the system microservice to the inventory. This process demonstrates how communication can be established between pods inside a cluster.

Additional prerequisites

Before you begin, the following additional tools need to be installed:

  • Docker: You need a containerization software for building containers. Kubernetes supports various container types, but you will use Docker in this guide. For installation instructions, refer to the official Docker documentation.

  • OpenShift account: To access a Kubernetes cluster, you must sign up for a Red Hat OpenShift Online account. There are two options, Starter and Pro. Use the Starter plan, which includes a free trial of the OpenShift platform with limited resources, making it perfect for individual experimentation. The Pro plan includes more resources and has a monthly fee. To sign up, go to the official website. Keep in mind that the creation time depends on resource availability and might take some time.

  • OpenShift CLI: You need the OpenShift command-line tool oc to interact with your Kubernetes cluster. For installation instructions, refer to the official OpenShift Online documentation.

To verify that the OpenShift CLI is installed correctly, run the following command:

oc version

The output will be similar to:

oc v3.11.0+0cbc58b

Accessing an OpenShift cluster

Before you can deploy your microservices, you must gain access to a cluster on OpenShift.

Creating an OpenShift account automatically grants you access to their multi-tenant OpenShift cluster. After you have access, you are also given access to the online web console. To log in to OpenShift by using the CLI, navigate to the online web console and follow the [username] > Copy Login Command > Display Token > Log in with this token path.

The command looks like the following example:

oc login --token=[your-token] --server=https://api.[region].online-starter.openshift.com:[port]

Create a project by running the following command:

oc new-project [project-name]

Deploying microservices to OpenShift

In this section, you will learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on OpenShift. You will build and containerize the system and inventory microservices, push them to a container registry, and then deploy them to your Kubernetes cluster.

Building and containerizing the microservices

The first step of deploying to Kubernetes is to build your microservices and containerize them.

The starting Java project, which is located in the start directory, is a multi-module Maven project. It is made up of the system and inventory microservices. Each microservice resides in its own directory, start/system or start/inventory. Both of these directories contain a Dockerfile, which is necessary for building the Docker images. See the Containerizing microservices guide if you’re unfamiliar with Dockerfiles.

If you’re familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file to build a Docker image. The projects are set up so that this process is automated as part of a single Maven build.
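The parent pom.xml drives this multi-module build: it lists both services as modules, so a single mvn package builds both .war files. The following is only a minimal sketch of that layout; the coordinates are assumptions, and only the module names come from the directory structure described above:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- Hypothetical coordinates; see the real pom.xml in the start directory -->
    <groupId>io.openliberty.guides</groupId>
    <artifactId>guide-cloud-openshift</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <!-- Each module produces its own .war, which its Dockerfile copies into the image -->
    <modules>
        <module>system</module>
        <module>inventory</module>
    </modules>
</project>
```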

Navigate to the start directory and build these microservices by running the following commands:

cd start
mvn package

Next, run the docker build commands to build container images for your application:

docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

The -t flag in the docker build command labels (tags) the Docker image in the name[:tag] format. The tag of an image describes its specific version. If the optional [:tag] is not specified, the latest tag is used by default.

During the build, you see various Docker messages that describe what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:

docker images

Verify that the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images are listed among them, for example:

REPOSITORY                    TAG
system                        1.0-SNAPSHOT
inventory                     1.0-SNAPSHOT
openliberty/open-liberty      kernel-java8-openj9-ubi

If you don’t see the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images, check the Maven build log for any potential errors.

Pushing the images to OpenShift’s internal registry

To run the microservices on the cluster, you need to push the microservice images into a container image registry. You will use OpenShift’s integrated container image registry called OpenShift Container Registry (OCR). After your images are pushed into the registry, you can use them in the pods you create later in the guide.

First, you must authenticate your Docker client to your OCR. Start by running the login command:

oc registry login

You can store your Docker credentials in a custom external credential store, which is more secure than using a Docker configuration file. If you are using a custom credential store for securing your registry credentials, or if you are unsure where your credentials are stored, use the following command:

echo $(oc whoami -t) | docker login -u developer --password-stdin $(oc registry info)

If you are using the Windows command prompt, which doesn’t support the command substitution that is shown for Mac and Linux, run the following commands instead:

oc whoami
oc whoami -t
oc registry info

Replace the square brackets in the following docker login command with the results from the previous commands:

docker login -u [oc whoami] -p [oc whoami -t] [oc registry info]

The command authenticates your credentials against the internal registry so that you are able to push and pull images. The registry address will be displayed after you run the oc registry login command. It is formatted similar to the following output:

default-route-openshift-image-registry.apps.[region].starter.openshift-online.com

You can also view the registry address by running the following command:

oc registry info

Ensure that you are logged in to OpenShift and the registry, and run the following commands to tag your applications:

docker tag system:1.0-SNAPSHOT $(oc registry info)/$(oc project -q)/system:1.0-SNAPSHOT
docker tag inventory:1.0-SNAPSHOT $(oc registry info)/$(oc project -q)/inventory:1.0-SNAPSHOT

If you are using the Windows command prompt, first run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following docker tag commands with the results from the previous commands:

docker tag system:1.0-SNAPSHOT [oc registry info]/[oc project -q]/system:1.0-SNAPSHOT
docker tag inventory:1.0-SNAPSHOT [oc registry info]/[oc project -q]/inventory:1.0-SNAPSHOT
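Either way, the resulting reference follows a [registry]/[project]/[name]:[tag] layout. For example, with hypothetical values in place of the oc command output:

```shell
# Hypothetical values; substitute the output of `oc registry info` and `oc project -q`.
REGISTRY=default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com
PROJECT=guide

# The fully qualified reference that `docker tag` creates and `docker push` uploads:
echo "$REGISTRY/$PROJECT/system:1.0-SNAPSHOT"
```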

Finally, push your images to the registry:

docker push $(oc registry info)/$(oc project -q)/system:1.0-SNAPSHOT
docker push $(oc registry info)/$(oc project -q)/inventory:1.0-SNAPSHOT

If you are using the Windows command prompt, first run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following docker push commands with the results from the previous commands:

docker push [oc registry info]/[oc project -q]/system:1.0-SNAPSHOT
docker push [oc registry info]/[oc project -q]/inventory:1.0-SNAPSHOT

After you push the images, run the following command to list the images that you pushed to the internal OCR:

oc get imagestream

Verify that the system and inventory images are listed among them, for example:

NAME        IMAGE REPOSITORY                                                                                     TAGS           UPDATED
inventory   default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com/guide/inventory   1.0-SNAPSHOT   3 seconds ago
system      default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com/guide/system      1.0-SNAPSHOT   17 seconds ago

Deploying the microservices

Now that your container images are built, deploy them by using a Kubernetes object configuration file.

Kubernetes objects can be configured in a YAML file that contains a description of all your deployments, services, or any other objects that you want to deploy. All objects can also be deleted from the cluster by using the same YAML file that you used to deploy them. The kubernetes.yaml object configuration file is provided for you. If you are interested in learning more about using and configuring Kubernetes clusters, check out the Deploying microservices to Kubernetes guide.

kubernetes.yaml

link:finish/kubernetes.yaml[role=include]
Update the kubernetes.yaml file in the start directory.

The image field specifies the name and tag of the container image that you want to use for the container. The image address is the OCR address that you logged in to. Update the system and inventory image fields to include your project name.
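The include directive above pulls in the finished kubernetes.yaml file, which is the source of truth. For orientation, a trimmed sketch of what such a configuration typically contains for the system service follows; the resource names match the oc apply output later in this guide, while the port and labels are assumptions:

```yaml
# Sketch only; the provided kubernetes.yaml in the start directory is authoritative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        # Update [region] and [project-name] to match your cluster and project
        image: default-route-openshift-image-registry.apps.[region].starter.openshift-online.com/[project-name]/system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  selector:
    app: system
  ports:
  - port: 9080
```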

Run the following command to deploy the objects as defined in the kubernetes.yaml file:

oc apply -f kubernetes.yaml

You see output similar to the following example:

deployment.apps/system-deployment created
deployment.apps/inventory-deployment created
service/system-service created
service/inventory-service created
route.route.openshift.io/system-route created
route.route.openshift.io/inventory-route created

When the apps are deployed, run the following command to check the status of your pods:

oc get pods

If all the pods are healthy and running, you see output similar to the following example:

NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          15s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          15s

Making requests to the microservices

To access the services and the application, use a route. A route in OpenShift exposes a service at a hostname such as www.your-web-app.com so external users can access the application.

kubernetes.yaml

link:finish/kubernetes.yaml[role=include]

Both the system and inventory routes are configured in the kubernetes.yaml file, and running the oc apply -f kubernetes.yaml command exposed both services.
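A Route object in that file might look like the following sketch. Only the resource names come from the oc apply output earlier in this guide; the rest is an assumption:

```yaml
# Sketch only; see the provided kubernetes.yaml for the real definition.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: system-route
spec:
  to:
    kind: Service
    name: system-service
```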

Your microservices can now be accessed through the hostnames that you can find by running the following command:

oc get routes

They can also be found in the web console by following the Networking > Routes > Location path. Hostnames are in the inventory-route-[project-name].apps.[region].starter.openshift-online.com format. Ensure that you are in your own project, not the default project; the current project is shown in the upper-left corner of the web console.

To access your microservices, point your browser to the following URLs. Substitute the appropriate hostnames for the system and inventory services:

  • http://[system-hostname]/system/properties/

  • http://[inventory-hostname]/inventory/systems

In the first URL, you see a result in JSON format with the system properties of the container JVM. The second URL returns an empty list, which is expected because no system properties are stored in the inventory yet.

Point your browser to the http://[inventory-hostname]/inventory/systems/[system-hostname] URL. When you go to this URL, the system properties that are taken from the system service are automatically stored in the inventory. Revisit the http://[inventory-hostname]/inventory/systems URL and you see a new entry.

Testing the microservices

pom.xml

link:finish/inventory/pom.xml[role=include]

A few tests are included for you to test the basic functions of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further. The default properties that are defined in the pom.xml file are:

Property        Description
system.ip       IP or hostname of the system-service Kubernetes Service
inventory.ip    IP or hostname of the inventory-service Kubernetes Service

Use the following command to run the integration tests against your cluster. Substitute [region] and [project-name] with the appropriate values:

mvn verify \
-Dsystem.ip=system-route-[project-name].apps.[region].starter.openshift-online.com \
-Dinventory.ip=inventory-route-[project-name].apps.[region].starter.openshift-online.com

  • The system.ip parameter is replaced with the appropriate hostname to access your system microservice.

  • The inventory.ip parameter is replaced with the appropriate hostname to access your inventory microservice.

If the tests pass, you see output for each service similar to the following example:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

Tearing down the environment

When you no longer need your deployed microservices, you can delete the Kubernetes deployments, services, and routes by running the following command:

oc delete -f kubernetes.yaml

To delete the pushed images, run the following commands:

oc delete imagestream/inventory
oc delete imagestream/system

Finally, you can delete the project by running the following command:

oc delete project [project-name]

Great work! You’re done!

You just deployed two microservices running in Open Liberty to OpenShift. You also learned how to use oc to deploy your microservices on a Kubernetes cluster.