
Integrate your Liberty application with Elasticsearch stack

In this guide, you integrate your Liberty application with the Elasticsearch stack to enable distributed logging. The Liberty application runs on an Azure Red Hat OpenShift (ARO) 4 cluster. You learn how to:

[!div class="checklist"]

  • Distribute your application logs to hosted Elasticsearch on Microsoft Azure
  • Distribute your application logs to the EFK stack installed on the ARO 4 cluster

Before you begin

In the previous guide, a Java application running in the Open Liberty/WebSphere Liberty runtime was deployed to an ARO 4 cluster. If you haven't completed those steps, start with Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift 4 cluster and return here to continue.

Distribute your application logs to hosted Elasticsearch on Microsoft Azure

Elasticsearch Service on Elastic Cloud is the only hosted Elasticsearch and Kibana offering powered by the creators of Elasticsearch. It's simple to get up and running: scale with a click, configure with a slider. Choose Microsoft Azure as your deployment platform and you're on your way to simple management and powerful customization. Refer to Hosted Elasticsearch on Microsoft Azure to start a free trial.

Create a hosted Elasticsearch service on Microsoft Azure

Follow the instructions below to create a deployment for the hosted Elasticsearch service on Microsoft Azure.

  1. Sign up for a free trial.

  2. Log in to Elastic Cloud using your free trial account.

  3. Click Create deployment.

  4. Name your deployment > Select Azure as the cloud platform > Leave the defaults for other settings or customize them per your needs > Click Create deployment.


  5. Wait until the deployment is created.


  6. Write down the User name, Password, and Cloud ID for later use.

Use Filebeat to retrieve and ship application logs

The application <path-to-repo>/2-simple used in the previous guide is ready to write logs to the messages.log file using the Java Logging API java.util.logging. With JSON-format logging configured, Filebeat can run as a sidecar container to collect and ship logs from the messages.log file to the hosted Elasticsearch service on Microsoft Azure.
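
For Filebeat to parse the log entries, messages.log must be written in JSON. As a minimal sketch, assuming the standard Liberty logging environment variables are used (the sample may configure this elsewhere, for example in bootstrap.properties):

```yaml
# Illustrative container environment that switches messages.log to JSON output.
# WLP_LOGGING_MESSAGE_FORMAT and WLP_LOGGING_MESSAGE_SOURCE are standard Liberty settings.
env:
- name: WLP_LOGGING_MESSAGE_FORMAT
  value: "json"
- name: WLP_LOGGING_MESSAGE_SOURCE
  value: "message"
```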

To configure Filebeat as a sidecar container that retrieves and ships application logs, a number of Kubernetes resource YAML files need to be updated or created.

| File Name | Source Path | Destination Path | Operation | Description |
|-----------|-------------|------------------|-----------|-------------|
| filebeat-svc-account.yaml | | <path-to-repo>/3-integration/elk-logging/hosted-elasticsearch/filebeat-svc-account.yaml | New | A Kubernetes ServiceAccount resource used by the Filebeat container. |
| filebeat-config.yaml | | <path-to-repo>/3-integration/elk-logging/hosted-elasticsearch/filebeat-config.yaml | New | A Kubernetes ConfigMap resource used as the Filebeat configuration file. |
| elastic-cloud-secret.yaml | | <path-to-repo>/3-integration/elk-logging/hosted-elasticsearch/elastic-cloud-secret.yaml | New | A Kubernetes Secret resource with the hosted Elasticsearch service connection credentials, including elastic.cloud.id and elastic.cloud.auth. |
| openlibertyapplication.yaml | <path-to-repo>/2-simple/openlibertyapplication.yaml | <path-to-repo>/3-integration/elk-logging/hosted-elasticsearch/openlibertyapplication.yaml | Updated | Configures Filebeat as a sidecar container. |

For reference, you can find these deployment files in <path-to-repo>/3-integration/elk-logging/hosted-elasticsearch of your local clone.
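
To make the moving parts concrete, here is a minimal, illustrative sketch of what a Filebeat configuration like filebeat-config.yaml can contain. The file in your local clone is the authoritative version; the log path and JSON decoding options below are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      # Assumed location of the Liberty messages.log in the volume shared with the app container
      paths:
        - /logs/messages*.log
      # Decode each log line as JSON and lift its fields to the top level
      json.keys_under_root: true
      json.message_key: message
    # Connection settings, expanded from environment variables that the
    # "elastic-cloud-secret" Secret supplies to the Filebeat container
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
```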

Now you can deploy the sample Liberty application to the ARO 4 cluster with the following steps.

  1. Log in to the OpenShift web console from your browser using the credentials of the Azure AD user.

  2. Log in to the OpenShift CLI with the token for the Azure AD user.

  3. Run the following commands to deploy the application.

    # Change directory to "<path-to-repo>/3-integration/elk-logging/hosted-elasticsearch"
    cd <path-to-repo>/3-integration/elk-logging/hosted-elasticsearch
    
    # Change project to "open-liberty-demo"
    oc project open-liberty-demo
    
    # Create ServiceAccount "filebeat-svc-account"
    oc create -f filebeat-svc-account.yaml
    
    # Grant the service account access to the privileged security context constraints
    oc adm policy add-scc-to-user privileged -n open-liberty-demo -z filebeat-svc-account
    
    # Create ConfigMap "filebeat-config"
    oc create -f filebeat-config.yaml
    
    # Create environment variables which will be passed to secret "elastic-cloud-secret"
    # Note: replace "<Cloud ID>", "<User name>", and "<Password>" with the ones you noted down before
    export ELASTIC_CLOUD_ID=<Cloud ID>
    export ELASTIC_CLOUD_AUTH=<User name>:<Password>
    
    # Create secret "elastic-cloud-secret"
    envsubst < elastic-cloud-secret.yaml | oc create -f -
    
    # Create OpenLibertyApplication "javaee-cafe-elk-hosted-elasticsearch"
    oc create -f openlibertyapplication.yaml
    
    # Check if OpenLibertyApplication instance is created
    oc get openlibertyapplication javaee-cafe-elk-hosted-elasticsearch
    
    # Check if deployment created by Operator is ready
    oc get deployment javaee-cafe-elk-hosted-elasticsearch
    
    # Get host of the route
    HOST=$(oc get route javaee-cafe-elk-hosted-elasticsearch --template='{{ .spec.host }}')
    echo "Route Host: $HOST"

Once the Liberty application is up and running:

  1. Open the output of Route Host in your browser to visit the application home page.
  2. To generate application logs, Create a new coffee and Delete an existing coffee on the application home page.

Visualize your application logs in Kibana

After the application logs are shipped to the Elasticsearch cluster, they can be visualized in the Kibana web console.

  1. Log in to Elastic Cloud.

  2. Find your deployment under Elasticsearch Service and click Kibana to open its web console.

  3. From the top-left of the Kibana home page, click the menu icon to expand the top-level menu items. Click Stack Management > Index Patterns > Create index pattern.


  4. Set filebeat-* as the index pattern. Click Next step.


  5. Select @timestamp as the Time Filter field name > Click Create index pattern.

  6. From the top-left of the Kibana home page, click the menu icon to expand the top-level menu items. Click Discover. Check that the index pattern filebeat-* is selected.

  7. Add host.name, loglevel, and message from Available fields to Selected fields. Discover the application logs in the work area of the page.


Distribute your application logs to the EFK stack installed on the ARO 4 cluster

Another option is to install the EFK (Elasticsearch, Fluentd, and Kibana) stack on the ARO 4 cluster, which aggregates log data from all containers running on the cluster. The steps below describe the process of deploying the EFK stack using the Elasticsearch Operator and the Cluster Logging Operator.

[!NOTE] Elasticsearch is a memory-intensive application. Refer to the section Set up Azure Red Hat OpenShift cluster in the previous guide to learn how to specify an appropriate virtual machine size for the worker nodes when creating the cluster.

Deploy cluster logging

Follow the instructions in these tutorials and then return here to continue.

  1. Log in to the OpenShift web console from your browser using the kubeadmin credentials.
  2. Log in to the OpenShift CLI with the token for kubeadmin.
  3. Install the Elasticsearch Operator by following the steps in Install the Elasticsearch Operator using the CLI.
  4. Install the Cluster Logging Operator by following the steps in Install the Cluster Logging Operator using the CLI.

    [!NOTE] To specify the name of an existing StorageClass for Elasticsearch storage in step Create a Cluster Logging instance, open the ARO web console > Storage > Storage Classes and find a supported storage class name.
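
For illustration, a trimmed sketch of a Cluster Logging instance that pins Elasticsearch storage to a named StorageClass might look like the following; the storage class name, node count, and size are placeholders you should replace with values appropriate for your cluster:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: managed-premium  # replace with a StorageClass from your cluster
        size: 200G
```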

After the newly created Cluster Logging instance is up and running, configure Fluentd to merge the JSON log message bodies emitted by the sample application.

  1. Switch to project openshift-logging:

    oc project openshift-logging
  2. Change the cluster logging instance’s managementState field from Managed to Unmanaged:

    oc edit ClusterLogging instance
  3. Set the environment variable MERGE_JSON_LOG to true:

    oc set env ds/fluentd MERGE_JSON_LOG=true
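
For reference, the command in step 3 is roughly equivalent to adding the following entry to the fluentd DaemonSet's container spec; this is an excerpt sketch of the resulting resource, not a file you need to create:

```yaml
# Excerpt of "ds/fluentd" after running "oc set env ds/fluentd MERGE_JSON_LOG=true"
spec:
  template:
    spec:
      containers:
      - name: fluentd
        env:
        - name: MERGE_JSON_LOG
          value: "true"
```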

Deploy sample application

The application <path-to-repo>/2-simple used in the previous guide is ready to write logs to the messages.log file using the Java Logging API java.util.logging. Because the Open Liberty Operator sets JSON as the console log format and includes message as one of the log sources, the application logs will be parsed by Fluentd and posted to the Elasticsearch cluster.
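
Concretely, JSON console logging in Liberty is driven by environment variables. The following is a hedged sketch of the variables the operator effectively applies to the application container; the variable names are standard Liberty settings, while the exact source list configured by your operator version may differ:

```yaml
# Illustrative container environment for JSON console logging
# (the Open Liberty Operator sets equivalents of these automatically)
env:
- name: WLP_LOGGING_CONSOLE_FORMAT
  value: "json"
- name: WLP_LOGGING_CONSOLE_SOURCE
  value: "message"  # "message" must be included so application logs are shipped
```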

To distribute your application logs to the EFK stack, a number of Kubernetes resource YAML files need to be updated or created.

| File Name | Source Path | Destination Path | Operation | Description |
|-----------|-------------|------------------|-----------|-------------|
| openlibertyapplication.yaml | <path-to-repo>/2-simple/openlibertyapplication.yaml | <path-to-repo>/3-integration/elk-logging/cluster-logging/openlibertyapplication.yaml | Updated | Changed the application name to javaee-cafe-elk-cluster-logging. |

For reference, you can find these deployment files in <path-to-repo>/3-integration/elk-logging/cluster-logging of your local clone.

Now you can deploy the sample Liberty application to the ARO 4 cluster with the following steps.

  1. Log in to the OpenShift web console from your browser using the credentials of the Azure AD user.

  2. Log in to the OpenShift CLI with the token for the Azure AD user.

  3. Run the following commands to deploy the application.

    # Change directory to "<path-to-repo>/3-integration/elk-logging/cluster-logging"
    cd <path-to-repo>/3-integration/elk-logging/cluster-logging
    
    # Change project to "open-liberty-demo"
    oc project open-liberty-demo
    
    # Create OpenLibertyApplication "javaee-cafe-elk-cluster-logging"
    oc create -f openlibertyapplication.yaml
    
    # Check if OpenLibertyApplication instance is created
    oc get openlibertyapplication javaee-cafe-elk-cluster-logging
    
    # Check if deployment created by Operator is ready
    oc get deployment javaee-cafe-elk-cluster-logging
    
    # Get host of the route
    HOST=$(oc get route javaee-cafe-elk-cluster-logging --template='{{ .spec.host }}')
    echo "Route Host: $HOST"

Once the Liberty application is up and running:

  1. Open the output of Route Host in your browser to visit the application home page.
  2. To generate application logs, Create a new coffee and Delete an existing coffee on the application home page.

Visualize your application logs in Kibana (EFK)

After the application logs are shipped to the Elasticsearch cluster, they can be visualized in the Kibana web console.

  1. Log in to the OpenShift web console from your browser using the kubeadmin credentials. Click Monitoring > Logging.

  2. In the newly opened window, click Log in with OpenShift. Log in with kubeadmin if required.

  3. On the Authorize Access page, click Allow selected permissions. Wait until the Kibana web console is displayed.

  4. Open Management > Index Patterns > Select project.* > Click the Refresh field list icon at the top right of the page.


  5. Click Discover. Select the index pattern project.* from the dropdown list.

  6. Add kubernetes.namespace_name, kubernetes.pod_name, loglevel, and message from Available Fields to Selected Fields. Discover the application logs in the work area of the page.


If you want to log in as the Azure AD user to view logs in the Kibana web console, follow the steps above but replace the index pattern project.* with project.open-liberty-demo.<random-guid>.*.

Next steps

In this guide, you learned how to:

[!div class="checklist"]

  • Distribute your application logs to hosted Elasticsearch on Microsoft Azure
  • Distribute your application logs to the EFK stack installed on the ARO 4 cluster

Advance to these guides, which integrate the Liberty application with other Azure services:

[!div class="nextstepaction"] Integrate your Liberty application with Azure managed databases

[!div class="nextstepaction"] Set up your Liberty application in a multi-node stateless cluster with load balancing

[!div class="nextstepaction"] Integrate your Liberty application with Azure Active Directory OpenID Connect

[!div class="nextstepaction"] Integrate your Liberty application with Azure Active Directory Domain Service via Secure LDAP

If you've finished all of the above guides, advance to the complete guide, which incorporates all of the Azure service integrations:

[!div class="nextstepaction"] Integrate your Liberty application with different Azure services

Here are references used in this guide: