8 changes: 4 additions & 4 deletions .bumpversion.cfg
@@ -14,12 +14,12 @@ search = WSVERSION={current_version}
replace = WSVERSION={new_version}

[bumpversion:file:workshop/aws/ec2/terraform.tfvars.template]
search = wsversion = "{current_version}"
replace = wsversion = "{new_version}"
search = default = "{current_version}"
replace = default = "{new_version}"

[bumpversion:file:deprecated/multipass/terraform.tfvars.template]
replace = wsversion = "{new_version}"
search = wsversion = "{current_version}"
search = default = "{current_version}"
replace = default = "{new_version}"

[bumpversion:file:deprecated/multipass/main.tf]
search = default = "{current_version}"
@@ -9,7 +9,7 @@ The diagram below details the architecture of the Spring PetClinic Java applicat

The Spring PetClinic Java application is a simple microservices application that consists of a frontend and several backend services. The frontend service is a Spring Boot application that serves a web interface to interact with the backend services. The backend services are Spring Boot applications that serve RESTful APIs to interact with a MySQL database.

By the end of this workshop, you will have a better understanding of how to enable **Splunk OpenTelemetry automatic discovery and configuration** for your Java-based applications running in Kubernetes.
By the end of this workshop, you will have a better understanding of how to enable **automatic discovery and configuration** for your Java-based applications running in Kubernetes.

![Splunk Otel Architecture](../images/auto-instrumentation-java-diagram.png)

6 changes: 3 additions & 3 deletions content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md
@@ -6,10 +6,10 @@ weight: 2

To get Observability signals (**metrics, traces** and **logs**) into **Splunk Observability Cloud**, the Splunk OpenTelemetry Collector needs to be deployed into the Kubernetes cluster.

For this workshop, we will be using the Splunk OpenTelemetry Collector Helm Chart. First we need to add the Helm chart repository to Helm and update to ensure the latest state:
For this workshop, we will be using the Splunk OpenTelemetry Collector Helm Chart. First, we need to add the Helm chart repository to Helm and update it to ensure we have the latest version:

{{< tabs >}}
{{% tab title="Helm Repo Add" %}}
{{% tab title="Install Helm Chart" %}}

``` bash
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart && helm repo update
@@ -32,7 +32,7 @@ Update Complete. ⎈Happy Helming!⎈
{{% /tab %}}
{{< /tabs >}}

**Splunk Observability Cloud** offers wizards in the UI to walk you through the setup of the OpenTelemetry Collector on Kubernetes, but in the interest of time, we will use the Helm command below. Some additional parameters are set to enable the operator and automatic discovery and configuration.
**Splunk Observability Cloud** offers wizards in the UI to walk you through the setup of the OpenTelemetry Collector on Kubernetes, but in the interest of time, we will use the Helm install command below. Additional parameters are set to enable the operator and automatic discovery and configuration.

* `--set="operator.enabled=true"` - this will install the OpenTelemetry operator, which will be used to handle automatic discovery and configuration.
* `--set="certmanager.enabled=true"` - this will install the required certificate manager for the operator.
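Putting the flags above together, the shape of the eventual install command can be sketched like this (illustrative only — the actual workshop command also sets additional values, such as your realm, access token, and cluster name, which are omitted here):

``` bash
# Sketch: compose the install flags described above into one helm command.
FLAGS=(
  --set="operator.enabled=true"    # installs the OpenTelemetry operator
  --set="certmanager.enabled=true" # installs cert-manager, required by the operator
)
# Print the composed command rather than running it, so you can review it first.
printf 'helm install splunk-otel-collector %s splunk-otel-collector-chart/splunk-otel-collector\n' "${FLAGS[*]}"
```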
@@ -4,8 +4,6 @@ linkTitle: 2. Deploy PetClinic Application
weight: 3
---

#### Deploy the PetClinic Application

The first deployment of our application will use prebuilt containers to give us the base scenario: a regular Java microservices-based application running in Kubernetes that we want to start observing. So let's deploy our application:

{{< tabs >}}
@@ -106,5 +104,3 @@ curl -X GET http://localhost:9999/v2/_catalog

{{% /tab %}}
{{< /tabs >}}
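If you want to inspect the registry response programmatically, the following sketch shows how the catalog JSON can be parsed (this assumes the registry returns a standard Docker Registry HTTP API v2 catalog body; the repository names below are placeholders, not necessarily what your instance returns):

``` python
import json

# Illustrative catalog response, shaped like the output of the curl command above.
sample = '{"repositories": ["spring-petclinic-api-gateway", "spring-petclinic-vets-service"]}'

# Parse the JSON body and list each repository the local registry serves.
catalog = json.loads(sample)
for repo in catalog["repositories"]:
    print(repo)
```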

If this fails then reach out to your Instructor for a replacement instance.
3 changes: 2 additions & 1 deletion content/en/conf24/1-zero-config-k8s/2-preparation/_index.md
@@ -7,6 +7,7 @@ time: 15 minutes
---

The instructor will provide you with the login information for the instance that we will be using during the workshop.

When you first log into your instance, you will be greeted by the Splunk Logo as shown below. If you have any issues connecting to your workshop instance, please reach out to your Instructor.

``` text
@@ -42,7 +43,7 @@ REALM = <e.g. eu0, us1, us2, jp0, au0 etc.>
RUM_TOKEN = <redacted>
HEC_TOKEN = <redacted>
HEC_URL = https://<...>/services/collector/event
INSTANCE = <workshop name>
INSTANCE = <instance_name>
```

{{% /tab %}}
@@ -1,6 +1,6 @@
---
title: Verify the PetClinic Website
linkTitle: 1. Verify the PetClinic Webiste
linkTitle: 1. Verify PetClinic Website
weight: 1
---

@@ -17,4 +17,8 @@ You can validate if the application is running by visiting **http://<IP_ADDRESS>

Make sure the application is working correctly by visiting the **All Owners** **(1)** and **Veterinarians** **(2)** tabs; you should get a list of names in each case.

{{% notice note %}}
As each service needs to start up and synchronize with the database, it may take a few minutes for the application to fully start up.
{{% /notice %}}

![owners](../../images/petclinic-owners.png)
@@ -0,0 +1,5 @@
---
title: 2. Section Break
weight: 2
archetype: chapter
---
@@ -1,6 +1,6 @@
---
title: Verify Kubernetes Cluster metrics
linkTitle: 3. Verify everything is working
linkTitle: 3. Verify Cluster Metrics
weight: 4
time: 10 minutes
---
@@ -1,10 +1,10 @@
---
title: Automatic Discovery and Configuration
linkTitle: 1. Automatic Discovery and Configuration
title: Patching the Deployment
linkTitle: 1. Patching the Deployment
weight: 1
---

To see how automatic discovery and configuration works with a single pod we will patch the `api-gateway`. Once patched, the OpenTelemetry Collector will inject the automatic discovery and configuration library and the Pod will be restarted in order to start sending traces and profiling data. To show what happens when you enable automatic discovery and configuration, let's do a *before and after* of the configuration:
To configure **automatic discovery and configuration**, the deployments need to be patched to add the instrumentation annotation. Once patched, the OpenTelemetry Operator will inject the automatic discovery and configuration library, and the Pods will be restarted so that they start sending traces and profiling data. First, confirm that the `api-gateway` does not have the `splunk-otel-java` image.

{{< tabs >}}
{{% tab title="Describe api-gateway" %}}
@@ -23,25 +23,33 @@ Image: quay.io/phagen/spring-petclinic-api-gateway:0.0.2
{{% /tab %}}
{{< /tabs >}}

This container was pulled from a remote repository `quay.io` and was not built to send traces to **Splunk Observability Cloud**. To enable the Java automatic discovery and configuration on the api-gateway service add the `inject-java` annotation to Kubernetes with the `kubectl patch deployment` command.
Next, enable the Java automatic discovery and configuration for all of the services by adding the annotation to their deployments. The following command will patch all of the deployments, triggering the OpenTelemetry Operator to inject the `splunk-otel-java` image into the Pods:

{{< tabs >}}
{{% tab title="Patch api-gateway" %}}
{{% tab title="Patch all PetClinic services" %}}

``` bash
kubectl patch deployment api-gateway -p '{"spec": {"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}'
kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name | xargs -I % kubectl patch % -p "{\"spec\": {\"template\":{\"metadata\":{\"annotations\":{\"instrumentation.opentelemetry.io/inject-java\":\"default/splunk-otel-collector\"}}}}}"
```

{{% /tab %}}
{{% tab title="Patch Output" %}}

``` text
deployment.apps/config-server patched (no change)
deployment.apps/admin-server patched (no change)
deployment.apps/customers-service patched
deployment.apps/visits-service patched
deployment.apps/discovery-server patched (no change)
deployment.apps/vets-service patched
deployment.apps/api-gateway patched
```

{{% /tab %}}
{{< /tabs >}}

There will be no change for the **config-server**, **discovery-server** and **admin-server** as these have already been patched.
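As a side note, the `-p` argument of the `kubectl patch` command above is just a JSON patch body. A small sketch (illustrative only, not part of the workshop steps) of how that body is constructed, and the annotation the operator looks for:

``` python
import json

# Sketch: the patch body sent by the kubectl patch command above.
# The annotation value has the form "<namespace>/<Instrumentation resource>".
patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "instrumentation.opentelemetry.io/inject-java":
                        "default/splunk-otel-collector"
                }
            }
        }
    }
}
body = json.dumps(patch)
print(body)
```

Because the annotation is set on the Pod template (not on the Deployment's own metadata), applying it changes the Pod spec and causes the Pods to be recreated with the injected instrumentation.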

To check the container image(s) of the `api-gateway` pod again, run the following command:

{{< tabs >}}
@@ -64,32 +72,8 @@ Image: quay.io/phagen/spring-petclinic-api-gateway:0.0.2

A new image has been added to the `api-gateway` which will pull `splunk-otel-java` from `ghcr.io` (if you see two `api-gateway` containers, the original one is probably still terminating, so give it a few seconds).

To patch all the other services in the Spring Petclinic application, run the following command. This will add the `inject-java` annotation to the remaining services. There will be no change for the **config-server**, **discovery-server**, **admin-server** and **api-gateway** as these have already been patched.

{{< tabs >}}
{{% tab title="Patch all Petclinic services" %}}

``` bash
kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name | xargs -I % kubectl patch % -p "{\"spec\": {\"template\":{\"metadata\":{\"annotations\":{\"instrumentation.opentelemetry.io/inject-java\":\"default/splunk-otel-collector\"}}}}}"

```

{{% /tab %}}
{{% tab title="Patch Output" %}}

``` text
deployment.apps/config-server patched (no change)
deployment.apps/admin-server patched (no change)
deployment.apps/customers-service patched
deployment.apps/visits-service patched
deployment.apps/discovery-server patched (no change)
deployment.apps/vets-service patched
deployment.apps/api-gateway patched (no change)
```

{{% /tab %}}
{{< /tabs >}}

Navigate back to the Kubernetes Navigator in **Splunk Observability Cloud**. After a couple of minutes you will see that the Pods are being restarted by the operator and the automatic discovery and configuration container will be added. This will look similar to the screenshot below:

![restart](../../images/k8s-navigator-restarted-pods.png)

Wait for the Pods to turn green in the Kubernetes Navigator, then go to **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the newly instrumented services.
13 changes: 13 additions & 0 deletions content/en/conf24/1-zero-config-k8s/4-apm/2-apm-data.md
@@ -0,0 +1,13 @@
---
title: Viewing the data in Splunk APM
linkTitle: 2. Viewing APM Data
weight: 2
---

Change the **Environment** filter **(1)** to the name of your workshop instance in the dropdown box (this will be **`<INSTANCE>-workshop`** where **`INSTANCE`** is the value from the shell script you ran earlier) and make sure it is the only one selected.

![apm](../../images/zero-config-first-services-overview.png)

You will see the name **(2)** of the **api-gateway** service and metrics in the Latency and Request & Errors charts (you can ignore the Critical Alert, as it is caused by the sudden request increase generated by the load generator). You will also see the rest of the services appear.

We will visit the **Service Map** **(3)** in the next section.
5 changes: 5 additions & 0 deletions content/en/conf24/1-zero-config-k8s/4-apm/3-section-break.md
@@ -0,0 +1,5 @@
---
title: 3. Section Break
weight: 3
archetype: chapter
---
@@ -1,6 +1,6 @@
---
title: Setting up automatic discovery and configuration for APM
linkTitle: 4. automatic discovery and configuration & Metrics
linkTitle: 4. Automatic discovery and configuration
weight: 5
time: 10 minutes
---
13 changes: 0 additions & 13 deletions content/en/conf24/1-zero-config-k8s/4-zero-config/2-apm-data.md

This file was deleted.

26 changes: 3 additions & 23 deletions content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md
Expand Up @@ -4,36 +4,16 @@ linkTitle: 1. APM Service Map
weight: 1
---

Next, click on **Service Map** **(3)** to view the automatically generated Service Map and select the **api-gateway** service.

![apm map](../../images/zero-config-first-services-map.png)

The example above shows all the interactions between all of the services. The map may still be in an interim state as it will take the Petclinic Microservice application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of 2 minutes will help. The initial startup-related errors (red dots) will eventually disappear.
The map above shows the interactions between all of the services. It may still be in an interim state, as the PetClinic microservices application takes a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of **2 minutes** will help. The initial startup-related errors (red dots) will eventually disappear.

Next, let's examine the metrics that are available for each instrumented service and visit the request, error, and duration (RED) metrics dashboard.

For this exercise, we are going to follow a common scenario you would use if a service operation were showing high latency or errors, for example.


Select the **Customer Service** in the Dependency map **(1)**, then make sure the `customers-service` is selected in the **Services** dropdown box **(2)**. Next, select `GET /owners` from the Operations dropdown **(3)**.

![select a trace](../../images/select-workflow.png)

This should give you the workflow with a filter on `GET /owners` **(1)** as shown below. To pick a trace, select a line in the `Service Requests & Errors` chart **(2)**, when the dot appears click to get a list of sample traces:

![workflow-trace-pick](../../images/selecting-a-trace.png)
This should give you the workflow with a filter on `GET /owners` **(1)** as shown below.

Once you have the list of sample traces, click on the blue **(3)** Trace ID Link. (Make sure it has the same three services mentioned in the Service Column.)

This brings us the the Trace selected in the Waterfall view:

![waterfall](../../images/waterfall-view.png)

Here we find several sections:

* The actual Waterfall Pane **(1)**, where you see the trace and all the instrumented functions visible as spans, with their duration representation and order/relationship showing.
* The Trace Info Pane **(2), by default, shows the selected Span information. (Highlighted with a box around the Span in the Waterfall Pane.)
* The Span Pane **(3)**, here you can find all the Tags that have been sent in the selected Span, You can scroll down to see all of them.
* The process Pane, with tags related to the process that created the Span (Scroll down to see as it is not in the screenshot.)
* The Trace Properties at the top of the right-hand pane by default is collapsed as shown.
![select a trace](../../images/select-workflow.png)
23 changes: 23 additions & 0 deletions content/en/conf24/1-zero-config-k8s/5-traces/2-trace.md
@@ -0,0 +1,23 @@
---
title: APM Trace
linkTitle: 2. APM Trace
weight: 2
---

To pick a trace, select a line in the `Service Requests & Errors` chart **(2)**; when the dot appears, click it to get a list of sample traces:

Once you have the list of sample traces, click on the blue Trace ID link **(3)** (make sure it has the same three services mentioned in the Service column).

![workflow-trace-pick](../../images/selecting-a-trace.png)

This brings us to the selected Trace in the Waterfall view:

Here we find several sections:

* The Waterfall Pane **(1)**, where you see the trace and all of the instrumented functions as spans, with their durations and their order/relationships shown.
* The Trace Info Pane **(2)**, which by default shows information for the selected Span (highlighted with a box around the Span in the Waterfall Pane).
* The Span Pane **(3)**, where you can find all of the Tags that were sent in the selected Span. You can scroll down to see all of them.
* The Process Pane, with tags related to the process that created the Span (scroll down to see it, as it is not in the screenshot).
* The Trace Properties, at the top of the right-hand pane, which is collapsed by default as shown.

![waterfall](../../images/waterfall-view.png)
51 changes: 0 additions & 51 deletions content/en/conf24/1-zero-config-k8s/5-traces/3-red-metrics.md

This file was deleted.
