Merged
@@ -23,22 +23,23 @@ First, let's set the Log Structure/Format. SpringBoot will allow you to set a gl
{{% notice note %}}
The following entries will be added:

- **trace_id**
- **span_id**
- **trace_flags**
- **service.name**
- **deployment.environment**

{{% /notice %}}

These fields allow the **Splunk Observability Cloud** to display **Related Content** when using the log pattern shown below:

``` xml
<pattern>
logback: %d{HH:mm:ss.SSS} [%thread] severity=%-5level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n
</pattern>
```
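With this pattern in place, a log line emitted by one of the services will look roughly like the following (the timestamp, thread, logger name, span ID and message below are purely illustrative; the trace ID is the sample used later in this workshop):

``` text
logback: 10:31:04.215 [http-nio-8080-exec-1] severity=INFO  o.s.s.p.customers.web - trace_id=08b5ce63e46ddd0ce07cf8cfd2d6161a span_id=da1234c7e50a4f21 service.name=customers-service trace_flags=01 - Fetching owner details
```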

The following script will update the `logback-spring.xml` for all of the services with the log structure in the format above:

{{< tabs >}}
{{% tab title="Update Logback files" %}}
@@ -64,7 +65,7 @@ Script execution completed.
{{% /tab %}}
{{< /tabs >}}
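As a rough, hypothetical sketch of what such an update amounts to (the real script lives in `~/workshop/petclinic/scripts/`; the XML skeleton and temp file here are purely illustrative), each service ends up with the shared pattern written into its `logback-spring.xml`:

``` shell
#!/bin/sh
# Illustrative sketch only: inject the shared logback pattern into a
# logback-spring.xml. The real workshop script iterates over every service;
# here we write a single temp file to show the resulting structure.
PATTERN='logback: %d{HH:mm:ss.SSS} [%thread] severity=%-5level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n'

LOGBACK_FILE=$(mktemp)
cat > "$LOGBACK_FILE" <<EOF
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>${PATTERN}</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
EOF

# Confirm the trace context fields made it into the file
grep -q 'trace_id=%X{trace_id}' "$LOGBACK_FILE" && echo "pattern injected"
```

The key point is simply that every service ends up with the identical pattern, so the trace metadata fields appear in every log line.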

We can verify if the replacement has been successful by examining the `logback-spring.xml` file from one of the services:

```bash
cat /home/splunk/spring-petclinic-microservices/spring-petclinic-customers-service/src/main/resources/logback-spring.xml
@@ -10,7 +10,7 @@ To see the changes in effect, we need to redeploy the services. First, let's cha
. ~/workshop/petclinic/scripts/set_local.sh
```

The result is a new file on disk called `petclinic-local.yaml`. Switch to the local versions by using the new version of the deployment YAML. First delete the old containers from the original deployment with:

```bash
kubectl delete -f ~/workshop/petclinic/petclinic-deploy.yaml
@@ -28,19 +28,19 @@ This will cause the containers to be replaced with the local version. You can ve
kubectl describe pods api-gateway | grep Image:
```

The resulting output will show `localhost:9999`:

```text
Image: localhost:9999/spring-petclinic-api-gateway:local
```

However, as we only patched the deployment earlier, the new deployment does not have the correct annotations for **Zero Configuration Auto Instrumentation**, so let's fix that now by running the patch command again:

{{% notice note %}}

There will be no change for the **admin-server**, **config-server** and **discovery-server** as they are already annotated.

{{% /notice %}}

{{< tabs >}}
{{% tab title="Patch all Petclinic services" %}}
@@ -50,11 +50,11 @@ kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name |
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
deployment.apps/config-server patched (no change)
deployment.apps/admin-server patched (no change)
deployment.apps/customers-service patched
deployment.apps/visits-service patched
deployment.apps/discovery-server patched (no change)
@@ -4,22 +4,20 @@ linkTitle: 4. Viewing the Logs
weight: 4
---

Now that the Pods have been patched validate they are all running by executing the following command:

```bash
kubectl get pods
```
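The output should look similar to the following (the generated pod-name suffixes, restart counts and ages will differ in your environment):

```text
NAME                                 READY   STATUS    RESTARTS   AGE
admin-server-6d6b8d4f7c-9xkqp        1/1     Running   0          2m
api-gateway-7b9f6c8d5f-lm2rt         1/1     Running   0          2m
config-server-5f8d9b7c6d-4wnzs       1/1     Running   0          10m
customers-service-6c9d8f7b5c-8jh2k   1/1     Running   0          2m
discovery-server-7d8c9f6b4d-tp5mn    1/1     Running   0          10m
visits-service-9c8d7f6b4e-zk6wl      1/1     Running   0          2m
```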

In order to see logs click on the **Log Observer** ![Logo](../../images/logo-icon.png?classes=inline&height=25px) in the left-hand menu. Once in Log Observer please ensure **Index** on the filter bar is set to **splunk4rookies-workshop**.

Next, click **Add Filter** and search for the field `deployment.environment`, select your workshop instance and click `=` (to include). You will now see only the log messages from your PetClinic application.

Next search for the field `service_name`, select the value `customers-service` and click `=` (to include). Now the log entries will be reduced to show the entries from your `customers-service` only.


In the log entry you will see that the message is formatted as per the pattern we configured for Logback earlier **(1)**:

![Log Observer](../../images/log-observer-trace-info.png)


Click on an entry with an injected trace_id **(1)**. A side pane will open where you can see the detailed information, including the relevant trace and span IDs **(2)**.
@@ -4,41 +4,12 @@ linkTitle: 5. Related Content
weight: 5
---

In the bottom pane is where any related content will be reported. In the screenshot below you can see that APM has found a trace that is related to this log line **(1)**:

![RC](../../images/log-apm-rc.png)

Clicking on **Trace for 0c5b37a751e1fc3e7a7191140ex714a0** **(2)** will take us to the waterfall in APM for the specific trace that this log line was generated from:

![waterfall logs](../../images/waterfall-with-logs.png)

Note that a **Related Content** pane for Logs now appears **(1)**. Clicking on this will take you back to Log Observer and display all the log lines that are part of this trace.
24 changes: 16 additions & 8 deletions content/en/conf24/1-zero-config-k8s/8-rum/1-rebuild-app.md
@@ -6,7 +6,7 @@ weight: 1

At the top of the previous code snippet, there is a reference to the file `/static/env.js`, which contains/sets the variables used by the RUM, currently these are not configured and therefore no RUM traces are currently being sent.

So, let's run the script that will update the variables to enable RUM traces so they are viewable in the **Splunk Observability Cloud** RUM UI. Note that `env.js` contains a deliberate JavaScript error, so that Splunk RUM has at least one error to detect:

{{< tabs >}}
{{% tab title="Update env.js for RUM" %}}
@@ -42,30 +42,38 @@ cat ~/spring-petclinic-microservices/spring-petclinic-api-gateway/src/main/resou
env = {
RUM_REALM: 'eu0',
RUM_AUTH: '[redacted]',
RUM_APP_NAME: 'k8s-petclinic-workshop-store',
RUM_ENVIRONMENT: 'k8s-petclinic-workshop-workshop'
}
// non critical error so it shows in RUM when the realm is set
if (env.RUM_REALM != "") {
let showJSErrorObject = false;
showJSErrorObject.property = 'true';
}
```

{{% /tab %}}
{{< /tabs >}}

Let's move into the api-gateway directory and force a build for just the api-gateway service.

``` bash
cd ~/spring-petclinic-microservices/spring-petclinic-api-gateway
../mvnw clean install -D skipTests -P buildDocker
```

``` bash
. ~/workshop/petclinic/scripts/push_docker.sh
```

As soon as the containers are pushed to the repository, restart the `api-gateway` to apply the changes:

``` bash
kubectl rollout restart deployment api-gateway
```
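If you want to block until the restart has finished, you can (optionally) watch it with the standard rollout status command:

``` bash
kubectl rollout status deployment api-gateway
```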

Validate that the application is running by visiting **http://<IP_ADDRESS>:81** (replace **<IP_ADDRESS>** with the IP address you obtained above). Make sure the application is working correctly by visiting the **All Owners** page **(1)**, selecting an owner, and then adding a **visit** **(2)**. We will use this action when checking RUM.

![pet](../../images/petclinic-pet.png)

If you want, you can access this website on your phone as well. This will also show up in RUM.
22 changes: 22 additions & 0 deletions content/en/conf24/1-zero-config-k8s/8-rum/2-rum-tour.md
@@ -0,0 +1,22 @@
---
title: Select the RUM view for the Petclinic App
linkTitle: 2. Select RUM env
weight: 2
---

Once RUM has been configured and you have added a visit for a pet, you can log in to **Splunk Observability Cloud** and verify that the RUM traces are flowing in from your app.

From the left-hand menu click on **RUM** ![RUM](../../images/rum-icon.png?classes=inline&height=25px) and change the **Environment** filter **(1)** to the name of your workshop instance from the dropdown box. It will be **`<INSTANCE>-workshop`** (where **`INSTANCE`** is the value from the shell script you ran earlier). Make sure it is the only one selected.
Then change the **App** dropdown box **(2)** to the name of your app; it will be **`<INSTANCE>-store`**.

![rum select](../../images/rum-env-select.png)

If you have selected your Environment and App, you will see an overview page showing the RUM status of your App. (If your Summary Dashboard is just a single row of numbers, you are looking at the condensed view. You can expand it by clicking on the **>** in front of the application name.)

![rum overview](../../images/rum-overview.png)

Click on the blue link to get to the details page:

![rum main](../../images/rum-main.png)


4 changes: 2 additions & 2 deletions content/en/conf24/1-zero-config-k8s/8-rum/_index.md
@@ -44,9 +44,9 @@ The following snippet is inserted into the **<head>** section of the `index.html
</script>
```

The above snippet of code has already been added to `index.html` in the repository you cloned earlier, but it is not yet activated; we will do that in the next section.

If you want, you can verify the snippet we added to the `index.html` by viewing the file:

{{< tabs >}}
{{% tab title="View index.html" %}}
Binary file modified content/en/conf24/1-zero-config-k8s/images/log-apm-rc.png
2 changes: 1 addition & 1 deletion deprecated/multipass/README.md
@@ -9,7 +9,7 @@ These tools **will** prevent the instance from being created properly.

## 1. Pre-requisites

Install [Multipass](https://multipass.run/) and Terraform for your operating system. On a Mac, you can also install via [Homebrew](https://brew.sh/) e.g.

```text
brew install multipass
2 changes: 1 addition & 1 deletion go.mod
@@ -2,4 +2,4 @@ module github.com/splunk/observability-workshop

go 1.19

require github.com/McShelby/hugo-theme-relearn v0.0.0-20240507204003-21b4289ecf44 // indirect
4 changes: 4 additions & 0 deletions go.sum
@@ -62,3 +62,7 @@ github.com/McShelby/hugo-theme-relearn v0.0.0-20240427143506-0480d11c33cc h1:CrB
github.com/McShelby/hugo-theme-relearn v0.0.0-20240427143506-0480d11c33cc/go.mod h1:mKQQdxZNIlLvAj8X3tMq+RzntIJSr9z7XdzuMomt0IM=
github.com/McShelby/hugo-theme-relearn v0.0.0-20240430200804-ab4cd9b6a78a h1:niNv9mLLbxF23wKxYpeBSu8ngC2Nas4/iNRA9jTuI4w=
github.com/McShelby/hugo-theme-relearn v0.0.0-20240430200804-ab4cd9b6a78a/go.mod h1:mKQQdxZNIlLvAj8X3tMq+RzntIJSr9z7XdzuMomt0IM=
github.com/McShelby/hugo-theme-relearn v0.0.0-20240503165000-ff35016bcab2 h1:jcb5Aju57SiLF0KKUgI6/Kq0VKNmsWroW2jsTs1mkDk=
github.com/McShelby/hugo-theme-relearn v0.0.0-20240503165000-ff35016bcab2/go.mod h1:mKQQdxZNIlLvAj8X3tMq+RzntIJSr9z7XdzuMomt0IM=
github.com/McShelby/hugo-theme-relearn v0.0.0-20240507204003-21b4289ecf44 h1:AKT4XmMcPBvNx6/aIjCMZxKj7Gwl1j7CSJLszuyRRLc=
github.com/McShelby/hugo-theme-relearn v0.0.0-20240507204003-21b4289ecf44/go.mod h1:mKQQdxZNIlLvAj8X3tMq+RzntIJSr9z7XdzuMomt0IM=
109 changes: 109 additions & 0 deletions orbstack/README.md
@@ -0,0 +1,109 @@
# Preparing an Orbstack instance

**NOTE:** Please disable any VPNs or proxies before running the commands below, e.g.:

- ZScaler
- Cisco AnyConnect

These tools **will** prevent the instance from being created properly.

## 1. Pre-requisites

Install Orbstack:

``` bash
brew install orbstack
```

## 2. Clone workshop repository

``` bash
git clone https://github.com/splunk/observability-workshop
```

## 3. Change into Orbstack directory

```bash
cd observability-workshop/orbstack
```

## 4. Create start.sh script

Copy the `start.sh.example` to `start.sh` and edit the file to set the required variables:

- ACCESS_TOKEN
- REALM
- RUM_TOKEN
- HEC_TOKEN
- HEC_URL

``` bash
#!/bin/bash
echo "Building: $1";

# Change the values below to match your environment and save this file as start.sh
export ACCESS_TOKEN="<redacted>"
export REALM="eu0"
export RUM_TOKEN="<redacted>"
export HEC_TOKEN="<redacted>"
#export HEC_URL="https://http-inputs-o11y-workshop-eu0.splunkcloud.com:443/services/collector/event"
export HEC_URL="https://http-inputs-o11y-workshop-us1.splunkcloud.com:443/services/collector/event"
export INSTANCE=$1

# Do not change anything below this line
orb create -c cloud-init.yaml -a arm64 ubuntu:jammy $INSTANCE
sleep 2
ORBENV=ACCESS_TOKEN:REALM:RUM_TOKEN:HEC_TOKEN:HEC_URL:INSTANCE orb -m $INSTANCE -u splunk ansible-playbook /home/splunk/orbstack.yml
echo "ssh splunk@$INSTANCE@orb"
ssh splunk@$INSTANCE@orb

```

Run the script and provide an instance name e.g.: `./start.sh my-instance`.

Once the instance has been successfully created (this can take several minutes), you will automatically be logged into the instance. If you exit you can SSH back in using the following command:

```bash
ssh splunk@<my_instance>@orb
```

## 5. Validate instance

Once in the shell, you can validate that the instance is ready by running the following command:

```bash
kubectl version --output=yaml
```

To get the IP address of the instance, run the following command:

```bash
ifconfig eth0
```

If you get an error, please check that you have disabled any VPNs or proxies (e.g. ZScaler, Cisco AnyConnect) and try again.

To start again, delete the instance and re-run `start.sh my-instance`:

```bash
orb delete my-instance
```

You can use VS Code with your new Orb instance. Make sure you have installed the Remote - SSH extension in VS Code.

Here is a sample config for your `ssh_config`:

```text
Host conf
Hostname 127.0.0.1
Port 32222
User splunk@orb-1
# replace or symlink ~/.orbstack/ssh/id_ed25519 file to change the key
IdentityFile ~/.orbstack/ssh/id_ed25519
# only use this key
IdentitiesOnly yes
ProxyCommand '/Applications/OrbStack.app/Contents/MacOS/../Frameworks/OrbStack Helper.app/Contents/MacOS/OrbStack Helper' ssh-proxy-fdpass 501
ProxyUseFdpass yes
```
