You will start with an EC2 instance and perform some [initial steps](#initial-steps).

## Initial Steps

The initial setup can be completed by executing the following steps on the command line of your EC2 instance:

``` bash
cd workshop/tagging
./0-deploy-collector-with-services.sh
```

## View your application in Splunk Observability Cloud

Now that the setup is complete, let's confirm that it's sending data to **Splunk Observability Cloud**. Note that when the application is deployed for the first time, it may take a few minutes for the data to appear.

Navigate to APM, then use the Environment dropdown to select your environment (e.g. `tagging-workshop-instancename`).

If everything was deployed correctly, you should see `creditprocessorservice` and `creditcheckservice` displayed in the list of services:

While some tags can be added with the OpenTelemetry collector, the ones we'll focus on in this workshop are captured in the application itself using OpenTelemetry.

A note about terminology before we proceed. While this workshop is about **tags**, and this is the terminology we use in **Splunk Observability Cloud**, OpenTelemetry uses the term **attributes** instead. So when you see tags mentioned throughout this workshop, you can treat them as synonymous with attributes.

## Why are tags so important?

Tags are essential for an application to be truly observable. As we saw with our credit check service, some users are having a great experience: fast with no errors. But other users get a slow experience or encounter errors.

Tags add context to traces, helping us understand why some users get a great experience and others don't. Powerful features in **Splunk Observability Cloud** also utilize tags to help you jump quickly to root cause.

## Sneak Peek: Tag Spotlight

**Tag Spotlight** uses tags to discover trends that contribute to high latency or error rates:

![Tag Spotlight Preview](../images/tag_spotlight_preview.png)

The screenshot above provides an example of Tag Spotlight from another application.

Splunk has analyzed all of the tags included as part of traces that involve the payment service.

It tells us very quickly whether some tag values have more errors than others.

If we look at the version tag, we can see that version 350.10 of the service has a 100% error rate, whereas version 350.9 of the service has no errors at all:

![Tag Spotlight Details](../images/tag_spotlight_preview_details.png)

We’ll be using Tag Spotlight with the credit check service later on in the workshop, once we’ve captured some tags of our own.
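
Conceptually, the analysis Tag Spotlight performs amounts to grouping spans by a tag's values and comparing error rates. A stdlib-only Python sketch of that idea (the span records and `version` values are invented for illustration):

```python
from collections import defaultdict

def error_rate_by_tag(spans, tag):
    """Return the fraction of errored spans for each value of the given tag."""
    totals, errors = defaultdict(int), defaultdict(int)
    for span in spans:
        value = span["tags"].get(tag, "unknown")
        totals[value] += 1
        if span["error"]:
            errors[value] += 1
    return {v: errors[v] / totals[v] for v in totals}

spans = [
    {"tags": {"version": "350.10"}, "error": True},
    {"tags": {"version": "350.10"}, "error": True},
    {"tags": {"version": "350.9"}, "error": False},
    {"tags": {"version": "350.9"}, "error": False},
]
print(error_rate_by_tag(spans, "version"))
# → {'350.10': 1.0, '350.9': 0.0}
```
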
Note that we aren't required to index tags to use them for filtering with **Trace Analyzer**.

### Grouping

With the grouping use case, we can use **Trace Analyzer** to group traces by a particular tag.

But we can also go beyond this and surface trends for tags that we collect using the powerful **Tag Spotlight** feature in **Splunk Observability Cloud**, which we’ll see in action shortly.

Tags used for grouping use cases should be low to medium-cardinality, with hundreds of unique values.
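
One way to sanity-check whether a tag fits this grouping guidance is to count its unique values over a sample of spans. A quick illustrative sketch (the tag names and span data below are made up):

```python
def tag_cardinality(spans, tag):
    """Count the distinct values a tag takes across a sample of spans."""
    return len({span["tags"][tag] for span in spans if tag in span["tags"]})

spans = [
    {"tags": {"credit.score.category": "poor", "customer.num": "1001"}},
    {"tags": {"credit.score.category": "exceptional", "customer.num": "1002"}},
    {"tags": {"credit.score.category": "poor", "customer.num": "1003"}},
]
print(tag_cardinality(spans, "credit.score.category"))  # → 2 (low: good for grouping)
print(tag_cardinality(spans, "customer.num"))           # → 3 (grows with every customer: high-cardinality)
```
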

To see how, let's click on the metric named `service.request.duration.ns.p99`:

![Service Request Duration](../images/service_request_duration_chart.png)

Add filters for `sf_environment`, `sf_service`, and `sf_dimensionalized`. Then set the **Extrapolation policy** to `Last value` and the **Display units** to `Nanosecond`:
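
Since the metric is reported in nanoseconds, it helps to keep the conversion to seconds in mind when reading the chart. A trivial sketch:

```python
def ns_to_seconds(ns: float) -> float:
    # 1 second = 1_000_000_000 nanoseconds
    return ns / 1_000_000_000

print(ns_to_seconds(5_000_000_000))  # → 5.0
```
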

![Chart Settings](../images/chart_settings.png)

With these settings, the chart allows us to visualize the service request duration by credit score category:

![Duration by Credit Score](../images/duration_by_credit_score.png)

Now we can see the duration by credit score category. In my example, the red line represents the `exceptional` category, and we can see that the duration for these requests sometimes goes all the way up to 5 seconds.

The orange line represents the `very good` category, and has very fast response times.

The green line represents the `poor` category, and has response times between 2 and 3 seconds.
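
The aggregation behind this chart can be sketched in a few lines of stdlib Python: group request durations by the credit score category tag, then take the mean per group (the sample durations below are invented to mirror the categories described above):

```python
from collections import defaultdict
from statistics import mean

def mean_duration_by_category(requests):
    """Mean request duration (seconds), grouped by credit score category."""
    groups = defaultdict(list)
    for category, duration_s in requests:
        groups[category].append(duration_s)
    return {cat: round(mean(vals), 2) for cat, vals in groups.items()}

requests = [
    ("exceptional", 4.8), ("exceptional", 5.0),
    ("very good", 0.2), ("very good", 0.3),
    ("poor", 2.5), ("poor", 2.8),
]
print(mean_duration_by_category(requests))
# → {'exceptional': 4.9, 'very good': 0.25, 'poor': 2.65}
```
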

It may be useful to save this chart on a dashboard for future reference. To do this, click on the **Save as...** button and provide a name for the chart:

![Save Chart As](../images/save_chart_as.png)

When asked which dashboard to save the chart to, let's create a new one named `Credit Check Service - Your Name` (substituting your actual name):

![Save Chart As](../images/create_dashboard.png)
