diff --git a/.bumpversion.cfg b/.bumpversion.cfg index 8be2682c21..7d871bbf2c 100644 --- a/.bumpversion.cfg +++ b/.bumpversion.cfg @@ -14,12 +14,12 @@ search = WSVERSION={current_version} replace = WSVERSION={new_version} [bumpversion:file:workshop/aws/ec2/terraform.tfvars.template] -search = wsversion = "{current_version}" -replace = wsversion = "{new_version}" +search = default = "{current_version}" +replace = default = "{new_version}" [bumpversion:file:deprecated/multipass/terraform.tfvars.template] -replace = wsversion = "{new_version}" -search = wsversion = "{current_version}" +search = default = "{current_version}" +replace = default = "{new_version}" [bumpversion:file:deprecated/multipass/main.tf] search = default = "{current_version}" diff --git a/content/en/conf24/1-zero-config-k8s/1-architecture/_index.md b/content/en/conf24/1-zero-config-k8s/1-architecture/_index.md index 0817c15526..d87199f515 100644 --- a/content/en/conf24/1-zero-config-k8s/1-architecture/_index.md +++ b/content/en/conf24/1-zero-config-k8s/1-architecture/_index.md @@ -9,7 +9,7 @@ The diagram below details the architecture of the Spring PetClinic Java applicat The Spring PetClinic Java application is a simple microservices application that consists of a frontend and backend services. The frontend service is a Spring Boot application that serves a web interface to interact with the backend services. The backend services are Spring Boot applications that serve RESTful API's to interact with a MySQL database. -By the end of this workshop, you will have a better understanding of how to enable **Splunk OpenTelemetry automatic discovery and configuration** for your Java-based applications running in Kubernetes. +By the end of this workshop, you will have a better understanding of how to enable **automatic discovery and configuration** for your Java-based applications running in Kubernetes. 
![Splunk Otel Architecture](../images/auto-instrumentation-java-diagram.png) diff --git a/content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md b/content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md index 116c0a9672..c5078087f3 100644 --- a/content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md +++ b/content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md @@ -6,10 +6,10 @@ weight: 2 To get Observability signals (**metrics, traces** and **logs**) into **Splunk Observability Cloud** the Splunk OpenTelemetry Collector needs to be deployed into the Kubernetes cluster. -For this workshop, we will be using the Splunk OpenTelemetry Collector Helm Chart. First we need to add the Helm chart repository to Helm and update to ensure the latest state: +For this workshop, we will be using the Splunk OpenTelemetry Collector Helm Chart. First, we need to add the Helm chart repository to Helm and update it to ensure we have the latest version: {{< tabs >}} -{{% tab title="Helm Repo Add" %}} +{{% tab title="Install Helm Chart" %}} ``` bash helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart && helm repo update @@ -32,7 +32,7 @@ Update Complete. ⎈Happy Helming!⎈ {{% /tab %}} {{< /tabs >}} -**Splunk Observability Cloud** offers wizards in the UI to walk you through the setup of the OpenTelemetry Collector on Kubernetes, but in the interest of time, we will use the Helm command below. Some additional parameters are set to enable the operator and automatic discovery and configuration. +**Splunk Observability Cloud** offers wizards in the UI to walk you through the setup of the OpenTelemetry Collector on Kubernetes, but in the interest of time, we will use the Helm install command below. Additional parameters are set to enable the operator and automatic discovery and configuration. * `--set="operator.enabled=true"` - this will install the OpenTelemetry operator that will be used to handle automatic discovery and configuration.
* `--set="certmanager.enabled=true"` - this will install the required certificate manager for the operator. diff --git a/content/en/conf24/1-zero-config-k8s/2-preparation/2-petclinic.md b/content/en/conf24/1-zero-config-k8s/2-preparation/2-petclinic.md index f72527ff62..58dca49dc6 100644 --- a/content/en/conf24/1-zero-config-k8s/2-preparation/2-petclinic.md +++ b/content/en/conf24/1-zero-config-k8s/2-preparation/2-petclinic.md @@ -4,8 +4,6 @@ linkTitle: 2. Deploy PetClinic Application weight: 3 --- -#### Deploy the PetClinic Application - The first deployment of our application will be using prebuilt containers to give us the base scenario: a regular Java microservices-based application running in Kubernetes that we want to start observing. So let's deploy our application: {{< tabs >}} @@ -106,5 +104,3 @@ curl -X GET http://localhost:9999/v2/_catalog {{% /tab %}} {{< /tabs >}} - -If this fails then reach out to your Instructor for a replacement instance. diff --git a/content/en/conf24/1-zero-config-k8s/2-preparation/_index.md b/content/en/conf24/1-zero-config-k8s/2-preparation/_index.md index 617eba42b0..a48678d37c 100644 --- a/content/en/conf24/1-zero-config-k8s/2-preparation/_index.md +++ b/content/en/conf24/1-zero-config-k8s/2-preparation/_index.md @@ -7,6 +7,7 @@ time: 15 minutes --- The instructor will provide you with the login information for the instance that we will be using during the workshop. + When you first log into your instance, you will be greeted by the Splunk Logo as shown below. If you have any issues connecting to your workshop instance then please reach out to your Instructor. 
``` text @@ -42,7 +43,7 @@ REALM = RUM_TOKEN = HEC_TOKEN = HEC_URL = https://<...>/services/collector/event -INSTANCE = +INSTANCE = ``` {{% /tab %}} diff --git a/content/en/conf24/1-zero-config-k8s/3-verify-setup/1-website.md b/content/en/conf24/1-zero-config-k8s/3-verify-setup/1-website.md index 6134ab8ef9..ef12bf0a04 100644 --- a/content/en/conf24/1-zero-config-k8s/3-verify-setup/1-website.md +++ b/content/en/conf24/1-zero-config-k8s/3-verify-setup/1-website.md @@ -1,6 +1,6 @@ --- title: Verify the PetClinic Website -linkTitle: 1. Verify the PetClinic Webiste +linkTitle: 1. Verify PetClinic Website weight: 1 --- @@ -17,4 +17,8 @@ You can validate if the application is running by visiting **http:// Make sure the application is working correctly by visiting the **All Owners** **(1)** and **Veterinarians** **(2)** tabs, you should get a list of names in each case. +{{% notice note %}} +As each service needs to start up and synchronize with the database, it may take a few minutes for the application to fully start up. +{{% /notice %}} + ![owners](../../images/petclinic-owners.png) diff --git a/content/en/conf24/1-zero-config-k8s/3-verify-setup/2-section-break.md b/content/en/conf24/1-zero-config-k8s/3-verify-setup/2-section-break.md new file mode 100644 index 0000000000..bf2a30585b --- /dev/null +++ b/content/en/conf24/1-zero-config-k8s/3-verify-setup/2-section-break.md @@ -0,0 +1,5 @@ +--- +title: 2. Section Break +weight: 2 +archetype: chapter +--- \ No newline at end of file diff --git a/content/en/conf24/1-zero-config-k8s/3-verify-setup/_index.md b/content/en/conf24/1-zero-config-k8s/3-verify-setup/_index.md index 781e613b18..ef9d44509b 100644 --- a/content/en/conf24/1-zero-config-k8s/3-verify-setup/_index.md +++ b/content/en/conf24/1-zero-config-k8s/3-verify-setup/_index.md @@ -1,6 +1,6 @@ --- title: Verify Kubernetes Cluster metrics -linkTitle: 3. Verify everything is working +linkTitle: 3. 
Verify Cluster Metrics weight: 4 time: 10 minutes --- diff --git a/content/en/conf24/1-zero-config-k8s/4-zero-config/1-zero-config.md b/content/en/conf24/1-zero-config-k8s/4-apm/1-patching-deployment.md similarity index 55% rename from content/en/conf24/1-zero-config-k8s/4-zero-config/1-zero-config.md rename to content/en/conf24/1-zero-config-k8s/4-apm/1-patching-deployment.md index 8fc8b3192d..be3f3b023b 100644 --- a/content/en/conf24/1-zero-config-k8s/4-zero-config/1-zero-config.md +++ b/content/en/conf24/1-zero-config-k8s/4-apm/1-patching-deployment.md @@ -1,10 +1,10 @@ --- -title: Automatic Discovery and Configuration -linkTitle: 1. Automatic Discovery and Configuration +title: Patching the Deployment +linkTitle: 1. Patching the Deployment weight: 1 --- -To see how automatic discovery and configuration works with a single pod we will patch the `api-gateway`. Once patched, the OpenTelemetry Collector will inject the automatic discovery and configuration library and the Pod will be restarted in order to start sending traces and profiling data. To show what happens when you enable automatic discovery and configuration, let's do a *before and after* of the configuration: +To configure **automatic discovery and configuration**, the deployments need to be patched to add the instrumentation annotation. Once patched, the OpenTelemetry Collector will inject the automatic discovery and configuration library and the Pods will be restarted in order to start sending traces and profiling data. First, confirm that the `api-gateway` does not have the `splunk-otel-java` image. {{< tabs >}} {{% tab title="Describe api-gateway" %}} @@ -23,25 +23,33 @@ Image: quay.io/phagen/spring-petclinic-api-gateway:0.0.2 {{% /tab %}} {{< /tabs >}} -This container was pulled from a remote repository `quay.io` and was not built to send traces to **Splunk Observability Cloud**.
To enable the Java automatic discovery and configuration on the api-gateway service add the `inject-java` annotation to Kubernetes with the `kubectl patch deployment` command. +Next, enable the Java automatic discovery and configuration for all of the services by adding the annotation to the deployments. The following command will patch all the deployments. This will trigger the OpenTelemetry Operator to inject the `splunk-otel-java` image into the Pods: {{< tabs >}} -{{% tab title="Patch api-gateway" %}} +{{% tab title="Patch all PetClinic services" %}} ``` bash -kubectl patch deployment api-gateway -p '{"spec": {"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}' +kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name | xargs -I % kubectl patch % -p "{\"spec\": {\"template\":{\"metadata\":{\"annotations\":{\"instrumentation.opentelemetry.io/inject-java\":\"default/splunk-otel-collector\"}}}}}" ``` {{% /tab %}} {{% tab title="Patch Output" %}} ``` text +deployment.apps/config-server patched (no change) +deployment.apps/admin-server patched (no change) +deployment.apps/customers-service patched +deployment.apps/visits-service patched +deployment.apps/discovery-server patched (no change) +deployment.apps/vets-service patched deployment.apps/api-gateway patched ``` {{% /tab %}} {{< /tabs >}} +There will be no change for the **config-server**, **discovery-server**, and **admin-server**, as these have already been patched. + To check the container image(s) of the `api-gateway` pod again, run the following command: {{< tabs >}} @@ -64,32 +72,8 @@ Image: quay.io/phagen/spring-petclinic-api-gateway:0.0.2 A new image has been added to the `api-gateway` which will pull `splunk-otel-java` from `ghcr.io` (if you see two `api-gateway` containers, the original one is probably still terminating, so give it a few seconds).
-To patch all the other services in the Spring Petclinic application, run the following command. This will add the `inject-java` annotation to the remaining services. There will be no change for the **config-server**, **discovery-server**, **admin-server** and **api-gateway** as these have already been patched. - -{{< tabs >}} -{{% tab title="Patch all Petclinic services" %}} - -``` bash -kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name | xargs -I % kubectl patch % -p "{\"spec\": {\"template\":{\"metadata\":{\"annotations\":{\"instrumentation.opentelemetry.io/inject-java\":\"default/splunk-otel-collector\"}}}}}" - -``` - -{{% /tab %}} -{{% tab title="Patch Output" %}} - -``` text -deployment.apps/config-server patched (no change) -deployment.apps/admin-server patched (no change) -deployment.apps/customers-service patched -deployment.apps/visits-service patched -deployment.apps/discovery-server patched (no change) -deployment.apps/vets-service patched -deployment.apps/api-gateway patched (no change) -``` - -{{% /tab %}} -{{< /tabs >}} - Navigate back to the Kubernetes Navigator in **Splunk Observability Cloud**. After a couple of minutes you will see that the Pods are being restarted by the operator and the automatic discovery and configuration container will be added. This will look similar to the screenshot below: ![restart](../../images/k8s-navigator-restarted-pods.png) + +Wait for the Pods to turn green in the Kubernetes Navigator, then go to **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the newly instrumented services. diff --git a/content/en/conf24/1-zero-config-k8s/4-apm/2-apm-data.md b/content/en/conf24/1-zero-config-k8s/4-apm/2-apm-data.md new file mode 100644 index 0000000000..29c904a9b4 --- /dev/null +++ b/content/en/conf24/1-zero-config-k8s/4-apm/2-apm-data.md @@ -0,0 +1,13 @@ +--- +title: Viewing the data in Splunk APM +linkTitle: 2. 
Viewing APM Data +weight: 2 +--- + +Change the **Environment** filter **(1)** to the name of your workshop instance in the dropdown box (this will be **`-workshop`** where **`INSTANCE`** is the value from the shell script you ran earlier) and make sure it is the only one selected. + +![apm](../../images/zero-config-first-services-overview.png) + +You will see the name **(2)** of the **api-gateway** service and metrics in the Latency and Request & Errors charts (you can ignore the Critical Alert, as it is caused by the sudden request increase generated by the load generator). You will also see the rest of the services appear. + +We will visit the **Service Map** **(3)** in the next section. diff --git a/content/en/conf24/1-zero-config-k8s/4-apm/3-section-break.md b/content/en/conf24/1-zero-config-k8s/4-apm/3-section-break.md new file mode 100644 index 0000000000..86fb59cd6d --- /dev/null +++ b/content/en/conf24/1-zero-config-k8s/4-apm/3-section-break.md @@ -0,0 +1,5 @@ +--- +title: 3. Section Break +weight: 3 +archetype: chapter +--- \ No newline at end of file diff --git a/content/en/conf24/1-zero-config-k8s/4-zero-config/_index.md b/content/en/conf24/1-zero-config-k8s/4-apm/_index.md similarity index 97% rename from content/en/conf24/1-zero-config-k8s/4-zero-config/_index.md rename to content/en/conf24/1-zero-config-k8s/4-apm/_index.md index af523af91e..90fff6a5b7 100644 --- a/content/en/conf24/1-zero-config-k8s/4-zero-config/_index.md +++ b/content/en/conf24/1-zero-config-k8s/4-apm/_index.md @@ -1,6 +1,6 @@ --- title: Setting up automatic discovery and configuration for APM -linkTitle: 4. automatic discovery and configuration & Metrics +linkTitle: 4. 
Automatic discovery and configuration weight: 5 time: 10 minutes --- diff --git a/content/en/conf24/1-zero-config-k8s/4-zero-config/2-apm-data.md b/content/en/conf24/1-zero-config-k8s/4-zero-config/2-apm-data.md deleted file mode 100644 index 62acc034cd..0000000000 --- a/content/en/conf24/1-zero-config-k8s/4-zero-config/2-apm-data.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Viewing the data in Splunk APM -linkTitle: 2. Viewing APM Data -weight: 2 ---- - -Wait for the pods to turn green again (you may want to refresh the screen), then navigate to **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the auto-instrumented services. - -Change the **Environment** filter **(1)** to the name of your workshop instance from the dropdown box, it will be **`-workshop`** (where **`INSTANCE`** is the value from the shell script you ran earlier). Make sure it is the only one selected. - -![apm](../../images/zero-config-first-services-overview.png) - -You will see the name **(2)** of the **api-gateway** service and metrics in the Latency and Request & Errors charts (you can ignore the Critical Alert, as it was caused by the sudden request increase generated by the load generator). You will also see the rest of the services appear. diff --git a/content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md b/content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md index 8952aeadf4..dc9a15588f 100644 --- a/content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md +++ b/content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md @@ -4,36 +4,16 @@ linkTitle: 1. APM Service Map weight: 1 --- -Next, click on **Service Map** **(3)** to view the automatically generated Service Map and select the **api-gateway** service. - ![apm map](../../images/zero-config-first-services-map.png) -The example above shows all the interactions between all of the services. 
The map may still be in an interim state as it will take the Petclinic Microservice application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of 2 minutes will help. The initial startup-related errors (red dots) will eventually disappear. +The map above shows all the interactions between the services. The map may still be in an interim state, as it will take the PetClinic microservices application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of **2 minutes** will help. The initial startup-related errors (red dots) will eventually disappear. Next, let's examine the metrics that are available for each service that is instrumented and visit the request, error, and duration (RED) metrics Dashboard For this exercise we are going to use a common scenario you would use if the service operation was showing high latency, or errors for example. -## I THINK WE SHOULD HAVE SOMETHING GENERATING ERRORS TO SHOW THE BENEFIT OF THIS SECTION - Select the **Customer Service** in the Dependency map **(1)**, then make sure the `customers-service` is selected in the **Services** dropdown box **(2)**. Next, select `GET /Owners` from the Operations dropdown **(3)**. -![select a trace](../../images/select-workflow.png) - -This should give you the workflow with a filter on `GET /owners` **(1)** as shown below. To pick a trace, select a line in the `Service Requests & Errors` chart **(2)**, when the dot appears click to get a list of sample traces: - -![workflow-trace-pick](../../images/selecting-a-trace.png) +This should give you the workflow with a filter on `GET /owners` **(1)** as shown below. -Once you have the list of sample traces, click on the blue **(3)** Trace ID Link. (Make sure it has the same three services mentioned in the Service Column.)
- -This brings us the the Trace selected in the Waterfall view: - -![waterfall](../../images/waterfall-view.png) - -Here we find several sections: - -* The actual Waterfall Pane **(1)**, where you see the trace and all the instrumented functions visible as spans, with their duration representation and order/relationship showing. -* The Trace Info Pane **(2), by default, shows the selected Span information. (Highlighted with a box around the Span in the Waterfall Pane.) -* The Span Pane **(3)**, here you can find all the Tags that have been sent in the selected Span, You can scroll down to see all of them. -* The process Pane, with tags related to the process that created the Span (Scroll down to see as it is not in the screenshot.) -* The Trace Properties at the top of the right-hand pane by default is collapsed as shown. +![select a trace](../../images/select-workflow.png) diff --git a/content/en/conf24/1-zero-config-k8s/5-traces/2-trace.md b/content/en/conf24/1-zero-config-k8s/5-traces/2-trace.md new file mode 100644 index 0000000000..7c88bf2ce5 --- /dev/null +++ b/content/en/conf24/1-zero-config-k8s/5-traces/2-trace.md @@ -0,0 +1,23 @@ +--- +title: APM Trace +linkTitle: 2. APM Trace +weight: 2 +--- + +To pick a trace, select a line in the `Service Requests & Errors` chart **(2)**; when the dot appears, click it to get a list of sample traces: + +Once you have the list of sample traces, click on the blue **(3)** Trace ID link (make sure it has the same three services mentioned in the Service column). + +![workflow-trace-pick](../../images/selecting-a-trace.png) + +This brings us to the Trace selected in the Waterfall view: + +Here we find several sections: + +* The actual Waterfall Pane **(1)**, where you see the trace and all the instrumented functions visible as spans, with their duration representation and order/relationship showing.
+* The Trace Info Pane **(2)**, by default, shows the selected Span information (highlighted with a box around the Span in the Waterfall Pane). +* The Span Pane **(3)**, where you can find all the Tags that have been sent in the selected Span. You can scroll down to see all of them. +* The Process Pane, with tags related to the process that created the Span (scroll down to see it, as it is not in the screenshot). +* The Trace Properties at the top of the right-hand pane, collapsed by default as shown. + +![waterfall](../../images/waterfall-view.png) diff --git a/content/en/conf24/1-zero-config-k8s/5-traces/3-red-metrics.md b/content/en/conf24/1-zero-config-k8s/5-traces/3-red-metrics.md deleted file mode 100644 index ae2b2c0f2b..0000000000 --- a/content/en/conf24/1-zero-config-k8s/5-traces/3-red-metrics.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: Rate, Errors, and Duration (RED) Metrics -linkTitle: 3. RED Metrics -weight: 3 ---- - -## NEED TO UPDATE TO USE SERVICE CENTRIC VIEW - - Splunk APM provides a set of built-in dashboards that present charts and visualized metrics to help you see problems occurring in real time and quickly determine whether the problem is associated with a service, a specific endpoint, or the underlying infrastructure. - - To look at this dashboard for the selected `api-gateway`, make sure you have the `api-gateway` service selected in the Dependency map as shown above, then click on the ***View Dashboard** Link **(1)** at the top of the right-hand pane. This will bring you to the services dashboard: -![metrics dashboard](../../images/zero-config-first-services-metrics.png) -This dashboard, which is available for each of your instrumented services, offers an overview of the key `request, error, and duration (RED)` metrics based on Monitoring MetricSets created from endpoint spans for your services, endpoints, and Business Workflows.
They also present related host and Kubernetes metrics to help you determine whether problems are related to the underlying infrastructure, as in the above image. - -As the dashboards allow you to go back in time with the *Time picker* window **(1)**, it's the perfect spot to identify the behavior you wish to be alerted on, and with a click on one of the bell icons **(2)** available in each chart, you can set up an alert to do just that. - -If you scroll down the page, you get host and Kubernetes metrics related to your service as well. -Let's move on to look at some of the traces generated by the automatic discovery and configuration. - \ No newline at end of file diff --git a/content/en/conf24/1-zero-config-k8s/5-traces/2-spans.md b/content/en/conf24/1-zero-config-k8s/5-traces/3-spans.md similarity index 60% rename from content/en/conf24/1-zero-config-k8s/5-traces/2-spans.md rename to content/en/conf24/1-zero-config-k8s/5-traces/3-spans.md index b3564de8d1..814f7fa2f7 100644 --- a/content/en/conf24/1-zero-config-k8s/5-traces/2-spans.md +++ b/content/en/conf24/1-zero-config-k8s/5-traces/3-spans.md @@ -1,7 +1,7 @@ --- title: APM Span -linkTitle: 2. APM Spans -weight: 2 +linkTitle: 3. APM Spans +weight: 3 --- While we examine our spans, let's look at several features that you get out of the box without code modifications when using **automatic discovery and configuration** on top of tracing: @@ -12,10 +12,12 @@ First, in the Waterfall Pane, make sure the `customers-service:SELECT petclinic. ![DB-query](../../images/db-query.png) * The basic latency information is shown as a bar for the instrumented function or call, in our example, it took 6.3 Milliseconds. -* Several similar Spans **(1**)**, are only visible if the span is repeated multiple times. In this case, there are 10 repeats in our example. 
(You can show/hide them all by clicking on the `10x` and all spans will show in order) -* Inferred Services, Calls done to external systems that are not instrumented, show up as a gray 'inferred' span. The Inferred Service or span in our case here is a call to the Mysql Database `mysql:petclinic SELECT petclinic` **(2)** as shown below our selected span. -* Span Tags in the Tag Pane, standard tags produced by the automatic discovery and configuration. In this case, the span is calling a Database, so it includes the `db.statement` tag **(3)**. This tag will hold the DB query statement and is used by the Database call performed during this span. This will be used by the DB-Query Performance feature. We look at DB-Query Performance in the next section. -* Always-on Profiling, **IF** the system is configured to, and has captured Profiling data during a Spans life cycle, it will show the number of Call Stacks captured in the Spans timeline. (15 Call Stacks for the `customers-service:SELECT petclinic.`owners` Span shown above). We will look at Profiling in the next section. +* Several similar Spans **(1)** are only visible if the span is repeated multiple times. In this case, there are 10 repeats in our example. (You can show/hide them all by clicking on the `10x`; all spans will then show in order.) +* **Inferred Services**: Calls made to external systems that are not instrumented show up as a grey 'inferred' span. The Inferred Service or span in our case here is a call to the MySQL database `mysql:petclinic SELECT petclinic` **(2)** as shown above our selected span. +* **Span Tags**: The Tag Pane shows the standard tags produced by automatic discovery and configuration. In this case, the span is calling a database, so it includes the `db.statement` tag **(3)**. This tag holds the DB query statement used by the database call performed during this span. This will be used by the DB-Query Performance feature, which we look at in the next section.
+* **Always-on Profiling**: **IF** the system is configured to, and has captured, Profiling data during a Span's life cycle, it will show the number of Call Stacks captured in the Span's timeline (15 Call Stacks for the `customers-service:SELECT petclinic.owners` Span shown above). + +We will look at Profiling in the next section. +![Champagne](images/champagne.png?width=45vw) diff --git a/content/en/conf24/1-zero-config-k8s/9-wrap-up/images/champagne.png b/content/en/conf24/1-zero-config-k8s/9-wrap-up/images/champagne.png new file mode 100644 index 0000000000..98c0014827 Binary files /dev/null and b/content/en/conf24/1-zero-config-k8s/9-wrap-up/images/champagne.png differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rum-apm-waterfall.png b/content/en/conf24/1-zero-config-k8s/images/rum-apm-waterfall.png new file mode 100644 index 0000000000..0b59d2fe68 Binary files /dev/null and b/content/en/conf24/1-zero-config-k8s/images/rum-apm-waterfall.png differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rum-overview.png b/content/en/conf24/1-zero-config-k8s/images/rum-overview.png index ddcbfa1938..7a1fcb8fd1 100644 Binary files a/content/en/conf24/1-zero-config-k8s/images/rum-overview.png and b/content/en/conf24/1-zero-config-k8s/images/rum-overview.png differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rum-replay.png b/content/en/conf24/1-zero-config-k8s/images/rum-replay.png deleted file mode 100644 index ae9183a561..0000000000 Binary files a/content/en/conf24/1-zero-config-k8s/images/rum-replay.png and /dev/null differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rum-tag-spotlight.png b/content/en/conf24/1-zero-config-k8s/images/rum-tag-spotlight.png new file mode 100644 index 0000000000..abf44b00d2 Binary files /dev/null and b/content/en/conf24/1-zero-config-k8s/images/rum-tag-spotlight.png differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rum-trace.png b/content/en/conf24/1-zero-config-k8s/images/rum-trace.png new
file mode 100644 index 0000000000..5c47829f71 Binary files /dev/null and b/content/en/conf24/1-zero-config-k8s/images/rum-trace.png differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rum-user-sessions.png b/content/en/conf24/1-zero-config-k8s/images/rum-user-sessions.png new file mode 100644 index 0000000000..5ed14dc6d5 Binary files /dev/null and b/content/en/conf24/1-zero-config-k8s/images/rum-user-sessions.png differ diff --git a/content/en/conf24/1-zero-config-k8s/images/rumt-trace.png b/content/en/conf24/1-zero-config-k8s/images/rumt-trace.png deleted file mode 100644 index 49cfd2017f..0000000000 Binary files a/content/en/conf24/1-zero-config-k8s/images/rumt-trace.png and /dev/null differ diff --git a/content/en/conf24/1-zero-config-k8s/images/service-centric-view.png b/content/en/conf24/1-zero-config-k8s/images/service-centric-view.png new file mode 100644 index 0000000000..577dfee528 Binary files /dev/null and b/content/en/conf24/1-zero-config-k8s/images/service-centric-view.png differ diff --git a/content/en/conf24/_index.md b/content/en/conf24/_index.md index 9481cffc4e..643bab2d4e 100644 --- a/content/en/conf24/_index.md +++ b/content/en/conf24/_index.md @@ -2,7 +2,6 @@ title: .conf24 Workshops menuPost: " " weight: 20 -draft: true --- {{% children containerstyle="ul" style="li" depth="1" description="true" %}} diff --git a/workshop/petclinic/scripts/push_env.sh b/workshop/petclinic/scripts/push_env.sh index 66b80dff67..5c027c29f9 100644 --- a/workshop/petclinic/scripts/push_env.sh +++ b/workshop/petclinic/scripts/push_env.sh @@ -20,11 +20,7 @@ env = { RUM_APP_NAME: '$INSTANCE-store', RUM_ENVIRONMENT: '$INSTANCE-workshop' } -// non critical error so it shows in RUM when the realm is set -if (env.RUM_REALM != "") { - let showJSErrorObject = false; - showJSErrorObject.property = 'true'; - } + EOF echo "JavaScript file generated at: $JS_FILE"