Update quality-foundation-implementation-guide.mdx
Added rest of KH's quality foundation doc
barbnewrelic committed Sep 2, 2021
1 parent 2042c7b commit 4f4fb04

## Establish current state [#current-state]

### Review Instrumented Pages

Review Browser apps and pages to make sure that everything you expect to report to New Relic is doing so. You can check this by reviewing the Page Views tab in the Browser UI or by running the following query:

```
SELECT uniques(pageUrl) FROM PageView LIMIT MAX
```

You may need to filter out URLs that contain request or customer ID.
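For example, assuming the IDs appear as a path segment (the `/account/` segment here is hypothetical), you can exclude those URLs in the `WHERE` clause:

```
SELECT uniques(pageUrl) FROM PageView WHERE pageUrl NOT LIKE '%/account/%' LIMIT MAX
```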

### Validate Browser URL Grouping

Ensure Browser segments are captured correctly so that user experience performance is measurable both in the New Relic UI and at the aggregate level when querying via NRQL.
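A quick way to inspect the URL groupings Browser is currently producing is to query the standard `targetGroupedUrl` attribute on `PageView` events, as a sketch:

```
SELECT uniques(targetGroupedUrl) FROM PageView LIMIT MAX
```

If the results are too coarse (many distinct pages collapsed into one group) or too fine (IDs leaking into groups), adjust your segment allow list accordingly.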


### Import the Quality Foundation Dashboard

![cx_qf_dashboard.png](./images/cx_qf_dashboard.png "Quality Foundation Dashboard")

This step creates the dashboard that you will use to measure your customer experience and improve it.

1. Clone the [GitHub repository](https://github.com/newrelic/oma-resource-center/tree/main/src/content/docs/oma/value-drivers/customer-experience/use-cases/quality-foundation).
2. Follow the [GitHub repository README](https://github.com/newrelic/oma-resource-center/tree/main/src/content/docs/oma/value-drivers/customer-experience/use-cases/quality-foundation) instructions to implement the dashboard.
3. Make sure to align the dashboard to lines of business or customer-facing offerings rather than teams. This ensures optimization time is spent where it is most impactful.

### Capture Current Performance For Each Dashboard Page

![cx_qf_kpi_progress.png](./images/cx_qf_kpi_progress.png "Quality Foundation KPIs Dashboard")

1. Follow the [GitHub README](https://github.com/newrelic/oma-resource-center/tree/main/src/content/docs/oma/value-drivers/customer-experience/use-cases/quality-foundation) instructions.
2. Use the dashboard from the previous step to understand overall performance for each line of business. If relevant, apply filters to see performance across regions or devices. If a value drops below target and it matters, add it to the sheet as a candidate for improvement.
* Not worth tracking: a company that sells insurance only in the US notices poor performance in Malaysia.
* Worth tracking: the same company notices poor performance for mobile users in the US.

## Improvement Process [#improvement-process]

### Plan your work

Whether you have a dedicated initiative to improve performance or treat it as ongoing maintenance, you need to track your progress at the end of every sprint.

### Decide which KPIs to Improve

You now know what your user experience looks like across multiple lines of business. Where should you be improving?

1. Start with business priorities. If you have clear business directives, or access to a senior manager who does, focus on what matters most to your organization. For example, say your company has recently launched a new initiative around a line of business, but the KPIs associated with its UI are below target. This is where you should focus your time initially.
2. Next, focus on KPIs for each line of business.
3. Finally, filter each line of business by device, region, etc., to see if additional focus is needed for specific regions or devices.
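For example, you can facet one of the dashboard's page load queries by device to see whether mobile users need specific attention. This sketch reuses the `WebPortal` example app from later in this guide and assumes the standard `deviceType` attribute is available on your `PageViewTiming` events:

```
FROM PageViewTiming SELECT percentile(largestContentfulPaint, 75) WHERE appName = 'WebPortal' FACET deviceType SINCE 1 week AGO
```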

### Improve Targeted KPIs

To track your progress, create a new dashboard or add a new page to the existing dashboard and name it `Quality Foundation KPI Improvement`. For more information, see [Improve Web Uptime](/docs/new-relic-solutions/observability-maturity/customer-experience/cx-improve-web-uptime).

### Improve Page Load Performance

Narrow your focus to specific pages that aren't meeting target KPI values.
For each page load KPI result that is out of bounds in the Quality Foundation Dashboard, remove the `COMPARE WITH` clause and add `FACET pageUrl` (or `FACET targetGroupedUrl`) with `LIMIT MAX` to find which pages are the poor performers.

Use `targetGroupedUrl` when there are many results; for example, when the customer ID is part of the URL. Otherwise, use `pageUrl`.

Original Dashboard query:

```
FROM PageViewTiming SELECT percentile(largestContentfulPaint, 75) WHERE appName ='WebPortal' AND pageUrl LIKE '%phone%' SINCE 1 week AGO COMPARE WITH 1 week AGO
```

New query to identify problem pages:

```
FROM PageViewTiming SELECT percentile(largestContentfulPaint, 75) WHERE appName ='WebPortal' AND pageUrl LIKE '%phone%' FACET targetGroupedUrl LIMIT MAX
```

Once you have identified which pages to improve, improve them following these [best practices](/docs/new-relic-solutions/observability-maturity/customer-experience/cx-improve-page-load).

### Improve AJAX Response Times

Find the slow requests.

1. Go to the Ajax duration widget on the dashboard.
2. View the widget's query, then open it in the query builder.
3. Add `facet requestUrl LIMIT MAX` to the end of the query.
4. Run the query.
5. View the results as a table and save to your KPI Improvement dashboard as `LOB - AjaxResponseTimes`.
6. Focus on improving requests with a `timeToSettle` > 2.5s.
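Assuming the duration widget uses a query along these lines (the `WebPortal` app name is the example used elsewhere in this guide), the faceted version produced by steps 3–5 might look like:

```
FROM AjaxRequest SELECT average(timeToSettle) WHERE appName = 'WebPortal' FACET requestUrl LIMIT MAX
```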

Use New Relic’s recommended best practices to improve response times. See [AJAX troubleshooting tips](/docs/browser/browser-monitoring/browser-pro-features/ajax-page-identify-time-consuming-calls/).

### Improve the AJAX Error Rate

Find the failing requests.

1. Go to Dashboards > Query builder.
2. Enter
```
FROM AjaxRequest
SELECT percentage(count(*), WHERE httpResponseCode >= 400)
WHERE httpResponseCode >= 200 AND <Ajax Request filter>
SINCE 1 week AGO facet pageUrl, appName
```
3. Run the query.
4. View the results as a table and save to your KPI Improvement dashboard as `LOB - Pages with AjaxErrors`.
5. Run the query again for the most problematic pages to find the requests that are failing:
```
FROM AjaxRequest
SELECT percentage(count(*), WHERE httpResponseCode >= 400)
WHERE httpResponseCode >= 200 AND pageUrl = <problematic page> AND appName = <corresponding app> AND <Ajax Request filter>
SINCE 1 week AGO facet requestUrl
```

Use New Relic’s recommended best practices to reduce AJAX errors. See [AJAX troubleshooting tips](/docs/browser/browser-monitoring/browser-pro-features/ajax-page-identify-time-consuming-calls/).

### Improve JavaScript Errors

Find the most common failures.

1. Go to Dashboards > Query builder.
2. Enter
```
FROM JavaScriptError
SELECT count(errorClass)
SINCE 1 week AGO WHERE <PageView filter>
FACET transactionName, errorClass, errorMessage, domain
```
3. Run the query.
4. View the results as a table and save to your KPI Improvement dashboard as `LOB - Javascript Errors`.
5. Use this information to figure out which errors need to be addressed. Use New Relic’s recommended best practices to resolve them. See [JavaScript errors page: Detect and analyze errors](/docs/browser/new-relic-browser/browser-pro-features/javascript-errors-page-detect-analyze-errors/).
6. Remove third-party errors that do not add value.

You may be using a third-party JavaScript library that is noisy but works as expected. You can take a couple of approaches:

* Remove the domain name from the JavaScript error/Pageview ratio widget and add it as its own widget so you can see unexpected changes. You can alert on this using [Baseline NRQL](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/alert-conditions/create-baseline-alert-conditions/) alerts.
* Drop the JavaScript error using [drop filters](/docs/logs/log-management/ui-data/drop-data-drop-filter-rules/). Only use this option if the volume of errors is impacting your data ingest in a significant way. Be as specific as you can in the drop filter.
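For the first approach, a dedicated widget for the noisy domain might use a query like the following sketch (the `cdn.example.com` domain is hypothetical; `domain` is a standard `JavaScriptError` attribute):

```
FROM JavaScriptError SELECT count(*) WHERE domain = 'cdn.example.com' SINCE 1 week AGO TIMESERIES
```

Charting the noisy domain on its own keeps it from skewing the main error ratio while still surfacing unexpected changes.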

## Conclusion [#conclusion]

**Best practices going forward:**

* Revisit performance metrics (shared in this document as Quality Foundation KPIs) at the end of each sprint.
* Incorporate performance improvements into developer sprints.
* Openly share metrics with the lines of business you support as well as other internal stakeholders.
* Define Customer Experience SLOs.
* Create alerts for [business critical drops](/docs/new-relic-solutions/observability-maturity/aqm-implementation-guide/) in Quality Foundation KPIs.
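As a sketch, the signal for such an alert condition could be one of the Quality Foundation KPI queries themselves, for example (reusing the `WebPortal` example app from earlier):

```
FROM PageViewTiming SELECT percentile(largestContentfulPaint, 75) WHERE appName = 'WebPortal'
```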

**Value Realization:**

At the end of this process you should now:

* Have an understanding of your end user experience in a way that is tangible, actionable, and easy for engineers as well as the business to understand.
* Know how releases impact your end customers.
* Know how your customers are impacted by service, infrastructure, or network level events.
* See latency issues caused by backend services if they exist.
* Have created, or be on the path to create, a common language with business owners so you are working together. This can open new avenues for recognition and sponsorship for new projects.
