Commit
Updated text to match new nav (#277)
* Updated text to match new nav

* fixed broken link
emetelka committed Feb 20, 2024
1 parent dde0bab commit a13f75b
Showing 12 changed files with 14 additions and 48 deletions.
2 changes: 1 addition & 1 deletion docs/experiment-analysis/creating-experiments.md
Original file line number Diff line number Diff line change
@@ -6,7 +6,7 @@ sidebar_position: 1

An experiment is a set of metrics, tracked over time, that compare how users respond to being shown different feature sets.

### 1. Navigate to **Experiments** in the left-hand menu and click **+Experiment**
### 1. Navigate to **Analysis** in the left-hand menu, click **+Create**, and select **Experiment Analysis**

![Create experiment](/img/building-experiments/create-experiment.png)

31 changes: 0 additions & 31 deletions docs/experiment-analysis/experiment-metrics.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/experiment-analysis/experiment-status.md
@@ -1,6 +1,6 @@
# Experiment status

You can navigate to the **Experiments** page by clicking on the **Experiments** icon from the left tab. Each experiment has an associated status which indicates the current state of the experiment. An experiment can have one of 4 different statuses:
You can navigate to the **Analysis** page by clicking on its icon in the left-hand menu. Each experiment has an associated status which indicates the current state of the experiment. An experiment can have one of 4 different statuses:

1. Draft - The initial state after the experiment is created
2. Running - The active state where the experiment is assigning subjects and/or updating the metric event data
2 changes: 1 addition & 1 deletion docs/experiment-analysis/explores.md
@@ -2,7 +2,7 @@

## Creating Explore Charts

You can dive deeper into the results of your experiments by creating graphs with the result data. To do this, first navigate to the **Experiments** page using the tab on the left panel and click on the experiment you are interested in. Then, click on the **Explore** tab. You can create a new chart by clicking on the **Create Explore** button.
You can dive deeper into the results of your experiments by creating graphs with the result data. To do this, first navigate to the **Analysis** page using the tab on the left panel and click on the experiment you are interested in. Then, click on the **Explore** tab. You can create a new chart by clicking on the **Create Explore** button.

![Create Explore](/img/measuring-experiments/create-explore-button.png)

2 changes: 1 addition & 1 deletion docs/experiment-analysis/holdouts.md
@@ -109,7 +109,7 @@ With this setup, Holdouts will be logged with Experiments in AssignmentSQL that

### Configuring a Holdouts Analysis

Create a new Holdout experiment over your desired date range by clicking on the "+Experiment" button on the Experiments tab.
Create a new Holdout experiment over your desired date range by clicking on the "+Create" button on the Analysis tab.

Configure the analysis with the variation names in your holdout, such as `status_quo` and `winning_variants`. This allows the Eppo generated SQL to correctly query the data in your warehouse.
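As a loose illustration of why the names must match, the analysis configuration amounts to a mapping from the variation names above to the groups in your assignment data; a sketch in TypeScript, where the variation names come from the docs but the validation helper is hypothetical, not part of any Eppo SDK:

```typescript
// Hypothetical helper: verify that the variation names configured in the
// analysis match what actually appears in your assignment data, so the
// generated SQL can find every subject. Not part of the Eppo SDK.
const configuredVariations = ["status_quo", "winning_variants"];

function unknownVariations(assignedVariations: string[]): string[] {
  // Any assigned variation missing from the analysis config would be
  // silently dropped by the warehouse query.
  return assignedVariations.filter((v) => !configuredVariations.includes(v));
}
```

Running this check against a sample of your assignment table before starting the analysis catches naming typos early.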

7 changes: 2 additions & 5 deletions docs/experiment-analysis/index.md
@@ -13,7 +13,7 @@ slice and dice those results by looking at different segments and metric cuts.

## Viewing multiple experiments

When you click on the **Experiments** tab, you will see the **experiment list
When you click on the **Experiments** tab of the **Analysis** section, you will see the **experiment list
view**, which shows all of your experiments. You can filter this list by
experiment name, status, entity, or owner, or just show experiments you have
**starred**.
@@ -29,10 +29,7 @@ experiment name, status, entity, or owner, or just show experiments you have
Clicking on the name of an experiment will take you to the
**experiment detail view**, which shows the effects of each treatment variation,
compared to control. Within each variation, for each metric that
[you have added to the experiment](/experiment-analysis/experiment-metrics),
we display the (per subject) **average value for the control variation**, as well as
the estimate of the <Term def={true}>relative lift</Term> (that is, the percentage change
from the control value) caused by that treatment variation.
you have added to the experiment, we display the (per subject) **average value for the control variation**, as well as the estimate of the <Term def={true}>relative lift</Term> (that is, the percentage change from the control value) caused by that treatment variation.
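The relative lift described above is just the percentage change of the treatment's per-subject average over the control's; a minimal sketch (the function name is ours, not Eppo's):

```typescript
// Relative lift: percentage change of the treatment mean vs. the control mean.
function relativeLift(controlMean: number, treatmentMean: number): number {
  if (controlMean === 0) {
    throw new Error("relative lift is undefined for a zero control mean");
  }
  return (treatmentMean - controlMean) / controlMean;
}

// A control average of $100 and a treatment average of $105 is a 5% lift:
// relativeLift(100, 105) === 0.05
```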

<Figure alt="Experiment details - overview" src="/img/interpreting-experiments/experiment-details-view.png">
In this example, the control value of <code>Total Purchase Value</code> is
2 changes: 1 addition & 1 deletion docs/experiment-analysis/progress-bar.md
@@ -22,7 +22,7 @@ Precision is set at the metric level. However, it is possible to override this a

The goal of the progress bar is to measure whether we have gathered enough data to be confident in making a decision. In particular, use the progress bar to help you stop experiments that look flat once these hit 100% progress.

To view the progress bar, we must first navigate to the **Experiments** tab from the left panel. The progress bar can be seen in the list item card for each experiment in the experiment list. It can also be seen in the right panel if we click the card. Hovering over the progress bar shows more details, such as the % lift that can be detected with the assignments seen so far:
To view the progress bar, we must first navigate to the **Analysis** tab from the left panel. The progress bar can be seen in the list item card for each experiment in the experiment list. It can also be seen in the right panel if we click the card. Hovering over the progress bar shows more details, such as the % lift that can be detected with the assignments seen so far:

![Progress on list page](/img/interpreting-experiments/progress-card.png)
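
As rough intuition for why the detectable lift shrinks as assignments accrue, here is a textbook two-sample approximation — a simplification under stated assumptions, not Eppo's internal progress calculation:

```typescript
// Approximate minimum detectable lift for a two-sample z-test at
// 5% significance and 80% power (z values 1.96 and 0.84), assuming
// equal group sizes. A simplification, not Eppo's progress formula.
function minDetectableLift(
  mean: number,      // per-subject metric mean
  stdDev: number,    // per-subject metric standard deviation
  nPerGroup: number  // subjects assigned to each variation so far
): number {
  const z = 1.96 + 0.84;
  return (z * stdDev * Math.sqrt(2 / nPerGroup)) / mean;
}
// Because of the 1/sqrt(n) scaling, quadrupling assignments
// roughly halves the lift you can detect.
```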

4 changes: 2 additions & 2 deletions docs/feature-flagging/index.md
@@ -67,11 +67,11 @@ Note that it is possible to reduce an allocation's traffic exposure to less than

Every Eppo instance comes with two out-of-the-box environments: **Test** and **Production**. Use the **Test** environment to check feature flag behavior before releasing them in **Production**.

Additional environments can be added with no limit to match the ways you develop and ship code. For example, you can create an environment for each developer's local machine, or extra ones if you have multiple lower environments. Use _Feature Flags > Environments_ to create new environments.
Additional environments can be added with no limit to match the ways you develop and ship code. For example, you can create an environment for each developer's local machine, or extra ones if you have multiple lower environments. Use _Configuration > Environments_ to create new environments.

![Environment setup](/img/feature-flagging/environments/environment-setup.png)

SDK keys for environments can be created on the _Feature Flags > SDK Keys_ section of the interface:
SDK keys for environments can be created on the _Configuration > SDK Keys_ section of the interface:

![SDK key setup](/img/feature-flagging/environments/sdk-keys.png)
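
A common pattern is to store one SDK key per environment and select the right one at startup; a sketch in TypeScript, where the `EPPO_SDK_KEY_*` variable names are our own convention, not Eppo's:

```typescript
// Sketch: pick the SDK key for the current environment from environment
// variables. The variable names are illustrative, not an Eppo convention.
function sdkKeyFor(env: "test" | "production"): string {
  const key =
    env === "production"
      ? process.env.EPPO_SDK_KEY_PROD
      : process.env.EPPO_SDK_KEY_TEST;
  if (!key) {
    throw new Error(`No Eppo SDK key configured for the ${env} environment`);
  }
  return key; // pass this to the SDK's initialization call
}
```

Keeping the keys out of source control and switching purely on the deployment environment means the same build artifact works in **Test** and **Production**.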

2 changes: 1 addition & 1 deletion docs/guides/integrating-with-contentful.md
@@ -39,7 +39,7 @@ Now that we have the content defined, we’ll need to get the entry ID for our t

## Setting up Eppo

Next, we’ll create a [corresponding flag in Eppo](/feature-flag-quickstart/) (Feature Flags >> Create). For each variant above, simply add a new variant in the Eppo UI. For each variant value, paste in the corresponding Contentful entry ID. Make sure to also save your Feature Flag key – you will need it for your Node implementation later.
Next, we’ll create a [corresponding flag in Eppo](/feature-flag-quickstart/) (Configuration >> Create). For each variant above, simply add a new variant in the Eppo UI. For each variant value, paste in the corresponding Contentful entry ID. Make sure to also save your Feature Flag key – you will need it for your Node implementation later.

![Eppo feature flag setup](/img/guides/integrating-with-contentful/eppo-feature-flag-setup.png)
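
Since each variant's value is simply a Contentful entry ID, serving the content reduces to a lookup followed by a fetch; a sketch where the map and IDs are placeholders (in practice the variant value the Eppo SDK returns *is* the entry ID — the map only makes the sketch self-contained):

```typescript
// Placeholder mapping from Eppo variant values to Contentful entry IDs.
// Substitute the real entry IDs you copied in the previous step.
const entryIdByVariant: Record<string, string> = {
  control: "entryIdControl",     // placeholder, not a real entry ID
  treatment: "entryIdTreatment", // placeholder, not a real entry ID
};

function entryIdFor(variant: string): string {
  const entryId = entryIdByVariant[variant];
  if (!entryId) {
    throw new Error(`No Contentful entry mapped for variant: ${variant}`);
  }
  return entryId;
  // With a Contentful client you would then call client.getEntry(entryId).
}
```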

4 changes: 2 additions & 2 deletions docs/quick-starts/experiment-quickstart.md
@@ -40,15 +40,15 @@ You can read more about Assignment SQL Definitions [here](/data-management/defin
<Tabs>
<TabItem value="e2e" label="Eppo Randomized">

From the Feature Flag page, click **Create Experiment Analysis**. Give the experiment a name, select the assignment logging table you created above, and click **Next**.
From your Feature Flag, click **Create Experiment Analysis**. Give the experiment a name, select the assignment logging table you created above, and click **Next**.

![Create Experiment 1b](/../static/img/building-experiments/quick-start-1b.png)

</TabItem>

<TabItem value="external" label="Externally Randomized">

From the **Experiments** tab, click **+Create** and select **Experiment Analysis**.
From the **Analysis** section, click **+Create** and select **Experiment Analysis**.

![Create Experiment 1](/../static/img/building-experiments/quick-start-1.png)

2 changes: 1 addition & 1 deletion docs/quick-starts/feature-flag-quickstart.md
@@ -13,7 +13,7 @@ Note that if you are using Eppo alongside an existing randomization tool, you ca

### 1. Generate an SDK key

From the Feature Flag page, navigate to the SDK keys tab. Here you can generate keys for both production and testing.
From the Configuration section, navigate to the SDK keys tab. Here you can generate keys for both production and testing.

![Setup Eppo SDK key](/img/feature-flagging/environments/sdk-keys.png)

2 changes: 1 addition & 1 deletion docs/sdks/api-keys.md
@@ -6,7 +6,7 @@ sidebar_position: 2

SDK keys are used to determine the environment in which the SDK is being used. A flag can have a different status and different targeting rules for each environment.

To create an SDK key, navigate to the **SDK Keys** section of the **Feature Flags** page and click **New SDK Key**.
To create an SDK key, navigate to the **SDK Keys** section under **Configuration** and click **New SDK Key**.

![SDK key setup](/img/feature-flagging/environments/sdk-keys.png)
