diff --git a/docs/experiment-analysis/creating-experiments.md b/docs/experiment-analysis/creating-experiments.md index 310f54a2..37c4dd84 100644 --- a/docs/experiment-analysis/creating-experiments.md +++ b/docs/experiment-analysis/creating-experiments.md @@ -6,7 +6,7 @@ sidebar_position: 1 Experiments are a set of metrics that correspond to users being shown different feature sets that you would like to track over time. -### 1. Navigate to **Experiments** in the left-hand menu and click **+Experiment** +### 1. Navigate to **Analysis** in the left-hand menu, click **+Create**, and select **Experiment Analysis** ![Create experiment](/img/building-experiments/create-experiment.png) diff --git a/docs/experiment-analysis/experiment-metrics.md b/docs/experiment-analysis/experiment-metrics.md deleted file mode 100644 index 530da0a2..00000000 --- a/docs/experiment-analysis/experiment-metrics.md +++ /dev/null @@ -1,31 +0,0 @@ -# Experiment metrics - -## Adding metrics to experiments - -1. Navigate to **Experiments** and click the **Overview** tab - -You should have already [configured your feature flag](/experiment-analysis/creating-experiments) above, if you haven't, go do that first. - -Under **Decision metrics**, you will see that [guardmail metrics](/data-management/metrics/guardrails) have already been included automatically. - -4. Click **+Add metric** button - -![Configure experiment](/img/building-experiments/add-metric.png) - -On the left hand of the modal, you will see a list of metrics that have been created that are attached to this set of experiment subjects. - -You can select one of them to add to the experiment - -5. Click **Save** - -## Removing metrics from experiments - -![Removing metrics from experiments](/img/building-experiments/remove-metric-from-experiment.gif) - -1. Navigate to **Experiments** > **Overview** - -2. Find the decision metric you would like to remove - -3. Click on the three dots next to it - -4. 
Select **Remove from experiment** from the dropdown menu diff --git a/docs/experiment-analysis/experiment-status.md b/docs/experiment-analysis/experiment-status.md index a4571fcc..f55143c7 100644 --- a/docs/experiment-analysis/experiment-status.md +++ b/docs/experiment-analysis/experiment-status.md @@ -1,6 +1,6 @@ # Experiment status -You can navigate to the **Experiments** page by clicking on the **Experiments** icon from the left tab. Each experiment has an associated status which indicates the current state of the experiment. An experiment can have one of 4 different statuses: +You can navigate to the **Analysis** page by clicking on the **Analysis** icon from the left-hand menu. Each experiment has an associated status which indicates the current state of the experiment. An experiment can have one of 4 different statuses: 1. Draft - The initial state after the experiment is created 2. Running - The active state where the experiment is assigning subjects and/or updating the metric event data diff --git a/docs/experiment-analysis/explores.md b/docs/experiment-analysis/explores.md index d71a2b8b..8e1c4b2b 100644 --- a/docs/experiment-analysis/explores.md +++ b/docs/experiment-analysis/explores.md @@ -2,7 +2,7 @@ ## Creating Explore Charts -You can dive deeper into the results of your experiments by creating graphs with the result data. To do this, first navigate to the **Experiments** page using the tab on the left panel and click on the experiment you are interested in. Then, click on the **Explore** tab. You can create a new chart by clicking on the **Create Explore** button. +You can dive deeper into the results of your experiments by creating graphs with the result data. To do this, first navigate to the **Analysis** page using the tab on the left panel and click on the experiment you are interested in. Then, click on the **Explore** tab. You can create a new chart by clicking on the **Create Explore** button. 
![Create Explore](/img/measuring-experiments/create-explore-button.png) diff --git a/docs/experiment-analysis/holdouts.md b/docs/experiment-analysis/holdouts.md index d9ba4fc9..354b99ef 100644 --- a/docs/experiment-analysis/holdouts.md +++ b/docs/experiment-analysis/holdouts.md @@ -109,7 +109,7 @@ With this setup, Holdouts will be logged with Experiments in AssignmentSQL that ### Configuring a Holdouts Analysis -Create a new Holdout experiment over your desired date range by clicking on the "+Experiment" button on the Experiments tab. +Create a new Holdout experiment over your desired date range by clicking on the "+Create" button on the Analysis tab. Configure the analysis with the variation names in your holdout, such as `status_quo` and `winning_variants`. This allows the Eppo generated SQL to correctly query the data in your warehouse. diff --git a/docs/experiment-analysis/index.md b/docs/experiment-analysis/index.md index 8d0ef0bc..41bfa4f6 100644 --- a/docs/experiment-analysis/index.md +++ b/docs/experiment-analysis/index.md @@ -13,7 +13,7 @@ slice and dice those results by looking at different segments and metric cuts. ## Viewing multiple experiments -When you click on the **Experiments** tab, you will see the **experiment list +When you click on the **Experiments** tab of the **Analysis** section, you will see the **experiment list view**, which shows all of your experiments. You can filter this list by experiment name, status, entity, or owner, or just show experiments you have **starred**. @@ -29,10 +29,7 @@ experiment name, status, entity, or owner, or just show experiments you have Clicking on the name of an experiment will take you to the **experiment detail view**, which shows the effects of each treatment variation, compared to control. 
Within each variation, for each metric that -[you have added to the experiment](/experiment-analysis/experiment-metrics), -we display the (per subject) **average value for the control variation**, as well as -the estimate of the relative lift (that is, the percentage change -from the control value) caused by that treatment variation. +you have added to the experiment, we display the (per subject) **average value for the control variation**, as well as the estimate of the relative lift (that is, the percentage change from the control value) caused by that treatment variation.
In this example, the control value of Total Purchase Value is diff --git a/docs/experiment-analysis/progress-bar.md b/docs/experiment-analysis/progress-bar.md index 62b9e3a5..b53366ca 100644 --- a/docs/experiment-analysis/progress-bar.md +++ b/docs/experiment-analysis/progress-bar.md @@ -22,7 +22,7 @@ Precision is set at the metric level. However, it is possible to override this a The goal of the progress bar is to measure whether we have gathered enough data to be confident in making a decision. In particular, use the progress bar to help you stop experiments that look flat once these hit 100% progress. -To view the progress bar, we must first navigate to the **Experiments** tab from the left panel. The progress bar can be seen in the list item card for each experiment in the experiment list. It can also be seen in the right panel if we click the card. Hovering over the progress bar shows we more details like the % lift that can be detected with the assignments seen so far: +To view the progress bar, we must first navigate to the **Analysis** tab from the left panel. The progress bar can be seen in the list item card for each experiment in the experiment list. It can also be seen in the right panel if we click the card. Hovering over the progress bar shows more details, like the % lift that can be detected with the assignments seen so far: ![Progress on list page](/img/interpreting-experiments/progress-card.png) diff --git a/docs/feature-flagging/index.md b/docs/feature-flagging/index.md index 1189337d..32d2dbc9 100644 --- a/docs/feature-flagging/index.md +++ b/docs/feature-flagging/index.md @@ -67,11 +67,11 @@ Note that it is possible to reduce an allocation's traffic exposure to less than Every Eppo instance comes with two out-of-the-box environments: **Test** and **Production**. Use the **Test** environment to check feature flag behavior before releasing them in **Production**. 
-Additional environments can be added with no limit to match the ways you develop and ship code. For example, you can create environments for every developer's local environment or if you have multiple lower environments. Use _Feature Flags > Environments_ to create new environments. +Additional environments can be added with no limit to match the ways you develop and ship code. For example, you can create an environment for each developer's local setup, or for each of your lower environments. Use _Configuration > Environments_ to create new environments. ![Environment setup](/img/feature-flagging/environments/environment-setup.png) -SDK keys for environments can be created on the _Feature Flags > SDK Keys_ section of the interface: +SDK keys for environments can be created on the _Configuration > SDK Keys_ section of the interface: ![SDK key setup](/img/feature-flagging/environments/sdk-keys.png) diff --git a/docs/guides/integrating-with-contentful.md b/docs/guides/integrating-with-contentful.md index 7ab36a4a..d47827d0 100644 --- a/docs/guides/integrating-with-contentful.md +++ b/docs/guides/integrating-with-contentful.md @@ -39,7 +39,7 @@ Now that we have the content defined, we’ll need to get the entry ID for our t ## Setting up Eppo -Next, we’ll create a [corresponding flag in Eppo](/feature-flag-quickstart/) (Feature Flags >> Create). For each variant above, simply add a new variant in the Eppo UI. For each variant value, paste in the corresponding Contentful entry ID. Make sure to also save your Feature Flag key – you will need it for your Node implementation later. +Next, we’ll create a [corresponding flag in Eppo](/feature-flag-quickstart/) (Configuration > Create). For each variant above, simply add a new variant in the Eppo UI. For each variant value, paste in the corresponding Contentful entry ID. Make sure to also save your Feature Flag key – you will need it for your Node implementation later. 
![Eppo feature flag setup](/img/guides/integrating-with-contentful/eppo-feature-flag-setup.png) diff --git a/docs/quick-starts/experiment-quickstart.md b/docs/quick-starts/experiment-quickstart.md index 0cccbbd0..bde85524 100644 --- a/docs/quick-starts/experiment-quickstart.md +++ b/docs/quick-starts/experiment-quickstart.md @@ -40,7 +40,7 @@ You can read more about Assignment SQL Definitions [here](/data-management/defin -From the Feature Flag page, click **Create Experiment Analysis**. Give the experiment a name, select the assignment logging table you created above, and click **Next**. +From your Feature Flag, click **Create Experiment Analysis**. Give the experiment a name, select the assignment logging table you created above, and click **Next**. ![Create Experiment 1b](/../static/img/building-experiments/quick-start-1b.png) @@ -48,7 +48,7 @@ From the Feature Flag page, click **Create Experiment Analysis**. Give the exper -From the **Experiments** tab, click **+Create** and select **Experiment Analysis**. +From the **Analysis** section, click **+Create** and select **Experiment Analysis**. ![Create Experiment 1](/../static/img/building-experiments/quick-start-1.png) diff --git a/docs/quick-starts/feature-flag-quickstart.md b/docs/quick-starts/feature-flag-quickstart.md index 32dbb631..7f663901 100644 --- a/docs/quick-starts/feature-flag-quickstart.md +++ b/docs/quick-starts/feature-flag-quickstart.md @@ -13,7 +13,7 @@ Note that if you are using Eppo alongside an existing randomization tool, you ca ### 1. Generate an SDK key -From the Feature Flag page, navigate to the SDK keys tab. Here you can generate keys for both production and testing. +From the Configuration section, navigate to the SDK keys tab. Here you can generate keys for both production and testing. 
![Setup Eppo SDK key](/img/feature-flagging/environments/sdk-keys.png) diff --git a/docs/sdks/api-keys.md b/docs/sdks/api-keys.md index ffe7d5cd..8877b1f3 100644 --- a/docs/sdks/api-keys.md +++ b/docs/sdks/api-keys.md @@ -6,7 +6,7 @@ sidebar_position: 2 SDK keys are used to determine the environment in which the SDK is being used. A flag can have a different status and different targeting rules for each environment. - To create an SDK key, navigate to the **SDK Keys** section of the **Feature Flags** page and click **New SDK Key**. + To create an SDK key, navigate to the **SDK Keys** section of the **Configuration** page and click **New SDK Key**. ![SDK key setup](/img/feature-flagging/environments/sdk-keys.png)