Feature Flag Experiment Links #1139

Merged
merged 57 commits into main from feature-experiment-ref on Aug 9, 2023

Conversation

jdorn (Member) commented on Apr 4, 2023

Overview

This PR creates a stronger link between feature flags and experiments.

Currently, feature flag experiment rules define all of the targeting and assignment parameters inline. Then, an entirely separate Experiment object must be created for analysis.

Both the Experiment and the Feature Rule have copies of variation weights, coverage, etc. and it's easy for them to get out-of-sync.

This PR makes the Experiment the source-of-truth. Adding an experiment rule to a feature will just reference the Experiment id instead of defining settings inline.

Changes

  • Model changes to link features and experiments in Mongo
  • New experiment-ref feature rule that contains just an experimentId and a variation->value mapping (see the sketch after this list)
  • Update the SDK payload to include these new experiment-ref rules and regenerate it whenever a linked experiment changes
  • Ability to create a feature flag directly from a draft experiment (or link an existing feature flag)
  • On experiment pages, if there is only a single linked change and it's a feature flag, show the feature flag name in the top summary info
  • In the "Design a new Experiment" flow, don't ask for any analysis info up front
  • Ability to specify the hashVersion for an experiment (for SDKs that don't support hash V2 yet)
  • Open the visual editor automatically after entering and saving a URL
  • Improve error messaging and CTAs on the experiment page when in a draft state, including a new "Pre-launch Checklist"
  • If you skipped creating a data source, allow creating one from the experiment results after starting a test
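
For illustration, a minimal TypeScript sketch of what an experiment-ref rule could look like (field names here are assumptions, not the exact schema):

// Hypothetical shape of the new experiment-ref rule; field names are
// illustrative, not the actual GrowthBook model.
interface ExperimentRefVariation {
  variationId: string; // id of a variation on the linked Experiment
  value: string; // feature value to serve for that variation
}

interface ExperimentRefRule {
  type: "experiment-ref";
  id: string;
  experimentId: string; // targeting, weights, and coverage all live on the Experiment
  variations: ExperimentRefVariation[];
  enabled: boolean;
}

Everything except the variation->value mapping is resolved from the linked Experiment when the SDK payload is built, which is what prevents the two objects from drifting out of sync.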

Fast Follow Changes:

  • Ability to migrate a legacy experiment rule to a new experiment-ref rule on a feature/experiment page
  • Update documentation

Screenshots

Adding an experiment rule to a feature flag and choosing an existing experiment

[screenshot]

Adding an experiment rule to a feature flag and creating the experiment inline

[screenshot]

New experiment-ref rule display on feature flag pages (condition, traffic splits, etc. are coming from the linked experiment)

[screenshot]

When the experiment is being skipped for any number of reasons:

[screenshot]

When an experiment is stopped and a Temporary Rollout is enabled

[screenshot]

Once the Temporary Rollout is disabled

[screenshot]

Show linked feature flag at the top of the experiment

[screenshot]

Linked features show up and there's a way to add more to an experiment

[screenshot]

Streamlined flow to design a new experiment (removed data and analysis page)

[screenshot]

[screenshot]

[screenshot]

Driver at top of brand new draft experiment now lets you add a visual editor change OR a feature flag

[screenshot]

Create a new feature flag directly from the experiment page

[screenshot]

OR select an existing feature flag directly from the experiment page

[screenshot]

Pre-launch Checklist when there are no errors

[screenshot]

If there are errors, still provide an escape hatch to start anyway for people who know what they're doing. A tooltip directs them to the "Start Experiment" link below if they want to bypass the checks.

[screenshot]

If an experiment is running, but they haven't connected to any data sources yet, explain the situation

[screenshot]

For SDK Connections, make the "Include experiment/variation names" toggle apply to both feature flags and visual editor experiments.

[screenshot]

Testing

Lots of different states to test on the experiment page

  • Draft, no phases
  • Draft, phase, no changes
  • Draft, phase, visual changesets, no actual changes
  • Draft, phase, visual changesets, changes, no SDK Connection
  • Draft, phase, visual changesets, changes, valid SDK connection
  • Draft, phase, linked feature, rule disabled
  • Draft, phase, linked feature, draft changes
  • Draft, phase, linked feature, environment disabled
  • Draft, phase, linked feature, feature archived
  • Draft, phase, linked feature, active
  • Running, no data source selected, no data sources exist
  • Running, no data source selected, some data sources exist already
  • Running, data source selected
  • Stopped, no winner chosen
  • Stopped, winner chosen, no released variation
  • Stopped, winner chosen, no linked changes
  • Stopped, winner chosen, included in payload
  • Stopped, winner chosen, excluded from payload

Plus, things to test with the SDK payloads:

  • Variation/experiment names are included and controlled with the SDK Connection setting
  • Experiment-ref rules are included in the payload
  • SDK payloads are re-generated when changes are made to a linked experiment
  • Legacy experiment rules are also included in the payload
  • Skip rule if experimentId is invalid
  • Skip rule if experiment is a draft
  • Skip rule if experiment is archived
  • Skip rule if experiment is stopped and no winner chosen
  • Skip rule if experiment is stopped, a variation is chosen, but the Temporary Rollout is disabled
  • Includes a force rule when experiment is stopped and Temporary Rollout is enabled
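
Taken together, the skip/force cases above amount to a small decision function when building the payload. A hedged sketch (function and field names are assumptions, not the actual implementation):

type PayloadRule =
  | { type: "experiment"; experimentId: string }
  | { type: "force"; value: string };

function resolveExperimentRefRule(
  rule: {
    experimentId: string;
    variations: { variationId: string; value: string }[];
  },
  exp: {
    status: "draft" | "running" | "stopped";
    archived: boolean;
    releasedVariationId?: string;
    temporaryRolloutEnabled: boolean;
  } | null
): PayloadRule | null {
  if (!exp) return null; // experimentId is invalid
  if (exp.archived || exp.status === "draft") return null;
  if (exp.status === "stopped") {
    // Keep serving the winner only while a Temporary Rollout is enabled
    if (!exp.releasedVariationId || !exp.temporaryRolloutEnabled) return null;
    const winner = rule.variations.find(
      (v) => v.variationId === exp.releasedVariationId
    );
    return winner ? { type: "force", value: winner.value } : null;
  }
  // Running: emit a normal experiment rule referencing the linked Experiment
  return { type: "experiment", experimentId: rule.experimentId };
}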

Related PRs

The following are related to this PR, but can be done as standalone changes to keep this PR small:

github-actions bot commented on Apr 4, 2023

Your preview environment pr-1139-bttf has been deployed.

jdorn added a commit that referenced this pull request Apr 4, 2023
We fetch experiments from the API in 5 different places throughout the code. This PR centralizes that logic in a `useExperiments` hook.

In #1139, we are going to add even more places where we need to fetch experiments, so trying to get this merged first.
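
A minimal sketch of what a centralized hook like that could look like, using SWR (the endpoint, response shape, and types are assumptions, not the actual GrowthBook code):

import useSWR from "swr";

interface ExperimentInterface {
  id: string;
  name: string;
  status: "draft" | "running" | "stopped";
}

const fetcher = (url: string): Promise<{ experiments: ExperimentInterface[] }> =>
  fetch(url).then((res) => res.json());

export function useExperiments() {
  const { data, error, mutate } = useSWR("/api/experiments", fetcher);
  return {
    experiments: data?.experiments ?? [],
    loading: !data && !error,
    error,
    mutate, // callers can revalidate after creating or updating an experiment
  };
}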
jdorn mentioned this pull request Apr 4, 2023
jdorn added a commit that referenced this pull request Apr 4, 2023
Right now, we prompt users to add an initial rule when creating a feature. This is hard to maintain (need 2 copies of the rule UI).  It also gives new users the wrong impression that there's only a single rule for a feature. We want to train users to use the actual rule UI and think of features as having a list of rules.

This PR will also make #1139 much easier to implement.
jdorn added a commit that referenced this pull request Apr 4, 2023
Restructure the left navigation to better support the visual editor and experiments as a standalone object.

Specific changes:
- Features now has no sub-items (all of them moved to a new SDK Configuration section)
- Move Experiments to a top-level item
- Rename "Analysis" to "Metrics and Data"
- Move Slack into Settings and get rid of the Integrations section

Should merge after #1139 since otherwise people will click "Experiments" to create a new feature flag experiment and that won't be possible until this PR is merged.
bryce-fitzsimons added a commit that referenced this pull request Jul 5, 2023
* New Left Nav Organization (squash merge of the left-nav restructure described in the commit above)

* change client keys to sdk connections

* standardize some styling within SDK Configuration section

* cleanup

* lint

* "looking for..." messages, style cleanups


Co-authored-by: Bryce <bryce1@gmail.com>
github-actions bot commented on Jul 18, 2023

Deploy preview for docs ready!

✅ Preview
https://docs-izgifunmd-growthbook.vercel.app

Built with commit 5dcf3c1.
This pull request is being automatically deployed with vercel-action

bttf (Collaborator) commented on Aug 2, 2023

I noticed that there isn't a simple way to delete a linked feature from the experiment page: you have to delete the experiment rule from every environment on the feature. This could become untenable if a company has lots of environments.

// No unpublished feature flags
const hasFeatureFlagsErrors = linkedFeatures.some((f) =>
  f.rules.some(
    (r) => r.draft || !r.environmentEnabled || r.rule.enabled === false
  )
);
bttf (Collaborator) commented on Aug 2, 2023

RE: the r.environmentEnabled check: do we want to verify that every environment is enabled for every feature? For example, I may have a rule that is published but disabled for specific environments for whatever reason ... Not having all environments enabled for a feature results in a red X in the pre-launch checklist, which can make it seem like something is wrong.
[screenshot]

jdorn (Member, Author) replied:

A couple of options:

  1. Change the check to just require at least one live rule per feature (sketched below)
  2. Introduce another checkbox state with a yellow warning triangle and a tooltip, so if you have at least one active rule but some disabled ones, it would warn you without blocking you from starting
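
For reference, option 1 (sketched here under the same assumed shapes as the check quoted above) would flag an error only when a feature has no live rule at all, rather than when any single rule is disabled:

// Option 1 sketch: error only if a feature has no live rule at all
const hasFeatureFlagsErrors = linkedFeatures.some(
  (f) =>
    !f.rules.some(
      (r) => !r.draft && r.environmentEnabled && r.rule.enabled !== false
    )
);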

bttf (Collaborator) replied:

Option 1 makes the most sense to me. My understanding is that it's normal to have features enabled for some environments and not others. I'm not sure if there are any indirect repercussions that would degrade the integrity of an experiment by doing so ... Given that it's a normal workflow, option 2 seems more alarming than it should be.

bryce-fitzsimons (Member) commented:

UI: For a linked draft experiment, it seems odd to mute the header text and link here
[screenshot]

bryce-fitzsimons (Member) commented:

The linked change UI looks a bit clunky here. I'd prefer reverting the styling back (🎏 FEATURE FLAG // name: flag-name)
[screenshot]

bryce-fitzsimons (Member) commented:

After creating a new linked visual editor change (targeting only), the widget automatically opened. Maybe I'm anchored on the old behavior, but it felt very jarring; I didn't expect a Submit button to redirect me. We might want to either warn them ("Submit and Open Editor") or use two separate CTAs for "Submit" and "Submit and Open Editor".

bryce-fitzsimons (Member) commented:

When creating a new linked feature flag from within an experiment, the default values for JSON are true and false, which, while technically valid, feel like a mistake. I'd expect to see something like {} instead.

jdorn merged commit 64a000f into main on Aug 9, 2023
6 checks passed
jdorn deleted the feature-experiment-ref branch on August 9, 2023 at 01:56