Feature Flag Experiment Links #1139
Conversation
Currently, feature flag experiment rules define all of the targeting and assignment parameters inline. Then, an entirely separate Experiment object must be created for analysis. Both the Experiment and the Feature Rule have copies of variation weights, coverage, etc. and it's easy for them to get out-of-sync. This PR makes the Experiment the source-of-truth. Adding an experiment rule to a feature will just reference the Experiment id instead of defining settings inline.
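As a rough illustration (the type names below are hypothetical, not the actual GrowthBook schema), the difference between an inline experiment rule and an `experiment-ref` rule looks something like this:

```typescript
// Hypothetical shapes for illustration, not the real schema.
// An inline rule duplicates settings that also live on the Experiment;
// an experiment-ref rule stores only the reference plus a value mapping.
interface InlineExperimentRule {
  type: "experiment";
  coverage: number; // duplicated on the Experiment
  variations: { value: string; weight: number }[]; // duplicated weights
}

interface ExperimentRefRule {
  type: "experiment-ref";
  experimentId: string; // the Experiment is the source of truth
  variations: { variationId: string; value: string }[]; // values only
}

type FeatureRule = InlineExperimentRule | ExperimentRefRule;

// Type guard to branch on the rule kind when rendering or resolving
function isExperimentRef(rule: FeatureRule): rule is ExperimentRefRule {
  return rule.type === "experiment-ref";
}
```

Because the ref rule carries no weights or coverage of its own, there is nothing to drift out of sync when the Experiment changes.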
Your preview environment pr-1139-bttf has been deployed.
We fetch experiments from the API in 5 different places throughout the code. This PR centralizes that logic in a `useExperiments` hook. In #1139, we are going to add even more places where we need to fetch experiments, so trying to get this merged first.
Right now, we prompt users to add an initial rule when creating a feature. This is hard to maintain (need 2 copies of the rule UI). It also gives new users the wrong impression that there's only a single rule for a feature. We want to train users to use the actual rule UI and think of features as having a list of rules. This PR will also make #1139 much easier to implement.
Restructure the left navigation to better support the visual editor and experiments as a standalone object. Specific changes:
- Features now has no sub-items (all of them moved to a new SDK Configuration section)
- Move Experiments to a top-level item
- Rename "Analysis" to "Metrics and Data"
- Move Slack into Settings and get rid of the Integrations section

Should merge after #1139 since otherwise people will click "Experiments" to create a new feature flag experiment and that won't be possible until this PR is merged.
* New Left Nav Organization
* change client keys to sdk connections
* standardize some styling within SDK Configuration section
* cleanup
* lint
* "looking for..." messages, style cleanups

Co-authored-by: Bryce <bryce1@gmail.com>
Deploy preview for docs ready! ✅ Built with commit 5dcf3c1.
…` setting for SDK connections apply to both visual editor changes and feature flags
I noticed that there isn't a simple way to delete a linked feature from the experiment page - you have to delete the experiment rule from every environment on the feature. This seems like it could be untenable if a company has lots of envs.
```ts
// No unpublished feature flags
const hasFeatureFlagsErrors = linkedFeatures.some((f) =>
  f.rules.some(
    (r) => r.draft || !r.environmentEnabled || r.rule.enabled === false
  )
);
```
RE: the `r.environmentEnabled` check - do we want to verify that every environment is enabled for every feature? For example, I may have a rule that is published but disabled for specific environments for whatever reason. Not having all environments enabled for a feature results in a red X in the pre-launch checklist, which can make it seem like something is wrong.
A couple of options:
1. Change the check to just require at least one rule to be live per feature
2. Introduce another checkbox state with a yellow warning triangle and a tooltip, so if you have at least one active rule but some disabled ones, it would warn you without blocking you from starting.
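The first option could be sketched roughly like this (hypothetical, flattened field names rather than the real types): flag a feature only when it has no live rule at all, instead of flagging it when any rule is draft or disabled:

```typescript
// Sketch of a "require at least one live rule per feature" check.
// Field names are illustrative, not the actual data model.
type LinkedRule = { draft: boolean; environmentEnabled: boolean; enabled: boolean };
type LinkedFeature = { rules: LinkedRule[] };

function hasFeatureFlagErrors(linkedFeatures: LinkedFeature[]): boolean {
  // A feature is an error only if NO rule is live
  return linkedFeatures.some(
    (f) => !f.rules.some((r) => !r.draft && r.environmentEnabled && r.enabled)
  );
}
```

With this version, a feature with one live rule and several intentionally disabled ones would pass the checklist.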
Option 1 makes the most sense to me. My understanding is that it's normal to have features enabled for some environments and not for others. I'm not sure if there are any indirect repercussions that would degrade the integrity of an experiment by doing so. Given that it's a normal workflow, option 2 seems more alarming than it should be.
After creating a new linked visual editor (targeting only), the widget automatically opened. Maybe I'm anchored on the old behavior, but it felt very jarring - I didn't expect a Submit button to redirect me. We might want to either warn them ("Submit and Open Editor") or use two separate CTAs for "Submit" and "Submit and Open Editor".
When creating a new linked feature flag from within an experiment, the default values for JSON are
…rmation step to "Stop Temporary Rollout" button.
Overview
This PR creates a stronger link between feature flags and experiments.
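The source-of-truth model can be sketched at SDK-payload time roughly as follows (hypothetical shapes and function names, not the actual implementation): targeting, coverage, and weights come from the linked Experiment, while the feature rule contributes only the variation values:

```typescript
// Illustrative sketch of resolving an experiment-ref rule into a
// concrete rule when the SDK payload is generated.
type Experiment = {
  id: string;
  condition: string;
  coverage: number;
  variations: { id: string; weight: number }[];
};
type ExperimentRefRule = {
  experimentId: string;
  variations: { variationId: string; value: string }[];
};

function resolveRule(rule: ExperimentRefRule, experiments: Map<string, Experiment>) {
  const exp = experiments.get(rule.experimentId);
  if (!exp) return null; // linked experiment missing: skip the rule
  return {
    condition: exp.condition, // from the Experiment (source of truth)
    coverage: exp.coverage,
    variations: exp.variations.map((v) => ({
      weight: v.weight, // weights live only on the Experiment
      value: rule.variations.find((m) => m.variationId === v.id)?.value ?? null,
    })),
  };
}
```

Under this model, editing the Experiment's weights or coverage automatically flows into every feature that references it, rather than requiring each copy to be updated.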
Changes
- New `experiment-ref` feature rule that just contains an `experimentId` and a variation->value mapping
- `experiment-ref` rules re-generate anytime a linked experiment changes
- `hashVersion` for an experiment (for SDKs that don't support hash V2 yet)

Fast Follow Changes:
Screenshots
Adding an experiment rule to a feature flag and choosing an existing experiment
Adding an experiment rule to a feature flag and creating the experiment inline
New `experiment-ref` rule display on feature flag pages (condition, traffic splits, etc. are coming from the linked experiment)

When the experiment is being skipped for any number of reasons:
When an experiment is stopped and a Temporary Rollout is enabled
Once the Temporary Rollout is disabled
Show linked feature flag at the top of the experiment
Linked features show up and there's a way to add more to an experiment
Streamlined flow to design a new experiment (removed data and analysis page)
Driver at top of brand new draft experiment now lets you add a visual editor change OR a feature flag
Create a new feature flag directly from the experiment page
OR select an existing feature flag directly from the experiment page
Pre-launch Checklist when there are no errors
If there are errors, still provide an escape hatch to start anyway for people who know what they are doing. A tooltip directs people to the "Start Experiment" link below if they want to bypass the checks.
If an experiment is running, but they haven't connected to any data sources yet, explain the situation
For SDK Connections, make the "Include experiment/variation names" toggle apply to both feature flags and visual editor experiments.
Testing
Lots of different states to test on the experiment page
Plus, things to test with the SDK payloads:
Related PRs
The following are related to this PR, but can be done as standalone changes to keep this PR small:
- `runExperiments` permission #1165