Fix #652: Add option for setting user or default prefs for experiments. #670
Conversation
recipe-client-addon/bootstrap.js (outdated)
// Initialize experiments first to avoid a race between initializing prefs
// and recipes rolling back pref changes when experiments end.
PreferenceExperiments.init().catch(err => {
  log.error("Failed to initialize preference experiments:", err);
I would've liked to test that PreferenceExperiments.init failing does not stop RecipeRunner.init from running, but I dunno how to test bootstrap stuff.
Yeah, I don't know how to do that either. I guess you could write a browser test that disables and re-enables the add-on (using add-on manager APIs)? I think that'd run this code, maybe?
@rhelmer Are there any better ways to test add-on startup code than ^^?
Chatted about this in IRC - using the public AddonManager API won't be great for this, since built-in system add-ons cannot be uninstalled or disabled, so re-running bootstrap.js startup() isn't really doable.
I'd suggest either simplifying bootstrap.js and moving the code you want to test into a different JSM that you write a test for, or, if you want more control over the AddonManager lifecycle, we have convenience methods for setting up and restarting the AddonManager in http://searchfox.org/mozilla-central/source/toolkit/mozapps/extensions/internal/AddonTestUtils.jsm
Another option would be to use mochitest and restart the whole browser, but that's a bit more heavy-handed.
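A minimal sketch of the first suggestion (the ShieldStartup.jsm module name, the omitted imports, and the test harness details are hypothetical, not part of this PR): move the startup logic out of bootstrap.js into a JSM and test that module directly.

// ShieldStartup.jsm (hypothetical module) -- bootstrap.js startup() would
// just call ShieldStartup.init(), and a test can import and exercise this
// module without going through the AddonManager at all.
// (Cu.import lines for PreferenceExperiments, RecipeRunner, sinon omitted.)
this.EXPORTED_SYMBOLS = ["ShieldStartup"];

this.ShieldStartup = {
  async init() {
    // A failure in PreferenceExperiments.init() must not prevent
    // RecipeRunner.init() from running.
    try {
      await PreferenceExperiments.init();
    } catch (err) {
      log.error("Failed to initialize preference experiments:", err);
    }
    await RecipeRunner.init();
  },
};

// In an xpcshell/browser test:
add_task(async function test_startup_survives_init_failure() {
  sinon.stub(PreferenceExperiments, "init").returns(Promise.reject(new Error("oops")));
  const runnerInit = sinon.stub(RecipeRunner, "init").returns(Promise.resolve());
  await ShieldStartup.init();
  ok(runnerInit.called, "RecipeRunner.init still ran after PreferenceExperiments.init failed");
});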
Preferences.reset(preferenceName);
// This does nothing if we're on the default branch, which is fine. The
// preference will be reset on next restart, and most preferences should
// have had a default value set before the experiment anyway.
See also bsmedberg's email on dev-shield.
General note that might be worth bringing up on the mailing list: code can (and does) check whether the user has "modified" preferences (i.e. whether a pref has a value on the user branch) using the pref service's …
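To make that concern concrete, here is a small illustration of the user-branch check the truncated sentence appears to refer to; a sketch using the raw pref service, with "fake.preference" as a made-up name:

const {utils: Cu} = Components;
Cu.import("resource://gre/modules/Services.jsm");

const defaults = Services.prefs.getDefaultBranch("");
defaults.setCharPref("fake.preference", "defaultvalue");
Services.prefs.prefHasUserValue("fake.preference");         // false: only a default value exists
Services.prefs.setCharPref("fake.preference", "uservalue");  // an experiment writing to the user branch
Services.prefs.prefHasUserValue("fake.preference");         // true: the pref now looks user-"modified"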
I haven't reviewed the recipe-client-server/addon bit, I'll leave that to Andy. :-)
const store = await ensureStorage();
for (const experiment of Object.values(store.data)) {
  const {expired, preferenceBranchType, preferenceName, preferenceValue} = experiment;
  if (!expired && preferenceBranchType === "default") {
I'm a bit surprised here - why do we not just set it for all of the pref experiments, including the ones using the user branch?
It's super edge-case-y: if we're doing a user pref experiment (and the pref doesn't have a default value already), and the experiment ends, the default will retain the experimental value for the rest of the session. This check prevents setting the default when unnecessary, although the reason I added it in the first place was more that it made sense only to set the pref when necessary, rather than because I thought that edge case mattered.
I'm fine with removing it, although I don't see the harm in keeping it.
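A sketch of that edge case using the raw pref service (pref names and values are made up). If init() also seeded the default branch for a user-branch experiment, the experimental value would linger as the default for the rest of the session, because there is no API to clear a default-branch value before restart:

const defaults = Services.prefs.getDefaultBranch("");

// Hypothetical: init() seeds the default branch for a user-branch experiment
// on a pref that previously had no default value.
defaults.setCharPref("fake.preference", "experimentvalue");
Services.prefs.setCharPref("fake.preference", "experimentvalue"); // the actual user-branch experiment

// The experiment ends and the user value is cleared...
Services.prefs.clearUserPref("fake.preference");

// ...but the default branch keeps reporting the experimental value until restart.
Services.prefs.getCharPref("fake.preference"); // -> "experimentvalue"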
DefaultPreferences.set("fake.preference", "experimentvalue");
ok(!stop.called, "Changing to the experimental pref value did not trigger the observer");

// Setting it to something different should trigger the call.
Not entirely sure what we're testing here, or whether it's worth testing this again after setting it back to the experiment value from the new value. Specifically, is calling startObserver expected to set the preference immediately, or is it not set until the test here calls DefaultPreferences.set? In the latter case, I suppose testing this again would make no difference, but a comment might be nice. Otherwise, testing the correct behaviour (whatever that is) when setting the thing back to the expected experiment value would make sense.
Testing after setting it back doesn't really matter, because by that point stop has already been called, which (when not mocked, like in this test) would remove the observer.
startObserver doesn't actually modify the preference; it only sets up the preference watcher, so the preference isn't set to the experimental value until the first DefaultPreferences.set call. #645 isn't finished yet, but it will need to call this function on startup for the active experiments, and I figured it'd be easier to reason about how this behaves without setting the preference than it was to figure out if setting the experiment preferences on startup was an issue.
I'll add a comment clarifying that startObserver does not modify the preference to these tests.
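A hedged sketch of what that clarifying comment plus the extra assertion could look like in the test (the startObserver signature and the stop/DefaultPreferences test helpers are taken from context and may differ from the actual code):

// startObserver only registers a watcher for the preference; it does NOT
// write the experimental value itself. The preference is first modified by
// the DefaultPreferences.set call below.
PreferenceExperiments.startObserver("test", "fake.preference", "experimentvalue");

DefaultPreferences.set("fake.preference", "experimentvalue");
ok(!stop.called, "Changing to the experimental pref value did not trigger the observer");

// Setting it to something different should trigger the call.
DefaultPreferences.set("fake.preference", "newvalue");
ok(stop.called, "Changing to a different value triggered the observer");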
recipe-client-addon/bootstrap.js (outdated)
// Initialize experiments first to avoid a race between initializing prefs
// and recipes rolling back pref changes when experiments end.
PreferenceExperiments.init().catch(err => {
  log.error("Failed to initialize preference experiments:", err);
Since the return value of the bootstrap methods isn't used (afaik?), I bet you could make them async functions if you wanted.
@rhelmer Does ^^ sound right?
Yes you can make bootstrap methods async, and the return values are not used by AOM AFAICT.
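Under that assumption, the startup entry point could be written as an async function; a sketch (the error handling shown is illustrative, not necessarily the PR's final code):

// bootstrap.js -- the AddonManager ignores the returned promise.
async function startup(data, reason) {
  // Initialize experiments first to avoid a race between initializing prefs
  // and recipes rolling back pref changes when experiments end.
  try {
    await PreferenceExperiments.init();
  } catch (err) {
    log.error("Failed to initialize preference experiments:", err);
  }
  await RecipeRunner.init();
}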
 */
async init() {
  const store = await ensureStorage();
  for (const experiment of Object.values(store.data)) {
Object.values returns an array; would it be more concise to use forEach instead of the for/of syntax?
ex: Object.values(store.data).forEach((experiment) => { ... });
It ends up only saving a few characters. I think we use for/of in most other places over forEach as well, but I might be misremembering.
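For reference, the two spellings side by side; one practical difference is that for/of lets the surrounding async init() await inside the loop, whereas forEach would not wait for an async callback (doSomethingWith is a hypothetical helper):

// for/of: awaits inside the loop run sequentially within async init().
for (const experiment of Object.values(store.data)) {
  await doSomethingWith(experiment);
}

// forEach: slightly shorter, but forEach ignores any promise the callback
// returns, so awaits inside it would not be sequenced by init().
Object.values(store.data).forEach(experiment => {
  doSomethingWith(experiment);
});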
 * @rejects {Error}
 * If an experiment with the given name already exists, or if an experiment
 * for the given preference is active.
 */
- async start(experimentName, branch, preferenceName, preferenceValue) {
+ async start(experimentName, branch, preferenceName, preferenceValue, preferenceBranchType) {
Generally, when there are a bunch of function params, I prefer to just pass an object and destructure the values out from there. Ex:
async start({ experimentName, branch, preferenceName, preferenceValue, preferenceBranchType }) { ...
somewhereElse.start({ experimentName: 'whatever', ... })
It makes things a little clearer when start is called, and devs skimming/reading the code don't have to refer to another file to determine what the params mean.
This may be a preference thing, though.
Nah, that's usually the right thing. I often wish JS had keyword arguments in the same way Python does.
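A sketch of the destructured-options form for start(); the parameter names mirror the existing signature, and the default value shown is illustrative:

async start({experimentName, branch, preferenceName, preferenceValue,
             preferenceBranchType = "default"}) {
  // ...same body as the positional version...
}

// Call sites then name every argument:
PreferenceExperiments.start({
  experimentName: "fake-experiment",
  branch: "control",
  preferenceName: "fake.preference",
  preferenceValue: true,
  preferenceBranchType: "user",
});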
  <option value="user">User</option>
</ControlField>
{preferenceBranchType === 'user' &&
  <p className="field-warning">
Since this element isn't dynamic, we could add it as a static property of the class. That would have React "render" it once, and would let us write something like:
static userPrefWarning = (<p className="field-warning">...</p>);
// in render
{ preferenceBranchType === 'user' && PreferenceExperimentFields.userPrefWarning }
Minor nit, I just prefer to keep render concise when possible.
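A sketch of the static-property version (the class shape and surrounding JSX are assumed from context; static class fields rely on the Babel transforms the repo presumably already uses):

export default class PreferenceExperimentFields extends React.Component {
  // Built once; React never needs to re-create this element.
  static userPrefWarning = (
    <p className="field-warning">...</p>
  );

  render() {
    const {preferenceBranchType} = this.props;
    return (
      <div>
        {/* ...ControlField for the preference branch type, etc... */}
        {preferenceBranchType === 'user' && PreferenceExperimentFields.userPrefWarning}
      </div>
    );
  }
}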

it('should render a warning if using the user preference branch type', () => {
  const wrapper = shallow(<PreferenceExperimentFields preferenceBranchType="user" />);
  const warning = wrapper.find('.field-warning');
re: adding the warning as a static class member - I thiiiink we could do something like wrapper.find(PreferenceExperimentFields.userPrefWarning) here.
It works! I used wrapper.contains instead of find, but I like this a lot better than the old test. Thanks!
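The updated test might then look roughly like this (the expect helper is assumed; the suite may use a different assertion style):

it('should render a warning if using the user preference branch type', () => {
  const wrapper = shallow(<PreferenceExperimentFields preferenceBranchType="user" />);
  expect(wrapper.contains(PreferenceExperimentFields.userPrefWarning)).toBe(true);
});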
I dunno if it should live in our docs (probably?), but a short guide on designing and implementing a preference experiment for Firefox developers seems like it'd be a nice place to put this.
Feedback'd, waiting on responses before making more changes.
Alright, updated with one last commit for testing bootstrap.js, sorry for invalidating y'all's existing review.
recipe-client-addon/bootstrap.js (outdated)
@@ -45,6 +45,8 @@ const PREF_LOGGING_LEVEL = PREF_BRANCH + "logging.level";
let shouldRun = true;
let log = null;

this.EXPORTED_SYMBOLS = ["install", "startup", "shutdown", "uninstall"];
Please add a comment here saying we export these for testing purposes.
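A minimal version of the requested comment, as a sketch:

// These are exported for testing purposes only, so tests can import
// bootstrap.js and exercise the entry points directly.
this.EXPORTED_SYMBOLS = ["install", "startup", "shutdown", "uninstall"];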
In which I continue to sidestep rewriting the entire PreferenceExperiments test suite to use async/await.
It turns out preference observers react to changes in the effective preference value, which covers default value changes, which helped cut down on a bunch of the code and tests I originally had here.
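A small illustration of that behaviour using Preferences.jsm (the pref name is made up): an observer fires on any change to the effective value, including a default-branch write when no user value masks it.

const {utils: Cu} = Components;
Cu.import("resource://gre/modules/Preferences.jsm");
Cu.import("resource://gre/modules/Services.jsm");

// Watch the effective value of a made-up preference.
Preferences.observe("fake.preference", newValue => {
  // Runs for user-branch writes AND for default-branch writes that change
  // the effective value (i.e. when no user value overrides the default).
  dump(`effective value is now ${newValue}\n`);
});

// This default-branch change fires the observer above, since nothing on the
// user branch is masking it.
Services.prefs.getDefaultBranch("").setCharPref("fake.preference", "newdefault");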