
Is client side A/B testing always a bad idea in your experience? #53

nchan0154 opened this issue Aug 23, 2019 · 1 comment



commented Aug 23, 2019

Very curious about your recent Shopify work! 😮



commented Aug 23, 2019

In a word, yes.

In two words, yes yes.

From a performance point of view, it has so many drawbacks. In no particular and non-exhaustive order:

  • It typically blocks rendering. Most third-party testing tools are loaded synchronously so that experiments apply before anything paints, avoiding a flash of the original content being restyled into the variant. This render- and parser-blocking behaviour contributes massively to delays.
  • Providers are almost always off-site. Most third-party providers exist on external origins, meaning trips to new domains: DNS, TCP, and TLS overhead on the critical path ain't the one.
  • It happens on every page load. Even though we might only take the download cost once, caching the file for subsequent use in-session, the runtime cost happens on every viewing of the tested page.
  • No user-benefitting reuse. Further to the above, because we're not doing our A/B tests on the server, it's not possible to build a variant once and serve it to many users. Every user has to build their own tests on their own device every time they visit a tested page.
  • They likely skip any governance process. While your engineers are subject to linting, code-reviews, tests, auditors, and more, your marketing team have free rein of the front-end. They can upload 6MB images, they can probably paste JavaScript into a WYSIWYG, you name it.
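To make the first two points concrete, here is the shape of a typical client-side testing setup: a synchronous script in the `<head>`, on a third-party origin (the provider URL below is hypothetical):

```html
<head>
  <!-- Hypothetical provider URL. A synchronous script here halts the HTML
       parser until DNS + TCP + TLS + download + execution have all
       completed; nothing below it can render in the meantime. -->
  <script src="https://cdn.ab-provider.example.com/experiments.js"></script>
</head>
```

Providers recommend this placement precisely because it guarantees the experiment runs before first paint; that guarantee is exactly what makes it so expensive.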

The benefits of client-side A/B tests really only suit the marketing team:

  • Test without a deploy. They don't need developers, so they can experiment as much and as often as they like.
  • Available to everyone. All you need is login details for your provider. You don't need a coding background, write/push access to repos or CI, etc.

To mitigate the effects:

  • Exercise restraint. Don't test huge variants. Subtle and small tests are not only more focused (and thus likely to provide more reliable insights), they're also less expensive and less taxing to run.
  • Load asynchronously. If you are limiting yourself to just small, discrete variations, you may be able to get away with loading your A/B testing tool asynchronously.
  • Keep it clean. Regularly audit and review the tests that are live and remove any that are old or not active.
  • Preconnect off-site origins. If you do have trips to third-party origins, use a preconnect hint (a `<link rel="preconnect">` element or a `Link` HTTP header) to warm up the connection ahead of time.
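The last two mitigations can be combined: warm up the third-party origin early and load the tool off the critical path (the provider URL below is hypothetical):

```html
<!-- Open the DNS/TCP/TLS connection before the script request is made. -->
<link rel="preconnect" href="https://cdn.ab-provider.example.com" crossorigin>

<!-- async: the script downloads in parallel with parsing and executes as
     soon as it arrives, without blocking the HTML parser. -->
<script src="https://cdn.ab-provider.example.com/experiments.js" async></script>
```

The trade-off with `async` is that the variant may apply a moment after first paint, which is why it only really works with the small, discrete variations mentioned above.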