
Client-side A/B testing #54

Closed
alexnj opened this issue Apr 14, 2022 · 3 comments

Comments

@alexnj
Member

alexnj commented Apr 14, 2022

Introduction

Client-side A/B testing refers to the method of applying experiment-related changes to a web application in the browser, sometimes without integrating any changes into the application's actual source code. The method is popular in the industry because it usually cuts down the resources required for experimentation and helps scale A/B testing involving external teams. This is a proposal to explore ways of achieving the same outcome while avoiding or minimizing the performance penalty associated with the techniques used today.

Please read the complete Explainer. The associated prototype and code can be found here.

You can view the W3PerfWG March 17 2022 Presentation and Notes here.

Longer version

A/B testing on the web involves creating variations of a web application that can be served to a sample of traffic in order to verify a hypothesis. With infinitely scalable engineering resources, every experiment could happen right inside the application engineering team and the application source code, and could modify the application structurally to suit its needs.

In reality, however, due to engineering resource constraints, service-provider boundaries, and application architecture choices, modifying the application for every experiment can be difficult or cost-prohibitive.

Some use cases and arguments that make the client-side A/B testing approach attractive are:

  1. Marketing or Research teams, external to the web application engineering team, want to conduct experimentation.
  2. Product management personnel want to conduct experimentation with minimal engineering bandwidth spent.
  3. Potential to reduce technical debt incurred through integrating experimental changes.
  4. Less engineering bandwidth spent on experimentation translates to a larger number of experiments and more hypotheses being tested.

In order to meet these use cases, combined with the flexibility available in web application architectures, the industry has taken a de facto approach to A/B testing — applying cosmetic changes to a web application. What do we mean by “cosmetic”? It means the experiment-related changes aren’t baked into the original application’s source code. Instead, changes are applied in the browser, after the application’s source and assets have been fetched from its server.
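
For illustration only, here is a minimal sketch of what such an in-browser modification typically looks like. The variant split, selector, and copy are hypothetical placeholders, not taken from the Explainer:

```js
// Hypothetical experiment script, loaded alongside the page.
// It assigns the visitor to a variant and rewrites the rendered DOM.
const variant = Math.random() < 0.5 ? 'control' : 'treatment';

document.addEventListener('DOMContentLoaded', () => {
  if (variant === 'treatment') {
    // The server-delivered markup is untouched; only this in-browser
    // mutation differentiates the variants.
    const cta = document.querySelector('#buy-button');
    if (cta) cta.textContent = 'Buy now and save 20%';
  }
});
```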

This type of testing enables the above use cases to function at scale, and usually employs JavaScript as the de facto tool for performing the modifications. And therein lie the tradeoffs. Employing JavaScript in this manner, as it is done today, comes with a few drawbacks:

  1. Until the experimentation script has had a chance to make the necessary modifications to the web application, the experience isn’t final or presentable to the user. The user might see the unmodified variant and start interacting with it, or could be subjected to the jarring experience of the changes being applied — most of them resulting in layout shifts.
  2. To counter the layout shifts introduced by these cosmetic modifications, experimentation providers typically block the page from being rendered using styling, which is removed once the experiment-related changes have been applied (see the sketch after this list). This improves layout stability, but ends up delaying rendering, resulting in a performance degradation.
  3. If the network request for the experimentation script and its changes is inserted before the body of the application, it blocks document parsing and the loading of critical resources — incurring significant delays in performance metrics and creating missed opportunities in business and user experience.
  4. This also introduces a potentially unnecessary dependency on JavaScript, even for static pages. It wastes computational resources on each client, even in cases where the test outcome could be pre-computed once and cached.
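
A minimal sketch of the anti-flicker technique described in point 2, as it is commonly implemented today (all names here are hypothetical, not from any particular provider):

```js
// Inlined in <head> before any content renders: hide the page until the
// experiment script signals it is done, trading layout shifts for
// delayed rendering.
var style = document.createElement('style');
style.id = 'ab-hide';
style.textContent = 'body { opacity: 0 !important; }';
document.head.appendChild(style);

function reveal() {
  var hide = document.getElementById('ab-hide');
  if (hide) hide.remove();
}

// Safety timeout: never keep the page hidden for more than three seconds,
// even if the experiment script fails to load.
setTimeout(reveal, 3000);

// The experiment script calls this once its changes have been applied.
window.__abReveal = reveal;
```
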
This is a proposal to collaborate and create better, performant means of conducting client-side A/B testing.
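
As a purely illustrative sketch of the general direction (moving the transformation into a worker so the main document is never render-blocked), consider a service worker that rewrites the HTML response before the parser sees it. This only loosely reflects the prototype's worker-based framing; the actual design lives in the Explainer, and the transform below is a placeholder:

```js
// Illustrative only: apply a pre-computed variant change to navigation
// responses in a service worker, avoiding any render-blocking in-page
// script. This is not the Explainer's actual API.
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;

  event.respondWith((async () => {
    const response = await fetch(event.request);
    const html = await response.text();
    // Placeholder transform; a production version would stream the body
    // rather than buffer it.
    const transformed = html.replace('Buy now', 'Buy now and save 20%');
    return new Response(transformed, {
      status: response.status,
      headers: response.headers,
    });
  })());
});
```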

Feedback

I welcome feedback in this thread, but encourage you to file bugs against the Explainer.

@kylesharp-optimizely

Optimizely's Web and Performance Edge products are both client-side and heavily impacted by this. We've encountered all of the drawbacks stated above - in fact, Performance Edge was built specifically to counter some of the challenges with client-side testing. It helps in certain situations, but it has its own drawbacks that prevent it from being a complete solution: it really only bests Web in specific scenarios, and it is limited in the features it can provide due to its use of edge workers.

We have a wealth of knowledge on this subject and are very interested in helping drive this conversation. We've already left some comments in the Explainer with a few of the things this would need to do in order to comprehensively address the problems stated above.

Optimizely fully supports this proposal.

@yoavweiss
Collaborator

The repo is now live at https://github.com/WICG/ab-worker-prototype
Happy incubatin'!!

@acme

acme commented Mar 8, 2023

There was an AB Testing update presentation at the WebPerfWG call on March 2nd 2023: https://w3c.github.io/web-performance/meetings/2023/2023-03-02/index.html
