Description
I created a fake A/B test that has literally no code inside it. It just runs on my landing page to trigger Split's tracking, and I finish the experiment when someone signs up.
The code is literally just this:
```erb
<% ab_test "Pricing_page_conversion", "a", "b" do |style| %>
<% end %>
```
So, A=B.
Imagine my surprise when I saw this today on my Split dashboard:
I'm in love with Split and I think it's one of the most important gems we use, but I'm posting this so we can discuss the math of the test a little and understand how 99% confidence can emerge, with such low numbers, when A=B.
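One way 99% confidence can appear on an A=B experiment is repeated checking ("peeking"): if you glance at a running significance test many times, the chance that it crosses the threshold *at some point* is much higher than the nominal 1%. Here is a small simulation sketch; it assumes a pooled two-proportion z-test (a common choice for this kind of dashboard, not necessarily Split's exact formula), and every rate, sample size, and checkpoint below is made up for illustration:

```ruby
# Simulate A/A experiments (both arms share the SAME true conversion rate)
# and count how often a naive two-proportion z-test reports ~99% confidence.
# All parameters here are illustrative, not taken from the real dashboard.

def z_score(conv_a, n_a, conv_b, n_b)
  pooled = (conv_a + conv_b).to_f / (n_a + n_b)
  se = Math.sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
  return 0.0 if se.zero?
  (conv_a.to_f / n_a - conv_b.to_f / n_b) / se
end

Z_99 = 2.326 # approximate z threshold for 99% confidence (one-sided)

def simulate_aa(rate:, visitors_per_arm:, checkpoints:, trials:, rng:)
  peeked = 0 # trials flagged "significant" at ANY checkpoint
  final  = 0 # trials flagged at the final sample size only
  trials.times do
    conv_a = conv_b = 0
    flagged = false
    1.upto(visitors_per_arm) do |n|
      conv_a += 1 if rng.rand < rate
      conv_b += 1 if rng.rand < rate
      next unless checkpoints.include?(n)
      z = z_score(conv_a, n, conv_b, n).abs
      flagged ||= z >= Z_99
      final += 1 if n == visitors_per_arm && z >= Z_99
    end
    peeked += 1 if flagged
  end
  [peeked.fdiv(trials), final.fdiv(trials)]
end

rng = Random.new(42) # fixed seed so the run is reproducible
peek_rate, final_rate = simulate_aa(
  rate: 0.05, visitors_per_arm: 500,
  checkpoints: [50, 100, 200, 300, 400, 500],
  trials: 1_000, rng: rng
)
puts format("false positives with peeking:   %.1f%%", peek_rate * 100)
puts format("false positives at the end only: %.1f%%", final_rate * 100)
```

Since the final sample size is itself one of the checkpoints, the peeking rate is always at least the end-only rate; the gap between the two is the cost of watching a running test.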
I decided to take these numbers to a couple of A/B test calculators easily found on Google (first results).
On this site, http://www.usereffect.com/split-test-calculator, it tells me I haven't even reached 90% significance, and that I need 131 more visitors to do so:
> Sorry, you have no clear winner
> We estimate that you'll need 131 more visitors*
> Group A: 6.04% conversion
> Group B: 2.76% conversion
>
> *Based on the current trend of your results, your test would require 131 additional visitors to reach significance at the 90% confidence level. Please note that this is at best a loose estimate, designed to help you decide whether or not to continue the test.
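Estimates like "131 more visitors" typically come from the standard per-group sample size formula for comparing two proportions, n ≈ (z_α + z_β)² · (p₁(1−p₁) + p₂(1−p₂)) / (p₁ − p₂)². A minimal sketch, assuming a one-sided 90% confidence level and 80% power (my assumptions, not necessarily this calculator's), fed with the conversion rates from the output above:

```ruby
# Rough per-group sample size needed to detect the observed difference
# between two conversion rates. z_alpha = 1.282 (one-sided 90% confidence)
# and z_beta = 0.842 (80% power) are assumed defaults, not the calculator's
# documented settings.
def sample_size_per_group(p1, p2, z_alpha: 1.282, z_beta: 0.842)
  variance = p1 * (1 - p1) + p2 * (1 - p2)
  ((z_alpha + z_beta)**2 * variance / (p1 - p2)**2).ceil
end

# Conversion rates reported by the calculator above.
puts sample_size_per_group(0.0604, 0.0276)
```

The point of the formula is the denominator: the smaller the observed difference, the more visitors you need, which is why a tiny A=B gap should demand a large sample before anything looks significant.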
On this other site (http://www.abtester.com/calculator/), it tells me I have only 91.61% confidence with these numbers.
Even if I multiply them by two, I still don't reach 99% confidence:
Last, I ran it through the Visual Website Optimizer calculator, and here are the results (no significance either):
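Part of why these calculators disagree on the exact percentage is the convention used to turn the same test statistic into a "confidence" figure (one-sided vs. two-sided, normal approximation vs. something else). A minimal sketch of the two-sided mapping from a z-score to a confidence level, using the normal CDF via Ruby's `Math.erf` (the helper name is mine; I'm assuming the calculators use a normal approximation):

```ruby
# Two-sided confidence that the difference between the groups is non-zero,
# given the z-score of a two-proportion test. Uses the identity
# P(|Z| < z) = erf(z / sqrt(2)) for a standard normal Z.
def confidence(z)
  Math.erf(z.abs / Math.sqrt(2.0))
end

puts format("%.2f%%", confidence(1.96) * 100) # the classic 95% threshold
```

Under this mapping, a small shift in the z-score near the threshold moves the reported percentage a lot, so two tools computing almost the same statistic can still print noticeably different confidence numbers.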
I'd love to hear your thoughts.