NAME:
=====
Django-ABTest


DESCRIPTION:
=====
A/B testing platform for `Django`.
1. Contains methods to get, create, and assign variates to the user. Supports both discrete (e.g. whether a button was clicked) and continuous (e.g. number of user invites) datasets.
2. Built-in methods to determine the winning variates, automatically choosing the appropriate evaluation method for the dataset (see below for details).
3. Supports running multiple tests on a single user, on any system including emails, although it is advised that all tests be independent.
4. Django admin interface, plus simple reporting methods for use with custom admin tools.


INSTALLATION:
=====
1. Copy all the files into your project (e.g. `example_app`) as an app directory with some name (e.g. `abtest`)

2. Add `abtest` to your `INSTALLED_APPS` in `settings.py` (see the snippet below)

3. Run `python manage.py syncdb`
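
For step 2, the relevant lines of `settings.py` would look something like this (the surrounding apps are illustrative):

    # settings.py
    INSTALLED_APPS = (
        'django.contrib.admin',   # the admin interface described above assumes this is enabled
        # ... your other apps ...
        'abtest',
    )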


EXAMPLES and USAGE:
=====
Example A. Testing conversion rates of the copy used for an anchor (binary dataset)
    
    # 1. Define the test somewhere, such as in a `utils.py` file
        BCT = ABTest(title="Button Click Text 1.0", variates=["click me", "can't touch this", "new feature!"], default=0)
        # note default=0; this initializes each user's result to a fail (not clicked)
    
    # 2. Start the experiment: gets or creates the variate, saves it to the user's session, and returns the variate to be used
        link_text = BCT.assign_experiment(request)
    
    # 3. In a view, pass `link_text` (the chosen variate) to the template
        context['link_text'] = link_text
        
        # in the template output the variate in the desired place
        <a href="/button_clicked/">{{link_text}}</a>
    
    # 4. In the view which corresponds to /button_clicked/, record the experiment as a success (value=1)
        BCT.assign_result(request, value=1)
    
    # 5. Evaluate the experiment in a cron job or on an admin page, in two key ways
        # Case A. Manual inspection
        
        print BCT.stats()
        # you can view the mean, stddev and var for each variate
        
        winners, control = BCT.winners()
        # winners is a list of tuples (variate, confidence_is_better_than_control)
        # control is the variate given as the control (on initialization) or the worst performing variate
        
        BCT.assign_winner(winners[0])
        # you can choose a winner manually and assign the results - here winners[0] is the best performing
        
        # Case B. Automatic assignment of the winner based on some parameters (see below)
        
        winner = BCT.evaluate(confidence=0.8)
        # the experiment will end and the top performing variate will be chosen and returned as the winner
        # if there is no winner with confidence > 0.8, evaluate returns None and the experiment remains unfinished
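
    For orientation, here is a rough sketch of how steps 2-4 might sit together in a views.py. It is illustrative only: the view names and template path are assumptions, and `BCT` is imported from wherever the test was defined (the `utils.py` above).
    
        # views.py - illustrative sketch, not part of the module
        from django.shortcuts import render, redirect
        from example_app.utils import BCT
        
        def landing(request):
            # steps 2 & 3: pick (or recall) the variate and hand it to the template
            link_text = BCT.assign_experiment(request)
            return render(request, 'landing.html', {'link_text': link_text})
        
        def button_clicked(request):
            # step 4: the anchor was followed, so record a success for this user's variate
            BCT.assign_result(request, value=1)
            return redirect('/')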
    
    
Example B. Testing the number of invites a user makes based on whether there is a select-all button
    
    # 1. Similarly define the test, but make sure binary=False, as this allows value to be anything (e.g. the number of invites)
        UIT = ABTest(title="User Invite 123", variates=[True, False], binary=False)
    
    # 2. By not setting a default, the variate is only saved in the session; the test is only written to the db at step 4
        show_button = UIT.assign_experiment(request)
        # this is good when the tested action is not the main one on the page
            
    # 3. In the template, use the variate however you like
        {% if show_button %}<input type="button" value="Select-All" onclick="select_all()">{% endif %}
    
    # 4. In the view where you create the invites, save the result
        UIT.assign_result(request, value=len(invites))
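
    A similarly hedged sketch of the two views; `create_invites` is a hypothetical helper standing in for your own invite logic.
    
        # views.py - illustrative sketch, not part of the module
        from django.shortcuts import render, redirect
        from example_app.utils import UIT
        
        def invite_page(request):
            # session-only assignment; nothing is written to the db yet
            show_button = UIT.assign_experiment(request)
            return render(request, 'invite.html', {'show_button': show_button})
        
        def send_invites(request):
            invites = create_invites(request)  # hypothetical helper returning the created invites
            UIT.assign_result(request, value=len(invites))  # continuous result: the invite count
            return redirect('/thanks/')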
    

Example C. Same as Example A, but the link is in an email
    # 1. Define the same as before, but ensure session_persistent=False
        ELT = ABTest(title="Email Link Text 1.0", session_persistent=False, variates=["click me", "new feature"], default=0)

    # 2. Start experiment but pass no request object
        link_text = ELT.assign_experiment()
        link_id = ELT.get_result_id(link_text)

    # 3. Use link_text and store link_id as part of the url so it can be tracked
        <a href="http://example.com/?l={{link_id}}">{{link_text}}</a>

    # 4. Then just detect the link_id from the url in the view and assign the result to the experiment
        ELT.assign_result(value=True, result=link_id)
        # or you can track and pass the variate itself
        ELT.assign_result(value=True, variate=link_text)
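
    A hedged sketch of the tracking view; the parameter name `l` matches the template above, and the view name is an assumption.
    
        # views.py - illustrative sketch, not part of the module
        from django.http import HttpResponseRedirect
        from example_app.utils import ELT
        
        def email_link(request):
            # recover the result id that was baked into the email url
            link_id = request.GET.get('l')
            ELT.assign_result(value=True, result=link_id)  # note: no request session needed
            return HttpResponseRedirect('/')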
    

Initialization Options - ABTest(**options)
=====
i. `title` of the experiment to be run. Must be unique. Required.
ii. `variates` to be used. Can be a list of anything (strs, numbers, bools, db objects, etc)
iii. `session_persistent=True` Set True to keep the variate constant for a given user for the life of the session
iv. `db_persistent=False` Set True to keep the variate constant across multiple sessions (requires iii to be True)
v. `session_name='abtest'` Overrides the key name used to store the variates in the session
vi. `default=None` Sets an initial default result when the experiment is assigned to a user
vii. `description=None` For use in the database only (if you want to store information beyond the title)
viii. `binary=True` Set True if result sets are only success or fail
ix. `control=None`, `sample_size=None`, `confidence=None` (see below)
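
Putting several of these together in one (purely illustrative) definition:

    SFT = ABTest(
        title="Signup Form 2.0",               # required and must be unique
        variates=["short form", "long form"],
        session_persistent=True,               # same variate for the life of the session
        default=0,                             # each assignment starts as a fail
        description="Does a shorter signup form convert better?",
        binary=True,                           # success/fail results only
        control=0,                             # winners are evaluated against "short form"
    )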


Evaluation Options - ABT.evaluate(**options)
=====
i. `sample_size` the experiment will finish once it has this many results
ii. `force` forces the experiment to finish on that call; this can be undone by setting `experiment.finished = False` in the db
iii. `confidence` default 0.95; the confidence you require that the winning variate really is the winner
iv. `control` either the 0-index of the variate in the variates list, or the variate itself. Winning variates' significances are evaluated against this control, e.g. given `variates=['a','b','c']`, `control='a'` and `control=0` are equivalent
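
For instance, run nightly from a cron job or management command (the parameter values shown are illustrative):

    winner = BCT.evaluate(sample_size=1000, confidence=0.95, control=0)
    if winner is None:
        pass  # not enough data or confidence yet; the experiment stays open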


Evaluation methods
=====
i. For binary result data (success or fail), a Pearson chi-squared test[1] is used, with an automatic Yates continuity correction[2] for small datasets and a fudge factor for tables larger than 2x2.
ii. For continuous or large result datasets, a simple z-test[3] is used.

[1] http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test
[2] http://en.wikipedia.org/wiki/Yates%27_correction_for_continuity
[3] http://en.wikipedia.org/wiki/Z-test
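
To make the binary case concrete, here is a minimal sketch of a Yates-corrected chi-squared statistic for two variates in a 2x2 success/fail table. It is illustrative only, not the module's actual implementation:

    def chi_squared_yates(success_a, fail_a, success_b, fail_b):
        # observed counts for the 2x2 table: rows are variates, columns are success/fail
        observed = [success_a, fail_a, success_b, fail_b]
        row_a, row_b = success_a + fail_a, success_b + fail_b
        col_s, col_f = success_a + success_b, fail_a + fail_b
        total = float(row_a + row_b)
        # expected cell count = row total * column total / grand total
        expected = [row_a * col_s / total, row_a * col_f / total,
                    row_b * col_s / total, row_b * col_f / total]
        # Yates correction: shrink each |observed - expected| difference by 0.5
        return sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))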


BUGS
=====
No known bugs yet. If you find one, post a comment or a fix on the GitHub page.


AUTHOR
=====
Tim Davey <timjdavey@gmail.com>