Commits on Aug 14, 2012
  1. Report percentage of successful new users

    This report is identical to the "sites" report (number of sites logged
    in), with one difference: while the "sites" report lets users switch
    the graph display between "area" and "line" mode, this report loads in
    "line" mode and hides the option to switch modes.
    
    This was done because area mode is cumulative, which doesn't make sense
    for the fractions this report uses. (E.g., with a segmentation into
    three browsers, each at 0.5 success, the cumulative graph would show
    1.5 = 150% success, which is nonsensical.)
    
    Closes #15
Commits on Jul 25, 2012
  1. Merge database backend implementation

    Closes #27: Pre-calculate reports, store them in CouchDB for retrieval
Commits on Jul 18, 2012
  1. Fix #30: holes in data break user flow over time

    The problem was that, in the data returned by the server, some days
    didn't have a value for every step.
    
    A missing value indicates that 0 people completed that step, but the
    data needs to state this explicitly; see the sketch below.
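    A minimal sketch of the idea (the step list and data shape are
    assumptions, not the actual code):

        var STEPS = ['1', '2', '3', '4', '5'];

        // Give every day an explicit value for every step, filling holes
        // left by the server data with 0.
        function fillMissingSteps(dayData) {
            STEPS.forEach(function (step) {
                if (!(step in dayData)) {
                    dayData[step] = 0; // nobody completed this step that day
                }
            });
            return dayData;
        }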
  2. Convert sites logged in report to database backend

    The report is also converted to display the mean number of sites logged in
    instead of the median (closes #29).
    
    The reasoning behind this change (from #29):
    
    Report #1 is the median number of sites a user logs into with Persona.
    
    As part of migrating to CouchDB as the backend (#27), finding the median
    of the data series becomes a significantly harder technical challenge.
    (To do it in a map/reduce framework requires a quick-select algorithm,
    which there doesn't seem to be a good way to do in CouchDB.)
    
    Alternatively, the median value for each day could be precalculated when
    data arrives and then stored in the database. However, this would
    require either a new database (cumbersome) or a change to the data
    format and code of the current one (very undesirable).
    
    Calculating the mean of the dataset, however, is much easier.
    
    While the median is a more sensible value to look at (it is less
    sensitive to outliers), it was agreed earlier that this entire report
    is not hugely meaningful. The median value by itself doesn't really say
    anything; the only way we'd use it is to watch the number and hope it
    trends up. For that, the mean is just about as good: we can look at it
    and watch its trend.
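    To illustrate why the mean fits map/reduce while the median doesn't,
    here is a hypothetical CouchDB view (the field names are assumptions):
    (sum, count) pairs combine associatively across reduce stages, so the
    mean falls out at the end, whereas a quantile like the median would
    need the full sorted series.

        // Map: emit one numeric observation per document.
        function map(doc) {
            emit(doc.value.timestamp, doc.value.number_sites_logged_in);
        }

        // Reduce: carry (sum, count); these re-reduce correctly, which is
        // exactly the property the median lacks.
        function reduce(keys, values, rereduce) {
            if (rereduce) {
                return values.reduce(function (a, b) {
                    return { sum: a.sum + b.sum, count: a.count + b.count };
                });
            }
            return { sum: sum(values), count: values.length }; // sum() is a CouchDB built-in
        }

        // The caller computes mean = result.sum / result.count.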
Commits on Jul 16, 2012
  1. Unwrap segmentation values

    Due to code copy/pasting, segmentation values in the database were
    stored inside a size-one array (e.g., ["value"]). Nothing was breaking,
    but there is no reason for the wrapper here.
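    The change amounts to something like this (the surrounding names are
    hypothetical):

        // Unwrap a value that may still be in the legacy size-one array
        // form, e.g. ["Windows 7"] -> "Windows 7".
        function unwrap(value) {
            return Array.isArray(value) ? value[0] : value;
        }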
  2. Fix segments not reappearing on re-selection

    After being toggled off and then toggled back on, segments in the
    "sites logged in" and "sign-in attempts" reports were not shown.
    
    The issue was occurring because, in the process of updating displayed
    segments, the series in the report object were being overwritten with the
    "filtered" series — the ones with the segment removed. Thus, this data
    was being lost permanently.
    
    To fix this, the full (unfiltered) series are stored in a temporary
    variable before the graph is updated; then they are restored.
    
    This regression was introduced in fa809ba.
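    A sketch of the fix (the function and field names are hypothetical, not
    the project's actual API):

        function filterSeries(series, visibleSegments) {
            return series.filter(function (s) {
                return visibleSegments.indexOf(s.name) !== -1;
            });
        }

        function toggleSegments(report, visibleSegments, redrawGraph) {
            var fullSeries = report.series;   // stash the unfiltered data
            report.series = filterSeries(fullSeries, visibleSegments);
            redrawGraph(report);              // draw only the selected segments
            report.series = fullSeries;       // restore so re-selection works
        }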
  3. Ensure only known segmentation values are stored in database

    "Known" means the ones listed in the config file.
    
    Fixes the way we populate the database to be consistent with how data
    aggregation used to happen.
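    Plausibly something like the following (the config shape is an
    assumption; the commit doesn't show it):

        var config = { segmentations: { OS: ['Windows 7', 'OS X', 'Linux'] } };

        // A segmentation value is "known" iff it appears in the config.
        function isKnown(segmentation, value) {
            return config.segmentations[segmentation].indexOf(value) !== -1;
        }

        // Values that fail this check are skipped rather than stored.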
  4. Add first segmentation to db-backed new user report

    Segmentation by OS
Commits on Jul 14, 2012
  1. Partially migrate new user report (#2) to database backend

    Cumulative requests use the database backend; requests for segmented
    data are still processed by the legacy code.
Commits on Jul 13, 2012
  1. Pre-calculate user flow over time report

    Prior process:
        On every request, data was downloaded, the report was prepared, then
        sent back to the user.
    
    Now:
        Separately from the server, data is downloaded and stored in CouchDB.
            (This is done, manually for now, using the script server/bin/update.)
        The server sets up views in CouchDB that map/reduce the data into a form
            ready for the report.
        On user request, the data is retrieved from the appropriate view and
        sent back.
    
    Benefit:
        Data download and report calculation need to happen only once.
    
    Scope:
        This commit only switches over the "new user flow over time" report.
    
    This is the first major part of implementing #10.
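    The rough shape of the new setup, as a hypothetical design document
    (view names and document fields are illustrative, not the project's
    actual code):

        var designDoc = {
            _id: '_design/reports',
            views: {
                new_user_flow: {
                    // Count, per [day, step], how many users completed the step.
                    map: function (doc) {
                        emit([doc.value.date, doc.value.step], 1);
                    }.toString(),
                    reduce: '_sum' // CouchDB's built-in sum reduce
                }
            }
        };

        // On user request the server reads the precomputed rows, e.g.
        //   GET /db/_design/reports/_view/new_user_flow?group=true
        // instead of downloading data and recalculating the report.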
  2. Load configurations synchronously

    Ensures that, whenever settings are requested, they have already been loaded.
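    In Node this can be as simple as the following (the filename and module
    shape are assumptions):

        var fs = require('fs');

        // readFileSync blocks until the file is fully read, so the settings
        // are guaranteed to exist before any caller asks for them.
        var settings = JSON.parse(fs.readFileSync('config.json', 'utf8'));

        module.exports = settings;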
  3. Move date determination into new util module

    It turns out to be needed in multiple places.
Commits on Jul 12, 2012
  1. Adopt KPIggybank's data format

    This means:
        - The fake data server generates data using that format.
        - The data downloaded from the server is expected to be in that
          format too.
    
    The difference:
        KPIggybank wraps the KPI data payload in an object that has an ID,
        with the data itself under the "value" field.
        (This is how the data is extracted from CouchDB.)
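    An illustrative record in that format (the fields inside "value" are
    invented for the example):

        {
            "id": "0f3c5a7e9b1d",
            "value": {
                "timestamp": 1342051200000,
                "number_sites_logged_in": 3
            }
        }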
Commits on Jul 6, 2012
  1. Increase number of points generated by fake data server

    To make the new user flow over time report (336dfa1) less noisy.