
Contributing Guidelines

You are welcome to contribute to this project. Here are the guidelines we try to stick to in this project.

Questions or Problems

If you have a question about the site or about web compatibility in general, feel free to join us in the #webcompat channel on the Mozilla IRC network. Here's how to join.

Otherwise, you can try to ping Mike Taylor on the Freenode network with the following command: /msg miketaylr Hi, I have a question about

Filing an Issue

If you're using the site and something is confusing, broken, or you think it could be done in a better way, please let us know. Issues are how the project moves forward—let us know what's bothering you.

  • Search the issues list for existing similar issues. Consider adding to an existing issue if you find one.
  • Choose a descriptive title.
  • Provide a test, snippet of code or screenshot that illustrates the problem. This small gesture goes a long way towards getting a speedy fix.

Triaging Issues

One way to contribute is to triage issues. This could be as simple as confirming a bug, or as complicated as debugging errors and providing fixes. A tiny bit of effort in someone else's issue can go a long way.

Closing Bugs as Invalid

The wiki contains a list of reasons why bugs might be closed as invalid. When in doubt, ask questions in the bug.

Finding an Issue to Work On

The logic of the issue tracker is this:

  • Milestones - initiatives in priority order (ignore the dates)
  • Unprioritized - new ideas, cool things, easy issues to tackle

Anything labeled "status: good-first-bug" is perfect for getting started!

Feature Requests

You can request a new feature by submitting an issue to our repo. If you would like to implement a new feature then consider what kind of change it is:

  • Major Changes that you wish to contribute to the project should be discussed first in an issue or irc so that we can better coordinate our efforts, prevent duplication of work, and help you to craft the change so that it is successfully accepted into the project.
  • Small Changes can be crafted and submitted as a Pull Request.

Submission Guidelines

All code contributions should come in the form of a pull request, as a topic branch.

  1. Have a quick search through existing issues and pull requests so you don't waste any of your time.

  2. If no existing issue covers the change you want to make, please open a new issue before you start coding.

  3. Fork the repository


    You'll probably want to set up a local development environment to get that far. If you've already been through this process, make sure you've set the main repo as an upstream remote and make sure your fork is up to date before sending pull requests.

  4. Make your changes in a new branch

    git checkout -b name-of-fix-branch

  5. Create your patch; commit your changes. Referencing the issue number you're working on from the message is recommended.

    git commit -m 'Issue #123 - Fixes broken layout on mobile browsers'

  6. Push your branch to GitHub:

    git push origin name-of-fix-branch

  7. If you want to discuss your code or ask questions, please comment in the corresponding issue. You can link to the code you have pushed to your repository to ask for code review.

  8. When your code is ready to be integrated into the project, use the GitHub site to send a pull request to the master branch of the repo you forked from. This will be the default choice.


  9. Set the title of the pull request to reference the issue number.

    Fixes #123 - Fixes broken layout on mobile browsers

  10. When sending the pull request do not forget to call out someone for review by using the following convention:

    r? @miketaylr

    This will notify the person that your request is waiting for a review before merging. Ask for a review from only one person; this avoids misunderstandings and dropped balls. (Python: karlcow, miketaylr. JavaScript: magsout, miketaylr, tagawa. CSS: magsout.)

  11. Continue discussion in the pull request.

    The discussion might lead to modifying or abandoning this specific pull request. This is the place where you can have a code review.

  12. Once the pull request has received an explicit r+ from the reviewer(s), it is the responsibility of the reviewer (or the admin) to merge the branch. A pull request submitter should never merge the pull request themselves.

    The repo owners might choose to self-merge for urgent security or hot fixes.

After all that, if you'd like, you can send a pull request to add your name to our humans.txt file.

For product and design contributions, check out the Design Repo

Coding Style


Try to take care to follow existing conventions. Some of these are defined in an .editorconfig file. You can download an EditorConfig plugin for your editor.


As we are still very early in the project, we do not yet have that many conventions for naming, routes, APIs. If in doubt, ask us or open an issue. All Python code should pass pep8.

You can check this by installing the pep8 module.

sudo pip install pep8

Once at the root of the project you can run it with

pep8 --show-source --show-pep8 .

That will show you the list of errors and their explanations. Another tool we have used for checking the consistency of the code is flake8 + hacking. Hacking is a set of OpenStack guidelines used by that community for the stability of their projects. You will see that there's nothing really hard about it.

sudo pip install hacking

will install the relevant flake8 and hacking modules. In the same fashion, if you do

flake8 .

You will get the list of mistakes in return. Some of the basic recommendations are:

  • Modules are sorted in alphabetical order.
  • Do not use relative imports (such as from .foo import bar).
  • Import only modules, not function names (because of possible name clashes).
  • Group modules by category (sys, libraries, project).
  • When multiline docstrings are used, the first sentence is short and explains the module, and is separated from the rest by a blank line.
  • Docstring sentences end with a period.
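To make the list above concrete, here is a small illustrative module header following these conventions (the module and names are made up for this sketch, not actual project code):

```python
"""Sketch a module header following the project's import conventions.

The first docstring sentence above is short; this longer description is
separated from it by a blank line, and every sentence ends with a period.
"""
# Standard library modules first, alphabetically sorted, imported as
# whole modules (not individual names) to avoid name clashes.
import json

# Third-party libraries would form the next group, then project modules,
# e.g.:
# import flask
# from webcompat import helpers


def load_config(text):
    """Return the configuration parsed from a JSON string."""
    return json.loads(text)
```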

When in doubt, follow the conventions you see used in the source already.


We use cssnext as a tool for compiling css

This is a CSS transpiler (CSS4+ to CSS3) that allows you to use tomorrow's CSS syntax today. It transforms CSS specs that are not already implemented in popular browsers into more compatible CSS.


Naming conventions

We use a very simple syntax based on BEM and it looks like:

  • ComponentName
  • ComponentName--modifierName
  • ComponentName-descendantName

CSS and JS

All classes that depend on JavaScript are prefixed with js-*. These classes are handled by JavaScript only; no styles are applied to them.

Folder and file

The main stylesheet is main.css. It contains @import statements for all other files, which are stored in the folders Components, Page, layout, and vendor.

Framework, plugin

We do not use frameworks. However, we use libraries, such as suitcss-components-grid and suitcss-utils-display.


The js folder contains two subfolders: lib contains all project source files and vendor contains all third-party libraries. Files outside these two subfolders contain the compiled source code.

Note: All code changes should be made to the files in lib

@@something to write by miketaylr@@

Working Environment setup

For testing code locally, you will need a very basic setup. There are a few requirements. These instructions cover Linux, Windows, and Mac OS X. You need:

Note: If you install Python on Windows using the MSI installer, it is highly recommended to check the "Add to path" box during installation. If you have not done so, see if one of the answers to the StackOverflow post Adding Python path on Windows 7 can help you - it should also work fine for later versions of Windows.

Windows typically doesn't have the make tool installed. Windows users without make should look at the "detailed setup" section below.

As an alternative to Windows, a cloud IDE such as Cloud 9 can be used for a relatively easy setup. If you take this route, please update to the latest Python version with the following commands. (This is to avoid InsecurePlatformWarning errors that arise when the default Python 2.7.6 is used.)

sudo apt-add-repository ppa:fkrull/deadsnakes-python2.7
sudo apt-get update
sudo apt-get install python2.7 python2.7-dev

In Ubuntu, sometimes even after installing Node.js, the command node -v does not show the installed version. To complete installation, a symbolic link has to be created to the sbin folder.

#remove old symbolic links if any
sudo rm -r /usr/bin/node

#add new symbolic link
sudo ln -s /usr/bin/nodejs /usr/bin/node
sudo ln -s /usr/bin/nodejs /usr/sbin/node

Simple setup (Mac OS and Linux)

Initializing Project source code

We use Grunt as a task runner to perform certain things (minify + concat JS assets, for example). You need to have Node.js to be able to run Grunt.

# clone the repo
git clone<username>/ #replace your github username
# change to directory
# check out submodules
npm run module
# initializing project
[sudo] npm run setup

Note: if you get an error message, you may need to install pip before running make install again.

Detailed setup (All platforms)

Installing pip

We use pip to install other Python packages. You may need to install pip if you haven't done so for another project or Python development.

To determine if you need to install pip, type the following command into the terminal:

pip --version

If you get an error message, Mac/Linux users can try to install pip with this command:

# (Mac/Linux)
sudo easy_install pip

(If easy_install isn't installed, you'll need to install setuptools.)

Windows users should simply download the most recent Python 2.7 installer and run it again; it installs pip by default.

Installing virtualenv

# Install virtualenv
[sudo] pip install virtualenv

Installing Project source code

# clone the repo. Change username to your Github username
git clone
# change to directory
# check out submodules
git submodule init
git submodule update
# set up virtual environment
[sudo] virtualenv env
source env/bin/activate
# install Pillow image lib dependencies (if you plan on hacking on image upload features)
#  OSX:
#  Windows:
#  Linux:
# install rest of dependencies
pip install -r config/requirements.txt
# In Ubuntu: if ImportError: No module named flask.ext.github occurs, it means the dependencies in requirements.txt are installed in /usr/lib instead of <project_repository>/env/python<version>/site-packages.
# In this case, use virtual environment's pip from <project_repository>/env/lib/pip folder of the project repository instead of the global pip.

Installing Grunt

We use Grunt as a task runner to perform certain tasks (minify + concat JS assets, for example). You need to have Node.js installed to be able to run Grunt. Once that's done, npm can be used to install Grunt and other build dependencies.

First install the grunt-cli tool:

[sudo] npm install -g grunt-cli
[sudo] npm install

Configuring The Server

To test issue submission, you need to create a repository on GitHub. Create a new repository and make note of the name. For example, the user miketaylr has created a repository called "test-repo" for this purpose.

# set up, filling in appropriate secrets and pointers to repos
# Mac / Linux
cp config/ config/
# Windows
copy config/ config/

Note: If you are using Cloud 9, you have to update the server startup call to app.run(host=os.getenv("IP", ""), port=int(os.getenv("PORT", 8080))).

You can now edit the copied config file:

  1. Add the right values to the repo issues URIs. ISSUES_REPO_URI = "<user>/<repo>/issues". For example, miketaylr's setup needs to say ISSUES_REPO_URI = "miketaylr/test-repo/issues"

  2. You have the option of creating a "bot account" (a dummy account for the purpose of testing), or using your own account for local development. Either way, you'll need a personal access token to proceed — this is the oauth token we use to report issues on behalf of people who don't want to give GitHub oauth access (or don't have GitHub accounts).

The instructions for creating a personal access token are given on GitHub. Select public_repo to grant access to the public repositories through the personal access token. Once you have created the token you can add it in the variable OAUTH_TOKEN = "" (yes, even if you're using your own credentials we still refer to it as a bot). More advanced users might want to create an environment variable called OAUTH_TOKEN. Either way is fine.

  3. Add the client id and client secret to the config file. If you're part of the webcompat GitHub organization, you can get the client id and client secret from GitHub. Otherwise, create your own test and production applications (instructions here). When prompted for an "Authorization callback URL", use http://localhost:5000/callback (Cloud 9 users should use their own instance's callback URL), and take note of the client id and client secret GitHub gives you.

When you have the client id and client secret put them in the corresponding lines in for the localhost application:

# We're running on localhost, use the test application
GITHUB_CLIENT_ID = os.environ.get('FAKE_ID') or "<client id goes here>"
GITHUB_CLIENT_SECRET = os.environ.get('FAKE_SECRET') or  "<client secret goes here>"

Note: You can ignore the FAKE_ID and FAKE_SECRET environment variables, we use that as a hack for automated tests.
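If you go the environment-variable route for OAUTH_TOKEN, a minimal sketch of the corresponding config line could look like the following (the placeholder fallback string is hypothetical):

```python
import os

# Read the bot token from the environment, with an inline placeholder as
# a fallback, mirroring the FAKE_ID / FAKE_SECRET pattern above.
OAUTH_TOKEN = os.environ.get('OAUTH_TOKEN') or "<token goes here>"
```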

Note: If you get a 404 at GitHub when clicking "Login", it means you haven't filled in the GITHUB_CLIENT_ID or GITHUB_CLIENT_SECRET in

Auth 404

Starting The Server

# start local server
npm run start

You should now have a local instance of the site running at http://localhost:5000/. Please file bugs if something went wrong!

Building the Project

After certain kinds of changes are made, you need to build the project before the changes will show up when served from a webserver:

  • CSS: a build will run cssnext, combine custom media queries, and concat all source files. You'll need to re-build the CSS to see any changes, so it's recommended to use a watch task (see make watch or grunt watch).
  • JS: a build will run eslint, then minify and concat source files.
  • HTML templates: changes are served from disk without the need for rebuilding.
  • Python: the Flask local server will detect changes and restart automatically. No need to re-build.

You can build the entire project (CSS and JavaScript files and optimize images) by executing this command on Mac/Linux:

npm run build

and this command on Windows:

npm run watch


Build the entire project (CSS and JavaScript files and optimize images) on the fly every time you save a file to see changes immediately.

# watching CSS and JS
npm run watch

Lint static JS files against the project coding style.

# linting style JS
npm run lint

Fix static JS files according to the project coding style when an error occurs.

# fixing linting style JS
npm run fix

By default, a build will not optimize images (which is done before deploys). If you'd like to optimize images, you can run grunt imagemin.

Running Tests

You can run the Python unit tests from the project root with the nosetests command.

Running functional tests is a bit more involved (see the next section).

Tests are also run automatically on Travis for each commit. If you would like to skip running tests for a given commit, you can use the magical [ci skip] string in your commit message. See the Travis docs for more info.

Functional Tests

We use Intern to run functional tests.

Note: This version is known to work with Firefox 50.1.0. If things aren't working with the current stable version of Firefox, please file a bug!

Installing Java

Java is used to run Selenium functional tests. Version 1.8.0+ is required.

To test if your version of Java is recent enough, type java -version into your terminal.

> java -version
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)

Download it from the official Java site. On Ubuntu, you can install it with:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

The firefox binary will also need to be in your PATH. Here's how this can be done on OS X:

export PATH="/Applications/$PATH"

If you are a member of the webcompat organization on GitHub, edit the file in config/. The value of ISSUES_REPO_URI is the path of the repository containing test issues.

Change the value to : ISSUES_REPO_URI = 'webcompat/webcompat-tests/issues'.

Start the application server:

source env/bin/activate && python

In a separate terminal window or tab, run the tests:

node_modules/.bin/intern-runner config=tests/intern

Shortly after running this command, you should see the browser open and various pages appear and disappear automatically for a minute or two. The tests are complete when the browser window closes and you see a report of how many passed or failed in the terminal window that you ran the intern-runner command in.

Many tests require the ability to log in with GitHub OAuth. This can be achieved by passing in a valid GitHub username: user and password: pw as command-line arguments:

node_modules/.bin/intern-runner config=tests/intern user=testusername pw=testpassword

Note: Be aware that this will add testusername and testpassword to your bash history. It is possible to run the tests without passing a GitHub username and password as command-line arguments. In that case, the automatic login will fail and you then have 10 seconds to manually enter a username and password in the GitHub login screen that appears.

node_modules/.bin/intern-runner config=tests/intern user=testusername pw=testpassword

This will give you 10 extra seconds to enter a 2FA token when the initial login happens. By default there is no delay, so if you don't need this, you don't need to do anything differently.

To run a single test suite, where foo.js is the file found in the tests/functional directory:

node_modules/.bin/intern-runner config=tests/intern functionalSuites=tests/functional/foo.js user=testusername pw=testpassword

Functional Tests using Fixture Data

It's possible to mock the communications with the GitHub API servers using local fixture data. To run tests using these mocked responses, run the server in "test mode":

python -t

You can then run intern tests or do local development and the files in the /tests/fixtures/ directory will be served as responses.

Adding Fixtures

To indicate that the app should send fixture data, use the @mockable_response decorator for an API endpoint.
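To illustrate the idea (this is a self-contained toy in the spirit of @mockable_response, NOT the project's actual implementation; the names and fixture data are made up):

```python
import functools

# Hypothetical canned responses keyed by request path.
FIXTURES = {'/api/issues': '[{"number": 1}]'}
TEST_MODE = True  # corresponds to running the server in test mode


def mockable_response(func):
    """Serve fixture data instead of hitting GitHub when in test mode."""
    @functools.wraps(func)
    def wrapper(path, *args, **kwargs):
        if TEST_MODE and path in FIXTURES:
            return FIXTURES[path]  # canned fixture response
        return func(path, *args, **kwargs)  # real API call otherwise
    return wrapper


@mockable_response
def get_issues(path):
    return 'real GitHub API response'
```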

If the endpoint you are trying to mock has GET parameters, you will need to create a file that has the GET parameters encoded in the filename. The source of @mockable_response explains how this is done:

if get_args:
    # if there are GET args, encode them as a hash so we can
    # have different fixture files for different response states
    checksum = hashlib.md5(json.dumps(get_args)).hexdigest()
    file_path = FIXTURES_PATH + request.path + "." + checksum
    print('Expected fixture file: ' + file_path + '.json')

You can look at the server console's Expected fixture file: message to know what file it is expecting.
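The naming scheme can also be reproduced with a small helper to predict the filename ahead of time (a sketch, not project code; note the .encode() call, which the Python 2-era snippet above does not need):

```python
import hashlib
import json


def expected_fixture_path(fixtures_path, request_path, get_args):
    """Predict the fixture filename the decorator will look for."""
    # Hash the GET args so each response state gets its own fixture file.
    checksum = hashlib.md5(json.dumps(get_args).encode('utf-8')).hexdigest()
    return fixtures_path + request_path + '.' + checksum + '.json'
```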

Writing Tests

Contributions that add or modify major functionality to the project should typically come with tests to ensure we're not breaking things (or won't in the future!). There's always room for more testing, so general contributions in this form are always welcome.

Python Unit Tests

Our Python unit tests are vanilla flavored unittest tests. Unit tests placed in the tests directory will be automatically detected by nose—no manual registration is necessary.

Unit tests are preferred for features or functionality that are independent of the browser front-end, i.e., API responses, application routes, etc.
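For illustration, a minimal vanilla unittest in this style might look like the following (the helper function and names are hypothetical, not actual project tests):

```python
import unittest


def normalize_url(url):
    """Strip a trailing slash; a stand-in for real application code."""
    return url.rstrip('/')


class TestNormalizeUrl(unittest.TestCase):
    """Dropped into the tests directory, nose finds this automatically."""

    def test_strips_trailing_slash(self):
        self.assertEqual(normalize_url('http://example.com/'),
                         'http://example.com')
```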

Important documentation links:

JS Functional Tests

Functional tests are written in JavaScript, using Intern. There's a nice guide on the Intern wiki that should explain enough to get you started.

Important documentation links:

  • Leadfoot: the library that drives the browser (via Selenium).
  • ChaiJS: the library used for assertions.
  • Intern wiki: contains useful examples.

It's also recommended to look at the other test files in the tests/functional directory to see how things are commonly done.

Production Server Setup

The current instance of the site has an nginx server in front of the Flask application. These are the few things you need to know if you want to replicate the current configuration of the server. You will need to adjust for your own environment.

The configuration file is often located at something similar to:


It depends on your local system, so we encourage you to read the documentation for your local server. You would then create a symbolic link to it in your local /etc/nginx/sites-enabled/. The gist of the nginx configuration file is:

server {
  listen 80;
  root $HOME/;
  error_log $LOGS/nginx-error.log;
  location / {
    # serve static assets, or pass off requests to uwsgi/python
    try_files $HOME/$uri $uri @wc;
  }
  location @wc {
    uwsgi_pass unix:///tmp/uwsgi.sock;
    include uwsgi_params;
  }
}
We also have the following content type handlers.

# Gzip Settings

gzip on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.0;

# Turn on gzip for all content types that should benefit from it.
gzip_types application/ecmascript;
gzip_types application/javascript;
gzip_types application/json;
gzip_types application/pdf;
gzip_types application/postscript;
gzip_types application/x-javascript;
gzip_types image/svg+xml;
gzip_types text/css;
gzip_types text/csv;
# "gzip_types text/html" is assumed.
gzip_types text/javascript;
gzip_types text/plain;
gzip_types text/xml;

We are also using uWSGI.

upstream uwsgicluster {

with the following configuration file uwsgi.conf

# our uWSGI script to run

description "uwsgi service"
start on runlevel [2345]
stop on runlevel [06]


# .ini files for (staging.ini) and (production.ini) are in $HOME/vassals
exec /usr/local/bin/uwsgi --emperor $HOME/vassals

We have been using uWSGI Emperor to manage two environments for staging and production. It gives us the possibility to test features which are not yet fully ready for production without messing up the actual site.



socket = $FOO/uwsgi.sock
chmod-socket = 666
chdir = $HOME/
master = true
module = webcompat
callable = app
logto = $LOGS/uwsgi.log
buffer-size = 8192

and staging.ini


socket = $FOO/uwsgi2.sock
chmod-socket = 666
chdir = $HOME/
module = webcompat
callable = app
logto = $LOGS/staging-uwsgi.log
buffer-size = 8192

Hopefully this will help clear up a few struggles.


A lot of this document was inspired directly by the excellent Backbone.LayoutManager and Angular.js CONTRIBUTING files.