Note: this repository was archived by the owner on Aug 19, 2022, and is now read-only.



Point Loma

A Python library to execute Lighthouse audits and export the results of performance tests run against different URLs. It aims to track code quality and improve user experience.

Requirements


  • Python 3 (not compatible with Python 2 at the moment)
  • Lighthouse
  • Google Chrome (>= Chrome 59 for headless support)

Installation


  • Clone the repository or download the project as a zip file from this GitHub page
  • Install Lighthouse as a Node command line tool:
    npm install -g lighthouse
    # or use yarn:
    yarn global add lighthouse

Note: Lighthouse requires Node 6 or later. It is recommended to install the current Long-Term Support version of Node.



Create a virtualenv targeting the Python 3 interpreter, e.g.:

  • python3.6 -m venv ~/.venvs/pointloma3.6
  • source ~/.venvs/pointloma3.6/bin/activate

Fetch the package dependencies:

pip install -r requirements.txt

Usage


python pointloma [-h] [-r RUNS] [-o OUTPUT_PATH] [-v] url


positional arguments:
  url                   url to test against

optional arguments:
  -h, --help            show this help message and exit
  -r RUNS, --runs RUNS  number of test runs
  -o OUTPUT_PATH, --output-path OUTPUT_PATH
                        path to csv file output
  -v, --verbose         increase output verbosity
  -a AUTH_MODULE, --auth-module AUTH_MODULE
                        authentication module to use

Authentication



Point Loma supports testing URLs that sit behind authentication via custom auth modules. Currently a single auth module, kolibri, is supported, but additional ones can be written for specific use cases.

The kolibri auth module enables authentication against Kolibri instances. Its code is extensively commented, so it should be rather straightforward to use it as a starting point for your own custom authentication module.
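As a hedged sketch of what a custom auth module might do (the function names and return value below are illustrative assumptions, not the interface the kolibri module actually defines), such a module typically reads the credentials from the environment and turns a session cookie into something Lighthouse can consume, e.g. JSON for Lighthouse's --extra-headers flag:

```python
import json
import os


def get_credentials():
    """Read the credentials Point Loma expects from the environment.

    Raises KeyError if the variables are not set, which surfaces a
    misconfiguration early instead of failing mid-audit.
    """
    return os.environ["POINTLOMA_USERNAME"], os.environ["POINTLOMA_PASSWORD"]


def build_extra_headers(session_cookie):
    """Serialize a session cookie into JSON for Lighthouse's --extra-headers flag."""
    return json.dumps({"Cookie": session_cookie})


# A real module would POST the credentials returned by get_credentials() to
# the login endpoint of the instance under test and capture the session
# cookie from the response; "sessionid=example" below is a placeholder.
print(build_extra_headers("sessionid=example"))  # prints {"Cookie": "sessionid=example"}
```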


To use kolibri (or a custom auth module), set the following environment variables with the user credentials, e.g. in your ~/.bashrc:

  • export POINTLOMA_USERNAME=yourusername
  • export POINTLOMA_PASSWORD=yourpassword

You can also simply prepend the environment variables to the pointloma CLI command, but in that case you would have to type them every time you run it (adding them to your ~/.bashrc or a similar script persists them), e.g.:

POINTLOMA_USERNAME=yourusername POINTLOMA_PASSWORD=yourpassword python pointloma --auth-module kolibri http://localhost:8000

Examples


Specifying the number of tests to run

python pointloma -r 3 http://localhost:8000

Specifying the number of tests to run and the name of the csv file output

python pointloma -r 3 --output-path /tmp/example.csv http://localhost:8000

Running the test once with verbose logging output

python pointloma -v http://localhost:8000

Running the test using authentication modules

Environment variables added to e.g. ~/.bashrc:

python pointloma --auth-module kolibri http://localhost:8000

Environment variables prepended to the command:

POINTLOMA_USERNAME=yourusername POINTLOMA_PASSWORD=yourpassword python pointloma --auth-module kolibri http://localhost:8000

Output



The resulting output is a comma-delimited csv file with the following columns:

  • Timestamp
  • First Meaningful Paint [ms]
  • First Interactive [ms]
  • Consistently Interactive [ms]
  • Speed Index [ms]
  • Estimated Input Latency [ms]
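
The rows are straightforward to post-process with Python's csv module. A small sketch (the header row is an assumption based on the column names above, and the values are made-up sample data, not real measurements):

```python
import csv
import io
import statistics

# Made-up sample rows laid out in the columns described above.
sample = """\
Timestamp,First Meaningful Paint [ms],First Interactive [ms],Consistently Interactive [ms],Speed Index [ms],Estimated Input Latency [ms]
2018-01-01 10:00:00,1500,2000,2500,1800,16
2018-01-01 10:01:00,1700,2200,2700,2000,18
"""

rows = list(csv.DictReader(io.StringIO(sample)))
fmp = [float(row["First Meaningful Paint [ms]"]) for row in rows]
print(statistics.mean(fmp))  # prints 1600.0
```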


The csv file with the results will be written to one of the following two locations:

  • the path specified by the -o or --output-path CLI argument
  • the output directory under the pointloma codebase root directory, e.g.:
    • output/results_1987_08_20__11_22_33__123456.csv

Appending to the output

The same output path can be used for multiple test runs (via the -o or --output-path CLI option), as the results are simply appended to the specified csv file.

This approach can be useful when testing a single URL across different code states or repository branches and gathering the results in a single csv file for easier processing.
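The append behavior itself is nothing more exotic than opening the file in append mode; a minimal sketch of the effect (the path and helper name here are hypothetical, not Point Loma's internals):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "results.csv")


def append_row(csv_path, row):
    """Append one result row; the header is written only when the file is new."""
    is_new = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["Timestamp", "First Meaningful Paint [ms]"])
        writer.writerow(row)


# Two separate "runs" pointed at the same output path accumulate rows.
append_row(path, ["2018-01-01 10:00:00", 1500])
append_row(path, ["2018-01-01 10:01:00", 1700])

with open(path) as f:
    print(sum(1 for _ in f))  # prints 3 (header + two runs)
```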

Next steps

