experiment-impact-tracker


The experiment-impact-tracker is meant to be a simple drop-in tool to track the energy usage, carbon emissions, and compute utilization of your system. Currently, on Linux systems with Intel chips (that support the RAPL powercap interface) and NVIDIA GPUs, we record: power draw from the CPU and GPU, hardware information, Python package versions, estimated carbon emissions information, and more. In California we even support realtime carbon emissions information via live grid queries!
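On Linux, Intel's RAPL counters are exposed through the powercap sysfs interface as cumulative energy files. A minimal self-contained sketch of reading one (the path and fallback behavior here are illustrative assumptions, not the tracker's own code):

```python
import os

# Package-level RAPL energy counter (hypothetical default path; varies by machine).
RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_rapl_energy_uj(path=RAPL_ENERGY_FILE):
    """Return the cumulative package energy in microjoules, or None if RAPL is unavailable."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

# Average power draw can then be estimated from two samples:
# (e2 - e1) microjoules over dt seconds gives (e2 - e1) / dt / 1e6 watts.
```

The counter wraps around periodically, so a real sampler also has to handle the case where a later reading is smaller than an earlier one.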

Once all this information is logged, you can generate an online appendix that presents it.


To install:

pip install experiment-impact-tracker


Please go to the docs page for detailed info on the design, usage, and contributing:

If you find the docs unhelpful or in need of expansion, let us know with a GitHub issue!

Below are some short snippets that might help.


You just need to add a few lines of code!

from experiment_impact_tracker.compute_tracker import ImpactTracker
tracker = ImpactTracker(<your log directory here>)
tracker.launch_impact_monitor()

This will launch a separate process that gathers compute/energy/carbon information in the background.

NOTE: Because of the way Python multiprocessing works, this process will not interrupt the main one UNLESS you periodically call the following. This reads the latest info from the log file and checks for any errors that might have occurred in the tracking process. If you have a better idea of how to handle exceptions in the tracking process, please open an issue or submit a pull request!

info = tracker.get_latest_info_and_check_for_errors()
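The pattern behind this is generic to Python multiprocessing: a child process cannot raise an exception into its parent, so errors have to be shipped across a queue and polled explicitly. A minimal self-contained sketch of that pattern (not the tracker's actual implementation):

```python
import multiprocessing
import queue

def _tracker_worker(error_queue):
    """Background worker: ship any exception to the parent instead of dying silently."""
    try:
        # ... gather power/carbon samples here ...
        raise RuntimeError("sensor read failed")  # simulate a tracking error
    except Exception as e:
        error_queue.put(repr(e))

def check_for_errors(error_queue):
    """Parent-side poll, analogous in spirit to get_latest_info_and_check_for_errors()."""
    try:
        err = error_queue.get_nowait()
    except queue.Empty:
        return None  # no news is good news
    raise RuntimeError(f"background tracker failed: {err}")

# Usage sketch:
#   q = multiprocessing.Queue()
#   multiprocessing.Process(target=_tracker_worker, args=(q,)).start()
#   ...then call check_for_errors(q) periodically from the training loop.
```

If the parent never polls, the worker's failure goes unnoticed and you silently lose tracking data, which is exactly why the periodic call above matters.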

Generating an HTML appendix

After putting all your experiments into a folder, the tool can automatically find the impact tracker's logs and generate an HTML appendix using the commands below.

First, create a json file with the structure of the website you'd like to see (this lets you create hierarchies of experiments as web pages).

For an example of all the capabilities of the tool you can see the json structure here:

Basically, you can group several runs together and specify variables to summarize. The easiest approach is to copy-paste the example above and remove what you don't need; here are descriptions of what is being specified:

"Comparing Translation Methods" : {
  # filter: a regex we use to look through the directory
  # you specify and find experiments with this in the directory structure
  "filter" : "(translation)",
  # description: use this to talk about your experiment
  "description" : "An experiment on translation.",
  # executive_summary_variables: this will aggregate the sums and averages across these metrics.
  # you can see available metrics to summarize here:
  "executive_summary_variables" : ["total_power", "exp_len_hours", "cpu_hours", "gpu_hours", "estimated_carbon_impact_kg"],
  # child_experiments: the child experiments to group together
  "child_experiments" : {
            "Transformer Network" : {
                                "filter" : "(transformer)",
                                "description" : "A subset of experiments for transformer experiments"
            },
            "Conv Network" : {
                                "filter" : "(conv)",
                                "description" : "A subset of experiments for conv experiments"
            }
  }
}
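Stripped of the annotation comments, the spec is ordinary JSON; a minimal sketch of parsing and walking such a hierarchy (the spec literal below is an invented example following the shape above, not the project's own schema file):

```python
import json

# A comment-free spec in the shape shown above (hypothetical values).
SPEC = json.loads("""
{
  "Comparing Translation Methods": {
    "filter": "(translation)",
    "description": "An experiment on translation.",
    "executive_summary_variables": ["total_power", "exp_len_hours"],
    "child_experiments": {
      "Transformer Network": {"filter": "(transformer)", "description": "..."},
      "Conv Network": {"filter": "(conv)", "description": "..."}
    }
  }
}
""")

def walk(spec, depth=0):
    """Yield (depth, page_name, filter_regex) for every page in the hierarchy."""
    for name, node in spec.items():
        yield depth, name, node.get("filter")
        # Recurse into grouped child experiments, one level deeper.
        yield from walk(node.get("child_experiments", {}), depth + 1)

pages = list(walk(SPEC))
```

Each entry in `pages` corresponds to one generated web page, with the filter regex selecting which experiment directories feed it.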

Then you just run this script, pointing it to your data, the json file, and an output directory.

create-compute-appendix ./data/ --site_spec leaderboard_generation_format.json --output_dir ./site/

To see this in action, take a look at our RL Energy Leaderboard.

The specs are here:

And the output looks like this:

Creating a carbon impact statement

We can also use a script to automatically generate a carbon impact statement for your paper! Just call the command below; it will find all the log files generated by the tool and calculate emissions information. Specify your ISO3 country code as well to get a dollar amount based on the per-country cost of carbon.

generate-carbon-impact-statement my_directories that_contain all_my_experiments "USA"
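The underlying dollar conversion is simple: kilograms of CO2eq times a per-tonne carbon price. A self-contained sketch with a made-up price (the statement script's actual per-country figures will differ):

```python
def carbon_cost_usd(emissions_kg, usd_per_tonne):
    """Convert kg of CO2eq to a dollar figure at a given cost of carbon ($/tonne)."""
    return emissions_kg * usd_per_tonne / 1000.0  # 1000 kg per tonne

# e.g. 125 kg of CO2eq at a hypothetical $50/tonne carbon price:
cost = carbon_cost_usd(125.0, 50.0)  # 6.25 USD
```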

Looking up cloud provider emission info

Based on energy grid locations, we can estimate emissions from cloud providers using our tools. A script to do that is here:

lookup-cloud-region-info aws

Or you can look up emissions information for your own address!

% get-region-emissions-info address --address "Stanford, California"

({'geometry': <shapely.geometry.multipolygon.MultiPolygon object at 0x1194c3b38>,
  'id': 'US-CA',
  'properties': {'zoneName': 'US-CA'},
  'type': 'Feature'},
 {'_source': ' '
             '(ElectricityMap Average, 2019)',
  'carbonIntensity': 250.73337617853463,
  'fossilFuelRatio': 0.48888711737336304,
  'renewableRatio': 0.428373256377554})
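The carbonIntensity field is in grams of CO2eq per kWh, so total emissions follow directly from measured energy use. A self-contained sketch of that conversion (the kWh figure is invented for illustration):

```python
def estimated_emissions_kg(energy_kwh, carbon_intensity_g_per_kwh):
    """kWh consumed times grid carbon intensity (gCO2eq/kWh), converted to kg."""
    return energy_kwh * carbon_intensity_g_per_kwh / 1000.0

# Using the US-CA intensity from the output above (~250.73 gCO2eq/kWh)
# and a hypothetical 12 kWh experiment:
kg = estimated_emissions_kg(12.0, 250.73337617853463)  # ~3.01 kg CO2eq
```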

Asserting certain hardware

It may be the case that you're trying to run two sets of experiments and compare their emissions, energy, etc. In this case, you generally want to ensure parity between the two sets: if you're running on a cluster, you might not want to accidentally use a different GPU/CPU pair. To guard against this, we provide an assertion check you can add to your code that will kill a job if it's running on the wrong hardware combo. For example:

from experiment_impact_tracker.gpu.nvidia import assert_gpus_by_attributes
from experiment_impact_tracker.cpu.common import assert_cpus_by_attributes

assert_gpus_by_attributes({ "name" : "GeForce GTX TITAN X"})
assert_cpus_by_attributes({ "brand": "Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz" })
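The pattern generalizes: compare a dict of required attributes against detected hardware info and fail fast on any mismatch. A self-contained sketch of that idea (not the library's implementation; in practice the detected dict would come from nvidia-smi or cpuinfo):

```python
def assert_hardware_by_attributes(required, detected):
    """Raise if any required attribute doesn't match the detected hardware info."""
    for key, expected in required.items():
        actual = detected.get(key)
        if actual != expected:
            raise AssertionError(
                f"hardware mismatch on {key!r}: expected {expected!r}, got {actual!r}"
            )

# e.g. with hardware info gathered at job startup:
detected = {"name": "GeForce GTX TITAN X", "memory_gb": 12}
assert_hardware_by_attributes({"name": "GeForce GTX TITAN X"}, detected)  # passes
```

Failing fast at startup is much cheaper than discovering after days of training that the two experiment sets ran on incomparable hardware.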

Building docs

sphinx-build -b html docsrc docs

Compatible Systems

Right now, we're only compatible with Linux systems running NVIDIA GPUs and Intel processors that support RAPL.

If you'd like support for your use case or encounter missing/broken functionality on your system, please open an issue or, better yet, submit a pull request! It's almost impossible to cover every combination on our own!

Tested Successfully On

GPUs:

  • NVIDIA Titan X
  • NVIDIA Titan V

CPUs:

  • Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
  • Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

OS:

  • Ubuntu 16.04.5 LTS


If you use this work, please cite our paper:

    title={Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning},
    author={Peter Henderson and Jieru Hu and Joshua Romoff and Emma Brunskill and Dan Jurafsky and Joelle Pineau},

Also, we rely on a number of downstream packages to make this work possible. For carbon accounting, we relied on existing open source code as an initial base. psutil provides many of the compute metrics we use, and nvidia-smi and Intel RAPL provide energy metrics.
