
Comet-For-MLFlow Extension


The Comet-For-MLFlow extension is a CLI that maps MLFlow experiment runs to Comet experiments. It lets you explore your existing experiments in the Comet UI, which provides authenticated access to experiment results, dramatically better performance for high-volume experiment runs, and richer charting and visualization options.

This extension synchronizes previous MLFlow experiment runs; runs tracked with Comet's Python SDK with MLFlow support stay synchronized automatically, giving you deeper experiment instrumentation and improved logging, visibility, project organization, and access management.

The Comet-For-MLFlow extension is free, open-source software released under the GNU General Public License v3. It can be used with existing Comet accounts or with a new, free Individual account.


Installation

pip install comet-for-mlflow

If you install comet-for-mlflow in a different Python environment than the one used to generate the MLFlow runs, make sure both environments use the same mlflow version.
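To compare the two environments, a minimal stdlib-only sketch that prints the installed mlflow version (or None) without importing mlflow itself:

```python
# Sketch: report which version of a distribution an environment has installed,
# so the run-generating and importing environments can be compared.
from importlib.metadata import PackageNotFoundError, version


def installed_version(dist_name):
    """Return the installed version string for dist_name, or None."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None


# Run this in each environment and compare the output.
print(installed_version("mlflow"))
```

If the versions differ, reinstall mlflow in the importing environment with an explicit version pin.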

Basic usage

To automatically synchronize MLFlow runs in their default storage location (./mlruns) with Comet, run:

comet_for_mlflow --api-key $COMET_API_KEY --rest-api-key $COMET_REST_API_KEY

If you'd like to review the mapping of MLFlow runs in their default storage location without synchronizing them automatically, run:

comet_for_mlflow --no-upload

After review, you can upload the mapped MLFlow runs with:

comet upload /path/to/


 __   __         ___ ___     ___  __   __                 ___       __
/  ` /  \  |\/| |__   |  __ |__  /  \ |__) __  |\/| |    |__  |    /  \ |  |
\__, \__/  |  | |___  |     |    \__/ |  \     |  | |___ |    |___ \__/ |/\|

Please create a free Comet account with your email.

Please enter a username for your new account.
Username: kstewart

An account has been created for you and an email was sent to you so you can set up your password later.
Your Comet API Key has been saved to ~/.comet.ini; it is also available on your dashboard.
Starting Comet Extension for MLFlow

Preparing data locally from: '/home/ks/project/mlruns'
You will have an opportunity to review.

# Preparing experiment 1/3: Default

# Preparing experiment 2/3: Keras Experiment
## Preparing run 1/4 [2e02df92025044669701ed6e6dd300ca]
## Preparing run 2/4 [93fb285da7cf4c4a93e279ab7ff19fc5]
## Preparing run 3/4 [2e8a1aed22544549b2b6b6b2c5976ed9]
## Preparing run 4/4 [82f584bad7604289af61bc505935599b]

# Preparing experiment 3/3: Tensorflow Keras Experiment
## Preparing run 1/2 [99550a7ce4c24677aeb6a1ae4e7444cb]
## Preparing run 2/2 [88ca5c4262f44176b576b54e0b24731a]

 MLFlow name:   | Comet name:      |   Prepared count:
 Experiments    | Projects         |                 3
 Runs           | Experiments      |                 6
 Tags           | Others           |                39
 Parameters     | Parameters       |                51
 Metrics        | Metrics          |                60
 Artifacts      | Assets           |                27

All prepared data has been saved to: /tmp/tmpjj74z8bf

Upload prepared data to Comet.ml [y/N] y

# Start uploading data to Comet.ml
100%|███████████████████████████████████████████████████████████████████████| 6/6 [01:00<00:00, 15s/it]
Explore your experiment data on Comet.ml with the following links:
Get deeper instrumentation by adding the Comet SDK to your project:

If you need support, please contact us.

Advanced use

Importing MLFlow runs from a database store or the MLFlow server store

If your MLFlow runs are not located in the default local store (./mlruns), you can set either the --mlflow-store-uri CLI flag or the MLFLOW_TRACKING_URI environment variable to point to the right store.

For example, with a different local store path:

comet_for_mlflow --mlflow-store-uri /data/mlruns/

With a SQL store:

comet_for_mlflow --mlflow-store-uri sqlite:///path/to/file.db

Or with an MLFlow server:

comet_for_mlflow --mlflow-store-uri http://localhost:5000
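Equivalently, the store can be selected through the environment variable instead of the flag. A minimal Python sketch, using the SQL-store URI from the example above:

```python
# Sketch: select a non-default MLFlow store via MLFLOW_TRACKING_URI instead of
# the --mlflow-store-uri flag (URI value is the SQL-store example above).
import os

os.environ["MLFLOW_TRACKING_URI"] = "sqlite:///path/to/file.db"

# comet_for_mlflow, launched afterwards from this environment, reads this value.
print(os.environ["MLFLOW_TRACKING_URI"])
```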

Importing MLFlow artifacts stored remotely

If your MLFlow runs have artifacts stored remotely (in any of the supported remote artifact stores), you need to configure your environment the same way as when you ran those experiments. For example, with a local Minio server:

env MLFLOW_S3_ENDPOINT_URL=http://localhost:9001 \
    AWS_ACCESS_KEY_ID=minio \
    AWS_SECRET_ACCESS_KEY=minio123 \
    comet_for_mlflow
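The same configuration can be expressed as an explicit environment for a subprocess; a sketch using the example Minio credentials above (the subprocess call is commented out because it assumes comet_for_mlflow is installed):

```python
# Sketch: pass the remote-artifact-store configuration as an explicit
# environment (values are the example Minio credentials from above).
import os
import subprocess

env = dict(
    os.environ,
    MLFLOW_S3_ENDPOINT_URL="http://localhost:9001",
    AWS_ACCESS_KEY_ID="minio",
    AWS_SECRET_ACCESS_KEY="minio123",
)
# subprocess.run(["comet_for_mlflow"], env=env, check=True)  # hypothetical run
```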

How can I configure my API Key or Rest API Key?

You can pass your API Key or REST API Key either as command-line flags or through the usual Comet configuration options.
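For example, a sketch of storing the keys in an INI file like the ~/.comet.ini mentioned in the transcript above. The [comet] section and the api_key / rest_api_key field names are assumptions based on common Comet configuration, so verify them against the Comet documentation:

```python
# Sketch: write API keys to a Comet-style INI config.
# ASSUMPTION: section/key names ([comet], api_key, rest_api_key) mirror the
# usual Comet configuration; check the official docs before relying on them.
import configparser
import io

cfg = configparser.ConfigParser()
cfg["comet"] = {
    "api_key": "YOUR-API-KEY",
    "rest_api_key": "YOUR-REST-API-KEY",
}

buf = io.StringIO()
cfg.write(buf)  # in practice, write to open(os.path.expanduser("~/.comet.ini"), "w")
print(buf.getvalue())
```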

How are MLFlow experiments mapped to projects?

Each MLFlow experiment is mapped to a unique Comet project ID, so even if you rename the project or the MLFlow experiment, new runs are imported into the correct project. The name for newly created projects is mlflow-$MLFLOW_EXPERIMENT_NAME. The original MLFlow experiment name is also saved as an Other field named mlflow.experimentName.
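The naming convention can be sketched as a one-line mapping; note this is illustrative only, and the real extension may normalize names differently:

```python
# Sketch of the mlflow-$MLFLOW_EXPERIMENT_NAME project-naming convention
# described above; the actual extension may apply extra normalization.
def comet_project_name(mlflow_experiment_name):
    """Map an MLFlow experiment name to the Comet project name pattern."""
    return "mlflow-" + mlflow_experiment_name


print(comet_project_name("Keras Experiment"))  # → mlflow-Keras Experiment
```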

Below is a complete list of MLFlow experiment and run fields mapped to equivalent Comet concepts:

  • MLFlow Experiments are mapped as projects
  • MLFlow Runs are mapped as experiments
  • MLFlow Runs fields are imported according to following table:
 MLFlow Run Field | Comet Experiment Field
 File name        | File name
 Tags             | Others
 User             | Git User + System User
 Git parent       | Git parent
 Git origin       | Git Origin
 Params           | Params
 Metrics          | Metrics
 Artifacts        | Assets
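For scripts that post-process imported runs, the field mapping above can be kept as a simple lookup table (names copied verbatim from the table; the dict itself is just a convenience, not part of the extension's API):

```python
# The MLFlow-run-field to Comet-experiment-field mapping from the table above,
# as a lookup dict for post-processing scripts.
MLFLOW_TO_COMET_FIELDS = {
    "File name": "File name",
    "Tags": "Others",
    "User": "Git User + System User",
    "Git parent": "Git parent",
    "Git origin": "Git Origin",
    "Params": "Params",
    "Metrics": "Metrics",
    "Artifacts": "Assets",
}

print(MLFLOW_TO_COMET_FIELDS["Artifacts"])  # → Assets
```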

Do I have to run this for future experiments?

No. The common pattern is to use Comet's Python SDK with MLFlow support in your MLFlow projects, which keeps all future experiment runs synchronized.


This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
