MLflow Integration (dependent on Surrogate Modelling) #120

Open
tim-huntley opened this issue Dec 6, 2023 · 0 comments · May be fixed by #123

tim-huntley commented Dec 6, 2023

Dependency

#83

What we want to achieve

We would like to integrate xplainable into the MLflow workflow. This involves the following behaviours (a rough sketch of the branching logic follows the list):

  • If the model is an xplainable model, the model is logged to MLflow and Xplainable Cloud (with the same experiment_id)
  • If the model is NOT an xplainable model, a surrogate model is created and logged to MLflow and Xplainable Cloud
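
A rough sketch of that branching is below. It assumes the xplainable model classes can be imported as shown and that an XClassifier can be fitted directly on the original model's predictions; both the import path and the fit signature are assumptions to confirm against the actual xplainable API.

from xplainable.core.models import XClassifier, XRegressor  # assumed import path

def log_with_xplainable(model, X):
    if isinstance(model, (XClassifier, XRegressor)):
        # Native xplainable model: log it as-is to MLflow and to
        # Xplainable Cloud under the same experiment_id.
        explainer = model
    else:
        # Any other model: fit an xplainable surrogate on the original
        # model's predictions first (see #83), then log that instead.
        explainer = XClassifier()           # or XRegressor, depending on the task
        explainer.fit(X, model.predict(X))  # assumed fit signature
    # <- log `explainer` to MLflow and Xplainable Cloud here
    return explainer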

Key considerations

This should be as non-invasive as possible. We don't want people to have to change their workflow drastically; ideally they just import xplainable and the explainer creation and logging are automated.

Flexibility in approach

Feel free to run with your own ideas here. We would love to discuss different approaches, as long as the disruption to the user workflow is minimal.

Vision 1

This idea involves implementing a manual logging step. Behind the scenes, it will train an xplainable surrogate model and log it to both MLflow and Xplainable Cloud (a rough sketch of those internals follows the example).

import xplainable as xp
import mlflow
from mlflow.models import infer_signature

# <- Build any model here

with mlflow.start_run():
    signature = infer_signature(X_train, model.predict(X_train))

    # <- other logging here

    # Use the flavour-specific call for your model type,
    # e.g. mlflow.sklearn.log_model for a scikit-learn model
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        signature=signature,
        input_example=X_train,
        registered_model_name="model-with-xplainable"
    )

    # This should log to MLflow and to Xplainable Cloud (if an API key is active)
    xp.mlflow.log_explanation(model.predict, X)
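
Behind the scenes, xp.mlflow.log_explanation could be little more than a wrapper that fits a surrogate on the supplied predict function and attaches the explanation data to the active run. A minimal sketch is below; the XClassifier import, its fit signature, and the feature_importances attribute are assumptions about the xplainable API, while mlflow.log_dict is an existing MLflow call.

import mlflow
import pandas as pd
from xplainable.core.models import XClassifier  # assumed import path

def log_explanation(predict_fn, X: pd.DataFrame, artifact_path="xplainable"):
    # Fit a surrogate explainer on the black-box model's predictions (assumed API)
    surrogate = XClassifier()
    surrogate.fit(X, pd.Series(predict_fn(X)))

    # Attach the explanation data to the active MLflow run as a JSON artifact
    mlflow.log_dict(
        {"feature_importances": surrogate.feature_importances},  # assumed attribute; must be JSON-serialisable
        f"{artifact_path}/feature_importances.json",
    )

    # If an Xplainable Cloud API key is active, the surrogate would also be
    # pushed to the cloud here, re-using the run's experiment_id
    return surrogate

For reference, MLflow's built-in mlflow.shap.log_explanation(predict_function, features) has the same call shape, which is what the proposed xp.mlflow.log_explanation mirrors.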

Vision 2

This is similar to Vision 1, but without the need to manually log explainers. It's not yet clear how this would be achieved, but it is an idea worth fleshing out (one possible patching-based mechanism is sketched after the example).

import xplainable as xp
import mlflow
from mlflow.models import infer_signature

# <- Build any model here

xp.mlflow.auto_logging = True

with mlflow.start_run():
    signature = infer_signature(X_train, model.predict(X_train))

    # <- other logging here

    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        signature=signature,
        input_example=X_train,
        registered_model_name="model-with-xplainable"
    )  # <-- This should auto-log to MLflow and to Xplainable Cloud (if an API key is active)
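
One way this could work without changing user code is to patch the flavour-specific log_model call when auto-logging is enabled. A rough sketch is below; it assumes the xp.mlflow.log_explanation API from Vision 1 and the xp.mlflow.auto_logging flag exist, while functools.wraps and mlflow.sklearn.log_model are existing Python/MLflow calls.

import functools
import mlflow.sklearn
import xplainable as xp

_original_log_model = mlflow.sklearn.log_model

@functools.wraps(_original_log_model)
def _log_model_and_explain(sk_model, *args, **kwargs):
    model_info = _original_log_model(sk_model, *args, **kwargs)
    input_example = kwargs.get("input_example")
    if input_example is not None:
        # Proposed Vision 1 API: trains and logs the surrogate explainer
        xp.mlflow.log_explanation(sk_model.predict, input_example)
    return model_info

def enable_auto_logging():
    # Would be triggered by setting xp.mlflow.auto_logging = True
    mlflow.sklearn.log_model = _log_model_and_explain

Other flavours (mlflow.xgboost, mlflow.pyfunc, and so on) could be wrapped the same way, or MLflow's existing autologging hooks could be investigated as a less intrusive alternative.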

lashdk added a commit to lashdk/xplainable-mlflow-integration that referenced this issue Jan 1, 2024