Comparing metrics with timestamp is not working if step is the same between them [BUG] #11982

Open
pradipneupane opened this issue May 13, 2024 · 4 comments
Labels: area/model-registry, area/tracking, area/uiux, bug


pradipneupane commented May 13, 2024

Issues Policy acknowledgement

  • I have read and agree to submit bug reports in accordance with the issues policy

Where did you encounter this bug?

Local machine

Willingness to contribute

No. I cannot contribute a bug fix at this time.

MLflow version

  • Client: 2.12.2

System information

  • Python version:

Describe the problem

When metrics are pushed to MLflow using both the timestamp and step parameters, comparing those metrics afterwards does not work.

Tracking information


Code to reproduce issue

import os
import time
from random import random, randint
from sklearn.linear_model import LinearRegression

import mlflow.sklearn
from mlflow import log_metric, log_param, log_artifacts
now = int(time.time()) * 1000
if __name__ == "__main__":
    # Log a parameter (key-value pair)
    log_param("param1", randint(0, 100))

    # Log a metric; metrics can be updated throughout the run
    log_metric("foo", random(), timestamp=now)

    model = LinearRegression()
    mlflow.sklearn.log_model(sk_model=model, artifact_path='model',
                             registered_model_name='he2')
    # Log an artifact (output file)
    if not os.path.exists("outputs"):
        os.makedirs("outputs")
    with open("outputs/test.txt", "w") as f:
        f.write("hello world!")
    log_artifacts("outputs")

Running the script above 2-3 times (creating 2-3 runs) is enough to reproduce. When comparing the resulting metric across those runs in the UI, the comparison defaults to a histogram plot, but when switching the comparison to relative time, the plot is blank.
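
For convenience, here is a minimal sketch (not part of the original report; the loop, run names, and sleep are illustrative) that creates three such runs in one go, each logging "foo" with the same step but a different timestamp:

import time
from random import random

import mlflow

if __name__ == "__main__":
    for i in range(3):
        with mlflow.start_run(run_name=f"repro-run-{i}"):
            # MLflow expects timestamps in milliseconds since the epoch
            now = int(time.time()) * 1000
            # Same step for every run, different timestamps: the situation in
            # which the relative-time comparison plot comes up blank
            mlflow.log_metric("foo", random(), step=1, timestamp=now)
        time.sleep(1)  # make sure the timestamps differ between runs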

Stack trace

No error is shown when logging the metrics; the problem only appears in the UI comparison view.

Other info / logs

What component(s) does this bug affect?

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

What interface(s) does this bug affect?

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

What language(s) does this bug affect?

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

What integration(s) does this bug affect?

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

harupy commented May 13, 2024

@pradipneupane Thanks for reporting this issue. Can you take a screen recording and attach it? If a screen recording is too large to attach, a screenshot would be helpful.


pradipneupane commented May 13, 2024

@harupy, I cannot attach a screen recording, but here are screenshots. By default, it goes to this:

[screenshot: default comparison view]

But when I manually remove that "foo" from there and try to plot, it is blank when I click:

[screenshot: blank comparison plot]

It should have plotted a line plot of time vs. metric, but it stays on the same previous plot, only blank. This is the data I downloaded from those experiments:

run_id,key,value,step,timestamp
715dd6b2d59a46acb2641be03192b943,foo,0.478320955,1,1715591506000
801726b785d64791962768121bae028b,foo,0.05083749,1,1715591519000
7622796bfe83442e80770fa6727c3144,foo,0.077082663,1,1715591543000
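
One way to confirm locally what the UI should be plotting (a sketch, not from the thread; the run IDs are the ones from the CSV above) is to read the metric history back through the MLflow client API; each run should return a single "foo" point with step 1 and a distinct timestamp:

from mlflow import MlflowClient

client = MlflowClient()
run_ids = [
    "715dd6b2d59a46acb2641be03192b943",
    "801726b785d64791962768121bae028b",
    "7622796bfe83442e80770fa6727c3144",
]
for run_id in run_ids:
    for m in client.get_metric_history(run_id, "foo"):
        print(run_id, m.key, m.value, m.step, m.timestamp)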


@mlflow/mlflow-team Please assign a maintainer and start triaging this issue.

@pradipneupane

@harupy hello, did you have time to check this issue? If it is a bug, and if you can verify it and point me to the right part of the source code, I can also try to create a PR and fix it. Also, is there any workaround for this?
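
One possible interim workaround, sketched below and not confirmed in this thread, is to pull the metric history with the client API and plot value against wall-clock time locally with matplotlib (the run IDs are taken from the CSV above; everything else is illustrative):

from datetime import datetime

import matplotlib.pyplot as plt
from mlflow import MlflowClient

client = MlflowClient()
run_ids = [
    "715dd6b2d59a46acb2641be03192b943",
    "801726b785d64791962768121bae028b",
    "7622796bfe83442e80770fa6727c3144",
]
for run_id in run_ids:
    history = client.get_metric_history(run_id, "foo")
    # MLflow timestamps are milliseconds since the epoch
    times = [datetime.fromtimestamp(m.timestamp / 1000) for m in history]
    values = [m.value for m in history]
    plt.plot(times, values, marker="o", label=run_id[:8])

plt.xlabel("time")
plt.ylabel("foo")
plt.legend()
plt.show()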
