
Uncap the torchmetrics version, so we can get fixes in 1.0+. #3741

Merged

2 commits merged into ludwig-ai:master on Oct 20, 2023

Conversation

trevenrawr
Contributor

While trying to figure out why my air-gapped installation and deployment of Ludwig was failing to detect my cached copy of the punkt tokenizer for nltk, I came across Lightning-AI/torchmetrics#1779, which was merged into torchmetrics==1.0.0... but Ludwig was avoiding that version. So I forcibly installed torchmetrics==1.2.0 (the latest) in my image after installing Ludwig, ignored pip's words of caution, and managed to run some LoRA-based fine-tuning without a hitch. Since that code path worked fine with the updated library, I'm submitting this PR to remove the pin, leaning on CI to help identify whether there are issues on other code paths or whether it's truly safe to release.
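
For context, here is a minimal sketch of what "detecting a cached copy of the punkt tokenizer" boils down to in an air-gapped environment. This is illustrative only, not the torchmetrics#1779 change itself:

```python
import nltk

# Illustrative only: in an air-gapped image, nltk must resolve "punkt" from a
# local data path (e.g. under NLTK_DATA); any attempt to download it will fail.
try:
    nltk.data.find("tokenizers/punkt")
    print("punkt tokenizer found in the local nltk data path")
except LookupError:
    print("punkt tokenizer not cached; a download would be attempted and fail offline")
```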

@justinxzhao
Collaborator

Thanks @trevenrawr! Let's see what the CI says -- I recall we had to pin originally to mitigate some test failures, though perhaps the test failures are reasonable to fix now.

@github-actions

github-actions bot commented Oct 19, 2023

Unit Test Results

6 files ±0, 6 suites ±0, 22m 25s ⏱️ (+1m 41s)

|       | total | passed ✔️ | skipped 💤 | failed |
|-------|-------|-----------|------------|--------|
| tests | 12 ±0 | 7 (-2)    | 5 (+2)     | 0 ±0   |
| runs  | 60 ±0 | 30 (-12)  | 30 (+12)   | 0 ±0   |

Results for commit 248f8d8. ± Comparison against base commit f98b8f6.

This pull request skips 2 tests.
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[ames_housing.gbm.yaml]
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[mercedes_benz_greener.gbm.yaml]

♻️ This comment has been updated with latest results.

@arnavgarg1
Contributor

Good news is that there's only one test failure! Specifically, in the combinatorial tests: test_number_encoder_loss[config3-dataset3].

I ran it again since I thought it might be transient, but it seems to be a persistent error that may require a bit of inspection.

@justinxzhao
Collaborator

Looks like my most recent commit fixed the broken test! I haven't seen anyone make use of RMSE kwargs, so I'm inclined to check this in as is.

Metrics unit tests look good.
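
For context on the "RMSE kwargs" mentioned above, here is a minimal sketch of how an RMSE metric is typically backed by torchmetrics (MeanSquaredError with squared=False). It is illustrative only and not necessarily Ludwig's exact metric wrapper; the "kwargs" in question would be extra keyword arguments forwarded to this constructor, which this example does not pass:

```python
import torch
from torchmetrics import MeanSquaredError

# Illustrative sketch (not Ludwig's actual wrapper): torchmetrics computes RMSE
# as MeanSquaredError with squared=False, i.e. the square root of the MSE.
rmse = MeanSquaredError(squared=False)

preds = torch.tensor([2.5, 0.0, 2.0, 8.0])
target = torch.tensor([3.0, -0.5, 2.0, 7.0])

rmse.update(preds, target)
print(rmse.compute())  # tensor(0.6124)
```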

@justinxzhao merged commit 77259c7 into ludwig-ai:master on Oct 20, 2023
18 checks passed