
Add skip_all_evaluation as a mechanic to skip all evaluation. #3543

Merged — 5 commits merged into master from skip_all_evaluation on Aug 28, 2023

Conversation

justinxzhao
Collaborator

Evaluating a model, especially on large validation or test sets, can be time-consuming. This PR adds a `skip_all_evaluation` parameter to skip all evaluation.

The parameter is also useful when you are training a model with a well-known configuration on a well-known dataset and are confident about the expected results, or when you plan to evaluate the model separately, outside of Ludwig.
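Based on the PR title, the new flag would presumably be set in the Ludwig config alongside the trainer options. A minimal sketch (the exact section and surrounding feature definitions are assumptions; check the Ludwig configuration docs for your version):

```yaml
# Hypothetical Ludwig config sketch. The placement of skip_all_evaluation
# under `trainer` is an assumption based on this PR's title; the input and
# output features are placeholders.
input_features:
  - name: text
    type: text
output_features:
  - name: label
    type: category
trainer:
  epochs: 10
  skip_all_evaluation: true  # skip evaluation on validation/test sets entirely
```

With a setting like this, training would run end to end without the periodic evaluation passes, which is the time saving motivated above.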

@github-actions

github-actions bot commented Aug 23, 2023

Unit Test Results

  6 files ±0,  6 suites ±0 — 1h 14m 3s ⏱️ (−6m 33s)
  34 tests ±0: 29 ✔️ ±0,  5 💤 ±0, 0 ❌ ±0
  88 runs  ±0: 72 ✔️ ±0, 16 💤 ±0, 0 ❌ ±0

Results for commit af3b1db. ± Comparison against base commit 8d4c96b.


@justinxzhao justinxzhao merged commit f34c272 into master Aug 28, 2023
13 of 16 checks passed
@justinxzhao justinxzhao deleted the skip_all_evaluation branch August 28, 2023 15:55