
Commit

Merge pull request #687 from mv1388/docstring_addition
Add parameter descriptions
mv1388 committed Jul 8, 2022
2 parents 245d94f + c73cd55 commit 93a3e11
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions aitoolbox/torchtrain/callbacks/performance_eval.py
@@ -25,10 +25,10 @@ def __init__(self, result_package, args,
     Args:
         result_package (aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage):
-        args (dict):
+        args (dict): used hyper-parameters
         on_each_epoch (bool): calculate performance results just at the end of training or at the end of each epoch
-        on_train_data (bool):
-        on_val_data (bool):
+        on_train_data (bool): should the evaluation be done on the training dataset
+        on_val_data (bool): should the evaluation be done on the validation dataset
         eval_frequency (int or None): evaluation is done every specified number of epochs. Useful when predictions
             are quite expensive and are slowing down the overall training
         if_available_output_to_project_dir (bool): if using train loop version which builds project local folder
@@ -38,7 +38,7 @@ def __init__(self, result_package, args,
             the result_package's output folder shouldn't be full path but just the folder name and the full folder
             path pointing inside the corresponding project folder will be automatically created.
             If such a functionality should to be prevented and manual full additional metadata results dump folder
-            is needed potentially outside the project folder, than set this argument to False and
+            is needed potentially outside the project folder, then set this argument to False and
             specify a full folder path.
     """
     AbstractCallback.__init__(self, 'Model performance calculator - evaluator')
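For readers unfamiliar with this callback, the sketch below illustrates how the parameters documented above might be passed when constructing it. This is a minimal, hypothetical usage sketch: the class name ModelPerformanceEvaluation and the ClassificationResultPackage import are assumptions based on the package layout and do not appear in this diff; only the constructor parameters are taken from the docstring shown above.

    # Hypothetical usage sketch: the class and result-package names below are
    # assumptions; only the constructor arguments mirror the docstring in this diff.
    from aitoolbox.torchtrain.callbacks.performance_eval import ModelPerformanceEvaluation
    from aitoolbox.experiment.result_package.basic_packages import ClassificationResultPackage

    hyperparams = {'lr': 1e-3, 'batch_size': 64}  # the `args` dict of used hyper-parameters

    performance_eval_cb = ModelPerformanceEvaluation(
        result_package=ClassificationResultPackage(),  # any AbstractResultPackage subclass
        args=hyperparams,
        on_each_epoch=True,    # evaluate at the end of every epoch, not only after training
        on_train_data=False,   # skip evaluation on the training dataset
        on_val_data=True,      # evaluate on the validation dataset
        eval_frequency=5       # run the (potentially expensive) evaluation only every 5th epoch
    )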

