This repository has been archived by the owner on Jun 22, 2022. It is now read-only.
PyTorch model is never saved as checkpoint after first epoch #48
Comments
Thanks @apyskir
apyskir added a commit to apyskir/steps that referenced this issue on May 16, 2018
kamil-kaczmarek pushed a commit that referenced this issue on May 17, 2018
Merged in #52.
kamil-kaczmarek pushed a commit that referenced this issue on May 23, 2018
(Squashed merge-commit message listing many unrelated changes; the entries relevant to this issue are: "fix issues #48 and #49 (#52)", "use of setdefault method", and "fix according to Kamil's request".)
Look here:
https://github.com/minerva-ml/gradus/blob/dev/steps/pytorch/callbacks.py#L266
If `self.epoch_id` is equal to 0, then `loss_sum` is equal to `self.best_score`, so the model is not saved. I think this should be fixed, because sometimes we want the model from the first epoch to be saved.
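The failure mode can be sketched without PyTorch: if `best_score` is seeded from the first epoch's loss, the "improved?" comparison at epoch 0 is `loss_sum < loss_sum`, which is false, so no checkpoint is written for the first epoch. This is a minimal sketch assuming the names from the linked callback (`epoch_id`, `best_score`, `loss_sum`); the `ModelCheckpoint` class and `saved_epochs` list are hypothetical stand-ins, not the actual steppy code.

```python
class ModelCheckpoint:
    """Hypothetical sketch of the checkpoint callback's epoch-end logic."""

    def __init__(self):
        self.epoch_id = 0
        self.best_score = None
        self.saved_epochs = []  # stands in for torch.save(...) calls

    def on_epoch_end_buggy(self, loss_sum):
        # Buggy version: best_score is seeded with the first loss, so at
        # epoch 0 the comparison is `loss_sum < loss_sum` -> False, and
        # the model after the first epoch is never saved.
        if self.epoch_id == 0:
            self.best_score = loss_sum
        if loss_sum < self.best_score:
            self.best_score = loss_sum
            self.saved_epochs.append(self.epoch_id)
        self.epoch_id += 1

    def on_epoch_end_fixed(self, loss_sum):
        # Fixed version: always save on the first epoch, then save
        # whenever the score improves.
        if self.epoch_id == 0 or loss_sum < self.best_score:
            self.best_score = loss_sum
            self.saved_epochs.append(self.epoch_id)
        self.epoch_id += 1
```

With losses `[1.0, 0.5]`, the buggy callback saves only at epoch 1, while the fixed one saves at epochs 0 and 1; if the loss never improves, the buggy callback saves nothing at all.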