Dec 17, 2018
Satisfy PyPI version linter

Released by @ottonemo on Dec 13, 2018

Version 0.5.0

Added

  • Basic usage notebook now runs on Google Colab
  • Advanced usage notebook now runs on Google Colab
  • MNIST with scikit-learn and skorch now runs on Google Colab
  • Better user-facing messages when the module or optimizer is re-initialized
  • Added an experimental API (net._register_virtual_param) to register "virtual"
    parameters on the network with custom setter functions. (#369)
  • Setting parameters lr, momentum, optimizer__lr, etc. no longer resets
    the optimizer. As of now you can do net.set_params(lr=0.03) or
    net.set_params(optimizer__param_group__0__momentum=0.86) without triggering
    a re-initialization of the optimizer; see the sketch after this list (#369)
  • Support for scipy sparse CSR matrices as input (as, e.g., returned by sklearn's
    CountVectorizer); note that they are cast to dense matrices during batching
  • Helper functions to build command line interfaces with almost no
    boilerplate, plus an example that shows their usage
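
A minimal sketch of the new set_params behavior; the module, data, and
hyperparameter values below are illustrative only:

import numpy as np
import torch
from torch import nn
from skorch import NeuralNetClassifier

# Toy module that outputs class probabilities.
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(20, 2)

    def forward(self, X):
        return torch.softmax(self.dense(X), dim=-1)

X = np.random.randn(100, 20).astype('float32')
y = np.random.randint(0, 2, size=100)

net = NeuralNetClassifier(MyModule, lr=0.1, max_epochs=5)
net.fit(X, y)

# Changing lr (or momentum, optimizer__lr, ...) now updates the running
# optimizer in place instead of re-initializing it, so optimizer state
# such as momentum buffers survives the change.
net.set_params(lr=0.03)
net.set_params(optimizer__param_group__0__momentum=0.86)
net.partial_fit(X, y)  # training continues with the updated settings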

Changed

  • Reduce overhead of BatchScoring when using train_loss_score or valid_loss_score by skipping a superfluous inference step (#381)
  • The on_grad_computed callback now receives a lazily evaluated iterable for named_parameters that is only consumed when actually used, reducing the run-time overhead of the call (#379)
  • Default fn_prefix in TrainEndCheckpoint is now train_end_ (#391); see the sketch after this list
  • Issues a warning when Checkpoint's monitor parameter is set to 'monitor' and the history contains '<monitor>_best' (#399)
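
The new default in one line; the file names in the comment are skorch's
defaults with the train_end_ prefix applied:

from skorch.callbacks import TrainEndCheckpoint

# With fn_prefix='train_end_' (the new default), this writes
# train_end_params.pt, train_end_optimizer.pt and train_end_history.json
# into 'exp1/' once training finishes.
cp = TrainEndCheckpoint(dirname='exp1')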

Fixed

  • Re-initialize optimizer when set_params is called with lr argument (#372)
  • Copying a SliceDict now returns a SliceDict instead of a dict (#388)
  • Calling == on SliceDicts now works as expected when values are numpy arrays and torch tensors

Released by @ottonemo on Oct 24, 2018

Version 0.4.0

Organisational

From now on we will organize a change log and document every change directly. If
you are a contributor, we encourage you to document your changes directly in the
change log when submitting a PR, to reduce friction when preparing new releases.

Added

  • Support for PyTorch 0.4.1
  • There is no need to explicitly name callbacks anymore (names are assigned automatically, name conflicts are resolved).
  • You can now access the training data in the on_grad_computed event
  • There is a new image segmentation example
  • Easily create toy network instances for quick experiments using skorch.toy.make_classifier and friends
  • New ParamMapper callback to modify/freeze/unfreeze parameters at certain points during training:
>>> from skorch.callbacks import Freezer, Unfreezer
>>> net = Net(module, callbacks=[Freezer('layer*.weight'), Unfreezer('layer*.weight', at=10)])
  • Refactored EpochScoring for easier sub-classing
  • Checkpoint callback now supports saving the optimizer; this avoids problems with stateful
    optimizers such as Adam or RMSprop (#360)
  • Added LoadInitState callback for easy continued training from checkpoints (#360); see the
    sketch after this list
  • NeuralNet.load_params now supports loading from Checkpoint instances
  • Added documentation for saving and loading highlighting the new features
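
A sketch combining the checkpoint features above, reusing the toy
MyModule, X, and y from the sketch in the 0.5.0 section; file names are
illustrative:

from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint, LoadInitState

cp = Checkpoint(
    monitor='valid_loss_best',
    f_params='params.pt',
    f_optimizer='optimizer.pt',  # optimizer state matters for Adam/RMSprop
    f_history='history.json',
)

# LoadInitState restores module, optimizer, and history from the
# checkpoint (if one exists) at the start of training, so a later
# net.fit(X, y) continues from the best checkpointed state.
net = NeuralNetClassifier(MyModule, callbacks=[cp, LoadInitState(cp)])
net.fit(X, y)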

Changed

  • The ProgressBar callback now determines the batches per epoch automatically by default (batches_per_epoch='auto')
  • The on_grad_computed event now has access to the current training data batch

Deprecated

  • Deprecated filtered_optimizer in favor of Freezer callback (#346)
  • NeuralNet.load_params and NeuralNet.save_params deprecate the f parameter in favor
    of f_optimizer, f_params and f_history (#360)

Removed

  • skorch.net.NeuralNetClassifier and skorch.net.NeuralNetRegressor are removed.
    Use from skorch import NeuralNetClassifier or skorch.NeuralNetClassifier instead.

Fixed

  • uses_placeholder_y should not require existence of y field (#311)
  • LR scheduler creates batch_idx on first run (#314)
  • Use OrderedDict for callbacks to fix python 3.5 compatibility issues (#331)
  • Make to_tensor work correctly with PackedSequence (#335)
  • Rewrite History to not use any recursion to avoid memory leaks during exceptions (#312)
  • Use flaky in some neural network tests to hide platform differences
  • Fixes ReduceLROnPlateau when mode == max (#363)
  • Fix disconnected weights between net and optimizer after copying the net with copy.deepcopy (#318)
  • Fix a bug that interfered with loading CUDA models when the model was a CUDA tensor but
    the net was configured to use the CPU (#354, #358)

Contributors

Again, we'd like to thank all the contributors for their awesome work.
Thank you:

  • Andrew Spott
  • Dave Hirschfeld
  • Scott Sievert
  • Sergey Alexandrov
  • Thomas Fan

Released by @ottonemo on Jul 26, 2018

Version 0.3.0

API changes

  • train_step is now split into train_step and train_step_single in order to support LBFGS; train_step_single takes the role of the typical inner training loop when writing PyTorch models (see the sketch after this list)
  • device parameter on skorch.dataset.Dataset is now deprecated
  • Checkpoint parameter target is deprecated in favor of f_params
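
A rough sketch of what the split enables; the signature shown follows
skorch's conventions and the exact details may differ slightly:

import torch
from skorch import NeuralNet

class MyNet(NeuralNet):
    def train_step_single(self, Xi, yi, **fit_params):
        # One forward/backward pass. train_step wraps this in a closure,
        # which is what lets optimizers like LBFGS re-evaluate the loss
        # several times within a single optimization step.
        self.module_.train()
        y_pred = self.infer(Xi, **fit_params)
        loss = self.get_loss(y_pred, yi, X=Xi, training=True)
        loss.backward()
        return {'loss': loss, 'y_pred': y_pred}

# With the split, LBFGS becomes a drop-in optimizer:
# net = MyNet(module, optimizer=torch.optim.LBFGS)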

Contributors

A big thanks to our contributors who helped make this release possible:

  • Andrew Spott
  • Scott Sievert
  • Sergey Alexandrov
  • Thomas Fan
  • Tomasz Pietruszka

Released by @ottonemo on May 4, 2018

Version 0.2.0

Features

  • PyTorch 0.4 support
  • Add GradNormClipping callback
  • Add generic learning rate scheduler callback
  • Add CyclicLR learning rate scheduler
  • Add WarmRestartLR learning rate scheduler
  • Scoring callbacks now re-use predictions, accelerating training
  • fit() and inference methods (e.g., predict()) now support torch.utils.data.Dataset as input as long as (X, y) pairs are returned
  • forward and forward_iter now allow you to specify on which device to store intermediate predictions
  • Support for setting optimizer param groups using wildcards (e.g., {'layer*.bias': {'lr': 0}})
  • Computed gradients can now be processed by callbacks using on_grad_computed
  • Support for fit_params parameter which gets passed directly to the module
  • Add skorch.helper.SliceDict so that you can use a dict as X with sklearn's GridSearchCV, etc. (see the sketch after this list)
  • Add Dockerfile
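
A sketch of SliceDict in action, assuming a module that takes two named
inputs; the dict keys are matched to the forward arguments, and sklearn's
GridSearchCV can slice the SliceDict along the sample axis like an array:

import numpy as np
import torch
from torch import nn
from sklearn.model_selection import GridSearchCV
from skorch import NeuralNetClassifier
from skorch.helper import SliceDict

class TwoInputModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(30, 2)

    def forward(self, X1, X2):
        # X1 and X2 arrive via the matching SliceDict keys.
        return torch.softmax(self.dense(torch.cat([X1, X2], dim=-1)), dim=-1)

X = SliceDict(
    X1=np.random.randn(100, 20).astype('float32'),
    X2=np.random.randn(100, 10).astype('float32'),
)
y = np.random.randint(0, 2, size=100)

net = NeuralNetClassifier(TwoInputModule, max_epochs=2)
gs = GridSearchCV(net, {'lr': [0.01, 0.1]}, cv=3, scoring='accuracy')
gs.fit(X, y)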

API changes

  • Deprecated use_cuda parameter in favor of device parameter (see the sketch after this list)
  • skorch.utils.to_var is gone in favor of skorch.utils.to_tensor
  • training_step and validation_step now return a dict with the loss and the module's prediction
  • predict and predict_proba now handle multiple outputs by assuming the first output to be the prediction
  • NeuralNetClassifier now only takes log of prediction if the criterion is set to NLLLoss
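
Two of these changes in a minimal sketch; the Sequential modules are
illustrative stand-ins:

from torch import nn
from skorch import NeuralNetClassifier

# use_cuda=True/False is replaced by device='cuda'/'cpu'.
net = NeuralNetClassifier(
    nn.Sequential(nn.Linear(20, 2), nn.Softmax(dim=-1)),
    device='cpu',
)

# With the default NLLLoss criterion, skorch now takes the log of the
# module's probability output itself; with another criterion such as
# CrossEntropyLoss, the module's raw output (logits) is used as-is.
net_ce = NeuralNetClassifier(
    nn.Linear(20, 2),  # outputs raw logits, no softmax
    criterion=nn.CrossEntropyLoss,
    device='cpu',
)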

Examples

  • RNN sentiment classification

Contributors

A big thanks to our contributors who helped make this release possible:

  • Felipe Ribeiro
  • Grzegorz Rygielski
  • Juri Paern
  • Thomas Fan
Dec 8, 2017
Prepare 0.1.0