
NNI v2.7 Release

@J-shang J-shang released this 18 Apr 13:06
Commit: 1546962

Documentation

A full-scale upgrade of the documentation, with significant improvements to the reading experience, practical tutorials, and examples:

Hyper-Parameter Optimization

  • [Improvement] The TPE and random tuners no longer generate duplicate hyperparameters.
  • [Improvement] Most Python APIs now have type annotations.
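The duplicate-avoidance improvement above can be pictured with a small stand-alone sketch: the tuner remembers every parameter set it has issued and resamples until it finds an unseen one. This is an illustration of the idea only; the `DedupRandomTuner` class below is hypothetical and is not NNI's actual tuner code.

```python
import random

class DedupRandomTuner:
    """Illustrative random tuner that never yields the same
    hyper-parameter set twice (hypothetical, not NNI's code)."""

    def __init__(self, search_space, seed=0):
        self.search_space = search_space      # name -> list of candidate values
        self.rng = random.Random(seed)
        self.seen = set()                     # parameter sets already issued

    def generate_parameters(self):
        # Refuse to loop forever once the space is exhausted.
        total = 1
        for choices in self.search_space.values():
            total *= len(choices)
        if len(self.seen) >= total:
            raise RuntimeError('search space exhausted')
        # Resample until an unseen combination is found.
        while True:
            params = {k: self.rng.choice(v) for k, v in self.search_space.items()}
            key = tuple(sorted(params.items()))
            if key not in self.seen:
                self.seen.add(key)
                return params

space = {'lr': [0.1, 0.01, 0.001], 'batch_size': [32, 64]}
tuner = DedupRandomTuner(space)
trials = [tuner.generate_parameters() for _ in range(6)]  # all 6 combinations, no repeats
```

With a 3 × 2 search space, six consecutive calls enumerate every combination exactly once; a seventh call would raise.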

Neural Architecture Search

  • Jointly search for architecture and hyper-parameters: ValueChoice in evaluator. (doc)
  • Support composition (transformation) of one or several value choices. (doc)
  • Enhanced Cell API (merge_op, preprocessor, postprocessor). (doc)
  • The argument depth in the Repeat API allows ValueChoice. (doc)
  • Support loading state_dict between sub-net and super-net. (doc, example in spos)
  • Support BN fine-tuning and evaluation in SPOS example. (doc)
  • [Experimental] Model hyper-parameter choice. (doc)
  • [Preview] A Lightning implementation for Retiarii, including DARTS, ENAS, ProxylessNAS, and RandomNAS. (example usage)
  • [Preview] A search space hub containing 10 search spaces. (code)
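The value-choice composition mentioned above can be thought of as a small lazy expression: arithmetic on choices builds a deferred computation that is only evaluated once a concrete sample is drawn. A minimal sketch, using a hypothetical `Choice` class (NNI's actual `ValueChoice` lives in `nni.retiarii.nn.pytorch` and is far richer than this):

```python
class Choice:
    """Hypothetical stand-in for a value choice; composing choices
    with arithmetic builds a lazy expression (not NNI's code)."""

    def __init__(self, candidates, label):
        self.candidates = candidates
        self.label = label

    def resolve(self, sample):
        # Look up the concrete value chosen for this label.
        return sample[self.label]

    def __mul__(self, other):
        # Composition: the product is itself a resolvable expression.
        return Expr(lambda s: self.resolve(s) * _resolve(other, s))

class Expr:
    """A composed (transformed) choice, resolved lazily per sample."""
    def __init__(self, fn):
        self.fn = fn
    def resolve(self, sample):
        return self.fn(sample)

def _resolve(x, sample):
    # Constants pass through; choices and expressions are resolved.
    return x.resolve(sample) if hasattr(x, 'resolve') else x

width = Choice([16, 32], label='width')
ratio = Choice([2, 4], label='ratio')
hidden = width * ratio          # a composed choice, evaluated per sample
print(hidden.resolve({'width': 32, 'ratio': 4}))  # 128
```

Each sampled architecture supplies concrete values for `width` and `ratio`, and the composed expression resolves to a per-sample layer size.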

Model Compression

  • Pruning V2 has been promoted to the default pruning framework; the old pruning framework is now legacy and will be kept for a few more releases. (doc)
  • A new pruning mode, balance, is supported in LevelPruner. (doc)
  • Coarse-grained pruning is now supported in ADMMPruner. (doc)
  • [Improvement] Support more operation types in pruning speedup.
  • [Improvement] Optimize performance of some pruners.
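For context on the pruning items above, level (magnitude) pruning keeps the largest-magnitude weights and masks out the rest. A self-contained sketch in plain Python; this is an assumed simplification for illustration, not NNI's LevelPruner, which operates on torch tensors and modules:

```python
def level_prune_mask(weights, sparsity):
    """Return a 0/1 mask that zeroes out the smallest-magnitude
    weights so that roughly `sparsity` of them are pruned.
    (Illustrative stand-in, not NNI's LevelPruner.)"""
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest absolute values.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(order[:n_prune])
    return [0 if i in pruned else 1 for i in range(len(weights))]

w = [0.5, -0.1, 0.05, 2.0, -0.8, 0.02]
mask = level_prune_mask(w, sparsity=0.5)   # prune half of the weights
print(mask)  # [1, 0, 0, 1, 1, 0]
```

Applying the mask element-wise to the weights zeroes the three smallest-magnitude entries while the large ones survive.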

Experiment

  • [Improvement] Experiment.run() no longer stops web portal on return.
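The change above means Experiment.run() returns control to the caller while the web portal keeps serving, and shutting it down becomes an explicit call. The behavior can be modeled with a toy class (a hypothetical sketch using threading, not NNI's Experiment implementation):

```python
import threading
import time

class PortalSim:
    """Toy model of the new behaviour: run() returns, but the
    'web portal' thread keeps running until stop() is called.
    (Illustrative only -- not NNI's Experiment class.)"""

    def __init__(self):
        self._stop = threading.Event()
        self._thread = None

    def run(self):
        # Start the portal in the background and return immediately,
        # mirroring run() no longer stopping the portal on return.
        self._thread = threading.Thread(target=self._serve, daemon=True)
        self._thread.start()

    def _serve(self):
        while not self._stop.is_set():
            time.sleep(0.01)

    def running(self):
        return self._thread is not None and self._thread.is_alive()

    def stop(self):
        # Shutdown is now an explicit, separate call.
        self._stop.set()
        self._thread.join()

portal = PortalSim()
portal.run()
print(portal.running())  # True: the portal survives run() returning
portal.stop()
print(portal.running())  # False only after an explicit stop()
```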

Notable Bugfixes

  • Fixed: the experiment list could not open experiments with a prefix.
  • Fixed: the serializer failed on complex kinds of arguments.
  • Fixed: several typos in code. (thanks @a1trl9 @mrshu)
  • Fixed: a cross-layer dependency issue in pruning speedup.
  • Fixed: unchecking a trial did not work in the detail table.
  • Fixed: filtering by name or ID did not work on the experiment management page.