
0.26.0

@TimDettmers released this on 29 Nov 18:10

This release contains important bug fixes for the StableEmbedding layer and introduces two new optimizers, AdaGrad and AdamW. It also adds a new, lightweight embedding class, bnb.nn.Embedding, which uses 32-bit optimizer states but no layer norm. This layer makes it easy to fine-tune pretrained models that do not use an embedding layer norm. Now available on pip.
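
As a rough illustration of how the new optimizers slot in, here is a minimal sketch (not taken from the release; the toy model, the hyperparameters, and the AdamW8bit/Adagrad8bit class names are assumptions based on the library's usual naming):

    import torch
    import bitsandbytes as bnb

    model = torch.nn.Linear(1024, 1024).cuda()

    # 8-bit block-wise AdamW (weight decay defaults to 1e-2, see the changelog below)
    optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-3)
    # or Adagrad, without gradient clipping:
    # optimizer = bnb.optim.Adagrad8bit(model.parameters(), lr=1e-2)

    for _ in range(10):
        loss = model(torch.randn(16, 1024, device="cuda")).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()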

Changelog

Features:

  • Added Adagrad (without gradient clipping) as a 32-bit and an 8-bit block-wise optimizer.
  • Added AdamW (a copy of Adam with a default weight decay of 1e-2). #10
  • Introduced ModuleConfig overrides, which can be used seamlessly at module initialization time.
  • Added the bnb.nn.Embedding layer, which uses 32-bit optimizer states but no layer norm. This works well for fine-tuning pretrained models that do not have an embedding layer norm; see the sketch after this list. #19
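
A minimal sketch of swapping the new embedding class into a pretrained model (the vocabulary size, hidden size, and the stand-in pretrained module are placeholders):

    import torch
    import bitsandbytes as bnb

    vocab_size, hidden_size = 50257, 768

    # bnb.nn.Embedding keeps the torch.nn.Embedding interface but adds no layer norm,
    # so pretrained embedding weights load without architectural changes; its weight
    # still gets 32-bit optimizer states even under an 8-bit optimizer.
    emb = bnb.nn.Embedding(vocab_size, hidden_size)

    pretrained = torch.nn.Embedding(vocab_size, hidden_size)  # stand-in for a real checkpoint
    emb.load_state_dict(pretrained.state_dict())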

Bug fixes:

  • Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
  • Fixed an unsafe use of eval. #8
  • Fixed a bug where the StableEmbedding layer's 32-bit optimizer override would not work without first registering the whole model (bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())); see the sketch after this list. #13 #15
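
For reference, a minimal sketch of the GlobalOptimManager override pattern referenced above, assuming the register_parameters/override_config API described in the project README (the model and hyperparameters are placeholders):

    import torch
    import bitsandbytes as bnb

    model = torch.nn.Sequential(
        torch.nn.Embedding(50257, 768),
        torch.nn.Linear(768, 768),
    )

    mng = bnb.optim.GlobalOptimManager.get_instance()
    mng.register_parameters(model.parameters())  # register while parameters are still on the CPU

    model = model.cuda()
    optimizer = bnb.optim.Adam(model.parameters(), lr=1e-3, optim_bits=8)  # 8-bit states everywhere ...

    # ... except the embedding weight, which is overridden to 32-bit optimizer states
    mng.override_config(model[0].weight, 'optim_bits', 32)

With this release's fix, bnb.nn.StableEmbedding applies an equivalent 32-bit override internally, so registering the whole model first is no longer required.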

Docs:

  • Added instructions on how to solve "__fatbinwrap_" errors.