[chore] 0.4.6 release (#953)
* [chore] 0.4.6 release

* added the third party libs removed by precommit
tmarkstrum committed Mar 9, 2022
1 parent 8fa26ae commit 3e36cd0
Showing 2 changed files with 12 additions and 1 deletion.
11 changes: 10 additions & 1 deletion CHANGELOG.md
@@ -5,8 +5,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 
-## [0.4.6] - TBD
+## [0.4.7] - TBD
+
+### Added
+
+### Fixed
+
+
+## [0.4.6] - 2022-03-08
 
 ### Added
 - CosFace's LMCL is added to MEVO. This is a loss function that is suitable
@@ -18,6 +24,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
   if set, wraps the root module regardless of how many unwrapped params there were
   left after children were wrapped. [#930]
 - FSDP: Add support for saving optimizer state when using expert replicas with FSDP.
+- OSS: Add a new arg "forced_broadcast_object" to OSS __init__ to apply "_broadcast_object"
+  for rebuilding the sharded optimizer. [#937]
+- FSDP: Add an arg disable_reshard_on_root for FSDP __init__ [#878]
 
 ### Fixed
 - FSDP: fixed handling of internal states with state_dict and load_state_dict
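The two constructor arguments added in this hunk are plain keyword flags. Below is a minimal sketch of how they might be passed, assuming a single-process gloo group only so the sharded wrappers can be constructed; the module shapes, learning rate, and rendezvous address are illustrative, and only the `forced_broadcast_object` and `disable_reshard_on_root` names come from the changelog entries above.

```python
import torch
import torch.distributed as dist
from fairscale.optim.oss import OSS
from fairscale.nn import FullyShardedDataParallel as FSDP

# OSS and FSDP both shard state across ranks, so a process group must exist
# first; a single-process gloo group is enough to construct the wrappers.
dist.init_process_group(
    backend="gloo", init_method="tcp://localhost:29500", rank=0, world_size=1
)

# forced_broadcast_object=True asks OSS to use _broadcast_object when
# rebuilding the sharded optimizer, per the [#937] entry above.
opt_model = torch.nn.Linear(16, 4)
optimizer = OSS(
    params=opt_model.parameters(),
    optim=torch.optim.SGD,
    lr=0.01,  # illustrative; trailing kwargs are forwarded to the wrapped SGD
    forced_broadcast_object=True,
)

# disable_reshard_on_root=True skips resharding the root module's parameters
# after backward, per the [#878] entry above.
fsdp_model = FSDP(torch.nn.Linear(16, 4), disable_reshard_on_root=True)
```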
2 changes: 2 additions & 0 deletions README.md
@@ -27,6 +27,8 @@ FairScale was designed with the following values in mind:
 
 ## What's New:
 
+* March 2022 [fairscale 0.4.6 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.6).
+  * We have support for CosFace's LMCL in MEVO. This is a loss function that is suitable for a large number of prediction target classes.
 * January 2022 [fairscale 0.4.5 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.5).
   * We have experimental support for layer wise gradient scaling.
   * We enabled reduce_scatter operation overlapping in FSDP backward propagation.
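For background on the CosFace entry above: the large margin cosine loss (LMCL) takes cosine similarities between L2-normalized embeddings and class weights, subtracts a fixed margin from the target class, and rescales before cross-entropy. The sketch below implements the published formula only, not MEVO's actual interface; the `lmcl_loss` name and the scale/margin defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def lmcl_loss(embeddings, weight, targets, scale=30.0, margin=0.35):
    # Cosine similarity between L2-normalized embeddings (B, D) and
    # L2-normalized class weights (C, D) -> logits of shape (B, C).
    cosine = F.normalize(embeddings) @ F.normalize(weight).t()
    # Subtract the margin from the target-class cosine only, then rescale.
    one_hot = F.one_hot(targets, num_classes=weight.size(0)).to(cosine.dtype)
    logits = scale * (cosine - margin * one_hot)
    return F.cross_entropy(logits, targets)

# Toy usage: 8 samples, 32-dim embeddings, 1000 prediction target classes.
emb = torch.randn(8, 32)
w = torch.randn(1000, 32)
y = torch.randint(0, 1000, (8,))
print(lmcl_loss(emb, w, y))
```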
