
Releases: ifsheldon/stannum

Minor fixes

11 Jan 11:07

This release only contains minor fixes that do not change the behavior of stannum.

v0.9.0: Major API change and Dynamic Dimension Calculation

29 Dec 20:06

Super excited to announce our new stannum!

Here are two major changes to Tube:

  • To avoid potential confusion, negative indices in Tube for don't-care (any) dims and matching dims, and None for the batch dim, are deprecated. Instead, use the explicit stannum.AnyDim, stannum.MatchDim(dim_id) and stannum.BatchDim from 0.9 onward; see the sketch below.
  • DimensionCalculator has been added to enable dynamic dimension calculation in Tube, which is super useful in cases like convolution.

The documentation is updated as well. Check it out!
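
Here is a minimal sketch of the new dimension markers, assuming the chained register_input_tensor / register_output_tensor / register_kernel API from the documentation; exact signatures may differ:

```python
import torch
import taichi as ti
from stannum import Tube, BatchDim, MatchDim  # explicit dim markers since v0.9

ti.init(arch=ti.cpu)

@ti.kernel
def scale(src: ti.template(), dst: ti.template()):
    for i in ti.grouped(src):
        dst[i] = src[i] * 2.0

# Dim specification sketch: BatchDim replaces the old None, and MatchDim(0)
# replaces a shared negative index; the register_* call names follow the
# documentation and may differ in detail.
tube = (
    Tube(torch.device("cpu"))
    .register_input_tensor((BatchDim, MatchDim(0)), torch.float32, "src", False)
    .register_output_tensor((BatchDim, MatchDim(0)), torch.float32, "dst", True)
    .register_kernel(scale, ["src", "dst"])
    .finish()
)

out = tube(torch.rand(4, 16))  # a batch of 4 vectors, each of length 16
```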

v0.8: Bug fix and API change

28 Dec 11:40

Since last release:

  • A bug has been fixed: if kernel extra arguments are updated via set_kernel_extra_args (once or multiple times) after a forward computation, the backward computation gets messed up because the kernel inputs are inconsistent between the forward and backward passes.
  • The APIs of Tin and EmptyTin have changed: the constructors now require auto_clear_grad to be specified, as a reminder that gradients of fields must be handled carefully so as not to get incorrect gradients after multiple runs of Tin or EmptyTin layers. See the sketch below.
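
Here is a minimal sketch of the new constructor requirement, assuming the EmptyTin registration API from the README; exact call names may differ:

```python
import torch
import taichi as ti
from stannum import EmptyTin

ti.init(arch=ti.cpu)

src = ti.field(ti.f32, shape=16, needs_grad=True)
dst = ti.field(ti.f32, shape=16, needs_grad=True)

@ti.kernel
def double():
    for i in src:
        dst[i] = src[i] * 2.0

# auto_clear_grad must now be passed explicitly so that field gradients are
# cleared between runs; the registration call names follow the README and
# may differ in detail.
tin = (
    EmptyTin(torch.device("cpu"), auto_clear_grad=True)
    .register_kernel(double)
    .register_input_field(src)
    .register_output_field(dst)
    .finish()
)

y = tin(torch.rand(16, requires_grad=True))
y.sum().backward()
```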

v0.7.0: Performance improvement along with new Taichi release

20 Sep 14:53

Nothing big has changed in the code base of stannum, but since the Taichi developers have delivered a long-awaited performance improvement, I want to urge everyone using stannum to update the Taichi they use to 1.1.3. Warnings and documentation have also been added to help stannum users understand this important upstream update.

v0.6.4: Compatibility changes

10 Aug 17:25

Prepared stannum for Taichi v1.1.0, which is expected to contain a lot of refactoring and API changes.

v0.6.3: Fixed error due to upstream API changes

27 Jun 12:20

v0.6.2: Reduce overhead in forward-only mode

21 Mar 12:30

Introduced a configuration option enable_backward in Tube. When enable_backward is False, Tube eagerly recycles Taichi memory by destroying the SNodeTree right after the forward calculation. This should improve the performance of forward-only calculations and mitigate Taichi's memory problem in forward-only mode.
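
Here is a minimal forward-only sketch, assuming enable_backward is a keyword argument of the Tube constructor and the usual registration chain; exact signatures may differ:

```python
import torch
import taichi as ti
from stannum import Tube

ti.init(arch=ti.cpu)

@ti.kernel
def copy_values(src: ti.template(), dst: ti.template()):
    for i in src:
        dst[i] = src[i]

# Forward-only sketch: with enable_backward=False the SNodeTree is destroyed
# right after the forward pass, so no gradients can flow through this Tube.
tube = (
    Tube(torch.device("cpu"), enable_backward=False)
    .register_input_tensor((16,), torch.float32, "src", False)
    .register_output_tensor((16,), torch.float32, "dst", False)
    .register_kernel(copy_values, ["src", "dst"])
    .finish()
)

with torch.no_grad():
    out = tube(torch.rand(16))
```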

Overhead reduction and auto-batching requirement update

08 Mar 14:31
  • #7 is fixed because upstream Taichi has fixed the uninitialized memory problem in 0.9.1
  • Intermediate fields are now required to be batched if any input tensors are batched

Tube eager mode and fixes

23 Feb 19:53

Persistent mode and Eager mode of Tube

Before v0.5.0, the Taichi fields created in Tube were persistent, with a lifetime like:
PyTorch upstream tensors -> Tube -> create fields -> forward pass -> copy values to downstream tensors -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed -> destroy fields

They are so-called persistent fields because they persist for as long as the Autograd compute graph is alive.

Now in v0.5.0, we introduce an eager mode of Tube. With persistent_fields=False when instantiating a Tube, eager mode is turned on, in which the lifetime of fields is like:
PyTorch upstream tensors -> Tube -> create fields -> forward pass -> copy values to downstream tensors -> destroy fields -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed

Zooming in on the optional backward pass: since we have destroyed the fields that stored values in the forward pass, we need to re-allocate new fields when calculating gradients, so the backward pass looks like:
Downstream gradients -> Tube -> create fields and load values -> load downstream gradients to fields -> backward pass -> copy gradients to tensors -> destroy fields -> upstream PyTorch gradient calculation

This introduces some overhead but may be faster on "old" Taichi (any Taichi version that has not merged taichi-dev/taichi#4356). For details, please see this PR. At the time of the v0.5.0 release, stable Taichi had not merged it.
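
Here is a minimal sketch of switching between the two modes; the default value shown for persistent_fields is an assumption based on the pre-v0.5.0 behavior described above:

```python
import torch
from stannum import Tube

cpu = torch.device("cpu")

# Persistent mode: fields live until the Autograd compute graph is destroyed
# (assumed to be the default, matching pre-v0.5.0 behavior).
persistent_tube = Tube(cpu, persistent_fields=True)

# Eager mode: fields are destroyed right after values are copied to the
# downstream tensors and are re-created on demand for the backward pass.
eager_tube = Tube(cpu, persistent_fields=False)
```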

Compatibility issue fixes

At the time we released v0.5.0, Taichi had been undergoing heavy refactoring, so we introduced many small fixes to deal with the incompatibilities caused by that refactoring. If you find compatibility issues, feel free to submit issues and make PRs.

v0.4.4: Many fixes

21 Feb 09:50

Fixed many problems due to Taichi changes and bugs: