
Update setup and changelog for v0.2. (#49)
egrefen committed Apr 30, 2020
1 parent f06986c commit 772717e
Showing 2 changed files with 43 additions and 1 deletion.
42 changes: 42 additions & 0 deletions CHANGELOG.md
Changelog
=========

Version 0.2
-----------
New:
- Patched model parameters can be modified directly, while still tracking
updates for the purpose of computing higher-order gradients. This allows patterns like:
```python
import higher

# `model` is any nn.Module with a `linear` submodule;
# `some_differentiable_function` stands in for any differentiable op.
fmodel = higher.monkeypatch(model)
weight = fmodel.linear.weight
new_weight = some_differentiable_function(weight)
fmodel.linear.weight = new_weight
```
- Support calling submodules of a patched module directly, e.g.:
```python
import torch
import torch.nn as nn
import higher

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.submodule = nn.Linear(3, 4)

    def forward(self, inputs):
        return self.submodule(inputs)

model = Model()
fmodel = higher.monkeypatch(model)
inputs = torch.rand(2, 3)

# The patched module and its patched submodules produce the same outputs
# as their unpatched counterparts.
models = (model, fmodel, model.submodule, fmodel.submodule)
for m1 in models:
    for m2 in models:
        assert torch.equal(m1(inputs), m2(inputs))
```
- Add a `track_higher_grads` property to patched modules, allowing them to behave like normal (unpatched) modules at test time. This makes their performance roughly equivalent to running the unpatched module, reducing the need for special-case code in test loops. A minimal sketch follows below.
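
A minimal sketch of the intended pattern, assuming a toy model (the module, data, and placement of the flag flips are illustrative, not prescribed by the release):
```python
import torch
import torch.nn as nn
import higher

model = nn.Linear(3, 1)
fmodel = higher.monkeypatch(model)

# Training: track parameter updates so higher-order gradients can flow
# (this is the default behavior).
fmodel.track_higher_grads = True

# Test time: stop tracking, so the patched module behaves (and performs)
# roughly like the unpatched one.
fmodel.track_higher_grads = False
with torch.no_grad():
    predictions = fmodel(torch.rand(2, 3))
```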

Fixes:
- Fix monkey-patching logic for RNN variants to support PyTorch v1.4.
- Incorporate the `eps` hyperparameter in the differentiable Adagrad implementation (see the sketch after this list).
- Release references to fast weights in `params_box[0]` after each `forward` call. This should avoid memory leaks in certain use cases.
- Fix how `fmodel.parameters()` constructs its iterable of parameters, avoiding logic errors when running patched modules in test mode.
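
A minimal sketch of where the Adagrad `eps` fix takes effect, assuming the usual `higher.innerloop_ctx` workflow (the model, data, and hyperparameter values here are illustrative):
```python
import torch
import higher

model = torch.nn.Linear(3, 1)
opt = torch.optim.Adagrad(model.parameters(), lr=0.1, eps=1e-8)

with higher.innerloop_ctx(model, opt) as (fmodel, diffopt):
    loss = fmodel(torch.rand(2, 3)).sum()
    diffopt.step(loss)  # the differentiable update now honors eps
```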

Improvements:
- Extended test coverage for RNN variants.
- General codebase clean-up (removing deprecated functions, fixing typos).

Version 0.1.5
-------------
New:
…
2 changes: 1 addition & 1 deletion setup.py
```diff
@@ -20,7 +20,7 @@
     name='higher',
     author='Edward Grefenstette',
     author_email='egrefen@fb.com',
-    version='0.1.5',
+    version='0.2',
     keywords='second-order, gradient descent, optimization, meta-learning',
     packages=['higher'],
     install_requires=['torch'],
```
