learning rate logging V2 (#135)
* add lr_schedule and steps_per_epoch to Optimizer

* ignore .theia

* Optimizer now accepts *args and chains them

* Try to avoid tests on WIP

* try harder

* another try

* test

* test again with proper logic

* added lr logging

* add WIP message

* clean ci

* undo delete

* format black

* add python 3.9

* remove 3.9 since poetry yields dependency errors

* update docs

* update dependencies

Co-authored-by: charlielito <candres.alv@gmail.com>
cgarciae and charlielito committed Dec 25, 2020
1 parent 78777ff commit 14fba6c
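Taken together, the bullets above describe a new `elegy.Optimizer` wrapper: it accepts one or more `optax` transformations as `*args` and chains them, and it takes `lr_schedule` and `steps_per_epoch` so the learning rate can be logged during training. Below is a minimal sketch of the intended usage, assuming the signature documented in the `elegy/model/model.py` diff further down; the clipping transform and the schedule values are illustrative, not taken from this commit.

```python
import optax
import elegy

# Sketch: multiple positional transformations are chained in order; lr_schedule and
# steps_per_epoch enable logging of the learning rate under the "lr" key.
optimizer = elegy.Optimizer(
    optax.clip_by_global_norm(1.0),  # illustrative extra transform, chained via *args
    optax.adam(1.0),                 # base learning rate kept at 1.0 (see the Model docstring below)
    lr_schedule=lambda step, epoch: 1e-3 * (0.9 ** epoch),  # hypothetical schedule
    steps_per_epoch=100,
)

# The wrapper is then passed to elegy.Model(..., optimizer=optimizer) in place of a raw
# optax.GradientTransformation.
```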
Showing 14 changed files with 302 additions and 78 deletions.
12 changes: 10 additions & 2 deletions .github/workflows/ci_test.yml
@@ -3,12 +3,19 @@ name: GitHub CI
on:
push:
# Sequence of patterns matched against refs/heads
branches:
# Push events on master branch
- master
pull_request:
jobs:
wip:
if: ${{ !contains(github.event.pull_request.title, 'WIP') }}
name: Check for WIP
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
black-test:
if: ${{ !contains(github.event.pull_request.title, 'WIP') }}
name: Black Python code format
runs-on: ubuntu-latest
steps:
@@ -24,6 +31,7 @@ jobs:
- name: Ensure contributor used ("black ./") before commit
run: black --check .
test:
if: ${{ !contains(github.event.pull_request.title, 'WIP') }}
runs-on: ubuntu-latest
strategy:
matrix:
@@ -53,4 +61,4 @@ jobs:
run: pytest --cov=elegy --cov-report=term-missing --cov-report=xml

- name: Upload coverage
uses: codecov/codecov-action@v1
1 change: 1 addition & 0 deletions .gitignore
@@ -140,6 +140,7 @@ cython_debug/

# custom
/.vscode
/.theia
/test.*
/summaries
/runs
16 changes: 16 additions & 0 deletions docs/api/Optimizer.md
@@ -0,0 +1,16 @@

# elegy.Optimizer

::: elegy.model.model_base.Optimizer
selection:
inherited_members: true
members:
- __init__
- call
- add_parameter
- get_parameters
- set_parameters
- reset
- init
- initialized

22 changes: 22 additions & 0 deletions docs/api/model/Model.md
@@ -0,0 +1,22 @@

# elegy.model.Model

::: elegy.model.model.Model
selection:
inherited_members: true
members:
- evaluate
- fit
- load
- predict
- predict_on_batch
- reset
- reset_metrics
- save
- summary
- test_on_batch
- train_on_batch
- full_state
- parameters
- states

16 changes: 16 additions & 0 deletions docs/api/model/Optimizer.md
@@ -0,0 +1,16 @@

# elegy.model.Optimizer

::: elegy.model.model_base.Optimizer
selection:
inherited_members: true
members:
- __init__
- call
- add_parameter
- get_parameters
- set_parameters
- reset
- init
- initialized

6 changes: 6 additions & 0 deletions docs/api/model/load.md
@@ -0,0 +1,6 @@

# elegy.model.load

::: elegy.model.model.load
selection:
inherited_members: true
17 changes: 16 additions & 1 deletion docs/guides/contributing.md
@@ -20,6 +20,14 @@ pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-0.1.55-$PYTHON_VERSION-none
pip install --upgrade jax
```

#### Gitpod
An alternative way to contribute is using [gitpod](https://gitpod.io/), which creates a VSCode-based cloud development environment.
To get started, just log in to Gitpod, grant the appropriate permissions to GitHub, and open the following link:

https://gitpod.io/#https://github.com/poets-ai/elegy

We have built a `python 3.8` environment, and all development dependencies are installed when the environment starts.

## Creating Losses and Metrics
For this you can follow these guidelines:

@@ -65,9 +73,16 @@ To build and visualize the documentation locally run
mkdocs serve
```

## Creating a PR
Before sending a pull request, make sure all tests pass and the code is formatted with `black`:

```bash
black .
```

## Changelog
`CHANGELOG.md` is automatically generated using [github-changelog-generator](https://github.com/github-changelog-generator/github-changelog-generator). To update the changelog, run:
```bash
docker run -it --rm -v (pwd):/usr/local/src/your-app ferrarimarco/github-changelog-generator -u poets-ai -p elegy -t <TOKEN>
```
where `<TOKEN>` can be obtained from GitHub at [Personal access tokens](https://github.com/settings/tokens); you only need to grant permission for the `repo` scope.
51 changes: 26 additions & 25 deletions elegy/__init__.py
@@ -15,7 +15,7 @@
)
from .losses import Loss
from .metrics import Metric
from .model import Model, Optimizer
from .module import (
RNG,
LocalContext,
@@ -43,39 +43,40 @@
)

__all__ = [
"module",
"Loss",
"Metric",
"Model",
"Module",
"Optimizer",
"RNG",
"add_loss",
"add_metric",
"add_summary",
"callbacks",
"data",
"get_dynamic_context",
"get_losses",
"get_metrics",
"get_rng",
"get_static_context",
"get_summaries",
"hooks_context",
"initializers",
"is_training",
"jit",
"losses",
"metrics",
"model",
"module",
"name_context",
"nets",
"next_rng_key",
"nn",
"regularizers",
"hooks_context",
"training_context",
"name_context",
"add_loss",
"add_metric",
"add_summary",
"get_losses",
"get_metrics",
"get_summaries",
"next_rng_key",
"Loss",
"Metric",
"Model",
"Module",
"to_module",
"RNG",
"get_rng",
"set_context",
"set_rng",
"jit",
"is_training",
"set_training",
"get_static_context",
"get_dynamic_context",
"set_context",
"to_module",
"training_context",
"value_and_grad",
"data",
]
7 changes: 7 additions & 0 deletions elegy/model/__init__.py
@@ -1 +1,8 @@
from .model import Model, load
from .model_base import Optimizer

__all__ = [
"Model",
"Optimizer",
"load",
]
71 changes: 28 additions & 43 deletions elegy/model/model.py
@@ -1,18 +1,10 @@
# Implementation based on tf.keras.engine.training.py
# https://github.com/tensorflow/tensorflow/blob/v2.2.0/tensorflow/python/keras/engine/training.py

from elegy.model.model_base import ModelBase
from elegy.utils import Mode
import functools
import json
import logging
from elegy.model.model_base import ModelBase, Optimizer
import pickle
import re
import typing as tp
from copy import copy
from enum import Enum
from functools import partial
from io import StringIO
from pathlib import Path

import cloudpickle
@@ -21,17 +13,10 @@
import jax.numpy as jnp
import numpy as np
import optax
import toolz
import yaml
from tabulate import tabulate

from elegy import module as hooks
from elegy import types
from elegy.losses import loss_modes
from elegy.metrics import metric_modes
from elegy.module import RNG, LocalContext, Module
from elegy.module import Module

from elegy import utils
from elegy.callbacks import Callback, CallbackList, History
from elegy.data import (
DataHandler,
@@ -80,32 +65,32 @@ def call(self, image: jnp.ndarray) -> jnp.ndarray:
Checkout [Getting Started](https://poets-ai.github.io/elegy/getting-started) for
additional details.
Attributes:
    parameters: A `haiku.Params` structure with the weights of the model.
    states: A `haiku.State` structure with non-trainable parameters of the model.
    optimizer_state: An `optax.OptState` structure with the states of the optimizer.
    metrics_states: A `haiku.State` structure with the states of the metrics.
    initial_metrics_state: A `haiku.State` structure with the initial states of the metrics.
    run_eagerly: Settable attribute indicating whether the model should run eagerly.
        Running eagerly means that your model will be run step by step, like Python code, instead of
        using Jax's `jit` to optimize the computation. Your model might run slower, but it should become
        easier for you to debug it by stepping into individual layer calls.

Model supports defining and monitoring custom learning rate schedules by passing an instance of `elegy.Optimizer` instead of
an `optax` object:
```python
model = elegy.Model(
    module=MLP(n1=3, n2=1),
    loss=elegy.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=elegy.metrics.SparseCategoricalAccuracy(),
    optimizer=elegy.Optimizer(
        optax.adam(1.0),  # <---- important to set this to 1.0
        lr_schedule=lambda step, epoch: 1 / (epoch * 100 + step),
        steps_per_epoch=1000,
    ),
    run_eagerly=True,
)
history = model.fit(
    ...
)
assert "lr" in history.history
```
Notice how we set the learning rate parameter of the `adam` optimizer to `1.0`; this is necessary if you want the logged `lr`
to be close to the "actual" learning rate, because we implement this feature by chaining an additional `optax.scale_by_schedule`
at the end.
"""

__all__ = [
"evaluate",
@@ -129,7 +114,7 @@ def __init__(
module: Module,
loss: tp.Union[tp.Callable, tp.List, tp.Dict, None] = None,
metrics: tp.Union[tp.Callable, tp.List, tp.Dict, None] = None,
optimizer: tp.Optional[optax.GradientTransformation] = None,
optimizer: tp.Union[Optimizer, optax.GradientTransformation, None] = None,
run_eagerly: bool = False,
**kwargs,
):
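For intuition, here is a rough, optax-only sketch of the mechanism the docstring above describes. This is an approximation under stated assumptions, not the code this commit adds to `elegy/model/model_base.py`: the user's transformation is chained with an extra `optax.scale_by_schedule`, which is why the base learning rate should stay at `1.0` if the logged `lr` is to reflect the schedule's value. The mapping from optax's global update count back to `(step, epoch)` via `steps_per_epoch` is assumed, and the schedule itself is hypothetical.

```python
import optax

steps_per_epoch = 1000
# Hypothetical schedule with the (step, epoch) signature used by elegy.Optimizer.
lr_schedule = lambda step, epoch: 1e-3 * (0.5 ** epoch)

def step_size(count):
    # Unflatten optax's global update count into (step, epoch) using steps_per_epoch;
    # the value returned here is what would be logged as "lr".
    return lr_schedule(count % steps_per_epoch, count // steps_per_epoch)

optimizer = optax.chain(
    optax.adam(1.0),                     # base learning rate of 1.0, so the schedule sets the step size
    optax.scale_by_schedule(step_size),  # extra transformation chained at the end
)
```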
