Augmentation Features and DDP fixes (#240)
* fix: save best final model

* fix date time format

* refactor common code

* refactor common code

* refactor

* fix: dataloader size for multi gpu

* added BaseModule for freeze batch norm

* added BaseModule

* updated models with BaseModule

* removed avg training time

* removed avg training time

* fix unit test

* added batch norm config to trainer

* fix unit test engine

* update step trainer

* empty cuda cache

* fix: ddp training

* added mock unit tests (#225)

* added mock unit tests (#233)

* merge ddp fix to development (#235)

* Bump version for DDP fix release

* Development (#234)

* Add tagged commit trigger for deployment workflow (#214)

* Update training config file to comply with the changes in the trainer (#215)

* Updated training config file to comply with changes in the trainer

* Made changes to default base trainer config as well

* Revert "Add tagged commit trigger for deployment workflow (#214)" (#217)

This reverts commit 6386fcb.

* fix training logs (#219)

* fix training logs

* setup local rank in ddp

* fix step trainer logs

* fix: save best final model

* fix date time format

* refactor common code

* refactor common code

* refactor

* fix: dataloader size for multi gpu

* added BaseModule for freeze batch norm

* added BaseModule

* updated models with BaseModule

* removed avg training time

* removed avg training time

* fix unit test

* added batch norm config to trainer

* fix unit test engine

* update step trainer

* empty cuda cache

* fix: ddp training

* added mock unit tests (#225)

* Bump ujson from 1.35 to 5.2.0 in /docs (#226)

Bumps [ujson](https://github.com/ultrajson/ultrajson) from 1.35 to 5.2.0.
- [Release notes](https://github.com/ultrajson/ultrajson/releases)
- [Commits](ultrajson/ultrajson@v1.35...5.2.0)

---
updated-dependencies:
- dependency-name: ujson
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Loosened requirements (#228)

* Loosened requirements (#229)

* Bump cookiecutter from 1.7.3 to 2.1.1 in /docs (#230)

Bumps [cookiecutter](https://github.com/cookiecutter/cookiecutter) from 1.7.3 to 2.1.1.
- [Release notes](https://github.com/cookiecutter/cookiecutter/releases)
- [Changelog](https://github.com/cookiecutter/cookiecutter/blob/master/HISTORY.md)
- [Commits](cookiecutter/cookiecutter@1.7.3...2.1.1)

---
updated-dependencies:
- dependency-name: cookiecutter
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump arrow from 0.13.1 to 0.15.1 in /docs (#231)

Bumps [arrow](https://github.com/arrow-py/arrow) from 0.13.1 to 0.15.1.
- [Release notes](https://github.com/arrow-py/arrow/releases)
- [Changelog](https://github.com/arrow-py/arrow/blob/master/CHANGELOG.rst)
- [Commits](arrow-py/arrow@0.13.1...0.15.1)

---
updated-dependencies:
- dependency-name: arrow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump arrow from 0.13.1 to 0.15.1 (#232)

Bumps [arrow](https://github.com/arrow-py/arrow) from 0.13.1 to 0.15.1.
- [Release notes](https://github.com/arrow-py/arrow/releases)
- [Changelog](https://github.com/arrow-py/arrow/blob/master/CHANGELOG.rst)
- [Commits](arrow-py/arrow@0.13.1...0.15.1)

---
updated-dependencies:
- dependency-name: arrow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Version bump

* added mock unit tests (#233)

Co-authored-by: Neelay Shah <shahnh19@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Co-authored-by: NeelayS <shahnh19@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* bugfix: unbounded cropsize

* fix ddp tensorboard logging

* Added Augmentation Operations and unit tests (#237)

* Added new augment ops and normalization

* Added unit test for augmentation

* Changed unit test for augmentation

* Augmentation changes

* Fixed issue

* Added translate func fix and documentation

* Made suggested changes and unit tested

* Normalize param issue fix

* Duplicate removal and norm_param style change

* Fix docstrings

Co-authored-by: Prajnan Goswami <89991031+prajnan93@users.noreply.github.com>

* fix tensorboard logging multi-gpu

* fix start step

* update step trainer

* fix: unit test

Co-authored-by: NeelayS <shahnh19@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Devroop Kar <33129624+kardevroop@users.noreply.github.com>
4 people committed Sep 6, 2022
1 parent 9333f9f commit 169e3b3
Showing 32 changed files with 847 additions and 175 deletions.
1 change: 0 additions & 1 deletion configs/trainers/base.yaml
@@ -51,7 +51,6 @@ DEVICE: "0"
DISTRIBUTED:
USE: False
WORLD_SIZE: 2
RANK: 0
BACKEND: nccl
MASTER_ADDR: localhost
MASTER_PORT: "12355"
5 changes: 0 additions & 5 deletions ezflow/__init__.py
@@ -1,5 +0,0 @@
"""Top-level package for EzFlow"""

__author__ = """Neelay Shah"""
__email__ = "nstraum1@gmail.com"
__version__ = "0.1.0"
24 changes: 13 additions & 11 deletions ezflow/data/dataloader/dataloader_creator.py
@@ -91,7 +91,7 @@ def add_FlyingChairs(self, root_dir, split="training", augment=False, **kwargs):
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -101,7 +101,7 @@ def add_FlyingThings3D(
split="training",
dstype="frames_cleanpass",
augment=False,
**kwargs
**kwargs,
):
"""
Adds the Flying Things 3D dataset to the DataloaderCreator object.
@@ -131,7 +131,7 @@ def add_FlyingThings3D(
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -163,7 +163,7 @@ def add_FlyingThings3DSubset(
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -190,7 +190,7 @@ def add_Monkaa(self, root_dir, augment=False, **kwargs):
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -217,7 +217,7 @@ def add_Driving(self, root_dir, augment=False, **kwargs):
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -272,7 +272,7 @@ def add_MPISintel(
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -302,7 +302,7 @@ def add_Kitti(self, root_dir, split="training", augment=False, **kwargs):
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -328,7 +328,7 @@ def add_HD1K(self, root_dir, augment=False, **kwargs):
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -355,7 +355,7 @@ def add_AutoFlow(self, root_dir, augment=False, **kwargs):
is_prediction=self.is_prediction,
append_valid_mask=self.append_valid_mask,
augment=augment,
**kwargs
**kwargs,
)
)

@@ -408,6 +408,8 @@ def get_dataloader(self, rank=0):
drop_last=self.drop_last,
)

print("Total image pairs: %d" % len(dataset))
print(
f"Total image pairs loaded: {len(data_loader)*self.batch_size}/{len(dataset)}\n"
)

return data_loader
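The updated print reports how many pairs actually reach the loader, which matters because `drop_last` can discard a trailing partial batch, so `len(data_loader) * batch_size` may be smaller than `len(dataset)`. A minimal sketch of that arithmetic with plain torch and synthetic data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for an optical flow dataset: 10 samples.
dataset = TensorDataset(torch.arange(10).float())
batch_size = 4
loader = DataLoader(dataset, batch_size=batch_size, drop_last=True)

# With drop_last=True the trailing partial batch (2 samples here) is discarded,
# so the loaded count undercounts the dataset — exactly what the new log exposes.
loaded = len(loader) * batch_size
print(f"Total image pairs loaded: {loaded}/{len(dataset)}")  # 8/10
```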
4 changes: 4 additions & 0 deletions ezflow/data/dataset/autoflow.py
@@ -45,7 +45,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(AutoFlow, self).__init__(
init_seed=init_seed,
@@ -57,6 +60,7 @@ def __init__(
augment=augment,
aug_params=aug_params,
sparse_transform=False,
norm_params=norm_params,
)

self.is_prediction = is_prediction
11 changes: 10 additions & 1 deletion ezflow/data/dataset/base_dataset.py
@@ -3,8 +3,9 @@
import numpy as np
import torch
import torch.utils.data as data
import torchvision.transforms as transforms

from ...functional import crop
from ...functional import Normalize, crop
from ...utils import read_flow, read_image


@@ -46,8 +47,11 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
sparse_transform=False,
norm_params={"use": False},
):

self.is_prediction = is_prediction
@@ -63,6 +67,7 @@

self.flow_list = []
self.image_list = []
self.normalize = Normalize(**norm_params)

def __getitem__(self, index):
"""
@@ -113,6 +118,8 @@ def __getitem__(self, index):
img1 = torch.from_numpy(img1).permute(2, 0, 1).float()
img2 = torch.from_numpy(img2).permute(2, 0, 1).float()

img1, img2 = self.normalize(img1, img2)

return img1, img2

if self.augment is True and self.augmentor is not None:
@@ -133,6 +140,8 @@
img2 = torch.from_numpy(img2).permute(2, 0, 1).float()
flow = torch.from_numpy(flow).permute(2, 0, 1).float()

img1, img2 = self.normalize(img1, img2)

if self.append_valid_mask:
if valid is not None:
valid = torch.from_numpy(valid)
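BaseDataset now routes both images through a `Normalize` functional gated by `norm_params`, with `{"use": False}` as the backward-compatible default. The real `ezflow.functional.Normalize` is not shown in this diff; the stand-in below is only an illustration of the gating pattern, and its `mean`/`std` defaults are assumptions:

```python
import torch


class Normalize:
    """Illustrative stand-in for ezflow.functional.Normalize (assumed API)."""

    def __init__(self, use=False, mean=(0.0, 0.0, 0.0), std=(255.0, 255.0, 255.0)):
        self.use = use
        self.mean = torch.tensor(mean).view(3, 1, 1)
        self.std = torch.tensor(std).view(3, 1, 1)

    def __call__(self, img1, img2):
        if not self.use:  # norm_params={"use": False} makes this a no-op
            return img1, img2
        return (img1 - self.mean) / self.std, (img2 - self.mean) / self.std


img1 = img2 = torch.full((3, 2, 2), 255.0)

# Disabled (the default): images pass through unchanged.
out1, _ = Normalize(**{"use": False})(img1, img2)
assert torch.equal(out1, img1)
```

Because `self.normalize` is constructed unconditionally in `__init__` and applied in both `__getitem__` branches, every dataset subclass gets the same behavior just by forwarding `norm_params` to `super().__init__`.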
4 changes: 4 additions & 0 deletions ezflow/data/dataset/driving.py
@@ -45,7 +45,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(Driving, self).__init__(
init_seed=init_seed,
@@ -57,6 +60,7 @@ def __init__(
augment=augment,
aug_params=aug_params,
sparse_transform=False,
norm_params=norm_params,
)

self.is_prediction = is_prediction
4 changes: 4 additions & 0 deletions ezflow/data/dataset/flying_chairs.py
@@ -51,7 +51,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(FlyingChairs, self).__init__(
init_seed=init_seed,
@@ -63,6 +66,7 @@ def __init__(
augment=augment,
aug_params=aug_params,
sparse_transform=False,
norm_params=norm_params,
)
assert (
split.lower() == "training" or split.lower() == "validation"
14 changes: 13 additions & 1 deletion ezflow/data/dataset/flying_things3d.py
@@ -51,7 +51,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(FlyingThings3D, self).__init__(
init_seed=init_seed,
@@ -63,6 +66,7 @@
augment=augment,
aug_params=aug_params,
sparse_transform=False,
norm_params=norm_params,
)
assert (
split.lower() == "training" or split.lower() == "validation"
@@ -137,10 +141,18 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(FlyingThings3DSubset, self).__init__(
augment, aug_params, is_prediction, init_seed, append_valid_mask
augment,
aug_params,
is_prediction,
init_seed,
append_valid_mask,
norm_params,
)
assert (
split.lower() == "training" or split.lower() == "validation"
4 changes: 4 additions & 0 deletions ezflow/data/dataset/hd1k.py
@@ -45,7 +45,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(HD1K, self).__init__(
init_seed=init_seed,
@@ -57,6 +60,7 @@
augment=augment,
aug_params=aug_params,
sparse_transform=True,
norm_params=norm_params,
)

self.is_prediction = is_prediction
4 changes: 4 additions & 0 deletions ezflow/data/dataset/kitti.py
@@ -49,7 +49,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(Kitti, self).__init__(
init_seed=init_seed,
@@ -61,6 +64,7 @@
augment=augment,
aug_params=aug_params,
sparse_transform=True,
norm_params=norm_params,
)
assert (
split.lower() == "training" or split.lower() == "validation"
4 changes: 4 additions & 0 deletions ezflow/data/dataset/monkaa.py
@@ -45,7 +45,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(Monkaa, self).__init__(
init_seed=init_seed,
@@ -57,6 +60,7 @@
augment=augment,
aug_params=aug_params,
sparse_transform=False,
norm_params=norm_params,
)

self.is_prediction = is_prediction
4 changes: 4 additions & 0 deletions ezflow/data/dataset/mpi_sintel.py
@@ -52,7 +52,10 @@ def __init__(
"color_aug_params": {"aug_prob": 0.2},
"eraser_aug_params": {"aug_prob": 0.5},
"spatial_aug_params": {"aug_prob": 0.8},
"translate_params": {"aug_prob": 0.8},
"rotate_params": {"aug_prob": 0.8},
},
norm_params={"use": False},
):
super(MPISintel, self).__init__(
init_seed=init_seed,
@@ -64,6 +67,7 @@
augment=augment,
aug_params=aug_params,
sparse_transform=False,
norm_params=norm_params,
)

assert (
5 changes: 3 additions & 2 deletions ezflow/engine/eval.py
@@ -230,12 +230,13 @@ def profile_inference(
)
)

avg_inference_time = sum(times) / len(times)
avg_inference_time = 0 if len(times) == 0 else sum(times) / len(times)
avg_inference_time /= batch_size # Average inference time per sample
fps = 0 if avg_inference_time == 0 else 1 / avg_inference_time
n_params = sum(p.numel() for p in model.parameters())

print("=" * 100)
print(f"Average inference time: {avg_inference_time}, FPS: {1/avg_inference_time}")
print(f"Average inference time: {avg_inference_time}, FPS: {fps}")

if count_params:
print(f"Number of model parameters: {n_params}")
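The eval fix guards both divisions: an empty timing list no longer raises ZeroDivisionError, and FPS falls back to 0 when the average time is 0. The same logic, isolated as a sketch (the helper name is illustrative, not part of the ezflow API):

```python
def summarize_inference(times, batch_size):
    """Mirror the guarded averaging added to profile_inference."""
    # Guard 1: no collected timings -> average of 0 instead of dividing by len([]).
    avg = 0 if len(times) == 0 else sum(times) / len(times)
    avg /= batch_size  # average inference time per sample
    # Guard 2: zero average -> report 0 FPS instead of dividing by zero.
    fps = 0 if avg == 0 else 1 / avg
    return avg, fps


print(summarize_inference([], batch_size=4))          # (0.0, 0)
print(summarize_inference([0.2, 0.2], batch_size=2))  # 0.1 s/sample, ~10 FPS
```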
