
Commit 8f77132
Merge pull request #247 from Nota-NetsPresso/dev
Update from dev branch: `v0.0.10` release
Hyoung-Kyu Song committed Nov 24, 2023
2 parents d1a2354 + bf5f9c6 commit 8f77132
Showing 132 changed files with 3,267 additions and 1,362 deletions.
38 changes: 34 additions & 4 deletions CHANGELOG.md
```diff
@@ -2,19 +2,48 @@
 
 ## New Features:
 
 No changes to highlight.
-
 
 ## Bug Fixes:
 
 No changes to highlight.
-
 
 ## Breaking Changes:
 
 No changes to highlight.
-
 
 ## Other Changes:
 
 No changes to highlight.
-
+
+# v0.0.10
+
+## New Features:
+
+- Add a gpu option in `train_with_config` (only single-GPU supported) by `@deepkyu` in [PR 219](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/219)
+- Support augmentations for the classification task: cutmix, mixup by `@illian01` in [PR 221](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/221)
+- Add model: MixNet by `@illian01` in [PR 229](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/229)
+- Add `model.name` to get the exact nickname of the model by `@deepkyu` in [PR 243](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/243/)
+- Add transforms: RandomErasing and TrivialAugmentWide by `@illian01` in [PR 246](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/246)
+
+## Bug Fixes:
+
+- Fix the task field of the PIDNet model dataclass by `@illian01` in [PR 220](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/220)
+- Fix the default criterion value for classification by `@illian01` in [PR 238](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/238)
+- Fix model access in the 2-stage detection pipeline for compatibility with distributed environments by `@illian01` in [PR 239](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/239)
+
+## Breaking Changes:
+
+- Enable dataset augmentation customization by `@illian01` in [PR 201](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/201)
+- Add a postprocessor module by `@illian01` in [PR 223](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/223)
+- Equalize the model backbone configuration format by `@illian01` in [PR 228](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/228)
+- Separate FPN and PAFPN into a neck module by `@illian01` in [PR 234](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/234)
+- Auto-download pretrained checkpoints from AWS S3 by `@deepkyu` in [PR 244](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/244)
+
+## Other Changes:
+
+- Update ruff rule (`W`) by `@deepkyu` in [PR 218](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/218)
+- Integrate classification loss modules by `@illian01` in [PR 226](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/226)
 
 # v0.0.9
 
@@ -121,6 +150,7 @@ This change is applied at [PR 151](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/151)
 - Initialize loss and metric at the same time as the optimizer and lr schedulers by `@deepkyu` in [PR 138](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/138)
 - Hotfix the error which shows 0 for validation loss and metrics by fixing the variable name by `@deepkyu` in [PR 140](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/140)
 - Add missing field, `save_optimizer_state`, in `logging.yaml` by `@illian01` in [PR 149](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/149)
+- Hotfix for pythonic config name (classification loss) by `@deepkyu` in [PR 242](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/242)
 
 ## Breaking Changes:
 
```
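PR 219 in the list above adds a gpu option to `train_with_config`. As a rough usage sketch — the keyword name `gpus` and the config paths below are assumptions for illustration, not the documented API:

```python
# Hypothetical call sketch for the single-GPU option from PR 219.
# `gpus` and the exact config keywords are assumptions; consult the
# netspresso-trainer documentation for the real signature.
from netspresso_trainer import train_with_config

train_with_config(
    model="config/model/efficientformer/efficientformer-l1-classification.yaml",
    augmentation="config/augmentation/classification.yaml",
    data="config/data/my_dataset.yaml",  # hypothetical dataset config
    gpus=0,  # v0.0.10 supports single-GPU training only
)
```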
41 changes: 12 additions & 29 deletions config/augmentation/classification.yaml
```diff
@@ -1,31 +1,14 @@
 augmentation:
   img_size: &img_size 256
-  hsv_h: ~
-  hsv_s: ~
-  hsv_v: ~
-  degrees: ~
-  translate: ~
-  scale: ~
-  max_scale: ~
-  min_scale: ~
-  crop_size_h: ~
-  crop_size_w: ~
-  resize_ratio0: ~
-  resize_ratiof: ~
-  resize_add: ~
-  shear: ~
-  perspective: ~
-  flipud: ~
-  fliplr: 0.5
-  mosaic: ~
-  mixup: 1.0
-  copy_paste: ~
-  mixup_alpha: 0.0
-  cutmix_alpha: 0.0
-  mixup_switch_prob: 0.5
-  color_jitter:
-    brightness: ~
-    contrast: ~
-    saturation: ~
-    hue: ~
-    colorjitter_p: ~
+  transforms:
+    -
+      name: randomresizedcrop
+      size: *img_size
+      interpolation: bilinear
+    -
+      name: randomhorizontalflip
+      p: 0.5
+  mix_transforms:
+    -
+      name: cutmix
+      alpha: 1.0
```
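The classification recipe above now enables CutMix through the new `mix_transforms` section. For intuition, here is a minimal sketch of the standard CutMix operation — the generic algorithm, not necessarily netspresso-trainer's internal implementation:

```python
import torch

def cutmix(images: torch.Tensor, targets: torch.Tensor, alpha: float = 1.0):
    """Standard CutMix: paste a random crop from shuffled images and mix
    labels by area ratio. `images` is (B, C, H, W); `targets` is (B,) ids."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    H, W = images.shape[-2:]
    # Sample a box whose area is roughly (1 - lam) of the image.
    r = (1.0 - lam) ** 0.5
    cut_h, cut_w = int(H * r), int(W * r)
    cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    # Correct lambda to the exact pasted area.
    lam = 1.0 - ((y2 - y1) * (x2 - x1)) / (H * W)
    # Training loss: lam * CE(out, targets) + (1 - lam) * CE(out, targets[perm])
    return images, targets, targets[perm], lam
```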
34 changes: 5 additions & 29 deletions config/augmentation/detection.yaml
```diff
@@ -1,31 +1,7 @@
 augmentation:
   img_size: &img_size 512
-  hsv_h: ~
-  hsv_s: ~
-  hsv_v: ~
-  degrees: ~
-  translate: ~
-  scale: ~
-  max_scale: 2048
-  min_scale: 768
-  crop_size_h: 512
-  crop_size_w: 512
-  resize_ratio0: 0.5
-  resize_ratiof: 2.0
-  resize_add: 1
-  shear: ~
-  perspective: ~
-  flipud: ~
-  fliplr: 0.5
-  mosaic: ~
-  mixup: ~
-  copy_paste: ~
-  mixup_alpha: ~
-  cutmix_alpha: ~
-  mixup_switch_prob: ~
-  color_jitter:
-    brightness: 0.25
-    contrast: 0.25
-    saturation: 0.25
-    hue: 0.1
-    colorjitter_p: 0.5
+  transforms:
+    -
+      name: resize
+      size: *img_size
+      interpolation: bilinear
```
44 changes: 15 additions & 29 deletions config/augmentation/segmentation.yaml
```diff
@@ -1,31 +1,17 @@
 augmentation:
   img_size: &img_size 512
-  hsv_h: ~
-  hsv_s: ~
-  hsv_v: ~
-  degrees: ~
-  translate: ~
-  scale: ~
-  max_scale: 1024
-  min_scale: *img_size
-  crop_size_h: *img_size
-  crop_size_w: *img_size
-  resize_ratio0: 1.0
-  resize_ratiof: 1.5
-  resize_add: 1
-  shear: ~
-  perspective: ~
-  flipud: ~
-  fliplr: 0.5
-  mosaic: ~
-  mixup: ~
-  copy_paste: ~
-  mixup_alpha: ~
-  cutmix_alpha: ~
-  mixup_switch_prob: ~
-  color_jitter:
-    brightness: 0.25
-    contrast: 0.25
-    saturation: 0.25
-    hue: 0.1
-    colorjitter_p: 0.5
+  transforms:
+    -
+      name: randomresizedcrop
+      size: *img_size
+      interpolation: bilinear
+    -
+      name: randomhorizontalflip
+      p: 0.5
+    -
+      name: colorjitter
+      brightness: 0.25
+      contrast: 0.25
+      saturation: 0.25
+      hue: 0.1
+      p: 0.5
```
56 changes: 26 additions & 30 deletions config/augmentation/template/common.yaml
```diff
@@ -1,31 +1,27 @@
 augmentation:
-  img_size: &img_size 512
-  hsv_h: ~
-  hsv_s: ~
-  hsv_v: ~
-  degrees: ~
-  translate: ~
-  scale: ~
-  max_scale: 1024
-  min_scale: *img_size
-  crop_size_h: *img_size
-  crop_size_w: *img_size
-  resize_ratio0: 1.0
-  resize_ratiof: 1.5
-  resize_add: 1
-  shear: ~
-  perspective: ~
-  flipud: ~
-  fliplr: 0.5
-  mosaic: ~
-  mixup: ~
-  copy_paste: ~
-  mixup_alpha: ~
-  cutmix_alpha: ~
-  mixup_switch_prob: ~
-  color_jitter:
-    brightness: 0.25
-    contrast: 0.25
-    saturation: 0.25
-    hue: 0.1
-    colorjitter_p: 0.5
+  img_size: &img_size ~
+  transforms:
+    -
+      name: randomresizedcrop
+      size: ~
+      interpolation: bilinear
+    -
+      name: randomhorizontalflip
+      p: ~
+    -
+      name: randomverticalflip
+      p: ~
+    -
+      name: colorjitter
+      brightness: ~
+      contrast: ~
+      saturation: ~
+      hue: ~
+      p: ~
+    -
+      name: resize
+      size: ~
+    -
+      name: pad
+      padding: ~
+  mix_transforms: ~
```
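This template enumerates every transform name the new schema accepts. One plausible way such a name-keyed list maps onto concrete transforms — the registry below is an illustrative sketch, not the trainer's actual implementation:

```python
from torchvision import transforms as T

# Hypothetical registry mirroring the names in the template above.
TRANSFORM_REGISTRY = {
    "randomresizedcrop": lambda cfg: T.RandomResizedCrop(cfg["size"]),
    "randomhorizontalflip": lambda cfg: T.RandomHorizontalFlip(cfg["p"]),
    "randomverticalflip": lambda cfg: T.RandomVerticalFlip(cfg["p"]),
    "colorjitter": lambda cfg: T.RandomApply(
        [T.ColorJitter(cfg["brightness"], cfg["contrast"],
                       cfg["saturation"], cfg["hue"])],
        p=cfg["p"],
    ),
    "resize": lambda cfg: T.Resize(cfg["size"]),
    "pad": lambda cfg: T.Pad(cfg["padding"]),
}

def build_transforms(transform_cfgs: list) -> T.Compose:
    """Resolve each config entry by its `name` key, in order."""
    return T.Compose([TRANSFORM_REGISTRY[c["name"]](c) for c in transform_cfgs])
```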
60 changes: 38 additions & 22 deletions config/model/efficientformer/efficientformer-l1-classification.yaml
```diff
@@ -1,5 +1,6 @@
 model:
   task: classification
+  name: efficientformer_l1
   checkpoint: ./weights/efficientformer/efficientformer_l1_1000d.pth
   fx_model_checkpoint: ~
   resume_optimizer_checkpoint: ~
@@ -8,29 +9,44 @@ model:
     full: ~ # auto
     backbone:
       name: efficientformer
-      num_blocks: [3, 2, 6, 4]
-      hidden_sizes: [48, 96, 224, 448]
-      num_attention_heads: 8
-      attention_hidden_size: 256 # attention_hidden_size_splitted * num_attention_heads
-      attention_dropout_prob: 0.
-      attention_ratio: 4
-      attention_bias_resolution: 16
-      pool_size: 3
-      intermediate_ratio: 4
-      hidden_dropout_prob: 0.
-      hidden_activation_type: 'gelu'
-      layer_norm_eps: 1e-5
-      drop_path_rate: 0.
-      use_layer_scale: True
-      layer_scale_init_value: 1e-5
-      downsamples: [True, True, True, True]
-      down_patch_size: 3
-      down_stride: 2
-      down_pad: 1
-      vit_num: 1
+      params:
+        num_attention_heads: 8
+        attention_hidden_size: 256 # attention_hidden_size_splitted * num_attention_heads
+        attention_dropout_prob: 0.
+        attention_ratio: 4
+        attention_bias_resolution: 16
+        pool_size: 3
+        intermediate_ratio: 4
+        hidden_dropout_prob: 0.
+        hidden_activation_type: 'gelu'
+        layer_norm_eps: 1e-5
+        drop_path_rate: 0.
+        use_layer_scale: True
+        layer_scale_init_value: 1e-5
+        down_patch_size: 3
+        down_stride: 2
+        down_pad: 1
+        vit_num: 1
+      stage_params:
+        -
+          num_blocks: 3
+          hidden_sizes: 48
+          downsamples: True
+        -
+          num_blocks: 2
+          hidden_sizes: 96
+          downsamples: True
+        -
+          num_blocks: 6
+          hidden_sizes: 224
+          downsamples: True
+        -
+          num_blocks: 4
+          hidden_sizes: 448
+          downsamples: True
     head:
       name: fc
   losses:
-    - criterion: label_smoothing_cross_entropy
-      smoothing: 0.1
+    - criterion: cross_entropy
+      label_smoothing: 0.1
       weight: ~
```
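Note the losses change at the bottom of this diff: the custom `label_smoothing_cross_entropy` criterion with a `smoothing` key is replaced by plain `cross_entropy` with a `label_smoothing` key, mirroring the argument that PyTorch's built-in loss has offered since torch 1.10:

```python
import torch
import torch.nn as nn

# The new config maps directly onto PyTorch's built-in argument:
#   - criterion: cross_entropy
#     label_smoothing: 0.1
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

logits = torch.randn(8, 1000)          # (batch, num_classes)
labels = torch.randint(0, 1000, (8,))  # ground-truth class indices
loss = criterion(logits, labels)
```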
58 changes: 38 additions & 20 deletions config/model/efficientformer/efficientformer-l1-detection.yaml
```diff
@@ -1,5 +1,6 @@
 model:
   task: detection
+  name: efficientformer_l1
   checkpoint: ./weights/efficientformer/efficientformer_l1_1000d.pth
   fx_model_checkpoint: ~
   resume_optimizer_checkpoint: ~
@@ -8,26 +9,43 @@ model:
     full: ~ # auto
     backbone:
       name: efficientformer
-      num_blocks: [3, 2, 6, 4]
-      hidden_sizes: [48, 96, 224, 448]
-      num_attention_heads: 8
-      attention_hidden_size: 256 # attention_hidden_size_splitted * num_attention_heads
-      attention_dropout_prob: 0.
-      attention_ratio: 4
-      attention_bias_resolution: 16
-      pool_size: 3
-      intermediate_ratio: 4
-      hidden_dropout_prob: 0.
-      hidden_activation_type: 'gelu'
-      layer_norm_eps: 1e-5
-      drop_path_rate: 0.
-      use_layer_scale: True
-      layer_scale_init_value: 1e-5
-      downsamples: [True, True, True, True]
-      down_patch_size: 3
-      down_stride: 2
-      down_pad: 1
-      vit_num: 1
+      params:
+        num_attention_heads: 8
+        attention_hidden_size: 256 # attention_hidden_size_splitted * num_attention_heads
+        attention_dropout_prob: 0.
+        attention_ratio: 4
+        attention_bias_resolution: 16
+        pool_size: 3
+        intermediate_ratio: 4
+        hidden_dropout_prob: 0.
+        hidden_activation_type: 'gelu'
+        layer_norm_eps: 1e-5
+        drop_path_rate: 0.
+        use_layer_scale: True
+        layer_scale_init_value: 1e-5
+        down_patch_size: 3
+        down_stride: 2
+        down_pad: 1
+        vit_num: 1
+      stage_params:
+        -
+          num_blocks: 3
+          hidden_sizes: 48
+          downsamples: True
+        -
+          num_blocks: 2
+          hidden_sizes: 96
+          downsamples: True
+        -
+          num_blocks: 6
+          hidden_sizes: 224
+          downsamples: True
+        -
+          num_blocks: 4
+          hidden_sizes: 448
+          downsamples: True
+    neck:
+      name: fpn
     head:
       name: faster_rcnn
     losses:
```
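Across both EfficientFormer configs, the parallel per-stage lists (`num_blocks: [3, 2, 6, 4]`, `hidden_sizes: [...]`, `downsamples: [...]`) are replaced by a `stage_params` list of per-stage mappings. A small sketch of how a model builder might consume the two layouts — hypothetical helper code, not the trainer's actual factory:

```python
from dataclasses import dataclass

@dataclass
class StageConfig:
    num_blocks: int
    hidden_sizes: int
    downsamples: bool

# Old layout: zip the parallel lists into per-stage configs.
def stages_from_lists(num_blocks, hidden_sizes, downsamples):
    return [StageConfig(n, h, d)
            for n, h, d in zip(num_blocks, hidden_sizes, downsamples)]

# New layout: each stage is already a self-contained mapping.
def stages_from_stage_params(stage_params):
    return [StageConfig(**p) for p in stage_params]

stages = stages_from_stage_params([
    {"num_blocks": 3, "hidden_sizes": 48, "downsamples": True},
    {"num_blocks": 2, "hidden_sizes": 96, "downsamples": True},
    {"num_blocks": 6, "hidden_sizes": 224, "downsamples": True},
    {"num_blocks": 4, "hidden_sizes": 448, "downsamples": True},
])
```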