Merge pull request #441 from Nota-NetsPresso/dev
Update from dev branch: v0.2.1 release
illian01 committed May 3, 2024
2 parents 9fe6e6a + 86880fe commit 3751061
Showing 95 changed files with 2,983 additions and 1,786 deletions.
4 changes: 3 additions & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -23,4 +23,6 @@ For example,

```
- Added a new feature by `@myusername` in [PR 2023](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/2023)
```
```

Please enable **Allow edits and access to secrets by maintainers** so that our maintainers can update the `CHANGELOG.md`.
24 changes: 24 additions & 0 deletions CHANGELOG.md
@@ -16,6 +16,30 @@ No changes to highlight.

No changes to highlight.

# v0.2.1

## New Features:

- Add dataset validation step and refactoring data modules by `@illian01` in [PR 417](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/417), [PR 419](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/419)
- Add various dataset examples including automatic open dataset format converter by `@illian01` in [PR 430](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/430)
- Allow using text file path for the `id_mapping` field by `@illian01` in [PR 432](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/432), [PR 435](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/435)

## Bug Fixes:

- Fix test directory check line by `@illian01` in [PR 428](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/428)
- Fix Dockerfile installation command line by `@cbpark-nota` in [PR 434](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/434)

## Breaking Changes:

No changes to highlight.

## Other Changes:

- Save the training summary at the end of every epoch by `@illian01` in [PR 420](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/420)
- Refactoring: rename postprocessors/register.py to registry.py by `@aychun` in [PR 424](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/424)
- Add example configuration set by `@illian01` in [PR 438](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/438)
- Documentation: fix the simple-use config file path by `@cbpark-nota` in [PR 437](https://github.com/Nota-NetsPresso/netspresso-trainer/pull/437)

# v0.2.0

## New Features:
1 change: 1 addition & 0 deletions Dockerfile
@@ -30,3 +30,4 @@ COPY . /home/appuser/netspresso-trainer

RUN pip install -r requirements.txt && rm -rf /root/.cache/pip
RUN pip install -r requirements-optional.txt && rm -rf /root/.cache/pip
RUN python3 -m pip install -e .
38 changes: 32 additions & 6 deletions README.md
@@ -100,6 +100,37 @@ Please refer to [`scripts/example_train.sh`](./scripts/example_train.sh).

NetsPresso Trainer is compatible with the [NetsPresso](https://netspresso.ai/) service. We provide a NetsPresso Trainer tutorial that covers the whole procedure, from model training to model compression and benchmarking. Please refer to our [colab tutorial](https://colab.research.google.com/drive/1RBKMCPEa4x-4X31zqzTS8WgQI9TQt3e-?usp=sharing).

## Dataset preparation (Local)

NetsPresso Trainer is designed to accommodate a variety of tasks, each requiring different dataset formats. You can find the specific dataset formats for each task in our [documentation](https://nota-netspresso.github.io/netspresso-trainer/components/data/).

If you want to use open datasets, follow the [instructions](https://nota-netspresso.github.io/netspresso-trainer/getting_started/dataset_preparation/local/#open-datasets); a minimal converter invocation is sketched after the task lists below.

### Image classification

- [CIFAR100](https://github.com/Nota-NetsPresso/netspresso-trainer/blob/dev/tools/open_dataset_tool/cifar100.py)
- [ImageNet1K](https://github.com/Nota-NetsPresso/netspresso-trainer/blob/dev/tools/open_dataset_tool/imagenet1k.py)

### Semantic segmentation

- [PascalVOC 2012](https://github.com/Nota-NetsPresso/netspresso-trainer/blob/dev/tools/open_dataset_tool/voc2012_seg.py)

### Object detection

- [COCO 2017](https://github.com/Nota-NetsPresso/netspresso-trainer/blob/dev/tools/open_dataset_tool/coco2017.py)

### Pose estimation

- [WFLW](https://github.com/Nota-NetsPresso/netspresso-trainer/blob/dev/tools/open_dataset_tool/wflw.py)
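
Each converter script listed above can be run directly. As a minimal sketch, assuming the script accepts an output directory flag (check each script's `--help` for its actual arguments):

```bash
# Hypothetical invocation: download CIFAR-100 and convert it into the
# trainer's local dataset layout (the --dir flag is an assumption).
python tools/open_dataset_tool/cifar100.py --dir ./data
```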

## Dataset preparation (Huggingface)

NetsPresso Trainer is also compatible with Hugging Face datasets. To use them, please check the [instructions in our documentation](https://nota-netspresso.github.io/netspresso-trainer/getting_started/dataset_preparation/huggingface/). This makes a wide range of pre-built datasets available for various training scenarios.
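
As a quick check before pointing a data config at a Hugging Face dataset, you can pre-download it with the generic `datasets` library (plain `datasets` usage with an illustrative dataset name, not NetsPresso Trainer's own API):

```bash
# Pre-download a small Hugging Face dataset into the local cache;
# 'beans' is only an example dataset name.
pip install datasets
python -c "from datasets import load_dataset; print(load_dataset('beans'))"
```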

## Pretrained weights

Please refer to our [official documentation](https://nota-netspresso.github.io/netspresso-trainer/) for pretrained weights supported by NetsPresso Trainer.

## Tensorboard

We provide a basic TensorBoard setup to track your training status. Run TensorBoard with the following command:
@@ -109,9 +140,4 @@ tensorboard --logdir ./outputs --port 50001 --bind_all
```

where the `PORT` for TensorBoard is 50001.
Note that results are saved to the `./outputs` directory by default.


## Pretrained weights

Please refer to our [official documentation](https://nota-netspresso.github.io/netspresso-trainer/) for pretrained weights supported by NetsPresso Trainer.
Note that results are saved to the `./outputs` directory by default.
1 change: 0 additions & 1 deletion config/augmentation/pose_estimation.yaml
@@ -1,5 +1,4 @@
augmentation:
  img_size: &img_size 256
  train:
    -
      name: randomhorizontalflip
1 change: 0 additions & 1 deletion config/augmentation/segmentation.yaml
@@ -1,5 +1,4 @@
augmentation:
  img_size: &img_size 512
  train:
    -
      name: randomresizedcrop
31 changes: 1 addition & 30 deletions config/augmentation/template/common.yaml
@@ -1,32 +1,3 @@
augmentation:
  train:
    -
      name: randomresizedcrop
      size: ~
      sclale: ~
      ratio: ~
      interpolation: bilinear
    -
      name: randomhorizontalflip
      p: ~
    -
      name: randomverticalflip
      p: ~
    -
      name: colorjitter
      brightness: ~
      contrast: ~
      saturation: ~
      hue: ~
      p: ~
    -
      name: resize
      size: ~
      interpolation: ~
      max_size: ~
    -
      name: pad
      padding: ~
      fill: ~
      padding_mode: ~
  train: ~
  inference: ~
@@ -0,0 +1,21 @@
augmentation:
  train:
    -
      name: randomresizedcrop
      size: 256
      scale: [0.08, 1.0]
      ratio: [0.75, 1.33]
      interpolation: bilinear
    -
      name: randomhorizontalflip
      p: 0.5
  inference:
    -
      name: resize
      size: [256, 256]
      interpolation: bilinear
      max_size: ~
      resize_criteria: ~
    -
      name: centercrop
      size: 224
@@ -0,0 +1,16 @@
data:
  name: IMAGENET1K
  task: classification
  format: local # local, huggingface
  path:
    root: ./data/imagenet1k # dataset root
    train:
      image: images/train # directory for training images
      label: labels/imagenet_train.csv # directory for training labels or csv file
    valid:
      image: images/valid # directory for valid images
      label: labels/imagenet_valid.csv # directory for valid labels or csv file
    test:
      image: ~
      label: ~
  id_mapping: id_mapping.json
@@ -0,0 +1,5 @@
environment:
  seed: 1
  num_workers: 4
  gpus: 0, 1, 2, 3, 4, 5, 6, 7
  batch_size: 32 # Batch size per gpu
@@ -0,0 +1,10 @@
logging:
  project_id: ~
  output_dir: ./outputs
  tensorboard: true
  image: true
  stdout: true
  save_optimizer_state: true
  onnx_input_size: [224, 224]
  validation_epoch: &validation_epoch 5
  save_checkpoint_epoch: *validation_epoch # Multiplier of `validation_epoch`.
@@ -0,0 +1,44 @@
model:
  task: classification
  name: resnet18
  checkpoint:
    use_pretrained: False
    load_head: False
    path: ~
    fx_model_path: ~
    optimizer_path: ~
  freeze_backbone: False
  architecture:
    full: ~ # auto
    backbone:
      name: resnet
      params:
        block_type: basicblock
        norm_type: batch_norm
      stage_params:
        -
          channels: 64
          num_blocks: 2
        -
          channels: 128
          num_blocks: 2
          replace_stride_with_dilation: False
        -
          channels: 256
          num_blocks: 2
          replace_stride_with_dilation: False
        -
          channels: 512
          num_blocks: 2
          replace_stride_with_dilation: False
    head:
      name: fc
      params:
        num_layers: 1
        intermediate_channels: ~
        act_type: ~
        dropout_prob: 0.
  losses:
    - criterion: cross_entropy
      label_smoothing: 0.
      weight: ~
@@ -0,0 +1,15 @@
training:
  epochs: 90
  mixed_precision: False
  ema: ~
  optimizer:
    name: sgd
    lr: 0.1
    momentum: 0.9
    weight_decay: 0.0001
    nesterov: False
  scheduler:
    name: step
    iters_per_phase: 30
    gamma: 0.1
    end_epoch: 90
@@ -0,0 +1,21 @@
augmentation:
  train:
    -
      name: randomresizedcrop
      size: 256
      scale: [0.08, 1.0]
      ratio: [0.75, 1.33]
      interpolation: bilinear
    -
      name: randomhorizontalflip
      p: 0.5
  inference:
    -
      name: resize
      size: [256, 256]
      interpolation: bilinear
      max_size: ~
      resize_criteria: ~
    -
      name: centercrop
      size: 224
@@ -0,0 +1,16 @@
data:
  name: IMAGENET1K
  task: classification
  format: local # local, huggingface
  path:
    root: ./data/imagenet1k # dataset root
    train:
      image: images/train # directory for training images
      label: labels/imagenet_train.csv # directory for training labels or csv file
    valid:
      image: images/valid # directory for valid images
      label: labels/imagenet_valid.csv # directory for valid labels or csv file
    test:
      image: ~
      label: ~
  id_mapping: id_mapping.json
@@ -0,0 +1,5 @@
environment:
  seed: 1
  num_workers: 4
  gpus: 0, 1, 2, 3, 4, 5, 6, 7
  batch_size: 32 # Batch size per gpu
@@ -0,0 +1,10 @@
logging:
  project_id: ~
  output_dir: ./outputs
  tensorboard: true
  image: true
  stdout: true
  save_optimizer_state: true
  onnx_input_size: [224, 224]
  validation_epoch: &validation_epoch 5
  save_checkpoint_epoch: *validation_epoch # Multiplier of `validation_epoch`.
@@ -0,0 +1,44 @@
model:
  task: classification
  name: resnet34
  checkpoint:
    use_pretrained: False
    load_head: False
    path: ~
    fx_model_path: ~
    optimizer_path: ~
  freeze_backbone: False
  architecture:
    full: ~ # auto
    backbone:
      name: resnet
      params:
        block_type: basicblock
        norm_type: batch_norm
      stage_params:
        -
          channels: 64
          num_blocks: 3
        -
          channels: 128
          num_blocks: 4
          replace_stride_with_dilation: False
        -
          channels: 256
          num_blocks: 6
          replace_stride_with_dilation: False
        -
          channels: 512
          num_blocks: 3
          replace_stride_with_dilation: False
    head:
      name: fc
      params:
        num_layers: 1
        intermediate_channels: ~
        act_type: ~
        dropout_prob: 0.
  losses:
    - criterion: cross_entropy
      label_smoothing: 0.
      weight: ~
@@ -0,0 +1,15 @@
training:
  epochs: 90
  mixed_precision: False
  ema: ~
  optimizer:
    name: sgd
    lr: 0.1
    momentum: 0.9
    weight_decay: 0.0001
    nesterov: False
  scheduler:
    name: step
    iters_per_phase: 30
    gamma: 0.1
    end_epoch: 90