
[models] Create specific model's heads for binary classification #19

Closed
frgfm opened this issue Oct 15, 2019 · 1 comment
Labels: ext: references (Related to references) · help wanted (Extra attention is needed) · module: models (Related to models)
frgfm commented Oct 15, 2019

🚀 Feature

Create a models.utils submodule with a function that builds a model head suited to the task type, in this case binary classification.

Motivation

Current training scripts in references assume that the target is categorical (multi-class) rather than binary. Framing the task properly would let the model optimize a better-suited objective during training.

Pitch

The function would take as arguments:

  • the model
  • the task type
  • head design options (dropout, batch normalization, concatenated pooling, etc.)

and return a torch.nn.Sequential object.
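The proposed helper could be sketched as follows. The name `create_head`, its parameters and defaults are illustrative assumptions, not an actual PyroNear API; the concatenated-pooling module mimics the fastai approach:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveConcatPool2d(nn.Module):
    """Concatenate adaptive average and max pooling (fastai-style)."""

    def forward(self, x):
        return torch.cat([F.adaptive_avg_pool2d(x, 1),
                          F.adaptive_max_pool2d(x, 1)], dim=1)


def create_head(in_features, task="binary", num_classes=2,
                dropout=0.5, bn=True, concat_pool=True):
    """Return a classification head as a torch.nn.Sequential.

    Hypothetical sketch: a binary task gets a single logit (to be used
    with BCEWithLogitsLoss), a categorical task gets one logit per
    class (to be used with CrossEntropyLoss).
    """
    # Concatenated pooling doubles the incoming feature dimension
    pool = AdaptiveConcatPool2d() if concat_pool else nn.AdaptiveAvgPool2d(1)
    nf = in_features * 2 if concat_pool else in_features
    out_features = 1 if task == "binary" else num_classes

    layers = [pool, nn.Flatten()]
    if bn:
        layers.append(nn.BatchNorm1d(nf))
    if dropout:
        layers.append(nn.Dropout(dropout))
    layers.append(nn.Linear(nf, out_features))
    return nn.Sequential(*layers)


# Example: a head for a 512-feature backbone (e.g. a ResNet-18 trunk)
head = create_head(512, task="binary")
out = head(torch.randn(4, 512, 7, 7))  # feature map -> (4, 1) logits
```

Since the head includes its own pooling, it would replace both the backbone's pooling and classifier layers (for a torchvision ResNet, `avgpool` and `fc`).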

@frgfm added the help wanted, module: models and ext: references labels on Oct 15, 2019
frgfm added a commit that referenced this issue Oct 16, 2019
* feat: Added fastai training script

The model's head still has to be switched to binary classification, and the ImageDataBunch needs to be built while respecting the existing train/validation split. See #3

* feat: Updated fastai training script

Fixed the split of train and valid set so that the ImageDataBunch takes the existing split. See #3

* feat: Added training script without fastai

See #3

* feat: Added deterministic mode to torch_train

See #3

* style: Added endfile lines

* fix: Fixed training script train/valid split

For users who did not have the validation split, training failed when instantiating OpenFire in non-training mode.

* refactor: Reorganized reference scripts

Separated fastai and non-fastai scripts to be able to have different requirements.txt for cleaner usage.

* docs: Updated readme

Added instructions for training using references scripts.

* refactor: Removed unused imports and variables

* fix: Fixed fastai training script

Missing pandas dependency

* fix: Fixed scheduler step on training scripts

* fix: Reflected script argument in scheduler

* refactor: Removed unused variable

* feat: Updated torch training script

Changed evaluation and logging to have similar metrics to fastai script. See #3

* chore: Updated version

Switched from alpha to beta.

* refactor: Harmonize default data path between scripts

* chore: Updated training script requirements

OneCycleLR is only available since torch 1.3.0.

* style: Renamed learner

* feat: Added device setting to fastai training

* refactor: Unified device resolution for training scripts

Closes #3 
See #19, #24
@frgfm self-assigned this on Oct 17, 2019
@frgfm added this to the 0.1.0 milestone on Oct 17, 2019
blenzi pushed a commit to blenzi/PyroNear that referenced this issue Oct 21, 2019
frgfm added a commit that referenced this issue Oct 22, 2019
frgfm added a commit that referenced this issue Oct 23, 2019
* fix: Fixed layer freezing

The pure PyTorch reference script had the model fully unfrozen by default, which was not comparable to the fastai one. See #3

* feat: Added options for model unfreezing

* style: Corrected whitespaces and indentation

* refactor: Switched to TTA

Test Time Augmentation is now similar to training augmentation.

* feat: Added resume option on references

* style: Clarified argument description

* fix: Moved saving message display

The saving message should only be displayed if the checkpoint is actually being created.

* docs: Updated docstrings

* feat: Added binary classification option

See #19

* style: Fixed pep8 compliance

* fix: Fixed argument description

* fix: Fixed training script saving message
frgfm commented Oct 23, 2019

Closed by #30 & #37

@frgfm closed this as completed on Oct 23, 2019