This repository has been archived by the owner on Mar 22, 2021. It is now read-only.

Train empty/not empty model #35

Closed
jakubczakon opened this issue Aug 24, 2018 · 0 comments
@jakubczakon (Contributor)
Could be useful for postprocessing.
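The idea here, an empty/not-empty image classifier used for postprocessing a segmentation model's output, could look like the following minimal numpy sketch. The function name, array shapes, and threshold are hypothetical illustrations, not taken from this repo:

```python
import numpy as np

def apply_empty_filter(masks, p_empty, threshold=0.5):
    """Zero out predicted segmentation masks for images the
    empty/not-empty classifier considers empty.

    masks   : (N, H, W) array of predicted binary masks
    p_empty : (N,) array of per-image "is empty" probabilities
    """
    out = masks.copy()
    # suppress every mask whose image is classified as empty
    out[p_empty > threshold] = 0
    return out
```

This kind of filter often removes false-positive blobs on images that contain no target at all, which a pixel-level model alone can struggle with.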

@jakubczakon jakubczakon self-assigned this Sep 24, 2018
@jakubczakon jakubczakon added this to models in kaggle-competition Sep 24, 2018
@jakubczakon jakubczakon moved this from models to done in kaggle-competition Sep 24, 2018
jakubczakon added a commit that referenced this issue Oct 13, 2018
* added image channel and params to config (#29)

* exping

* added large kernel matters architecture, renamed stuff, generalized c… (#30)

* added large kernel matters architecture, renamed stuff, generalized conv2drelubn block

* exping

* exping

* copied the old ConvBnRelu block to make sure it is easy to finetune old models

* reverted main

* Depth (#31)

* exping

* exping

* added depth loaders, and depth_excitation layer, adjusted models and callbacks to deal with both

* fixed minor issues

* exping

* merged/refactored

* exping

* refactored architectures, moved use_depth param to main

* added dropout to lkm constructor, dropped my experiment dir definition

* Second level (#33)

* exping

* first stacked unet training

* fixed minor typo-bugs

* fixed unet naming bug

* added stacking preds exploration

* dropped redundant imports

* adjusted callbacks to work with stacking, added custom to_tensor_stacking

* Auxiliary data (#34)

* exping

* added option to use auxiliary masks

* Stacking (#35)

* exping

* exping

* fixed stacking postpro

* Stacking (#36)

* exping

* exping

* fixed stacking postpro

* exping

* added fully convo stacking, fixed minor issues with loader_mode: stacking

* Update architectures.py

import fix

* Update README.md

* Update models.py

reverted to default (current best) large kernel matters internal_channel_nr

* Stacking (#37)

Stacking

* Stacking depth (#38)

* exping

* added depth option to stacking model, dropped stacking unet from models

* Empty non empty (#39)

* exping

* added empty vs non empty loaders/models and execution

* changed to lovasz loss as default from bce

* reverted default callbacks target name
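The "changed to lovasz loss as default from bce" bullet above refers to the Lovász hinge, a surrogate for the Jaccard (IoU) metric. A minimal numpy sketch of the binary variant, written independently of this repo's actual (PyTorch) implementation, is:

```python
import numpy as np

def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension of the Jaccard loss,
    # given ground-truth labels sorted by decreasing error.
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    # Binary Lovasz hinge: labels in {0, 1}, logits unbounded.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs          # hinge errors
    perm = np.argsort(-errors)             # sort by decreasing error
    errors_sorted = errors[perm]
    gt_sorted = labels[perm]
    grad = lovasz_grad(gt_sorted)
    # weighted sum of thresholded errors
    return np.dot(np.maximum(errors_sorted, 0.0), grad)
```

Unlike BCE, which averages per-pixel errors, this loss directly optimizes a piecewise-linear surrogate of IoU, which is why it is a popular default for segmentation competitions scored on that metric.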