This repository has been archived by the owner on Mar 22, 2021. It is now read-only.

Experiment with random gradient #38

Closed
jakubczakon opened this issue Aug 29, 2018 · 0 comments

No description provided.

@jakubczakon jakubczakon self-assigned this Aug 29, 2018

kant added a commit to kant/open-solution-salt-identification that referenced this issue Sep 8, 2018

jakubczakon pushed a commit that referenced this issue Sep 8, 2018

jakubczakon added a commit that referenced this issue Oct 13, 2018:
* added image channel and params to config (#29)

* exping

* added large kernel matters architecture, renamed stuff, generalized c… (#30)

* added large kernel matters architecture, renamed stuff, generalized conv2drelubn block

* exping

* exping

* copied the old ConvBnRelu block to make sure it is easy to finetune old models

* reverted main

* Depth (#31)

* exping

* exping

* added depth loaders and depth_excitation layer, adjusted models and callbacks to deal with both

* fixed minor issues

* exping

* merged/refactored

* exping

* refactored architectures, moved use_depth param to main

* added dropout to lkm constructor, dropped my experiment dir definition

* Second level (#33)

* exping

* first stacked unet training

* fixed minor typo-bugs

* fixed unet naming bug

* added stacking preds exploration

* dropped redundant imports

* adjusted callbacks to work with stacking, added custom to_tensor_stacking

* Auxiliary data (#34)

* exping

* added option to use auxiliary masks

* Stacking (#35)

* exping

* exping

* fixed stacking postpro

* Stacking (#36)

* exping

* exping

* fixed stacking postpro

* exping

* added fully convo stacking, fixed minor issues with loader_mode: stacking

* Update architectures.py

import fix

* Update README.md

* Update models.py

reverted to default (current best) large kernel matters internal_channel_nr

* Stacking (#37)

Stacking

* Stacking depth (#38)

* exping

* added depth option to stacking model, dropped stacking unet from models

* Empty non empty (#39)

* exping

* added empty vs non empty loaders/models and execution

* changed the default loss from bce to lovasz (see the Lovász hinge sketch after this commit log)

* reverted default callbacks target name
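
One change in the log above swaps the default objective from BCE to the Lovász loss. For context, here is a minimal sketch of the binary Lovász hinge following Berman et al. (CVPR 2018); this mirrors the public reference implementation of that loss, not necessarily this repository's own loss module, and the function names are only for this sketch.

```python
import torch

def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension w.r.t. sorted hinge errors
    # (Berman et al., "The Lovasz-Softmax loss", CVPR 2018).
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1 - gt_sorted).float().cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard

def lovasz_hinge_flat(logits, labels):
    # logits: flattened raw scores [P]; labels: flattened binary masks [P].
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs                      # per-pixel hinge errors
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(torch.relu(errors_sorted), grad)  # Lovasz extension of the IoU
```

In practice the per-image logits and masks would be flattened before calling `lovasz_hinge_flat`; this pairs naturally with the empty-vs-non-empty commits above, which handle all-background images separately.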

jakubczakon added a commit that referenced this issue Oct 13, 2018:
* Update README.md

* Solution 3 simple (#50)

* initial ref

* solution 3 ported

* Update README.md

* Update README.md

* Update README.md

* dropped old paths

* Update README.md

* sections to ASCII art

* removed add_fold_id_suffix, log_scores, save_predictions

* Update README.md

* Update README.md

* Update neptune.yaml

* Update README.md

* Update README.md

* Update neptune.yaml

* Typo on #38 & #201 (#65)

* Update README.md

* added k-fold validation and averaging, added saving oof predictions, … (#73)

* added k-fold validation and averaging, added saving oof predictions, fixed pytorch memory issues, updated results exploration (see the k-fold/OOF sketch at the end of this log)

* updated utils

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Solution 5 (#79)

* moved solution 5

* Update neptune.yaml

updated configs

* Solution 5 (#80)

* moved solution 5

* Update neptune.yaml

updated configs

* Update README.md

* added FineTuningStep (#81)

* Solution 6 (#89)

* init solution 6

* Update README.md

* Update README.md

* Update README.md

* reversed sign of empty_vs_non_empty

* Augmentations (#41)

* Seresnet pretrained (#42)

* restructured archs, added seresnetxt and seresnet

* fixed imports

* added densenet 121 training (#43)

* notebook updated

* PSPNet (#44)

* added pool0, fixed import errors (#45)

* fixed conflicts in readme

* fixed conflicts in stacking

* detached metadata preparation from running script
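
The k-fold commit (#73) in the log above adds out-of-fold (OOF) prediction saving and fold averaging. A minimal sketch of that pattern is below, assuming a hypothetical `build_and_train` helper in place of the repository's actual pipeline steps; `kfold_oof_and_test` is likewise a name invented for this sketch.

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_oof_and_test(X_train, y_train, X_test, build_and_train,
                       n_splits=5, seed=1234):
    # Train one model per fold; keep its predictions on the held-out fold
    # (OOF, usable for validation and stacking) and average test predictions.
    oof = np.zeros(y_train.shape, dtype=np.float32)
    test_preds = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr_idx, val_idx in kf.split(X_train):
        model = build_and_train(X_train[tr_idx], y_train[tr_idx])  # hypothetical helper
        oof[val_idx] = model.predict(X_train[val_idx])             # out-of-fold predictions
        test_preds.append(model.predict(X_test))
    return oof, np.mean(test_preds, axis=0)                        # fold-averaged test masks
```

Saving `oof` to disk is what enables the stacking experiments referenced earlier in this log: a second-level model can be trained on out-of-fold predictions without leaking training labels.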