
Dev the return of the stream mode #137

Merged
merged 1 commit into neptune-ai:dev-repo_cleanup on Jun 15, 2018

Conversation

taraspiotr (Contributor)

No description provided.

@@ -4,8 +4,8 @@
 from . import loaders
 from .steps.base import Step, Dummy
 from .steps.preprocessing.misc import XYSplit
-from .utils import squeeze_inputs, make_apply_transformer
 from .models import PyTorchUNet, PyTorchUNetWeighted
+from .utils import squeeze_inputs, make_apply_transformer_stream
jakubczakon (Collaborator)

@taraspiotr it seems that the returning stream mode took over the old make_apply_transformer completely. I think re-introducing the non-stream make_apply_transformer could make things easier. Let's see if they can coexist.

What I really mean is that I would rather have make_apply_transformer and make_apply_transformer_stream as two functions, to make the distinction more vivid. I must not have communicated that before, sorry.

taraspiotr (Contributor, author)

@jakubczakon you have these two functions in utils.py; make_apply_transformer_stream wraps make_apply_transformer, adding a stream mode option. I think it's cleaner, because you don't have to pass the same arguments twice in pipelines.py for each step. For example, this would have to be typed twice:

make_apply_transformer_stream(post.resize_image,
                              output_name='resized_images',
                              apply_on=['images', 'target_sizes'])
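
For illustration, a minimal sketch of the duplication taraspiotr wants to avoid, assuming a hypothetical stream_mode flag read in pipelines.py; only the call above comes from this PR:

# Hypothetical pipelines.py fragment: with two separate factory
# functions, each step's arguments get spelled out once per branch.
if stream_mode:
    transformer = make_apply_transformer_stream(post.resize_image,
                                                output_name='resized_images',
                                                apply_on=['images', 'target_sizes'])
else:
    transformer = make_apply_transformer(post.resize_image,
                                         output_name='resized_images',
                                         apply_on=['images', 'target_sizes'])

With the flag, the same step collapses back to the single call shown above.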

jakubczakon (Collaborator)

@taraspiotr oh OK, so the only remaining problem is that we are using only make_apply_transformer_stream in pipelines.py, whereas most of the time we could use the simple make_apply_transformer. Do you strongly think that using a flag is better?

jakubczakon (Collaborator)

@taraspiotr you convinced me with the double-arguments point. Let's stay with the flag.
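
A minimal sketch of the design the thread settles on, with make_apply_transformer_stream wrapping make_apply_transformer behind a flag; the internals and the stream_mode parameter name are assumptions, only the two function names and the call site above come from this PR:

# utils.py sketch (assumed internals): the stream variant wraps the
# plain one, so each step states its arguments only once.
def make_apply_transformer(func, output_name='output', apply_on=None):
    # Non-stream variant: apply func element-wise, collect into a list.
    def transform(**kwargs):
        columns = [kwargs[name] for name in (apply_on or sorted(kwargs))]
        return {output_name: [func(*row) for row in zip(*columns)]}
    return transform

def make_apply_transformer_stream(func, output_name='output',
                                  apply_on=None, stream_mode=True):
    # Wraps make_apply_transformer, adding a lazy (generator) stream
    # mode; 'stream_mode' is an assumed flag name.
    if not stream_mode:
        return make_apply_transformer(func, output_name=output_name,
                                      apply_on=apply_on)
    def transform(**kwargs):
        columns = [kwargs[name] for name in (apply_on or sorted(kwargs))]
        return {output_name: (func(*row) for row in zip(*columns))}
    return transform

Called as in the snippet above, flipping a single stream_mode=False would fall back to the non-stream behaviour without retyping output_name or apply_on.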

jakubczakon (Collaborator)

@taraspiotr there are some conflicts now after merging.

@jakubczakon jakubczakon merged commit 1786012 into neptune-ai:dev-repo_cleanup Jun 15, 2018
jakubczakon added a commit that referenced this pull request Jun 15, 2018
* initial restructure

* clean structure (#126)

* clean structure

* correct readme

* further cleaning

* Dev apply transformer (#131)

* clean structure

* correct readme

* further cleaning

* resizer docstring

* couple docstrings

* make apply transformer, memory cache

* fixes

* postprocessing docstrings

* fixes in PR

* Dev repo cleanup (#132)

* cleanup

* remove src.

* Dev clean tta (#134)

* added resize padding, refactored inference pipelines

* refactored piepliens

* added color shift augmentation

* reduced caching to just mask_resize

* updated config

* Dev-repo_cleanup models and losses docstrings (#135)

* models and losses docstrings

* small fixes in docstrings

* resolve conflicts in with TTA PR (#137)
jakubczakon added a commit that referenced this pull request Jun 19, 2018
* added gmean tta, experimented with thresholding (#125)

* Dev repo cleanup (#138)

* refactor in stream mode (#139)

* hot fix of mask_postprocessing in tta with new make transformer

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* local

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Dev preparation path fix (#140)

* local

* cleaned up paths in the masks and metadata generation

* dropped debug stuff

* Dev non trainable transformer flag (#141)

* local

* added is_trainable flag to models
jakubczakon pushed a commit that referenced this pull request Jun 21, 2018
* initial restructure

* thresholds on unet output

* added gmean tta, experimented with thresholding (#125)

* feature exractor and lightgbm

* pipeline is running ok

* tmp commit

* lgbm ready for tests

* tmp

* faster nms and feature extraction

* small fix

* cleaning

* Dev repo cleanup (#138)

* finishing merge

* finishing merge v2

* finishing merge v3

* finishing merge v4

* tmp commit

* lgbm train and evaluate pipelines run correctly

* something is not yes

* fix

* working lgbm training with ugly train_mode=True

* back to pipelines.py

* small fix

* preparing PR

* preparing PR v2

* preparing PR v2

* fix

* fix_2

* fix_3

* fix_4