Incremental Torch improvements #457

Closed
SergeyTsimfer opened this issue Dec 10, 2019 · 6 comments

SergeyTsimfer commented Dec 10, 2019

  • refactor BasePool: it can be split into multiple classes with clear functionality (done in pooling refactoring #469)

  • improve Encoder/Decoder modules

  • refactor n_iters and decay configurations: no need to pass n_iters in the root configuration (done)

  • make sure every block is sent to the device: this can be helpful with pre-trained models (done in Torch improvements #461; see the sketch after this list)

  • refactor pyramid layers so that they use a common base

  • make Xception out of XceptionBlocks
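
For the device item, a minimal sketch of what "every block is sent to the device" could look like; `put_on_device` is a hypothetical helper written for illustration, not the actual TorchModel code:

```python
import torch
from torch import nn

def put_on_device(model: nn.Module, device: str = 'cuda:0') -> nn.Module:
    """Move every child block (including ones loaded from a pre-trained model) to one device."""
    device = torch.device(device if torch.cuda.is_available() else 'cpu')
    for _, block in model.named_children():
        block.to(device)   # .to() is recursive, so nested submodules follow along
    return model
```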

nikita-klsh commented Dec 10, 2019

  • not specifying layout in the initial_block / body / head of the TorchModel config results in the module being skipped entirely (done in Torch improvements #461)
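
To make the reported behaviour concrete, a hedged config sketch; the initial_block / body / head and layout keys come from the comment, while the layout strings and other values are only illustrative:

```python
config = {
    'initial_block': {'layout': 'cna', 'filters': 16},   # built as usual
    'body': {'filters': 32},                              # no 'layout' -> the whole module is skipped
    'head': {'layout': 'Vf', 'units': 10},
}
```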

SergeyTsimfer commented Dec 10, 2019

  • add some warnings if the task is classification and classes are not set in the config: in this case, if the model is built from the first batch of data, the classes can be inferred incorrectly (understandably so)
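
A minimal sketch of such a warning; the check itself and the config keys ('loss', 'classes') are assumptions about where this could live, not the actual TorchModel code:

```python
import warnings

def warn_if_classes_missing(config: dict):
    """Warn when a classification model is built without an explicit number of classes."""
    if config.get('loss') == 'ce' and config.get('classes') is None:
        warnings.warn("'classes' is not set in the config; it will be inferred from the first "
                      "batch of data, which can be wrong if that batch misses some classes.")
```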

SergeyTsimfer commented Dec 10, 2019

nikita-klsh commented Dec 16, 2019

  • kwargs for combine passed through the config in Decoder do not reach the Combine module inside (done in Torch improvements #461)
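
For reference, the kind of config this refers to; everything except the decoder/combine nesting is illustrative:

```python
config = {
    'body/decoder': {
        'num_stages': 4,
        # these kwargs were silently lost on their way to the Combine module
        'combine': {'op': 'concat'},
    },
}
```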

SergeyTsimfer commented

At the time of implementation, we decided to keep the argument names of PyTorch layers the same as their TensorFlow counterparts: for example, dilation_rate instead of dilation, and strides instead of stride. I think it is time to fix that and either change the names to the PyTorch convention or make the methods recognize those parameters as aliases.
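
A sketch of the alias option; `normalize_kwargs` is a hypothetical helper, and only the two pairs mentioned above are mapped:

```python
# TF-style name -> PyTorch-style name
ALIASES = {'strides': 'stride', 'dilation_rate': 'dilation'}

def normalize_kwargs(kwargs: dict) -> dict:
    """Accept either naming convention and pass PyTorch-style names on to the layer."""
    return {ALIASES.get(name, name): value for name, value in kwargs.items()}

# normalize_kwargs({'strides': 2, 'dilation_rate': 3}) == {'stride': 2, 'dilation': 3}
```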

SergeyTsimfer commented

Also, it looks like we should change the layout letter for upsampling to U with a mode parameter and, maybe, P for pooling operations; otherwise, it is confusing that nn interpolation also uses the n letter (same as the normalization layer).
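
To illustrate the ambiguity and the proposed change (the letters and keys here are illustrative, not a final convention):

```python
# current: 'n' would have to mean both normalization and nn interpolation,
# so a layout like 'cnan' is ambiguous without extra context
old = {'layout': 'cnan'}

# proposed: a dedicated 'U' letter with a mode parameter (and, maybe, 'P' for pooling)
new = {'layout': 'cnaU', 'upsample': {'mode': 'nearest'}}
```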
