
Eager Torch #438

Merged
merged 112 commits into from Dec 9, 2019

Conversation

SergeyTsimfer
Member

This PR proposes to add an EagerTorch model that:

  • can build itself from batch_data during the first call of the train method

  • allows for better usage of native torch modules

  • does not use redundant tf-like methods (make_inputs, has_classes, etc.)
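The build-on-first-call idea can be sketched roughly as follows. This is a minimal illustration of deferring network construction until the first train call, not the PR's actual implementation; the class name LazyModel, the config keys, and the architecture are hypothetical:

```python
import torch
from torch import nn


class LazyModel:
    """Hypothetical sketch: defer building the network until the first
    train() call, inferring the input shape from the batch itself."""

    def __init__(self, config):
        self.config = config
        self.model = None
        self.optimizer = None

    def build(self, inputs):
        # Infer the number of input features from the actual batch data
        in_features = inputs.shape[-1]
        self.model = nn.Sequential(
            nn.Linear(in_features, self.config['hidden']),
            nn.ReLU(),
            nn.Linear(self.config['hidden'], self.config['classes']),
        )
        self.optimizer = torch.optim.Adam(self.model.parameters())

    def train(self, inputs, targets):
        if self.model is None:
            # First call: build the network from batch_data
            self.build(inputs)
        self.optimizer.zero_grad()
        loss = nn.functional.cross_entropy(self.model(inputs), targets)
        loss.backward()
        self.optimizer.step()
        return loss.item()
```

Because the network is an ordinary nn.Sequential built at train time, native torch modules can be dropped in directly, with no tf-style placeholder plumbing.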

@review-notebook-app

Check out this pull request on ReviewNB.

You'll be able to see Jupyter notebook diff and discuss changes. Powered by ReviewNB.

@SergeyTsimfer
Member Author

@analysiscenter/batchflow Currently, the main subject of the review should be the EagerTorch class itself.

Current TODO:

  • Rewrite all the layers from TorchModel
  • Add features like microbatch, async training, multi-device support, etc.
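The microbatch item on the TODO list is commonly implemented as gradient accumulation: split one large batch into chunks, accumulate scaled gradients, and take a single optimizer step so the update matches full-batch training under a tight memory budget. A hedged sketch (the helper name train_microbatched is hypothetical, not the PR's actual code):

```python
import torch
from torch import nn


def train_microbatched(model, optimizer, inputs, targets, micro_size):
    """Hypothetical sketch of microbatching via gradient accumulation:
    process the batch in chunks of `micro_size`, accumulate gradients,
    then apply a single optimizer step."""
    optimizer.zero_grad()
    n = inputs.shape[0]
    total_loss = 0.0
    for start in range(0, n, micro_size):
        x = inputs[start:start + micro_size]
        y = targets[start:start + micro_size]
        loss = nn.functional.cross_entropy(model(x), y)
        # Scale each chunk's gradient by its share of the full batch,
        # so the accumulated gradient equals the full-batch gradient
        (loss * (x.shape[0] / n)).backward()
        total_loss += loss.item() * x.shape[0]
    optimizer.step()
    return total_loss / n
```

Only peak memory changes; the parameter update is (up to floating-point error) the same as training on the whole batch at once.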

@codecov

codecov bot commented Oct 24, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@79ead40).
The diff coverage is 0%.


@@            Coverage Diff            @@
##             master     #438   +/-   ##
=========================================
  Coverage          ?   35.92%           
=========================================
  Files             ?      149           
  Lines             ?    14459           
  Branches          ?        0           
=========================================
  Hits              ?     5195           
  Misses            ?     9264           
  Partials          ?        0
Impacted Files Coverage Δ
batchflow/models/eager_torch/losses/core.py 0% <0%> (ø)
batchflow/models/eager_torch/unet.py 0% <0%> (ø)
batchflow/models/eager_torch/encoder_decoder.py 0% <0%> (ø)
batchflow/models/eager_torch/resnet.py 0% <0%> (ø)
batchflow/models/eager_torch/layers/__init__.py 0% <0%> (ø)
batchflow/models/eager_torch/losses/__init__.py 0% <0%> (ø)
batchflow/models/eager_torch/layers/conv.py 0% <0%> (ø)
batchflow/models/eager_torch/layers/pooling.py 0% <0%> (ø)
batchflow/models/eager_torch/base.py 0% <0%> (ø)
batchflow/models/eager_torch/utils.py 0% <0%> (ø)
... and 5 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 79ead40...61bed69.

@SergeyTsimfer
Member Author

Implemented #366

Something weird is happening with the padding...

Two review threads on batchflow/models/eager_torch/blocks.py (outdated, resolved).
7 participants