
Upgrade to PyTorch 1.0 #117

Open
meetps opened this issue Aug 16, 2018 · 2 comments
@meetps
Owner

meetps commented Aug 16, 2018

Planned updates

  • Upgrade to pytorch-1.0
  • Support custom loss functions per model
  • Pretrained models on S3 for common models-dataset pairs
  • Compatibility matrix for dataset with models
  • RefineNet and E-Net implementations
  • MS-COCO, Mapillary Vistas and BDD-100k datasets
  • Interface to combine datasets with a user-given mapping dictionary
  • Improved metric logging
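As a rough illustration of the "user-given mapping dictionary" item above, here is a minimal sketch of how class IDs from different datasets could be remapped into one shared label space. All names and IDs here are illustrative assumptions, not the repo's actual API.

```python
# Hypothetical sketch of combining datasets via a user-given mapping
# dictionary: remap each dataset's raw class IDs onto merged IDs.
# Names and IDs are illustrative, not pytorch-semseg's real interface.

def remap_labels(label_grid, mapping, ignore_index=250):
    """Map raw per-pixel class IDs to merged IDs; unmapped IDs -> ignore_index."""
    return [[mapping.get(v, ignore_index) for v in row] for row in label_grid]

# e.g. map this dataset's raw "road" ID (7) onto merged ID 0,
# "sidewalk" (8) onto merged ID 1; everything else is ignored.
mapping = {7: 0, 8: 1}
raw = [[7, 8], [99, 7]]
print(remap_labels(raw, mapping))  # [[0, 1], [250, 0]]
```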
@meetps meetps self-assigned this Aug 16, 2018
@kcyu2014

Hi,
I recently tested the current build on PyTorch 0.4.1, and it encounters a CUDA out-of-memory error.
There is no such problem on 0.4.0 for the time being.
Hope you can also look into this during the upgrade :)

kc

@Spritea

Spritea commented Jan 6, 2019

Hi guys,

I found a simple way to use this code with PyTorch 1.0 or 0.4.1. The error is caused by the functools call used in fcn.py, so just comment out the related line below and it works.

#self.loss = functools.partial(cross_entropy2d, size_average=False)

Besides, other models that don't use functools, like segnet, work correctly in PyTorch 1.0/0.4.1 without any modification.
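An alternative to commenting the line out: `size_average=False` was deprecated after PyTorch 0.4.0, so the partial can be replaced with a loss that passes `reduction="sum"` (the equivalent behaviour) directly. A minimal sketch, assuming the repo's cross_entropy2d is a thin wrapper over F.cross_entropy; the exact shape handling here is illustrative:

```python
import torch
import torch.nn.functional as F

def cross_entropy2d(input, target):
    # Stands in for functools.partial(cross_entropy2d, size_average=False):
    # reduction="sum" replaces the deprecated size_average=False argument.
    n, c, h, w = input.size()
    input = input.permute(0, 2, 3, 1).reshape(-1, c)  # (N*H*W, C) scores
    target = target.reshape(-1)                        # (N*H*W,) class IDs
    return F.cross_entropy(input, target, reduction="sum")

logits = torch.randn(2, 3, 4, 4)          # (N, C, H, W) score map
labels = torch.randint(0, 3, (2, 4, 4))   # (N, H, W) class indices
loss = cross_entropy2d(logits, labels)
```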

See related #173

@meetps meetps pinned this issue Jan 6, 2019
@meetps meetps changed the title Upgrade to PyTorch 0.4.1 Upgrade to PyTorch 1.0 Jan 6, 2019
3 participants