fp16/half precision support #79

Closed
lukasc-ch opened this Issue Dec 22, 2015 · 10 comments

@lukasc-ch
Contributor

lukasc-ch commented Dec 22, 2015

What is the current status of half-precision support? With the Tegra X1 out, this is becoming more interesting. I have seen that @soumith committed some initial work in 27e969b. Is anyone working on this? I would generally be interested in contributing. Are there any thoughts on how this should best be done, given that CudaTensor currently supports single precision exclusively?
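
For illustration, a minimal sketch of the kind of half-precision usage being asked about, assuming a dedicated half tensor type named torch.CudaHalfTensor alongside the existing torch.CudaTensor; the type name and the conversion calls are assumptions, not something this thread has settled at this point:

```lua
-- Minimal sketch only: torch.CudaHalfTensor is an assumed name for a
-- half-precision CUDA tensor type next to the single-precision torch.CudaTensor.
require 'cutorch'

local a = torch.CudaTensor(4, 4):uniform()   -- fp32, the only CUDA type today

local h = a:type('torch.CudaHalfTensor')     -- convert fp32 -> fp16 (assumed type)
local back = h:float()                       -- convert back to fp32 on the CPU

print(torch.type(h))                         -- 'torch.CudaHalfTensor'
```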

@soumith

Owner

soumith commented Dec 22, 2015

As soon as torch/cutorch#225 is merged, we can enable half precision. Should be merged this week.

@Darktex

Darktex commented Mar 3, 2016

Hi soumith, any progress on this?

@jcjohnson

jcjohnson commented Mar 19, 2016

Any idea when we will see FP16 support? torch/cutorch#225 seems to have been merged a while back.

@jorditg

jorditg commented Apr 11, 2016

+1

@lukasc-ch

Contributor

lukasc-ch commented Apr 19, 2016

FYI, stuff is moving: PR #170, and szagoruyko/cudnn.torch/fp16

@victorhcm

victorhcm commented May 13, 2016

That'd be great, especially if the GTX 1080 really does support double-rate half precision.

@FuriouslyCurious

FuriouslyCurious commented Jun 28, 2016

Is FP16 still under development? What is the ETA? Thanks!

@szagoruyko

Collaborator

szagoruyko commented Jun 28, 2016

@campbx there are two pull requests in progress: #206 and torch/cutorch#431. fp16 is working and tested, and will hopefully be merged soon after some small modifications.

@jorditg

jorditg commented Jul 14, 2016

Hi all, I've seen that #206 and torch/cutorch#431 are ready.

When is fp16 expected to be merged?

@soumith

Owner

soumith commented Jul 29, 2016

They've been merged now. Thanks everyone, and especially @szagoruyko
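
For anyone trying the merged support, a minimal usage sketch, assuming the new half type is exposed as torch.CudaHalfTensor and that nn modules convert with the standard :type() call; whether half is actually available also depends on the CUDA version and GPU the build targets:

```lua
-- Minimal sketch, assuming the merged fp16 support exposes torch.CudaHalfTensor
-- and that nn modules convert via the standard :type() call.
require 'cunn'

local net = nn.Sequential()
  :add(nn.Linear(1024, 512))
  :add(nn.ReLU())
  :add(nn.Linear(512, 10))

net:type('torch.CudaHalfTensor')                    -- parameters to fp16

local input = torch.randn(32, 1024):type('torch.CudaHalfTensor')
local output = net:forward(input)                   -- fp16 forward pass

print(torch.type(output))                           -- 'torch.CudaHalfTensor'
```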
