Opt into Trusty builds. #2214
Conversation
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Python 2.7.8 seems not to be supported by Travis on Trusty: the URL it tries to download, i.e. https://s3.amazonaws.com/travis-python-archives/binaries/ubuntu/14.04/x86_64/python-2.7.8.tar.bz2, does not exist.
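(For context: opting into Trusty on Travis is done with the `dist` key in .travis.yml. A minimal sketch of such an opt-in, with an illustrative version list rather than this PR's actual diff:)

```yaml
# Minimal sketch of a Trusty opt-in; versions shown are illustrative,
# not this PR's actual .travis.yml.
dist: trusty      # switch from the Precise (12.04) image to Trusty (14.04)
sudo: required    # early Trusty builds ran on sudo-enabled (non-container) infrastructure
language: python
python:
  - "2.7"         # point releases like 2.7.8 were never built for Trusty,
  - "3.5"         # so only versions with Trusty archives can stay in this list
```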
Yes. According to travis-ci/travis-ci#8153, they did not build this version of Python for Trusty. Is there a particular reason why this is the beginning of our version support?
It's the default Python version in 12.04, if I remember correctly.
Nope :)
But maybe there is an environment we care about where it is the default. I'm looking. UPDATE: I couldn't find anything definitive.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Default python2 for Ubuntu releases: https://launchpad.net/ubuntu/+source/python-defaults
Default python3 for Ubuntu releases: https://launchpad.net/ubuntu/+source/python3-defaults
* commit '8262920b72374b1d9643f35057663ab02ab20330': (272 commits)
  Add ATen overload to AutoGPU. (pytorch#2234)
  Add comments for default value (pytorch#2242)
  Remove dead THPP code that has been replaced with ATen objects. (pytorch#2235)
  fix a bug where an uninitialized at::Tensor was passed to createPyObject (pytorch#2239)
  Replace thpp::Tensor with ATen Tensor in autograd csrc (pytorch#2170)
  Added aarch64 support (pytorch#2226)
  Increase tol. for float tensor qr big test.
  Improve Variable.retain_grad
  add `retain_grad` method, to variable, so gradient gets stored during backpop, on non-user variables
  Implement BatchNorm double backwards (pytorch#2207)
  [bugfix] in bce_with_logits logsumexp calculation (pytorch#2221)
  fix for ATen API Change
  Opt into Trusty builds. (pytorch#2214)
  allow retain to be specified for unsafeTensorFromTH
  Deduplicate THPUtils_checkLong/THPUtils_unpackLong (pytorch#2218)
  fix osx build errors related to long/int64_t
  Note [Undefined-dim versus 0-dim]
  Remove __func__ hack in auto nn.
  Enable Conv groups gradgradchecks. (pytorch#2216)
  fix a bug where some scalars were getting truncated to integers incorrectly.
  ...
pytorch#2214)
* Avoid duplicated log when explicitly specified engine is not available
* Update operator.cc
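(The first bullet suggests emitting the fallback warning once instead of on every run. A minimal sketch of that idea, with hypothetical names rather than Caffe2's actual operator.cc code:)

```cpp
#include <iostream>
#include <mutex>
#include <set>
#include <string>
#include <utility>

// Hypothetical sketch: warn only once per (operator, engine) pair when an
// explicitly specified engine is unavailable, instead of logging every call.
void WarnEngineUnavailableOnce(const std::string& op, const std::string& engine) {
  static std::mutex mu;
  static std::set<std::pair<std::string, std::string>> warned;
  std::lock_guard<std::mutex> lock(mu);
  if (warned.emplace(op, engine).second) {  // true only on first insertion
    std::cerr << "Engine " << engine << " is not available for operator " << op
              << "; falling back to the default implementation." << std::endl;
  }
}
```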
The complex type defined by `defineComplexType` doesn't seem to work with nvcc. When compiling with nvcc, include <complex> instead of using those definitions, and disable some other parts that don't work with <complex>. This of course breaks complex type support, but as long as it isn't used, that shouldn't matter. Compilation with nvcc should only be for ad-hoc experiments with code modifications, so the limitation should be fine.
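(A minimal sketch of the guard described above; `__CUDACC__` is the macro nvcc defines, while the alias and surrounding layout are illustrative:)

```cpp
#ifdef __CUDACC__
  // Under nvcc, use the standard library's complex type and compile out
  // the hand-rolled definitions that nvcc chokes on.
  #include <complex>
  using complex_float = std::complex<float>;  // illustrative alias
#else
  // Elsewhere, the macro-generated type from defineComplexType(...) stays
  // in effect (its definition is omitted in this sketch).
#endif
```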
Signed-off-by: Edward Z. Yang <ezyang@fb.com>