This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

Update torch requirement from <1.7.0,>=1.6.0 to >=1.6.0,<1.8.0 #4753

Merged 2 commits into master on Oct 27, 2020

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Oct 27, 2020

Updates the requirements on torch to permit the latest version.
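The effect of widening the upper bound from `<1.7.0` to `<1.8.0` can be illustrated with a small version check (a minimal sketch with a hypothetical `permits` helper, not part of this PR; real tools would use PEP 440 specifier parsing):

```python
def permits(version, lower, upper):
    """Return True if lower <= version < upper, comparing dotted versions numerically."""
    as_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return as_tuple(lower) <= as_tuple(version) < as_tuple(upper)

# The old range (>=1.6.0,<1.7.0) excludes torch 1.7.0; the new range permits it.
assert not permits("1.7.0", "1.6.0", "1.7.0")
assert permits("1.7.0", "1.6.0", "1.8.0")
assert not permits("1.8.0", "1.6.0", "1.8.0")  # 1.8.0 itself is still excluded
```

Note the two specifier strings in the PR title are equivalent; Dependabot merely normalizes `<1.7.0,>=1.6.0` to the conventional `>=1.6.0,<1.8.0` ordering while raising the cap.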

Release notes

Sourced from torch's releases.

PyTorch 1.7 released w/ CUDA 11, New APIs for FFTs, Windows support for Distributed training and more

PyTorch 1.7.0 Release Notes

  • Highlights
  • Backwards Incompatible Changes
  • New Features
  • Improvements
  • Performance
  • Documentation

Highlights

The PyTorch 1.7 release includes a number of new APIs including support for NumPy-Compatible FFT operations, profiling tools and major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. In addition, several features moved to stable including custom C++ Classes, the memory profiler, the creation of custom tensor-like objects, user async functions in RPC and a number of other features in torch.distributed such as Per-RPC timeout, DDP dynamic bucketing and RRef helper.

A few of the highlights include:

  • CUDA 11 is now officially supported with binaries available at PyTorch.org
  • Updates and additions to profiling and performance tooling for RPC, TorchScript, and stack traces in the autograd profiler
  • (Beta) Support for NumPy compatible Fast Fourier transforms (FFT) via torch.fft
  • (Prototype) Support for Nvidia A100 generation GPUs and native TF32 format
  • (Prototype) Distributed training on Windows now supported

To reiterate, starting with PyTorch 1.6, features are classified as stable, beta, and prototype. You can see the detailed announcement here. Note that the prototype features listed in this blog are available as part of this release.

Front End APIs

[Beta] NumPy Compatible torch.fft module

FFT-related functionality is commonly used in a variety of scientific fields like signal processing. While PyTorch has historically supported a few FFT-related functions, the 1.7 release adds a new torch.fft module that implements FFT-related functions with the same API as NumPy.

This new module must be imported to be used in the 1.7 release, since its name conflicts with the historic (and now deprecated) torch.fft function.

Example usage:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
>>> t = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])
>>> torch.fft.fft(t)
tensor([12.+16.j, -8.+0.j, -4.-4.j,  0.-8.j])
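Since torch.fft matches NumPy's FFT API, the first example above can be reproduced in NumPy directly (a sketch assuming only numpy is installed), including the round trip through the inverse transform:

```python
import numpy as np

# Same input as the torch.fft example: the integer signal [0, 1, 2, 3].
t = np.arange(4)
spectrum = np.fft.fft(t)

# The spectrum matches torch.fft.fft's output for the same input.
assert np.allclose(spectrum, [6, -2 + 2j, -2, -2 - 2j])

# The inverse FFT recovers the original signal.
assert np.allclose(np.fft.ifft(spectrum), t)
```

Because the APIs mirror each other, NumPy-based FFT code can generally be ported to torch.fft by swapping `np.fft` calls for `torch.fft` calls on tensors.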

  • Documentation | Link

... (truncated)

Commits
  • e85d494 make valgrind_toggle and valgrind_supported_platform private (#46718)
  • a6e96b1 Avoid leaking has_torch_function and handle_torch_function in torch namespace...
  • f9df694 Make add_relu an internal function (#46676) (#46765)
  • 6394982 gate load_library tests behind BUILD_TEST=1 (#46556)
  • 56166c1 properly handle getGraphExecutorOptimize to not leak memory due to (#46621)
  • 3957268 [hotfix] remove test.pypi.org (#46492) (#46591)
  • eed8d7e Cherry-picks for TE fixes for aten::cat. (#46513)
  • 33c1763 [v1.7] Fix backward compatibility test by moving dates forward.
  • cf77b08 [JIT] Improve class type annotation inference (#46422)
  • 9aecf70 [v1.7] Quick fix for view/inplace issue with DDP (#46407)
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually

Updates the requirements on [torch](https://github.com/pytorch/pytorch) to permit the latest version.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Commits](pytorch/pytorch@v1.6.0...v1.7.0)

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Oct 27, 2020
@epwalsh epwalsh self-requested a review October 27, 2020 19:56
@epwalsh epwalsh self-assigned this Oct 27, 2020
@epwalsh epwalsh merged commit 5d6670c into master Oct 27, 2020
@epwalsh epwalsh deleted the dependabot/pip/torch-gte-1.6.0-and-lt-1.8.0 branch October 27, 2020 20:27