Commit 6ab3489
Python: Bump torch from 2.7.0 to 2.7.1 in /python (#12424)
Bumps [torch](https://github.com/pytorch/pytorch) from 2.7.0 to 2.7.1.
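A quick way to confirm the upgrade took effect after reinstalling the dependencies in /python (a minimal sketch; the exact dependency file changed by this commit is not shown here):

```python
import torch

# After reinstalling, the installed torch should report the 2.7.1 patch release.
assert torch.__version__.startswith("2.7.1"), torch.__version__
```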
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytorch/pytorch/releases">torch's
releases</a>.</em></p>
<blockquote>
<h2>PyTorch 2.7.1 Release, bug fix release</h2>
<p>This release is meant to fix the following issues (regressions /
silent correctness):</p>
<h3>torch.compile</h3>
<p>Fix excessive cudagraph re-recording for HF LLM models (<a
href="https://redirect.github.com/pytorch/pytorch/pull/152287">#152287</a>)
Fix torch.compile on some HuggingFace models (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151154">#151154</a>)
Fix crash due to an exception raised inside torch.autocast (<a
href="https://redirect.github.com/pytorch/pytorch/pull/152503">#152503</a>)
Improve error logging in torch.compile (<a
href="https://redirect.github.com/pytorch/pytorch/pull/149831">#149831</a>)
Mark mutable custom operators as cacheable in torch.compile (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151194">#151194</a>)
Implement a workaround for a graph break with older versions of einops (<a
href="https://redirect.github.com/pytorch/pytorch/pull/153925">#153925</a>)
Fix an issue with tensor.view(dtype).copy_(...) (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151598">#151598</a>; see the sketch below)</p>
<h3>Flex Attention</h3>
<p>Fix assertion error due to inductor permuting inputs to flex
attention (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151959">#151959</a>)
Fix performance regression on nanogpt speedrun (<a
href="https://redirect.github.com/pytorch/pytorch/pull/152641">#152641</a>)</p>
<h3>Distributed</h3>
<p>Fix extra CUDA context created by barrier (<a
href="https://redirect.github.com/pytorch/pytorch/pull/149144">#149144</a>)
Fix an issue related to Distributed Fused Adam in ROCm/APEX when using
the nccl_ub feature (<a
href="https://redirect.github.com/pytorch/pytorch/pull/150010">#150010</a>)
Add a workaround for a random hang in non-blocking API mode in NCCL 2.26 (<a
href="https://redirect.github.com/pytorch/pytorch/pull/154055">#154055</a>)</p>
<h3>macOS</h3>
<p>Fix macOS compilation error with Clang 17 (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151344">#151316</a>)
Fix binary kernels producing incorrect results when one of the tensor
arguments is a wrapped scalar on MPS devices (<a
href="https://redirect.github.com/pytorch/pytorch/pull/152997">#152997</a>)</p>
<h3>Other</h3>
<p>Improve PyTorch wheel size after the introduction of 128-bit
vectorization (<a
href="https://redirect.github.com/pytorch/pytorch/pull/148320">#148320</a>)
(<a
href="https://redirect.github.com/pytorch/pytorch/pull/152396">#152396</a>)
Fix fmsub function definition (<a
href="https://redirect.github.com/pytorch/pytorch/pull/152075">#152075</a>)
Fix floating point exception in torch.mkldnn_max_pool2d (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151848">#151848</a>)
Fix abnormal inference output with the XPU:1 device (<a
href="https://redirect.github.com/pytorch/pytorch/pull/153067">#153067</a>)
Fix illegal instruction caused by grid_sample on Windows (<a
href="https://redirect.github.com/pytorch/pytorch/pull/152613">#152613</a>)
Fix ONNX decomposition not preserving custom
CompositeImplicitAutograd ops (<a
href="https://redirect.github.com/pytorch/pytorch/pull/151826">#151826</a>)
Fix an error with dynamic linking of the libgomp library (<a
href="https://redirect.github.com/pytorch/pytorch/pull/150084">#150084</a>)
Fix segfault in profiler with Python 3.13 (<a
href="https://redirect.github.com/pytorch/pytorch/pull/153848">#153848</a>)</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/pytorch/pytorch/commit/e2d141dbde55c2a4370fac5165b0561b6af4798b"><code>e2d141d</code></a>
set thread_work_size to 4 for unrolled kernel (<a
href="https://redirect.github.com/pytorch/pytorch/issues/154541">#154541</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/121419899b9955bbd41b14a207205a9b298ed3f0"><code>1214198</code></a>
[c10d] Fix extra CUDA context created by barrier (<a
href="https://redirect.github.com/pytorch/pytorch/issues/152834">#152834</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/790cc2f02c193b955a1225cea2f79b624a3013bc"><code>790cc2f</code></a>
[c10d] Add more tests to prevent extra context (<a
href="https://redirect.github.com/pytorch/pytorch/issues/154179">#154179</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/62ea99a94771dc547320583576b0bc10ded6a3ce"><code>62ea99a</code></a>
[CI] Remove the xpu env source for linux binary validate (<a
href="https://redirect.github.com/pytorch/pytorch/issues/154409">#154409</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/941732c8298c3a5f5e72b4f744c9d723b751967e"><code>941732c</code></a>
[ROCm] Added unit test to test the cuda_pluggable allocator (<a
href="https://redirect.github.com/pytorch/pytorch/issues/154135">#154135</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/769d5da70224ee032fce196b9400927f8b515540"><code>769d5da</code></a>
[binary builds] Linux aarch64 CUDA builds. Make sure tag is set
correctly (<a
href="https://redirect.github.com/pytorch/pytorch/issues/1">#1</a>...</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/306ba122bd4653875d450a94918227a180ebe5d4"><code>306ba12</code></a>
Fix uint view copy (<a
href="https://redirect.github.com/pytorch/pytorch/issues/151598">#151598</a>)
(<a
href="https://redirect.github.com/pytorch/pytorch/issues/154121">#154121</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/1ae99532808dac1b833df6bfa37fb21f6e17aa0f"><code>1ae9953</code></a>
[ROCm] Update CUDAPluggableAllocator.h (<a
href="https://redirect.github.com/pytorch/pytorch/issues/1984">#1984</a>)
(<a
href="https://redirect.github.com/pytorch/pytorch/issues/153974">#153974</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/4a815ed15aa99dee3527011cfdd11068dcb946ad"><code>4a815ed</code></a>
ci: Set minimum cmake version for halide build (<a
href="https://redirect.github.com/pytorch/pytorch/issues/154122">#154122</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/4c7314e78c3aef08ef9ac674ef708200fb10c35c"><code>4c7314e</code></a>
[Dynamo] Fix einops regression (<a
href="https://redirect.github.com/pytorch/pytorch/issues/154053">#154053</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pytorch/pytorch/compare/v2.7.0...v2.7.1">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
File tree: 1 file changed in python (1 addition, 1 deletion); the single-line change at line 97 updates the torch version pin from 2.7.0 to 2.7.1.