
Fixes #3855 (PR #3867)

Open

jvz37 wants to merge 3 commits into pytorch:main from jvz37:fix/3855-deprecated-inplace-ops

Conversation


@jvz37 jvz37 commented May 5, 2026

Fixes #3855

Description

  • Replace torch.sin_(b) with b.sin_() in tensors_deeper_tutorial.py
  • Replace torch.sin_(a) with a.sin_() in autogradyt_tutorial.py
  • Replace torch.linalg.matmul() with torch.matmul() in tensors_deeper_tutorial.py
  • Update editorial text around in-place ops in tensors_deeper_tutorial.py
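
For illustration, a minimal sketch of the first two replacements (variable names are hypothetical, not taken from the tutorials): the module-level spelling is deprecated, while the tensor-method spelling mutates the tensor in place and returns that same tensor.

```python
import torch

b = torch.rand(2, 2)

# Old, deprecated module-level spelling (what the PR removes):
#   torch.sin_(b)

# Tensor-method spelling: mutates b in place and returns the same tensor.
out = b.sin_()
assert out is b
```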

Checklist

  • The issue being fixed is referenced in the description (see "Fixes #ISSUE_NUMBER" above)
  • Only one issue is addressed in this pull request
  • Labels from the issue this PR fixes are added to this pull request
  • No unnecessary issues are included in this pull request

cc @subramen


pytorch-bot Bot commented May 5, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3867

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.


meta-cla Bot commented May 5, 2026

Hi @jvz37!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!


meta-cla Bot commented May 5, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!


@meta-cla meta-cla Bot added the cla signed label May 5, 2026
@sekyondaMeta
Contributor

@claude review and compare to work in issue #3855


claude Bot commented May 5, 2026

Claude finished @sekyondaMeta's task in 3m 59s.


Review of PR #3867 vs Issue #3855 Requirements


Issue #3855 Alignment

Based on the PR description (which references issue #3855), the required fixes are:

| Requirement | Status | Details |
| --- | --- | --- |
| Replace torch.sin_(b) with b.sin_() in tensors_deeper_tutorial.py | ✅ Done | Line 489 |
| Replace torch.sin_(a) with a.sin_() in autogradyt_tutorial.py | ✅ Done | Line 470 |
| Replace torch.linalg.matmul() with torch.matmul() in tensors_deeper_tutorial.py | ✅ Done | Line 452 |
| Update editorial text around in-place ops in tensors_deeper_tutorial.py | ✅ Done | Lines 513-517 |

All core requirements from the issue are addressed correctly.


Code Review Findings

Substantive changes (correct):

  1. torch.sin_(b) → b.sin_() fixes the deprecated module-level in-place op ✅
  2. torch.sin_(a) → a.sin_() in the autograd tutorial (commented code example) ✅
  3. torch.linalg.matmul(m1, m2) → torch.matmul(m1, m2); torch.linalg.matmul is an undocumented alias, and torch.matmul is the canonical API ✅
  4. Updated editorial text to correctly explain that in-place ops (including trig) are tensor methods ✅
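
As a quick check on item 3, torch.matmul is the documented entry point for matrix products and agrees with the @ operator (the shapes here are illustrative, not from the tutorial):

```python
import torch

m1 = torch.rand(3, 4)
m2 = torch.rand(4, 5)

# Canonical, documented API for matrix products:
out = torch.matmul(m1, m2)

# Equivalent operator form produces the same result:
assert torch.equal(out, m1 @ m2)
assert out.shape == (3, 5)
```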

Minor issue — Sphinx Gallery formatting (non-blocking):

At tensors_deeper_tutorial.py lines 517-519, a bare blank line now separates two comment blocks:

    # module-level functions.
                              ← line 518: blank (no leading #)
    #
    # There is another option for placing the result of a computation in an

On main, this was a continuous comment block, with a bare # (empty comment line) keeping the RST text together as one section. The PR introduces a bare blank line, which Sphinx Gallery treats as a cell boundary, splitting the text into two separately rendered text cells. This won't break the build, but it creates a minor visual split in the rendered tutorial. Consider changing line 518 to # to keep it as one text block.


Additional changes (cosmetic):

The PR includes a large number of whitespace and style changes (275 additions, 259 deletions across 2 files), but the vast majority are:

  • Trailing whitespace removal (e.g., "# " → "#")
  • Quote style changes ('...' → "...")
  • Comment alignment reformatting
  • f-string modernization
  • Blank line cleanup

These are all harmless and pass lintrunner, but they do make the diff harder to review. Future PRs might benefit from separating functional fixes from style cleanup.


CI Status

All 5 CI checks pass: ✅ Lintrunner, ✅ Link check, ✅ Redirect check, ✅ Build tutorials, ✅ Build tutorials (nightly/test)


Summary

This PR correctly addresses all requirements from issue #3855. The deprecated torch.sin_() module-level calls are replaced with the proper tensor method syntax, and torch.linalg.matmul is replaced with the canonical torch.matmul. The editorial text is updated to reflect the correct API surface. One minor Sphinx Gallery formatting nit noted above (non-blocking).
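
For context on item 2 of the findings: in-place ops are rejected on leaf tensors that require grad, which is presumably (an assumption here, not stated in the PR) why the autograd tutorial only shows that call inside a comment. A minimal sketch:

```python
import torch

# A leaf tensor participating in autograd:
a = torch.linspace(0.0, 2.0, steps=5, requires_grad=True)

# In-place ops on a leaf that requires grad raise a RuntimeError,
# so such a call can only appear commented out in the tutorial:
try:
    a.sin_()
except RuntimeError as e:
    print("in-place op rejected:", type(e).__name__)
```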



Development

Successfully merging this pull request may close these issues.

Update Intro YouTube Tensors & Autograd tutorials — deprecated in-place ops, matmul alias, formatting

3 participants