
SymInt to support -1 as a concrete int #77882

Closed
Tracked by #77830
miladm opened this issue May 19, 2022 · 2 comments
Labels: lazy (Lazy Tensor work items)

miladm (Collaborator) commented May 19, 2022

🐛 Describe the bug

It turns out that some ops, like torch.Tensor.expand, accept -1 as a size: "Passing -1 as the size for a dimension means not changing the size of that dimension."
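A quick illustration of that documented behavior (standard `torch.Tensor.expand` usage, not part of the original report):

```python
import torch

x = torch.randn(3, 1)   # shape: (3, 1)
y = x.expand(3, 4)      # explicitly set dim 0 to 3 and expand dim 1 to 4
z = x.expand(-1, 4)     # -1 means "keep dim 0 as it is", i.e. size 3

print(y.shape)          # torch.Size([3, 4])
print(z.shape)          # torch.Size([3, 4])
```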

The present implementation of SymInt disallows negative numbers and reserves the most significant bit of the data_ member variable for handling symbolic shapes.

This blocks expand.SymInt shape inference from handling -1 correctly.

Proposed solution:

  • Treat data_ values smaller than -1 as representing symbolic dimensions, assuming we know of no other use case for values below -1 (see the sketch below).
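A minimal sketch of the proposed encoding rule (written in Python for brevity; the real SymInt is a C++ class, and the sentinel value and helper names here are assumptions for illustration, not the actual implementation):

```python
# Proposed convention: -1 and above are ordinary concrete sizes
# (with -1 meaning "keep this dimension" in ops like expand), while
# anything strictly below -1 encodes a symbolic dimension.
SYMBOLIC_BASE = -2  # hypothetical sentinel; first value used for symbolic ints

def is_symbolic(data: int) -> bool:
    """True if `data` encodes a symbolic dimension rather than a concrete size."""
    return data < -1

def encode_symbolic(index: int) -> int:
    """Map a symbolic-shape index (0, 1, 2, ...) to a data_ value below -1."""
    return SYMBOLIC_BASE - index

def decode_symbolic(data: int) -> int:
    """Recover the symbolic-shape index from an encoded data_ value."""
    assert is_symbolic(data)
    return SYMBOLIC_BASE - data

# Concrete sizes, including -1, pass through untouched:
assert not is_symbolic(-1) and not is_symbolic(0) and not is_symbolic(42)
# Symbolic dimensions round-trip through the encoding:
assert decode_symbolic(encode_symbolic(3)) == 3
```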

CC @Krovatkin @ezyang @zou3519 @Gamrix @wconstab @JackCaoG @shauheen

Versions

Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 11.2.0-19) 11.2.0
Clang version: 13.0.1-3+build2
CMake version: version 3.22.1
Libc version: glibc-2.33

Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.16.18-1rodete2-amd64-x86_64-with-glibc2.17
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A

Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0a0+gitb32758f
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 py38h7a5d4dd_0
[conda] numpy-base 1.22.3 py38hb8be1f0_0
[conda] torch 1.12.0a0+gitb32758f pypi_0 pypi

miladm added the lazy (Lazy Tensor work items) label May 19, 2022
wconstab (Contributor) commented

cc @suo who I think is already aware of this and working on something

suo (Member) commented May 20, 2022

#77913

Krovatkin added this to To do in Dynamic Shapes May 20, 2022
Krovatkin moved this from To do to Done in Dynamic Shapes May 20, 2022
suo closed this as completed May 20, 2022
pytorchmergebot pushed a commit that referenced this issue Jun 29, 2022
Added support for `expand` in LazyTensor shape inference
Fixes #77831

---

**Blockers:**

- [x] #77880
- [x] #77882
Pull Request resolved: #77830
Approved by: https://github.com/Krovatkin
facebook-github-bot pushed a commit that referenced this issue Jun 30, 2022
Added support for `expand` in LazyTensor shape inference (#77830)

Summary:
Added support for `expand` in LazyTensor shape inference
Fixes #77831

 ---

**Blockers:**

- [x] #77880
- [x] #77882

Pull Request resolved: #77830
Approved by: https://github.com/Krovatkin

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/0922cc024eeafa2158c0d00396494a0ae983f8cb

Reviewed By: b0noI

Differential Revision: D37523035

fbshipit-source-id: 2e88e9a8a85c0a9e504fec92925cec0a05588892