
add device asserts in scatter/gather kernels #1377

Closed
wants to merge 2 commits

Conversation

killeent
Contributor

Addresses #1368.

Test Plan:

  • Test in the existing CuTorch repository; verify that the existing tests still pass
  • Test with an invalid index and CUDA_LAUNCH_BLOCKING=1; verify that the device-side assert is triggered (a reproduction sketch follows below)
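
For illustration, a minimal reproduction of the failure case might look like the following sketch (written against the modern torch tensor API rather than taken from this PR; the shapes and index values are hypothetical):

import torch

# Hypothetical repro; run as:  CUDA_LAUNCH_BLOCKING=1 python repro.py
# Synchronous launches make the device-side assert surface at the
# offending gather call instead of at a later synchronization point.
src = torch.arange(12, dtype=torch.float32, device="cuda").view(3, 4)

# src.size(0) == 3, so an index value of 3 is out of range along dim 0
# and should trip the kernel's new assert(indexValue < src.sizes[dim]).
bad_index = torch.tensor([[3, 0, 0, 0]], device="cuda")
out = torch.gather(src, 0, bad_index)
torch.cuda.synchronize()  # forces the asynchronous error to be reported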

@apaszke
Contributor

apaszke commented Apr 27, 2017

Can you also add a test in PyTorch?

@colesbury
Member

Will PyTorch tests work? Don't device-side asserts require resetting the device after they're triggered? (That would mean killing the PyTorch process)
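
A common pattern for exercising such a failure without taking down the test runner is to trigger the bad launch in a child process and only inspect its exit status (a sketch under that assumption, not code from this PR):

import os
import subprocess
import sys

# The child triggers the device-side assert and is expected to abort;
# the parent's own CUDA context is never touched.
CHILD = """
import torch
src = torch.zeros(3, 4, device="cuda")
bad_index = torch.tensor([[3, 0, 0, 0]], device="cuda")
torch.gather(src, 0, bad_index)
torch.cuda.synchronize()
"""

env = dict(os.environ, CUDA_LAUNCH_BLOCKING="1")
result = subprocess.run([sys.executable, "-c", CHILD], env=env, capture_output=True)

# The assert aborts the child, so a nonzero exit code is the expected outcome.
assert result.returncode != 0, "expected the device-side assert to abort the child"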

@killeent
Contributor Author

We don't have tests for scatter/gather in PyTorch; perhaps @apaszke is referring to adding the basic unit tests here, not specifically for the failure case?

@colesbury
Member

@pytorchbot add to whitelist

@@ -93,6 +93,7 @@ __global__ void THCudaTensor_gatherKernel(
       src, &srcOffset);

     IndexType indexValue = (IndexType)index.data[indexOffset] - TH_INDEX_BASE;
+    assert(indexValue < src.sizes[dim]);
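
In Python terms, the new assert encodes gather's contract: every index value selects a slice of src along dim (after the TH_INDEX_BASE offset is removed), so it must lie in [0, src.sizes[dim]). A rough pure-Python reference for the 2-D, dim=0 case, illustrative only:

def gather_dim0(src, index):
    # Host-side analogue (with an explicit lower-bound check) of the
    # kernel's assert(indexValue < src.sizes[dim]).
    rows, cols = len(index), len(index[0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            v = index[i][j]
            assert 0 <= v < len(src), (
                "index %d out of range for dim 0 of size %d" % (v, len(src)))
            out[i][j] = src[v][j]
    return out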


@@ -1036,8 +1036,8 @@ def compare(t, k, dim, dir):
        random.randint(1, SIZE),
        random.randint(1, SIZE))

    for kTries in range(3):
        for dimTries in range(3):


@soumith
Member

soumith commented May 3, 2017

this is now merged into master

@soumith soumith closed this May 3, 2017
houseroad added a commit to houseroad/pytorch that referenced this pull request Sep 6, 2018
…8ffb52 (pytorch#11346)

Summary:
Pull Request resolved: pytorch#11346

Previous import was 1b09eb14c2c781fae078fa6b1c0390ba6fc0898c

Included changes:
- **[bff0b88](onnx/onnx@bff0b88)**: Add DynamicSlice experimental op (pytorch#1377) <James Reed>
- **[91a7b8e](onnx/onnx@91a7b8e)**: statCoverage(model) (pytorch#1246) <Akshay Chalana>
- **[36643c6](onnx/onnx@36643c6)**: fix the doc for softmax (pytorch#1374) <Lu Fang>
- **[8c64acd](onnx/onnx@8c64acd)**: Silence unused result warning in ONNXIFI wrapper cleanup. Fix pytorch#1344 (pytorch#1371) <Marat Dukhan>
- **[53b20f6](onnx/onnx@53b20f6)**: Add the ability to deprecate an OpSchema (pytorch#1317) <Ryan Hill>
- **[8aec4e2](onnx/onnx@8aec4e2)**: [Anderspapitto patch] fix the shape inference for broadcasting (pytorch#1368) <Lu Fang>

Reviewed By: jamesr66a

Differential Revision: D9691533

fbshipit-source-id: 1a8c22262ae4946897e4be030d3f1cf3a3ad58b6
facebook-github-bot pushed a commit that referenced this pull request Sep 7, 2018
…8ffb52 (#11346)
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
…8ffb52 (pytorch#11346)
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Jan 26, 2022
* Have Kernel Inherit IrContainer (pytorch#1375)
* Kernel<-Fusion Step 1 - Convert ExprSort to StmtSort (pytorch#1376)
* Kernel<-Fusion Step 2 - Mutator refactor (pytorch#1377)
* Kernel<-Fusion Step 3 - Debug print for expr_eval and type promotion fix (pytorch#1379)
* Kernel<-Fusion Step 4 - Have kernel inherit Fusion (pytorch#1380)
* Kernel<-Fusion Step 5 - Move lowering passes into their own files (pytorch#1382)
* Kernel<-Fusion Step 6 - Remove kir::IrBuilder (pytorch#1383)
* Kernel<-Fusion Step 7 - Remove kir functions from ComputeAtMap (pytorch#1384)
* Kernel<-Fusion Step 8 - Clean up [lower/executor] utils (pytorch#1387)
* Kernel<-Fusion Step 9 - Remove TensorView::fuserTv (pytorch#1388)
* Kernel<-Fusion Step 10 - Remove lowerVal/lowerExpr (pytorch#1389)
* Kernel<-Fusion Step 11 - Finish cleaning up kir (pytorch#1390)
pytorch-bot bot pushed a commit that referenced this pull request Apr 24, 2024
C++20 mangling rules were recently added to hip-clang. This flag
maintains compatibility since pytorch is at C++17. Otherwise the linker
fails.