add device asserts in scatter/gather kernels #1377
Conversation
Can you also add a test in PyTorch?

Will PyTorch tests work? Don't device-side asserts require resetting the device after they're triggered? (That would mean killing the PyTorch process.)

We don't have tests for scatter/gather in PyTorch; perhaps @apaszke is referring to adding the basic unit tests here, not particularly for the failure case?

@pytorchbot add to whitelist
@@ -93,6 +93,7 @@ __global__ void THCudaTensor_gatherKernel(
                           src, &srcOffset);

   IndexType indexValue = (IndexType)index.data[indexOffset] - TH_INDEX_BASE;
+  assert(indexValue < src.sizes[dim]);
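For reference, the bounds condition this kernel assert enforces can be sketched in plain Python. This is an illustrative stand-in, not the kernel itself: `gather_1d` and its arguments are hypothetical names, and the real kernel works on raw offsets rather than Python lists.

```python
def gather_1d(src, index, dim_size):
    """Sketch of the check added to the CUDA gather kernel: every
    gather index must fall inside the source extent along `dim`."""
    out = []
    for i in index:
        # Mirrors the kernel's `assert(indexValue < src.sizes[dim])`.
        assert 0 <= i < dim_size, f"index {i} out of range for size {dim_size}"
        out.append(src[i])
    return out

print(gather_1d([10, 20, 30], [2, 0, 1], dim_size=3))  # → [30, 10, 20]
```

On the GPU, the same condition failing triggers a device-side assert instead of a Python exception, which is why the failure case is hard to exercise from a normal unit test.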
@@ -1036,8 +1036,8 @@ def compare(t, k, dim, dir):
                random.randint(1, SIZE),
                random.randint(1, SIZE))

        for kTries in range(3):
            for dimTries in range(3):
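The test change above retries the comparison with several randomized `k`/`dim` choices rather than a single fixed one. A minimal self-contained sketch of that pattern, using a plain-Python `topk` as a hypothetical stand-in for `torch.topk`:

```python
import random

def topk(values, k, largest=True):
    """Reference top-k: the k largest (or smallest) values, sorted.
    A pure-Python stand-in for torch.topk in this sketch."""
    return sorted(values, reverse=largest)[:k]

random.seed(0)
# Mirror the PR's loop: retry with several random k values so the test
# covers more of the parameter space on each run.
for _ in range(3):
    data = [random.randint(0, 100) for _ in range(10)]
    k = random.randint(1, len(data))
    result = topk(data, k)
    assert len(result) == k
    assert result == sorted(data, reverse=True)[:k]
```

Randomizing the shape and `k` each iteration is cheap insurance against a bug that only shows up for particular sizes.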
Force-pushed from 496168f to f1eeab6.
This is now merged into master.
…8ffb52 (pytorch#11346)

Summary: Pull Request resolved: pytorch#11346. Previous import was 1b09eb14c2c781fae078fa6b1c0390ba6fc0898c.

Included changes:
- **[bff0b88](onnx/onnx@bff0b88)**: Add DynamicSlice experimental op (pytorch#1377) <James Reed>
- **[91a7b8e](onnx/onnx@91a7b8e)**: statCoverage(model) (pytorch#1246) <Akshay Chalana>
- **[36643c6](onnx/onnx@36643c6)**: fix the doc for softmax (pytorch#1374) <Lu Fang>
- **[8c64acd](onnx/onnx@8c64acd)**: Silence unused result warning in ONNXIFI wrapper cleanup. Fix pytorch#1344 (pytorch#1371) <Marat Dukhan>
- **[53b20f6](onnx/onnx@53b20f6)**: Add the ability to deprecate an OpSchema (pytorch#1317) <Ryan Hill>
- **[8aec4e2](onnx/onnx@8aec4e2)**: [Anderspapitto patch] fix the shape inference for broadcasting (pytorch#1368) <Lu Fang>

Reviewed By: jamesr66a
Differential Revision: D9691533
fbshipit-source-id: 1a8c22262ae4946897e4be030d3f1cf3a3ad58b6
* Have Kernel Inherit IrContainer (pytorch#1375)
* Kernel<-Fusion Step 1 - Convert ExprSort to StmtSort (pytorch#1376)
* Kernel<-Fusion Step 2 - Mutator refactor (pytorch#1377)
* Kernel<-Fusion Step 3 - Debug print for expr_eval and type promotion fix (pytorch#1379)
* Kernel<-Fusion Step 4 - Have kernel inherit Fusion (pytorch#1380)
* Kernel<-Fusion Step 5 - Move lowering passes into their own files (pytorch#1382)
* Kernel<-Fusion Step 6 - Remove kir::IrBuilder (pytorch#1383)
* Kernel<-Fusion Step 7 - Remove kir functions from ComputeAtMap (pytorch#1384)
* Kernel<-Fusion Step 8 - Clean up [lower/executor] utils (pytorch#1387)
* Kernel<-Fusion Step 9 - Remove TensorView::fuserTv (pytorch#1388)
* Kernel<-Fusion Step 10 - Remove lowerVal/lowerExpr (pytorch#1389)
* Kernel<-Fusion Step 11 - Finish cleaning up kir (pytorch#1390)
C++20 mangling rules were recently added to hip-clang. This flag maintains compatibility, since PyTorch is built with C++17; without it, the linker fails.
Addresses #1368.
Test Plan: run with CUDA_LAUNCH_BLOCKING=1 and verify that the device-side assert is triggered.
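Because CUDA kernel launches are asynchronous, a device-side assert normally surfaces at some later, unrelated call; `CUDA_LAUNCH_BLOCKING=1` forces synchronous launches so the error is reported at the failing gather. The sketch below only emulates that failure mode in plain Python (`launch_gather` is a hypothetical name; real code would call `torch.gather` on a CUDA tensor):

```python
def launch_gather(src, index):
    """Emulation of a gather launch: an out-of-range index surfaces as a
    RuntimeError, the way a CUDA device-side assert does on the host."""
    for i in index:
        if not (0 <= i < len(src)):
            # Stand-in for CUDA's "device-side assert triggered" error.
            raise RuntimeError("device-side assert triggered")
    return [src[i] for i in index]

try:
    launch_gather([1.0, 2.0], [0, 5])
except RuntimeError as e:
    print(e)  # → device-side assert triggered
```

Note that after a real device-side assert fires, the CUDA context is left in an error state, so subsequent launches in the same process also fail; this is why testing the failure case from a normal unit-test process is awkward, as the reviewers discuss above.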