forked from pytorch/pytorch
Merge from upstream #157
Merged
Conversation
Summary: PackedSequence is never supposed to be created by the user, but unfortunately a community repo is already doing this (e.g., [here](https://github.com/huggingface/torchMoji/blob/7c191048ce906fc0404fe156827d97cb990ebecb/torchmoji/model_def.py#L218-L229)). A change we made broke the calling pattern `PackedSequence(data=x, batch_sizes=y)`. This patch adds back support for that. Pull Request resolved: pytorch#9864 Differential Revision: D9011739 Pulled By: SsnL fbshipit-source-id: 0e2012655d7f4863ec54803550df30874ec35d75
Summary: The scalar situation has gotten a lot better and now we can remove all instances of FIXME_zerol(). cc zdevito Pull Request resolved: pytorch#10900 Differential Revision: D9514206 Pulled By: zou3519 fbshipit-source-id: e4e522f324126c5454cd6de14b832d2d1f6cb0ce
Summary: - Added `__repr__` for Constraints and Transforms. - Arguments passed to the constructor are now rendered with :attr: Closes pytorch#10884 Pull Request resolved: pytorch#10894 Differential Revision: D9514161 Pulled By: apaszke fbshipit-source-id: 4abf60335d876449f2b6477eb9655afed9d5b80b
Summary: I missed these in pytorch#10900 cc apaszke jamesr66a zdevito Pull Request resolved: pytorch#10905 Differential Revision: D9516748 Pulled By: zou3519 fbshipit-source-id: a5c3e3b65a33c339d5c4e9fc160462c3d35705f3
Summary: Pull Request resolved: pytorch#10859 Reviewed By: newstzpz Differential Revision: D9498312 fbshipit-source-id: 08b8a596f774c9102286019f286ca0b74d1f5304
…es. (pytorch#10812) Summary:
* Fix the necessary pathways so that tuples and lists can be inputs to the script.
* Prevent linear algebra functions from being run in shape prop, because they frequently error out on nonsense data.
* Favor schema-driven Python input conversion where possible. The remaining cases where we directly create Stacks without a schema are only for debugging.
* Make the error messages when calling script/trace functions more Pythonic.
* Simplify FlattenTuples: now that tuples are supported, we can choose to flatten tuples only when needed. This may have to be revisited pending ONNX test results, but it is necessary for making tuple I/O work.
Pull Request resolved: pytorch#10812 Differential Revision: D9477982 Pulled By: zdevito fbshipit-source-id: ed06fc426e6ef6deb404602a26c435a7fc40ea0c
Summary: Pull Request resolved: pytorch#10909 Differential Revision: D9516837 Pulled By: gchanan fbshipit-source-id: fad7e3284e74c599b873ebaae2dcdf5013505855
…ytorch#10877) Summary: Pull Request resolved: pytorch#10877 change default value of DeviceOption.numa_node_id to 0 and use has_numa_node_id() to check existence Reviewed By: ilia-cher Differential Revision: D9473891 fbshipit-source-id: 91ac6a152f445644691023110c93d20a3ce80d43
Summary: Previously when tracing slicing & select negative indices would get normalized, fixing the index to the size of the traced tensor. This makes the behavior the same as script so aten::select with negative indices is emitted. Pull Request resolved: pytorch#10560 Differential Revision: D9493614 Pulled By: eellison fbshipit-source-id: ce7a8bae59863723247208d86b9f2948051ccc6c
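As a hedged sketch (not PyTorch's actual implementation), the difference is whether a negative index gets baked into the trace after normalization or is emitted as-is:

```python
def normalize_index(idx, size):
    """Old tracing behavior (sketch): fold a negative index into a
    non-negative one using the size of the *traced* tensor, which
    hard-codes that size into the graph."""
    return idx + size if idx < 0 else idx

# Emitting aten::select with the raw negative index instead keeps the
# trace valid for tensors of any size:
assert normalize_index(-1, 4) == 3   # constant frozen for size-4 inputs
assert normalize_index(-1, 7) == 6   # a size-7 input needs a different constant
```

The fix above makes traced slicing/select match the script behavior, which emits the negative index unchanged.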
…ch#10833) Summary: Commits: 1. Make `torch.cuda.*` take device objects 2. Update `torch.distributed` docs to emphasize calling `torch.cuda.set_device` before `init_process_group` Pull Request resolved: pytorch#10833 Differential Revision: D9514241 Pulled By: SsnL fbshipit-source-id: 2497464305fb1e63d6c495291a5744aaa7e2696e
Summary: The goal of this PR is to enable the MIOpen engine (for HIP devices) for the recurrent operator and also enable the corresponding unit test. bddppq petrex Pull Request resolved: pytorch#10840 Differential Revision: D9518980 Pulled By: bddppq fbshipit-source-id: 214661e79a47c5dc6b712ef0fba986bd99db051f
Summary: Moved kl div loss to aten. benchmarks for 5000 iterations on input size (1000,100) New ``` cuda: forward [0.9736350309103727, 0.9922929517924786, 0.9694818360731006] input requires_grad=True: backward [0.5595634011551738, 0.558339926879853, 0.5546616851352155] double backward [1.2445648494176567, 1.2245905152522027, 1.2349751549772918] target requires_grad=True: backward (new C++) [0.9489959231577814, 0.9553070571273565, 0.9556351029314101] double backward (new C++) [1.8184774098917842, 1.8164670099504292, 1.845708406995982] cpu: forward (new C++) [7.892430987209082, 8.3068826389499, 7.985283812973648] input requires_grad=True: backward (new C++) [4.328460982069373, 4.45323242014274, 4.27946363389492] double backward (new C++) [5.153504415880889, 4.629372010007501, 4.712803596165031] target requires_grad=True: backward (new C++) [3.4181493939831853, 3.3771288259886205, 3.7086612950079143] double backward (new C++) [0.21922698011621833, 0.1858532396145165, 0.19477044604718685] ``` Old ``` cuda: forward [3.101281268056482, 3.068499860819429, 3.0527669726870954] input requires_grad=True: backward [0.5650290949270129, 0.5730433077551425, 0.5588279226794839] double backward [1.1287697306834161, 1.13834543293342, 1.1298578432761133] target requires_grad=True: backward [0.9470391101203859, 0.9560198178514838, 0.9750375030562282] double backward [1.85760727385059, 1.7989214668050408, 1.788982989732176] cpu: forward (new C++) [12.474591840058565, 12.511441555805504, 12.666544185951352] input requires_grad=True: backward (new C++) [7.660991386976093, 7.449987292289734, 7.513917901087552] double backward (new C++) [4.073225498665124, 4.264980792999268, 4.429787891916931] target requires_grad=True: backward (new C++) [3.448499082121998, 3.9072313378565013, 3.2433970272541046] double backward (new C++) [2.126378359273076, 1.9045450473204255, 1.7932004742324352] ``` Pull Request resolved: pytorch#10336 Differential Revision: D9213636 Pulled By: li-roy 
fbshipit-source-id: 27cc530f6276f58d35dc7a1d56dfc758a0fc4a7b
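For context, the loss being moved is KL divergence; a minimal pure-Python sketch of the pointwise formula (assuming, as `kl_div` expects, that the input is already a log-probability):

```python
import math

def kl_div_pointwise(log_prob, target):
    # target * (log(target) - log_prob), with the 0 * log(0) case
    # defined as 0 by convention.
    if target == 0:
        return 0.0
    return target * (math.log(target) - log_prob)
```

The reduction over elements (mean/sum/batchmean) is applied on top of this pointwise term.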
Summary: Pull Request resolved: pytorch#10824 API additions: - Tensor(c10::intrusive_ptr<TensorImpl,UndefinedTensor>&&) - Tensor(const c10::intrusive_ptr<TensorImpl,UndefinedTensor>&) - Tensor::operator=(Tensor&&) && (for completeness sake) - TensorBase::unsafeGetTensorImpl() - TensorBase::unsafeReleaseTensorImpl() - TensorBase::getIntrusivePtr() - TensorImpl::type_id() - Tensor::set_data() - Tensor::is_same(Tensor) - Tensor::use_count() - Tensor::type_id() - Tensor::scalar_type() - WeakTensor::is_same(WeakTensor) - intrusive_ptr::weak_use_count() - weak_intrusive_ptr::weak_use_count() - c10::raw::intrusive_ptr::{incref,decref,make_weak} - c10::raw::weak_intrusive_ptr::{incref,decref,lock} API changes: - Tensor::pImpl is no longer public (and now named tensor_impl_) - Most methods accessed this way are now accessible on Tensor maybe_zero_dim() and set_wrapped_number() being prominent exceptions (they are now accessed through unsafeGetTensorImpl()) - Type is no longer friend of Tensor - TensorBase::reset(TensorImpl*) is deleted - TensorBase::reset(TensorImpl*, bool should_retain) is deleted - TensorBase::swap(TensorBaseImpl&) is deleted; use std::swap instead - TensorBase::get() is deleted; use unsafeGetTensorImpl() instead - TensorBase::detach() is deleted; use unsafeReleaseTensorImpl() instead - TensorBase::retain() is deleted; use _raw_incref() instead - TensorBase::release() is deleted; use _raw_decref() instead - WeakTensor lost most of its methods (it no longer inherits from TensorBase) - TensorImpl::storage() is now a const method - Tensor(TensorBase) constructor removed, instead we go through getIntrusivePtr(). I'm not sure about this change; I happened to have accidentally removed the TensorBase constructor and decided to fix call sites, but I could go the other way. - detail::set_data() is deleted; use Tensor::set_data() instead - c10::raw_intrusive_ptr_target removed; use the functions in c10::raw instead. 
(The reason for this change, is that it is invalid to cast an intrusive_ptr_target* to a raw_intrusive_ptr_target* to take advantage of the methods. But there is no reason the incref/decref methods shouldn't also work on intrusive_ptr_target; it is primarily an API consideration. We can be more standards compliant by keeping them as functions, which are universally applicable.) - intrusive_ptr::reclaim() and weak_intrusive_ptr::reclaim() now work on pointers of the NullType. (This counts as a bug fix, because the documentation specified that pointers produced by release() are valid to reclaim(), and a release() on a null intrusive_ptr produces the NullType::singleton()) Bug fixes: - Dispatch code for mutable references incorrectly returned a reference to a value argument (which would immediately go out of scope). They now correctly return a tensor by value. - intrusive_ptr copy/move assignment did not work correctly when an object was assigned to itself. We now check for this case and no-op if so. (This bug manifested itself as a Tensor mysteriously becoming an UndefinedTensor after lines of code like 'x = x.mul_(y)') Other changes: - The checked cast functions in Utils.h have now been renamed and detemplatized into checked unwrap functions. - Added type_id() and scalar_type() methods to Tensor - pImpl is no longer public - Documented what the && overloads are doing - All occurrences of 'new TensorImpl' (and similar spellings, like 'new THTensor') have been expunged. This is NO LONGER a valid way to create a new tensor, and if you do this, upon your first incref, you will catch an ASSERT failure saying that only tensors created by intrusive_ptr::release() are valid to reclaim(). Use c10::make_intrusive instead in this situation. - IValue is adjusted to use intrusive_ptr instead of Retainable, and all other sub-classes of Retainable were modified to use intrusive_ptr. 
When doing this, I had to make the constructors of sub-classes like ConstantList public, so that c10::make_intrusive could invoke them. Fortunately, if you incorrectly stack allocate a ConstantList, and then try to get an intrusive_ptr to it, it will fail, as stack allocated ConstantLists have refcount 0. - IValue very narrowly sidesteps the problem of handling NullType, as it considers intrusive_ptr<TensorImpl> identical to intrusive_ptr<TensorImpl, UndefinedTensor> which is not always true. This was always the case, but there's now a comment explaining what's going on. Some MSVC bugs were uncovered during the preparation of this patch. They are documented as comments in the code. Reviewed By: gchanan Differential Revision: D9481140 fbshipit-source-id: 14a8ea0c231ed88b5715fb86d92730926f9f92fc
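A toy Python sketch (names hypothetical, not the c10 API) of the intrusive-refcounting contract described above, including the rule that only pointers produced by release() may be reclaimed:

```python
class IntrusiveTarget:
    """Toy analogue of an intrusively refcounted object: the count lives
    in the object itself rather than in a separate control block."""
    def __init__(self):
        self.refcount = 0

def incref(obj):
    obj.refcount += 1

def decref(obj):
    # Returns True when the last reference is gone and the caller must free.
    obj.refcount -= 1
    return obj.refcount == 0

def reclaim(obj):
    # Mirrors the assertion described above: a freshly constructed object
    # (refcount 0) was not produced by release(), so reclaiming it is an error;
    # use the make_intrusive-style factory instead.
    if obj.refcount == 0:
        raise RuntimeError("only pointers produced by release() may be reclaimed")
    return obj
```

This also illustrates why a stack-allocated ConstantList with refcount 0 fails when you try to take an intrusive pointer to it.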
…h#10789) Summary: `clang: warning: argument unused during compilation: '-rdynamic'` Pull Request resolved: pytorch#10789 Reviewed By: houseroad Differential Revision: D9467385 Pulled By: bddppq fbshipit-source-id: 610550a8f34cfa66b9dfa183752eb129dae21eaa
Summary: More support for tuples has uncovered a bug in constant prop, where it assumed it can create constant nodes of tuples even though we cannot easily create a single prim::Constant to represent a tuple. This fix checks when we cannot represent an IValue as a prim::Constant and then stops propagating the node. Pull Request resolved: pytorch#10923 Reviewed By: orionr Differential Revision: D9523417 Pulled By: zdevito fbshipit-source-id: 745058c4388d9a5e0fc1553eaa2731e31bc03205
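The check can be sketched in a few lines of Python (hypothetical names; the real code operates on IValues and IR nodes):

```python
def try_fold_to_constant(value):
    """Return a ('prim::Constant', value) node when the value has a single
    constant representation, or None to stop constant propagation."""
    if isinstance(value, tuple):
        return None  # no single prim::Constant can represent a tuple
    return ("prim::Constant", value)
```

Returning None here corresponds to leaving the producing node in the graph rather than folding it.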
Summary: Pull Request resolved: pytorch#10487 Differential Revision: D9370084 Pulled By: li-roy fbshipit-source-id: ecff1d5d7d006fd60e4f6238ee86c56ad168bfc8
Summary: Pull Request resolved: pytorch#10931 Reviewed By: cpuhrsch Differential Revision: D9526248 fbshipit-source-id: 2401a0c1cd8c5e680c6d2b885298fa067d08f2c3
Summary: Since we don't need `torch.autograd.Variable` anymore, I removed `torch.autograd.Variable` from `onnx.rst`. Pull Request resolved: pytorch#10810 Differential Revision: D9500960 Pulled By: zou3519 fbshipit-source-id: 1bc820734c96a8c7cb5d804e6d51a95018db8e7f
Summary: It's a no-op now that Scalars don't store tensors. Pull Request resolved: pytorch#10917 Differential Revision: D9520267 Pulled By: gchanan fbshipit-source-id: 5388ff9a4fbb8fc9b9e1ce92208246bf6f08eb92
Summary: Pull Request resolved: pytorch#10915 Reviewed By: ezyang Differential Revision: D9526416 Pulled By: cpuhrsch fbshipit-source-id: 68e43121d72b1b951c73df5bf7b598854fb0e291
Summary: Resolves pytorch#10741. The current docs use `num_directions` quite a bit, without any explanation of it. `num_directions` is set to 2 if the RNN is bidirectional, or 1 otherwise. This change simply adds that to the docs. Pull Request resolved: pytorch#10786 Differential Revision: D9480235 Pulled By: zou3519 fbshipit-source-id: f61d1b0d2b943f84d5b7ff83df6fe0965a508a5e
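A small sketch of the convention the docs now spell out (function name hypothetical):

```python
def rnn_shapes(seq_len, batch, hidden_size, num_layers, bidirectional):
    # num_directions is 2 for a bidirectional RNN, 1 otherwise.
    num_directions = 2 if bidirectional else 1
    output = (seq_len, batch, num_directions * hidden_size)
    h_n = (num_layers * num_directions, batch, hidden_size)
    return output, h_n
```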
Summary: Currently we assume to find cudnn includes and libraries in the `CUDA_HOME` root. But this is not always true. So we now support a `CUDNN_HOME`/`CUDNN_PATH` environment variable that can have its own `/include` and `/lib64` folder. This means cudnn extensions now also get support on the FAIR cluster. soumith fmassa Pull Request resolved: pytorch#10922 Differential Revision: D9526856 Pulled By: goldsborough fbshipit-source-id: 5c64a5ff7cd428eb736381c24736006b21f8b6db
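The lookup order can be sketched as follows (a simplification; the real build logic also probes the `/include` and `/lib64` folders under the chosen root):

```python
import os

def find_cudnn_root(env=None):
    """Prefer an explicit cuDNN root, falling back to the CUDA root."""
    env = os.environ if env is None else env
    for var in ("CUDNN_HOME", "CUDNN_PATH", "CUDA_HOME"):
        path = env.get(var)
        if path:
            return path
    return None
```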
Summary: fixes pytorch#7908 Pull Request resolved: pytorch#9911 Reviewed By: yf225 Differential Revision: D9023223 Pulled By: weiyangfb fbshipit-source-id: 68b199bef3940b7205d0fdad75e7c46e6fe65ba7
Summary: Pull Request resolved: pytorch#10804 Make ShareData and ShareExternalPointer create new storage when the old one is used by multiple tensors. When we need to modify a field of the storage, we'll create a new storage instead. Reviewed By: ezyang Differential Revision: D9350686 fbshipit-source-id: 68d2b6b886b0367b0fc4fabfd55b9a480e7388ca
Summary: Pull Request resolved: pytorch#10918 att Differential Revision: D9515040 fbshipit-source-id: 53c05c160ba5dda92104aadc2e40801519a2cd28
Summary: Changes the approach for resolving builtin ops so that the following works
```
add = torch.add

@script
def foo(x):
    return add(x, x)
```
This handles cases where people alias torch and torch.nn.functional to shorter names. It works by building a table of id -> builtin name for the known builtin ops in torch and torch.nn.functional, and for any user-defined op created by access on torch.ops.foo.bar. This allows us to clean up many SugaredValue types in the compiler. Notes:
* We now consider any attributes on Python modules to be constants (e.g. math.pi and torch.double).
* Fixes a bug where we incorrectly allowed attribute lookup on arbitrary Python objects. It is now restricted to modules only.
Pull Request resolved: pytorch#10927 Differential Revision: D9527522 Pulled By: zdevito fbshipit-source-id: 0280422af08b4b0f48f302766d5a9c0deee47660
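The id -> builtin-name idea can be sketched in pure Python (helper names hypothetical): resolution keys on the function object's identity, so any user-level alias still resolves to the same builtin.

```python
import operator

_builtin_table = {}

def register_builtin(fn, qualified_name):
    # Key on object identity, not on the name the user bound it to.
    _builtin_table[id(fn)] = qualified_name

def lookup_builtin(fn):
    return _builtin_table.get(id(fn))

register_builtin(operator.add, "aten::add")
my_add = operator.add   # user-level alias, analogous to `add = torch.add`
```

Because `my_add` and `operator.add` are the same object, both look up to `"aten::add"`.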
Summary: Pull Request resolved: pytorch#10550 Reviewed By: orionr Differential Revision: D9541932 Pulled By: houseroad fbshipit-source-id: 4d179d189c176482ae919e5cc74607b9d315ed26
…fe2_pb.h (pytorch#10946) Summary: Pull Request resolved: pytorch#10946 ``` codemod -d . --extensions cc,cpp,cu,cuh,h caffe2/proto/caffe2.pb.h caffe2/proto/caffe2_pb.h ``` Reviewed By: houseroad Differential Revision: D9539945 fbshipit-source-id: 497d04720e8e7e61c05ffe1b23733d0cb774de7e
Summary: Running `--accept` on a test doesn't tell you explicitly which sub-test is being updated; this PR fixes that. Pull Request resolved: pytorch#10559 Differential Revision: D9353977 Pulled By: driazati fbshipit-source-id: a9d4014386ff0fe388a092f3dcf50f157e460f04
Summary: Pull Request resolved: pytorch#10944 Differential Revision: D9541085 Pulled By: soumith fbshipit-source-id: 59077f3b226d04c68a93cd6864894e8f6c594aba
Summary: - rebase of pytorch#9851 Pull Request resolved: pytorch#10959 Differential Revision: D9542292 Pulled By: weiyangfb fbshipit-source-id: ce51864d203c8ed89da3817f1da020a0ee932960
…nd CI (pytorch#10932) Summary: The previous NCCL all gather doesn't work as expected. This is a fully working async version. Tested on both C++ and Python Frontend. Multi-node: ``` tengli@learnfair042:~/new_pytorch/pytorch/torch/lib/build/c10d/test$ TMPFILE="/private/home/tengli/temp/tengli-test" RANK=0 WORLD_SIZE=2 ./ProcessGroupNCCLTest Multi-node world size: 2 rank: 0 Allreduce test successful Broadcast test successful Reduce test successful Allgather test successful tengli@learnfair117:~/new_pytorch/pytorch/torch/lib/build/c10d/test$ TMPFILE="/private/home/tengli/temp/tengli-test" RANK=1 WORLD_SIZE=2 ./ProcessGroupNCCLTest Multi-node world size: 2 rank: 1 Allreduce test successful Broadcast test successful Reduce test successful Allgather test successful ``` CI test: ``` test_set_get (__main__.FileStoreTest) ... ok test_set_get (__main__.PrefixFileStoreTest) ... ok test_set_get (__main__.PrefixTCPStoreTest) ... ok test_allreduce_ops (__main__.ProcessGroupGlooTest) ... ok test_broadcast_ops (__main__.ProcessGroupGlooTest) ... ok test_allgather_ops (__main__.ProcessGroupNCCLTest) ... ok test_allreduce_ops (__main__.ProcessGroupNCCLTest) ... ok test_broadcast_ops (__main__.ProcessGroupNCCLTest) ... ok test_reduce_ops (__main__.ProcessGroupNCCLTest) ... ok test_common_errors (__main__.RendezvousFileTest) ... ok test_nominal (__main__.RendezvousFileTest) ... ok test_common_errors (__main__.RendezvousTCPTest) ... ok test_nominal (__main__.RendezvousTCPTest) ... ok test_unknown_handler (__main__.RendezvousTest) ... ok test_set_get (__main__.TCPStoreTest) ... ok ``` Pull Request resolved: pytorch#10932 Differential Revision: D9542067 Pulled By: teng-li fbshipit-source-id: 25513eddcc3119fd736875d69dfb631b10f4ac86
Summary: Adds basic nomnigraph python bindings for quickly playing with the graphs. Reviewed By: duc0 Differential Revision: D9441936 fbshipit-source-id: fd70f8ea279b28c766e40f124008800acd94bddd
…md (pytorch#10802) Summary: Pull Request resolved: pytorch#10802 add documentation for new ReplaceSubgraph api to README.md Reviewed By: yinghai Differential Revision: D9473282 fbshipit-source-id: 144c895564af83cc8727a0370e894c2f0b7eadf5
Summary: `stringop-overflow` is added in GCC 7. Pull Request resolved: pytorch#10954 Differential Revision: D9546084 Pulled By: SsnL fbshipit-source-id: e6e68f993f1dbaa879ca66dc43bbcff9c49890ff
Summary: Pull Request resolved: pytorch#10837 Add CPU version of hard sigmoid operator to caffe2. The definition of this operator can be found here: https://github.com/onnx/onnx/blob/master/docs/Operators.md#HardSigmoid. Reviewed By: BIT-silence Differential Revision: D9489536 fbshipit-source-id: 67b3171ed96d5ebcc8d500d93e7827a4a9705a81
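Per the ONNX definition linked above, HardSigmoid is `y = max(0, min(1, alpha * x + beta))` with defaults alpha=0.2 and beta=0.5; a scalar sketch:

```python
def hard_sigmoid(x, alpha=0.2, beta=0.5):
    # ONNX HardSigmoid: clamp the affine map alpha*x + beta to [0, 1].
    return max(0.0, min(1.0, alpha * x + beta))
```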
Summary: Flipping to hidden visibility one more time. Let's see what fails. cc mingzhe09088 pjh5 Yangqing Pull Request resolved: pytorch#10752 Reviewed By: ezyang Differential Revision: D9526343 Pulled By: orionr fbshipit-source-id: c0e9c29270e95e1b2e21c598095f720c199e1e52
pytorch#10975) Summary: pytorch@cfa5dba Pull Request resolved: pytorch#10975 Differential Revision: D9546838 Pulled By: bddppq fbshipit-source-id: 3bd6dc0a4eee582bb92fc33ed27fc40eb3ab1200
…ytorch#10929) Summary: Pull Request resolved: pytorch#10929 Workspace class methods were missing on the Python side. Exposing them enables writing the New Checkpoint Framework with more control of the workspace and a cleaner implementation. Added: ws.feed_blob(name, arr) and ws.remove_blob(name). Reviewed By: mraway Differential Revision: D9486867 fbshipit-source-id: ea02d2e3a39d716a5a3da0482f57d4ac4c893763
Summary: cleanup Reviewed By: duc0 Differential Revision: D9549449 fbshipit-source-id: 9154b36a39936566fc2711a6e7bd33049681d1c8
Summary: self explanatory Reviewed By: highker Differential Revision: D9551065 fbshipit-source-id: 14b3807af5337654c360a23816cffd7dd346bad5
…ll (pytorch#10872) Summary: This PR adds argument checking for script method invocation from C++. For this I had to make two changes: 1. The schema of a method was not serialized in script modules, so we now store the function schema in the `doc_string` field of the ONNX proto; upon loading a serialized script module, we parse the schema into its structured C++ form and assign it to the loaded method. 2. Inside `Method::operator()`, we now verify the number and types of arguments. CC zdevito Pull Request resolved: pytorch#10872 Differential Revision: D9521219 Pulled By: goldsborough fbshipit-source-id: 5cb3d710af6f500e7579dad176652c9b11a0487d
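The argument verification in step 2 amounts to an arity-and-type check against the parsed schema; a minimal Python sketch (the schema shape here is hypothetical, the real code uses the C++ FunctionSchema):

```python
def check_args(schema, args):
    """schema: list of (name, type) pairs parsed from the stored schema."""
    if len(args) != len(schema):
        raise TypeError(f"expected {len(schema)} arguments, got {len(args)}")
    for (name, expected), value in zip(schema, args):
        if not isinstance(value, expected):
            raise TypeError(f"argument '{name}': expected {expected.__name__}, "
                            f"got {type(value).__name__}")
```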
Summary: TODO: integrate into torch.onnx.export -- separate PR *Problem:* We have a facility to trace PyTorch operations on Python code, but there are several failure modes where the trace is not representative of the actual underlying computation: * The tracer encountered dynamic control flow * Some computation escaped the tracer, and appeared as a Constant tensor node in the graph * Some stateful function was traced, e.g. someone did an optimization in Python by memoizing function outputs *Objective*: In an ideal world, this whole process would be automated and the user can trust that the system will magically capture the intended semantics from the program. Realistically speaking, we will likely have to settle with a human-in-the-loop error reporting system, allowing for the user to identify problems and modify the source code to allow for tracing. *Stage 1* (this PR): Output-level checking & graph diff. torch.jit.trace gains a kwarg 'check_inputs', which is a list of tuples of input arguments. We will iterate through the list and trace the function again for each set of check inputs. We'll also interpret the original trace with these inputs and compare output values and graphs, printing a diff of the graph if there is a difference. Examples: ``` torch.jit.trace(torch.rand(3, 4), check_inputs=[(torch.rand(4, 5),)]) def foo(x): y = torch.arange(0, x.shape[0]).float() return x + y.unsqueeze(1) ``` ``` torch.jit.TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations! Graph diff: graph(%0 : Dynamic) { - %1 : Dynamic = prim::Constant[value= 0 1 2 [ CPULongType{3} ]]() ? ^ + %1 : Dynamic = prim::Constant[value= 0 1 2 3 [ CPULongType{4} ]]() ? 
+++ ^ %2 : int = prim::Constant[value=0]() %3 : Dynamic = aten::_cast_Float(%1, %2) %4 : int = prim::Constant[value=1]() %5 : Dynamic = aten::unsqueeze(%3, %4) %6 : int = prim::Constant[value=1]() %7 : Dynamic = aten::add(%0, %5, %6) return (%7); } Node diff: - %1 : Dynamic = prim::Constant[value= 0 1 2 [ CPULongType{3} ]]() ? ^ + %1 : Dynamic = prim::Constant[value= 0 1 2 3 [ CPULongType{4} ]]() ? +++ ^ Trace source location: dank.py(5): foo /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(402): wrapper dank.py(3): <module> Check source location: dank.py(5): foo /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(281): check_trace /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(408): wrapper dank.py(3): <module> ERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code. Node: %1 : Dynamic = prim::Constant[value= 0 1 2 [ CPULongType{3} ]]() Source Location: dank.py(5): foo /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(402): wrapper dank.py(3): <module> Comparison exception: Not equal to tolerance rtol=1e-07, atol=0 (shapes (3,), (4,) mismatch) x: array([0, 1, 2]) y: array([0, 1, 2, 3]) ``` == ``` torch.jit.trace(torch.rand(3, 4), check_inputs=[(torch.rand(3, 4),)]) def foo(x): y = x.data return x + y ``` ``` torch.jit.TracingCheckError: Tracing failed sanity checks! ERROR: Traced function outputs do not match the Python function outputs. ERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code. 
Node: %1 : Dynamic = prim::Constant[value=<Tensor>]() Source Location: dank.py(6): foo /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(402): wrapper dank.py(3): <module> Comparison exception: Not equal to tolerance rtol=1e-07, atol=0 (mismatch 100.0%) x: array([0.397137, 0.956105, 0.169478, 0.560292, 0.392568, 0.108441, 0.97645 , 0.34412 , 0.951246, 0.793061, 0.557595, 0.770245], dtype=float32) y: array([0.243178, 0.315964, 0.972041, 0.0215 , 0.927751, 0.457512, 0.951092, 0.97883 , 0.048688, 0.118066, 0.779345, 0.271272], dtype=float32) ``` == ``` import torch torch.jit.trace(torch.rand(3, 4), check_inputs=[(torch.rand(4, 4),)]) def foo(x): for _ in range(x.size(0)): x = torch.neg(x) return x ``` ``` torch.jit.TracingCheckError: Tracing failed sanity checks! ERROR: Traced function outputs do not match the Python function outputs. ERROR: Graphs differed across invocations! Graph diff: graph(%0 : Dynamic) { %1 : Dynamic = aten::neg(%0) %2 : Dynamic = aten::neg(%1) %3 : Dynamic = aten::neg(%2) + %4 : Dynamic = aten::neg(%3) - return (%3); ? ^ + return (%4); ? ^ } ``` == ``` import torch def foo(x): if not hasattr(foo, 'cache'): foo.cache = torch.neg(x) return x + foo.cache traced = torch.jit.trace(torch.rand(3, 4), check_inputs=[(torch.rand(3, 4),)])(foo) ``` ``` torch.jit.TracingCheckError: Tracing failed sanity checks! ERROR: Traced function outputs do not match the Python function outputs. ERROR: Graphs differed across invocations! 
Graph diff: graph(%0 : Dynamic) { - %1 : Dynamic = aten::neg(%0) + %1 : Dynamic = prim::Constant[value=<Tensor>]() %2 : int = prim::Constant[value=1]() %3 : Dynamic = aten::add(%0, %1, %2) return (%3); } Node diff: - %1 : Dynamic = aten::neg(%0) + %1 : Dynamic = prim::Constant[value=<Tensor>]() Trace source location: test.py(5): foo /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(402): wrapper test.py(8): <module> Check source location: test.py(6): foo /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(281): check_trace /Users/jamesreed/onnx-fairseq/pytorch/torch/jit/__init__.py(408): wrapper test.py(8): <module> ``` The following two examples show instances where program semantics are lost in the Python -> trace transformation, and repeated invocation does not give us useful debug information. Further design in underway for catching these scenarios. ``` import torch torch.jit.trace(torch.rand(3, 4), check_inputs=[(torch.rand(3, 4),)]) def foo(x): for i in range(3): x[i, :] = torch.zeros(4) return x ``` ``` torch.jit.TracingCheckError: Tracing failed sanity checks! ERROR: Traced function outputs do not match the Python function outputs. Exception: Not equal to tolerance rtol=1e-07, atol=0 (mismatch 100.0%) x: array([0.830221, 0.915481, 0.940281, 0.555241], dtype=float32) y: array([0., 0., 0., 0.], dtype=float32) ``` == ``` import torch torch.jit.trace(torch.rand(3, 4), check_inputs=[(torch.rand(5, 6),)]) def foo(x): x.view(-1).add_(-x.view(-1)) return x ``` ``` torch.jit.TracingCheckError: Tracing failed sanity checks! ERROR: Traced function outputs do not match the Python function outputs. Exception: Not equal to tolerance rtol=1e-07, atol=0 (mismatch 100.0%) x: array([0.734441, 0.445327, 0.640592, 0.30076 , 0.891674, 0.124771], dtype=float32) y: array([0., 0., 0., 0., 0., 0.], dtype=float32) ``` Pull Request resolved: pytorch#10841 Differential Revision: D9499945 Pulled By: jamesr66a fbshipit-source-id: 1f842a32d0b0645259cc43b29700b86d99c59a45
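The core of the check_inputs mechanism can be sketched in a few lines: replay both the original function and its recorded trace on fresh inputs and report any divergence (a toy sketch; the real checker also diffs the recorded graphs, as shown in the output above):

```python
def check_trace(fn, traced_fn, check_inputs):
    """Return the input tuples on which the trace diverges from `fn`."""
    return [args for args in check_inputs if fn(*args) != traced_fn(*args)]

def foo(x):
    out = x
    for _ in range(x):        # data-dependent loop: untraceable
        out += 1
    return out

traced_foo = lambda x: x + 3  # pretend the trace was recorded with x == 3
```

Replaying on the recording input passes, while a fresh input exposes the frozen loop count.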
Summary: Pull Request resolved: pytorch#10739 I wanted to assert that the blobs in the workspace of the new session after loading a checkpoint are exactly the same as the blobs in the workspace of the old session before saving the checkpoint. But I found that when calling `task.get_step()`, a dummy task output blob, `task:output/ConstIntFill:0`, is added, along with a dummy net `task:output`. See https://fburl.com/937lf2yk This makes it hard to assert "Equal", forcing me to assert "LessThan" or "GreaterThan". Adding a dummy TaskOutput when the user specifies no TaskOutput is a hack. The reason for it is that a ZMQ socket can't send an empty blob list; as a result, if the Task on the worker had no output, the master would never stop waiting and would hang forever. See https://fburl.com/rd7fhy6p and imagine `socket.recv(net, 0)`. TaskOutput is at the user layer. The hack shouldn't be exposed to the user layer, polluting user workspaces. Instead, we should move the creation of the dummy blob to some deeper layer, and remove the dummy blob from the workspace afterwards. After this change, the workaround becomes totally transparent, with no side effects for users. Reviewed By: mraway Differential Revision: D9413150 fbshipit-source-id: 51aaf3201e26570b4fcf5738e9b9aa17c58777ac
Summary: This fixes multiple bugs in the handling of negative indices in both slicing and gather operations. These were uncovered by Elias Ellison's diff D9493614, which made it so that we actually emit negative indices when we see them in PyTorch code. Pull Request resolved: pytorch#10973 Reviewed By: jhcross Differential Revision: D9546183 Pulled By: jamesr66a fbshipit-source-id: 6cb0e84e8ad399e47e24a96c44025f644c17b375
Summary: GT, LT, EQ all support numpy broadcasting, just enable the fusion. Pull Request resolved: pytorch#10845 Reviewed By: bddppq Differential Revision: D9494089 Pulled By: houseroad fbshipit-source-id: 7c65ca06c54dbd476ac7d07b47a413faaed3dd5e
@pytorchbot retest this please
@pytorchbot retest this please
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request on May 15, 2023
When a tensor is resized, a reference array to its sizes may become invalid. Make a copy in advance.

<details>
<summary>ASAN report</summary>

```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
    #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
    #1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
    #2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
    #3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
    #4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
    #5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
    #6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
    #7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
    #8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
    #9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
    #10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
    #11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    #12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    #13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    #15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
    #16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
    #17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
    #18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
    #19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
    #20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    #21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    #22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
```

</details>
>(c10::OperatorHandle const&, c10::D ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 ROCm#24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at en/src/ATen/core/dispatch/Dispatcher.h:639 ROCm#25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487 ROCm#26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137 ROCm#27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452 ROCm#28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417 ROCm#29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419 ROCm#30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344 ROCm#31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#33 
0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#56 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 
ROCm#63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#65 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#72 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#81 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#90 0x3ffa2e05447 in call_function Python/ceval.c:5891 
ROCm#91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267 ROCm#98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#111 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#118 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault 
Python/ceval.c:4198 ROCm#120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#133 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#142 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#148 0x3ffa2c8b271 
in _PyObject_Call_Prepend Objects/call.c:431 ROCm#149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305 ROCm#151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#159 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#168 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#177 0x3ffa2dffd39 in 
_PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#183 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#190 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#205 0x3ffa2e05447 in call_function Python/ceval.c:5891 
ROCm#206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#214 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#225 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault 
Python/ceval.c:4277 ROCm#235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#240 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#249 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800) freed by thread T0 here: #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 ROCm#1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() 
/home/user/pytorch/c10/core/TensorImpl.cpp:75 previously allocated by thread T0 here: #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 ROCm#1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498 ROCm#2 0x3ff76f79e17 (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17) SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const Shadow bytes around the buggy address: 0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd =>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa 0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap 
region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==1115867==ABORTING ``` </details> <details> <summary>Additional backtraces (not full)</summary> Memory deallocation: ``` #0 operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 ROCm#1 0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75 ROCm#2 0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291 ROCm#3 0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370 ROCm#4 0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80 ROCm#5 0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90 ROCm#6 0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) 
at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173 ROCm#7 0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) ( this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 ROCm#8 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 ROCm#9 0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > ( unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 ROCm#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=..., dispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96 ROCm#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 ROCm#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 ROCm#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144 ROCm#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847 ROCm#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401 ``` Memory access: ``` #0 c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215 ROCm#1 0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) 
at /home/user/pytorch/c10/core/SymInt.cpp:69 ROCm#2 0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177 ROCm#3 0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162 ROCm#4 0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211 ROCm#5 0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219 ROCm#6 0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556 ROCm#7 0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188 ROCm#8 0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341 ROCm#9 0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) 
at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408 ROCm#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c 10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 ROCm#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt >, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional< c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 ROCm#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > ( unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 ROCm#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=..., dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 ROCm#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 ROCm#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 ROCm#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144 ROCm#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847 ROCm#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243 ``` </details> Pull Request resolved: pytorch#101064 Approved by: https://github.com/Skylion007, https://github.com/albanD
alugorey pushed a commit to alugorey/pytorch that referenced this pull request on May 17, 2023
`arguments()` returns a reference to a vector member of the object returned by the `schema()` call. When that object is destroyed, the vector is deallocated with it; its lifetime is not extended. This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN. <details> <summary>ASAN output</summary> ``` ==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8 READ of size 8 at 0x60d0005a5790 thread T0 #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821 #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617 #3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604 #4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138 #5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249 #6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224 #7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929 #8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543 #9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 #10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 #11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 #12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #14 0x3ffab105447 in call_function Python/ceval.c:5891 #15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 #18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142 #20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 #21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 #22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 #23 0x3ffab0f0081 in _PyObject_VectorcallTstate 
Include/cpython/abstract.h:112 ROCm#24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#25 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#34 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#41 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#50 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault 
Python/ceval.c:4231 ROCm#52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#59 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267 ROCm#67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#80 0x3ffab105447 in call_function 
Python/ceval.c:5891 ROCm#81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#87 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#102 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#109 0x3ffab0f00a9 in 
_PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#111 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305 ROCm#120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#128 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#137 0x3ffab105447 in call_function 
Python/ceval.c:5891 ROCm#138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#152 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#159 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#166 0x3ffaaf8ada9 in PyObject_Call 
Objects/call.c:317 ROCm#167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#174 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#183 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#194 0x3ffab105447 in call_function 
Python/ceval.c:5891 #195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 #198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 #200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 #201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 #202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 #203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 #206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #209 0x3ffab105447 in call_function Python/ceval.c:5891 #210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 #213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 #216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #218 0x3ffab105447 in call_function Python/ceval.c:5891 #219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 #222 
0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#229 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#236 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#243 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 0x60d0005a5790 is located 80 bytes inside of 
136-byte region [0x60d0005a5740,0x60d0005a57c8) freed by thread T0 here: #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 ROCm#1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145 previously allocated by thread T0 here: #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 ROCm#1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127 ROCm#2 0x3fff5849ecf ([stack]+0xb2ecf) SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) Shadow bytes around the buggy address: 0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa 0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd 0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd 0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa 0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd =>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa 0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 
0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==1134126==ABORTING ``` Additional backtraces (not full): Allocation: ``` #0 __memset_z196 () at ../sysdeps/s390/memset-z900.S:144 ROCm#1 0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>, stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599 ROCm#2 0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039 ROCm#3 0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 ROCm#4 0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127 ROCm#5 0x000003ff414042a0 in 
std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=..., __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464 ROCm#6 0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98 ROCm#7 0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648 ROCm#8 0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) 
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
#9 0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...}) at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>, args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:

```
#0 operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1 0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020, __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2 0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (__a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3 0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
#4 0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
#5 0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
#6 0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#7 0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#8 0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#9 0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>

Pull Request resolved: pytorch#101400
Approved by: https://github.com/zou3519
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request on May 29, 2023
Three disabled functions are attempting out-of-bounds reads. Disable them until the sleef library is fixed.

<details>
<summary>ASAN report</summary>

```
=================================================================
==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930
READ of size 4 at 0x03ff70f54570 thread T0
#0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129
#1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550
#2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021
#3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182
#4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991
#5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074
#6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
#7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239
#8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
#9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
#10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406
#11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(const at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/c10/util/FunctionRef.h:43
#12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64
#13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pytorch/aten/src/ATen/TensorIteratorInternal.h:52
#14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777
#15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749
#16 0x3ff5648e851 in for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorIterator.h:421
#17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
#18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
#19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
#20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out&) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158
#21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330
#22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307
#23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463
#25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:103
#27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:639
#28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
#29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215
#30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107
#31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953
#32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955
#33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
#45 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
#46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
#47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
#49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
#50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
#51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
#52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
#53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
#54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
#55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
#56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
#57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
#58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
#59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
#60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#86 0x3ffa5feb289 in call_function Python/ceval.c:5891
#87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
#88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
#103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
#104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
#105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
#107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
#108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
#109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
#110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
#111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
#112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
#113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
#114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
#115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
#116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
#117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
#118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
#144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
#146 0x3ffa5e84f2d in callmethod Objects/call.c:557
#147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
#148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/pytorch/torch/csrc/utils/python_arg_parser.cpp:338
#149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
#150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
#168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
#169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#170 0x3ffa5feb289 in call_function Python/ceval.c:5891
#171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
#172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#177 0x3ffa5feb289 in call_function Python/ceval.c:5891
#178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
#179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#226 0x3ffa5feb289 in call_function Python/ceval.c:5891
#227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
#228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#253 0x3ffa5fe8ba9 in
_PyEval_Vector Python/ceval.c:5065 ROCm#254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342 ROCm#255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267 0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648 SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2 Shadow bytes around the buggy address: 0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9 0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==2030580==ABORTING ``` </details> It reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under address sanitizer on s390x. 
See also: shibatch/sleef#464
Pull Request resolved: pytorch#102266
Approved by: https://github.com/malfet