
TensorList Python binding triggering C++ TypeError #119190

Open
thomas-bouvier opened this issue Feb 5, 2024 · 4 comments
Labels
module: cpp-extensions Related to torch.utils.cpp_extension triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@thomas-bouvier

thomas-bouvier commented Feb 5, 2024

🐛 Describe the bug

Hello there,

This is the same issue as #80979, which was closed even though I can still reproduce the error.

I am trying to write a C++ extension for PyTorch that has a custom operation that accepts a TensorList as an input. In Python, this should simply bind to a list of tensors. However, a TypeError is thrown when trying to call the function.

Here is the error:

  File "main.py", line 7, in main
    tensorlist.test_tensorlist(tensorl)
TypeError: test_tensorlist(): incompatible function arguments. The following argument types are supported:
    1. (arg0: c10::ArrayRef<at::Tensor>) -> bool

Invoked with: [tensor([42]), tensor([43]), tensor([44])]

And here is a reproducer:

#include <torch/extension.h>
#include <pybind11/pybind11.h>

bool test_tensorlist(const torch::TensorList& list) {
    return true;
}

PYBIND11_MODULE(tensorlist, m) {
    m.doc() = "Test how to pass tensor list to C++ extension.";
    m.def("test_tensorlist", &test_tensorlist, "Pass list of tensors (as torch::TensorList).");
}
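As a later comment in this thread confirms, the binding does work when the parameter is an owning std::vector&lt;torch::Tensor&gt;, which pybind11 converts from a Python list automatically; a TensorList view can then be constructed from the vector inside the function. A hedged sketch of that workaround (same module name as the reproducer; not a fix to the underlying caster issue):

```cpp
#include <torch/extension.h>
#include <vector>

// Accept an owning std::vector, which pybind11's STL casters handle,
// then build the non-owning TensorList (c10::ArrayRef) view from it.
bool test_tensorlist(const std::vector<torch::Tensor>& tensors) {
    torch::TensorList list(tensors);  // cheap, non-owning view
    return !list.empty();
}

PYBIND11_MODULE(tensorlist, m) {
    m.doc() = "Test how to pass tensor list to C++ extension.";
    m.def("test_tensorlist", &test_tensorlist,
          "Pass list of tensors (as std::vector<torch::Tensor>).");
}
```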

Then, when simply importing the module in Python and calling the function (with any list of tensors), the above error is thrown.

import torch

import tensorlist

def main():
    tensorl = [torch.tensor([42]), torch.tensor([43]), torch.tensor([44])]
    tensorlist.test_tensorlist(tensorl)

if __name__ == "__main__":
    main()

Versions

(I compiled PyTorch myself using Spack)

PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Fedora Linux Asahi Remix 39 (Thirty Nine) (aarch64)
GCC version: (GCC) 13.2.1 20231205 (Red Hat 13.2.1-6)
Clang version: 17.0.4 (git@github.com:thomas-bouvier/spack.git 31c205d6dda80ced9b12d83275955ae46e98dc23)
CMake version: version 3.27.7
Libc version: glibc-2.38

Python version: 3.11.6 (main, Jan 15 2024, 22:48:13) [GCC 13.2.1 20231205 (Red Hat 13.2.1-6)] (64-bit runtime)
Python platform: Linux-6.6.3-413.asahi.fc39.aarch64+16k-aarch64-with-glibc2.38
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 10
On-line CPU(s) list: 0-9
Vendor ID: Apple
Model name: Blizzard-M2-Pro
Model: 0
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 0x1
Frequency boost: enabled
CPU(s) scaling MHz: 72%
CPU max MHz: 2424,0000
CPU min MHz: 912,0000
BogoMIPS: 48,00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint i8mm bf16 bti ecv
Model name: Avalanche-M2-Pro
Model: 0
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
Stepping: 0x1
CPU(s) scaling MHz: 47%
CPU max MHz: 3504,0000
CPU min MHz: 702,0000
BogoMIPS: 48,00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint i8mm bf16 bti ecv
L1d cache: 1 MiB (10 instances)
L1i cache: 1,6 MiB (10 instances)
L2 cache: 36 MiB (3 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-9
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[] No relevant packages
[conda] Could not collect

cc @malfet @zou3519 @jbschlosser

@malfet malfet added module: cpp Related to C++ API module: cpp-extensions Related to torch.utils.cpp_extension module: regression It used to work, and now it doesn't triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module and removed module: cpp Related to C++ API labels Feb 5, 2024
@malfet
Contributor

malfet commented Feb 5, 2024

Next time it is fixed, let's submit a regression test. Added triage review to discuss how the regression sneaked in.

@malfet malfet added triage review and removed triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module labels Feb 5, 2024
@thomas-bouvier
Author

I tested with PyTorch versions 1.13.0 and 1.13.1 and I can reproduce this error too.

Now I'm wondering whether what I'm trying to do has ever been supported.

@malfet malfet added module: regression It used to work, and now it doesn't and removed module: regression It used to work, and now it doesn't labels Feb 12, 2024
@jbschlosser jbschlosser added triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module module: cpp-extensions Related to torch.utils.cpp_extension and removed triage review module: cpp-extensions Related to torch.utils.cpp_extension module: regression It used to work, and now it doesn't labels Feb 12, 2024
@thomas-bouvier
Author

This is working when passing std::vector<torch::Tensor>.

@ninono12345

No, it isn't. I tried to pass a list of tensors from Python to C++ as std::vector<torch::Tensor>, but was unsuccessful. I also tried std::vector<at::Tensor>, but it still doesn't work.
