
ImportError:/py310_cu113/slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0/slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0.so: cannot open shared object file: No such file or directory #29

Open
whx57 opened this issue Jun 23, 2024 · 7 comments

Comments

@whx57

whx57 commented Jun 23, 2024

Has anyone met the same problem?

ImportError Traceback (most recent call last)
Cell In[16], line 35
3 from xlstm import (
4 xLSTMBlockStack,
5 xLSTMBlockStackConfig,
(...)
10 FeedForwardConfig,
11 )
13 cfg = xLSTMBlockStackConfig(
14 mlstm_block=mLSTMBlockConfig(
15 mlstm=mLSTMLayerConfig(
(...)
32
33 )
---> 35 xlstm_stack = xLSTMBlockStack(cfg)
37 x = torch.randn(4, 256, 128).to("cuda")
38 xlstm_stack = xlstm_stack.to("cuda")

File /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/xlstm_block_stack.py:83, in xLSTMBlockStack.__init__(self, config)
80 super().__init__()
81 self.config = config
---> 83 self.blocks = self._create_blocks(config=config)
84 if config.add_post_blocks_norm:
85 self.post_blocks_norm = LayerNorm(ndim=config.embedding_dim)
...
File :1176, in create_module(self, spec)

File :241, in _call_with_frames_removed(f, *args, **kwds)

ImportError: /home/.cache/torch_extensions/py310_cu113/slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0/slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0.so: cannot open shared object file: No such file or directory

@miaozhixu

ImportError: /home/.cache/torch_extensions/py310_cu113/
This path does not look right: the .cache folder lives under a user's home directory, so it is normally /home/someuser/.cache. You can see that xlstm/blocks/slstm/src/cuda_init uses torch.utils.cpp_extension.load. Tracking that code down, you end up in os.path.expanduser, which contains these lines:
if 'HOME' not in os.environ:
    try:
        import pwd
    except ImportError:
        # pwd module unavailable, return path unchanged
        return path
    try:
        userhome = pwd.getpwuid(os.getuid()).pw_dir
    except KeyError:
        # bpo-10496: if the current user identifier doesn't exist in the
        # password database, return the path unchanged
        return path
else:
    userhome = os.environ['HOME']
So maybe $HOME is not set, or the uid is wrong; either would cause the path not to include your username.
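This fallback can be checked directly in a Python shell; a minimal sketch (the /home/wuxi path is just an example value):

```python
import os
import os.path

# With $HOME unset, posixpath.expanduser falls back to the pwd database,
# so '~' may expand to something other than the expected home directory.
saved = os.environ.pop("HOME", None)
try:
    print(os.path.expanduser("~/.cache/torch_extensions"))
finally:
    if saved is not None:
        os.environ["HOME"] = saved

# With $HOME set, the expansion uses it directly.
os.environ["HOME"] = "/home/wuxi"
print(os.path.expanduser("~/.cache/torch_extensions"))  # → /home/wuxi/.cache/torch_extensions
```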

@whx57

whx57 commented Jun 23, 2024

(Quoting the suggestion above about $HOME and os.path.expanduser.)

The path is right, but the folder is empty.

@miaozhixu

(Quoting the reply above: the path is right, but the folder is empty.)

  1. The ImportError shows /home/.cache, not /home/wuxi/.cache.
  2. Try renaming the torch_extensions directory to force ninja to rebuild the library. When the build succeeds, there should be a .so file under the slstm_**************** directory.
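Instead of renaming, the cached build directory can also be deleted to force the rebuild. A minimal sketch, assuming the cache root inferred from the log; TORCH_EXTENSIONS_DIR is the environment variable torch.utils.cpp_extension honors as an override:

```python
import os
import shutil

def clear_extension_cache() -> str:
    """Remove the torch cpp_extension build cache so ninja rebuilds from scratch."""
    # Default cache root used by torch.utils.cpp_extension; the
    # TORCH_EXTENSIONS_DIR environment variable overrides it when set.
    cache_root = os.environ.get(
        "TORCH_EXTENSIONS_DIR",
        os.path.expanduser("~/.cache/torch_extensions"),
    )
    if os.path.isdir(cache_root):
        shutil.rmtree(cache_root)
    return cache_root
```

On the next import of xlstm, the extension should be compiled from scratch into a fresh directory.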

@whx57

whx57 commented Jun 23, 2024

(Quoting the two suggestions above: the ImportError shows /home/.cache, not /home/wuxi/.cache; try renaming the torch_extensions directory so that ninja rebuilds the library.)

I renamed it and ran it again, but it seems the program didn't build the .so file at all; the folder under the newly generated path is still empty.

@whx57

whx57 commented Jun 23, 2024

(Quoting suggestion 2 above, about ninja.)

I updated ninja, and now another problem appears:

{'verbose': True, 'with_cuda': True, 'extra_ldflags': ['-L/usr/local/cuda/lib', '-lcublas'], 'extra_cflags': ['-DSLSTM_HIDDEN_SIZE=128', '-DSLSTM_BATCH_SIZE=8', '-DSLSTM_NUM_HEADS=4', '-DSLSTM_NUM_STATES=4', '-DSLSTM_DTYPE_B=float', '-DSLSTM_DTYPE_R=nv_bfloat16', '-DSLSTM_DTYPE_W=nv_bfloat16', '-DSLSTM_DTYPE_G=nv_bfloat16', '-DSLSTM_DTYPE_S=nv_bfloat16', '-DSLSTM_DTYPE_A=float', '-DSLSTM_NUM_GATES=4', '-DSLSTM_SIMPLE_AGG=true', '-DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false', '-DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0', '-DSLSTM_FORWARD_CLIPVAL_VALID=false', '-DSLSTM_FORWARD_CLIPVAL=0.0', '-U__CUDA_NO_HALF_OPERATORS', '-U__CUDA_NO_HALF_CONVERSIONS', '-U__CUDA_NO_BFLOAT16_OPERATORS', '-U__CUDA_NO_BFLOAT16_CONVERSIONS', '-U__CUDA_NO_BFLOAT162_OPERATORS__', '-U__CUDA_NO_BFLOAT162_CONVERSIONS__'], 'extra_cuda_cflags': ['-Xptxas="-v"', '-gencode', 'arch=compute_80,code=compute_80', '-res-usage', '--use_fast_math', '-O3', '-Xptxas -O3', '--extra-device-vectorization', '-DSLSTM_HIDDEN_SIZE=128', '-DSLSTM_BATCH_SIZE=8', '-DSLSTM_NUM_HEADS=4', '-DSLSTM_NUM_STATES=4', '-DSLSTM_DTYPE_B=float', '-DSLSTM_DTYPE_R=nv_bfloat16', '-DSLSTM_DTYPE_W=nv_bfloat16', '-DSLSTM_DTYPE_G=nv_bfloat16', '-DSLSTM_DTYPE_S=nv_bfloat16', '-DSLSTM_DTYPE_A=float', '-DSLSTM_NUM_GATES=4', '-DSLSTM_SIMPLE_AGG=true', '-DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false', '-DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0', '-DSLSTM_FORWARD_CLIPVAL_VALID=false', '-DSLSTM_FORWARD_CLIPVAL=0.0', '-U__CUDA_NO_HALF_OPERATORS', '-U__CUDA_NO_HALF_CONVERSIONS', '-U__CUDA_NO_BFLOAT16_OPERATORS', '-U__CUDA_NO_BFLOAT16_CONVERSIONS', '-U__CUDA_NO_BFLOAT162_OPERATORS__', '-U__CUDA_NO_BFLOAT162_CONVERSIONS__']}
Using /home/wuxi/.cache/torch_extensions/py310_cu113 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/wuxi/.cache/torch_extensions/py310_cu113/slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0/build.ninja...
Building extension module slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/8] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/TH -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -Xptxas="-v" -gencode arch=compute_80,code=compute_80 -res-usage --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DSLSTM_HIDDEN_SIZE=128 -DSLSTM_BATCH_SIZE=8 -DSLSTM_NUM_HEADS=4 -DSLSTM_NUM_STATES=4 -DSLSTM_DTYPE_B=float -DSLSTM_DTYPE_R=nv_bfloat16 -DSLSTM_DTYPE_W=nv_bfloat16 -DSLSTM_DTYPE_G=nv_bfloat16 -DSLSTM_DTYPE_S=nv_bfloat16 -DSLSTM_DTYPE_A=float -DSLSTM_NUM_GATES=4 -DSLSTM_SIMPLE_AGG=true -DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false -DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0 -DSLSTM_FORWARD_CLIPVAL_VALID=false -DSLSTM_FORWARD_CLIPVAL=0.0 -U__CUDA_NO_HALF_OPERATORS -U__CUDA_NO_HALF_CONVERSIONS -U__CUDA_NO_BFLOAT16_OPERATORS -U__CUDA_NO_BFLOAT16_CONVERSIONS -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ -std=c++14 -c /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/slstm_forward.cu -o slstm_forward.cuda.o
FAILED: slstm_forward.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/TH -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -Xptxas="-v" -gencode arch=compute_80,code=compute_80 -res-usage --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DSLSTM_HIDDEN_SIZE=128 -DSLSTM_BATCH_SIZE=8 -DSLSTM_NUM_HEADS=4 -DSLSTM_NUM_STATES=4 -DSLSTM_DTYPE_B=float -DSLSTM_DTYPE_R=nv_bfloat16 -DSLSTM_DTYPE_W=nv_bfloat16 -DSLSTM_DTYPE_G=nv_bfloat16 -DSLSTM_DTYPE_S=nv_bfloat16 -DSLSTM_DTYPE_A=float -DSLSTM_NUM_GATES=4 -DSLSTM_SIMPLE_AGG=true -DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false -DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0 -DSLSTM_FORWARD_CLIPVAL_VALID=false -DSLSTM_FORWARD_CLIPVAL=0.0 -U__CUDA_NO_HALF_OPERATORS -U__CUDA_NO_HALF_CONVERSIONS -U__CUDA_NO_BFLOAT16_OPERATORS -U__CUDA_NO_BFLOAT16_CONVERSIONS -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ -std=c++14 -c /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/slstm_forward.cu -o slstm_forward.cuda.o
/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/inline_ops_fp16.cuh(71): error: identifier "__hadd_rn" is undefined

/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/inline_ops_fp16.cuh(77): error: identifier "__hsub_rn" is undefined

/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/inline_ops_fp16.cuh(88): error: identifier "__hmul_rn" is undefined

/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/inline_ops_bf16.cuh(76): error: identifier "__hadd_rn" is undefined

/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/inline_ops_bf16.cuh(83): error: identifier "__hsub_rn" is undefined

/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/inline_ops_bf16.cuh(96): error: identifier "__hmul_rn" is undefined

6 errors detected in the compilation of "/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/slstm_forward.cu".
[2/8] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/TH -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -Xptxas="-v" -gencode arch=compute_80,code=compute_80 -res-usage --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DSLSTM_HIDDEN_SIZE=128 -DSLSTM_BATCH_SIZE=8 -DSLSTM_NUM_HEADS=4 -DSLSTM_NUM_STATES=4 -DSLSTM_DTYPE_B=float -DSLSTM_DTYPE_R=nv_bfloat16 -DSLSTM_DTYPE_W=nv_bfloat16 -DSLSTM_DTYPE_G=nv_bfloat16 -DSLSTM_DTYPE_S=nv_bfloat16 -DSLSTM_DTYPE_A=float -DSLSTM_NUM_GATES=4 -DSLSTM_SIMPLE_AGG=true -DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false -DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0 -DSLSTM_FORWARD_CLIPVAL_VALID=false -DSLSTM_FORWARD_CLIPVAL=0.0 -U__CUDA_NO_HALF_OPERATORS -U__CUDA_NO_HALF_CONVERSIONS -U__CUDA_NO_BFLOAT16_OPERATORS -U__CUDA_NO_BFLOAT16_CONVERSIONS -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ -std=c++14 -c /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/slstm_backward.cu -o slstm_backward.cuda.o
FAILED: slstm_backward.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/TH -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -Xptxas="-v" -gencode arch=compute_80,code=compute_80 -res-usage --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DSLSTM_HIDDEN_SIZE=128 -DSLSTM_BATCH_SIZE=8 -DSLSTM_NUM_HEADS=4 -DSLSTM_NUM_STATES=4 -DSLSTM_DTYPE_B=float -DSLSTM_DTYPE_R=nv_bfloat16 -DSLSTM_DTYPE_W=nv_bfloat16 -DSLSTM_DTYPE_G=nv_bfloat16 -DSLSTM_DTYPE_S=nv_bfloat16 -DSLSTM_DTYPE_A=float -DSLSTM_NUM_GATES=4 -DSLSTM_SIMPLE_AGG=true -DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false -DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0 -DSLSTM_FORWARD_CLIPVAL_VALID=false -DSLSTM_FORWARD_CLIPVAL=0.0 -U__CUDA_NO_HALF_OPERATORS -U__CUDA_NO_HALF_CONVERSIONS -U__CUDA_NO_BFLOAT16_OPERATORS -U__CUDA_NO_BFLOAT16_CONVERSIONS -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ -std=c++14 -c /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/slstm_backward.cu -o slstm_backward.cuda.o
...
/home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/xlstm/blocks/slstm/src/cuda/../util/support.h:41:3: note: in definition of macro ‘AT_DISPATCH_FLOATING_TYPES_AND_HALF2’
41 | AT_DISPATCH_SWITCH(TYPE, NAME,
| ^~~~~~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.
CalledProcessError Traceback (most recent call last)
File /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1740, in _run_ninja_build(build_directory, verbose, error_prefix)
1739 stdout_fileno = 1
-> 1740 subprocess.run(
1741 command,
1742 stdout=stdout_fileno if verbose else subprocess.PIPE,
1743 stderr=subprocess.STDOUT,
1744 cwd=build_directory,
1745 check=True,
1746 env=env)
1747 except subprocess.CalledProcessError as e:
1748 # Python 2 and 3 compatible way of getting the error object.

File /home/media/ExtHDD1/wuxi/conda/minicinda3/py310/lib/python3.10/subprocess.py:526, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
525 if check and retcode:
--> 526 raise CalledProcessError(retcode, process.args,
527 output=stdout, stderr=stderr)
528 return CompletedProcess(process.args, retcode, stdout, stderr)

CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

RuntimeError Traceback (most recent call last)
...
1754 if hasattr(error, 'output') and error.output: # type: ignore[union-attr]
1755 message += f": {error.output.decode(*SUBPROCESS_DECODE_ARGS)}" # type: ignore[union-attr]
-> 1756 raise RuntimeError(message) from e

RuntimeError: Error building extension 'slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0'
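The `__hadd_rn` / `__hsub_rn` / `__hmul_rn` errors usually mean the nvcc at /usr/local/cuda is older than the toolkit PyTorch was built against (cu113 here): these rounding-mode half/bfloat16 intrinsics only exist in recent CUDA 11.x toolkits. A quick sanity check is to compare the two versions; the helper below is a hypothetical sketch (the version strings are passed in by hand, not queried from torch or nvcc):

```python
def parse_cuda_version(v: str) -> tuple:
    """Parse a 'major.minor' CUDA version string, e.g. '11.3' -> (11, 3)."""
    major, minor = v.split(".")[:2]
    return int(major), int(minor)

def toolkit_matches(nvcc_version: str, torch_cuda_version: str) -> bool:
    """True if the local nvcc is at least as new as the toolkit torch was built for."""
    return parse_cuda_version(nvcc_version) >= parse_cuda_version(torch_cuda_version)

# In a real session the inputs would come from `nvcc --version` and
# `torch.version.cuda`; the values here are illustrative.
print(toolkit_matches("11.0", "11.3"))  # → False: an older nvcc cannot build cu113 extensions
```

If the versions disagree, pointing /usr/local/cuda (or PATH) at a toolkit matching the PyTorch build is a reasonable first step.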

@Huang1720

I met the same problem. Have you solved it yet?

@whx57

whx57 commented Jun 25, 2024

(Quoting: "I met the same problem. Have you solved it yet?")

No, I gave up. I think the code may not be mature yet; its compatibility is relatively poor.
