Docker environment issue #14
Comments
This image was not provided by us. With pytorch 1.12.1, cuda 11.3, and python 3.9 on a V100, following the official tutorial, we can compile and run the dependency libraries normally. Using a newer torch version with the matching dependencies also works.
Hi, the earlier docker image indeed did not include xformers: the author's config files do not list it, and the project can run without it. I also ran into problems installing xformers in that environment. I have since set up the xformers and APE dependencies under torch 2.1.2 + cu118 and uploaded that image as ape_cu118.
Hi, I pulled and ran your image, but after entering the container and starting python, the environment does not even have torch. Which python environment in this image is the working one?
The environment name is ape.
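If the default interpreter in the container does not see torch, a quick stdlib-only probe (no assumptions about the image itself) shows which python binary is actually running and which packages it can import; run it after `conda activate ape`:

```python
import importlib.util
import sys

def probe(modules):
    """Map each module name to whether the current interpreter can import it."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

print(sys.executable)                # which python binary is actually running
print(probe(["torch", "xformers"]))  # expected to be importable inside the ape env
```

If both come back False, you are most likely still in the base interpreter rather than the `ape` conda environment.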
I pulled the image provided above with docker pull keyk13/ape_image:v1, but there is no xformers library in the container. pip install xformers installs 0.0.23 and automatically upgrades torch; installing 0.0.17 instead gives the following error:
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(4, 1024, 16, 64) (torch.float32)
key : shape=(4, 1024, 16, 64) (torch.float32)
value : shape=(4, 1024, 16, 64) (torch.float32)
attn_bias : <class 'NoneType'>
p : 0.0
flshattF is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
requires a GPU with compute capability > 7.5
tritonflashattF is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
triton is not available
requires A100 GPU
cutlassF is not supported because:
xFormers wasn't build with CUDA support
smallkF is not supported because:
xFormers wasn't build with CUDA support
max(query.shape[-1] != value.shape[-1]) > 32
has custom scale
unsupported embed per head: 64
The GPU is a V100, and I am not sure what the problem is. Could a working xformers be added to the docker image?
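For context on the report above: a V100 is compute capability 7.0, and the model was running in float32, so the flash-attention path (flshattF) is doubly unavailable. A minimal sketch of the gating implied by the error message (the thresholds are taken from the message itself, not from reading xformers' source):

```python
def flash_attention_usable(compute_capability, dtype):
    """Return (ok, reasons), mirroring the checks listed in the xformers error."""
    reasons = []
    if dtype not in ("float16", "bfloat16"):
        # flash attention kernels only accept half-precision inputs
        reasons.append(f"dtype={dtype} (supported: bfloat16, float16)")
    if compute_capability <= 7.5:
        # V100 is 7.0, so it can never take this path
        reasons.append("requires a GPU with compute capability > 7.5")
    return (not reasons, reasons)

ok, why = flash_attention_usable(7.0, "float32")
print(ok, why)  # fails both checks on a V100 in float32
```

This is why switching to fp16/bf16 alone would not help on a V100; the remaining fallbacks (cutlassF, smallkF) were rejected in the report only because that xformers wheel was built without CUDA support, which matches the later suggestion to use a CUDA-matched build such as the ape_cu118 image.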
What GPU did you run inference or training on? Running inference on an A100 I get: error in ms_deformable_im2col_cuda: no kernel image is available for execution on the device. It looks like an architecture mismatch.
It was run on a 4090 GPU.
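"no kernel image is available for execution on the device" usually means the deformable-attention extension was compiled for a different GPU architecture than the one it runs on: here the image was apparently built on a 4090 (sm_89) and then run on an A100 (sm_80). The usual fix is to rebuild the extension with `TORCH_CUDA_ARCH_LIST` covering the target architecture (e.g. `TORCH_CUDA_ARCH_LIST="8.0"`) before rerunning the project's CUDA-op build step. A simplified model of why the mismatch fails (it ignores same-major binary compatibility and treats compiled SASS as exact-match only):

```python
def kernel_available(compiled_sass, device_cc, ptx_cc=None):
    """Simplified CUDA fat-binary dispatch: a SASS image runs only on the
    architecture it targets; an embedded PTX image can be JIT-compiled for
    that architecture or anything newer."""
    if device_cc in compiled_sass:
        return True
    return ptx_cc is not None and device_cc >= ptx_cc

# Built on a 4090 (8.9), run on an A100 (8.0), no PTX embedded: no kernel image
print(kernel_available({(8, 9)}, (8, 0)))   # False
# Rebuilt with an arch list that covers 8.0: the kernel is found
print(kernel_available({(8, 0), (8, 9)}, (8, 0)))  # True
```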
Try running it inside the container.
How did you manage to run it without xformers before? Without xformers I can't run it at all.
keyk13/ape_cu118:v1
I am using the keyk13/ape_cu118:v1 image, and it does include xformers.
Use this one.