
Hello, I want to use a CAD model alone for image segmentation, which script should I use? #49

Open
wsq1010 opened this issue Apr 26, 2024 · 1 comment



wsq1010 commented Apr 26, 2024

No description provided.


wsq1010 commented Apr 26, 2024

I ran into a problem executing run_inference_custom.py:

Traceback (most recent call last):
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/run_inference_custom.py", line 234, in <module>
    run_inference(
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/run_inference_custom.py", line 162, in run_inference
    model.ref_data["descriptors"] = model.descriptor_model.compute_features(
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/dinov2.py", line 150, in compute_features
    features = self.forward_by_chunk(images)
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/dinov2.py", line 163, in forward_by_chunk
    feats = self.compute_features(
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/dinov2.py", line 152, in compute_features
    features = self.model(images)
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/vision_transformer.py", line 321, in forward
    ret = self.forward_features(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/vision_transformer.py", line 257, in forward_features
    x = blk(x)
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/layers/block.py", line 247, in forward
    return super().forward(x_or_x_list)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/layers/block.py", line 105, in forward
    x = x + attn_residual_func(x)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/layers/block.py", line 84, in attn_residual_func
    return self.ls1(self.attn(self.norm1(x)))
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/qtwsq/work/pythonProject/SAM-6D/SAM-6D/Instance_Segmentation_Model/model/layers/attention.py", line 76, in forward
    x = memory_efficient_attention(q, k, v, attn_bias=attn_bias)
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 196, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 294, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 310, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp)
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/xformers/ops/fmha/dispatch.py", line 98, in _dispatch_fw
    return _run_priority_list(
  File "/opt/conda/envs/sam6d-ism/lib/python3.9/site-packages/xformers/ops/fmha/dispatch.py", line 73, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
     query     : shape=(16, 257, 16, 64) (torch.float32)
     key       : shape=(16, 257, 16, 64) (torch.float32)
     value     : shape=(16, 257, 16, 64) (torch.float32)
     attn_bias : <class 'NoneType'>
     p         : 0.0
flshattF is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
tritonflashattF is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    Operator wasn't built - see python -m xformers.info for more info
    triton is not available
cutlassF is not supported because:
    device=cpu (supported: {'cuda'})
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
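Editor's note: the error above is environmental rather than a bug in the script. Every xformers attention backend listed (flshattF, tritonflashattF, cutlassF) requires CUDA tensors, and the flash-attention kernels additionally require fp16/bf16, so running the model on CPU with float32 inputs leaves no usable operator. A minimal sketch, assuming PyTorch 2.0+ (the tensor shapes are copied from the error message; the xformers call is shown only in comments, for the CUDA case):

```python
import torch
import torch.nn.functional as F

# Shapes taken from the traceback: (batch, seq_len, num_heads, head_dim)
q = torch.randn(16, 257, 16, 64)
k = torch.randn(16, 257, 16, 64)
v = torch.randn(16, 257, 16, 64)

# On a CUDA machine the xformers path would dispatch successfully, e.g.:
#   from xformers.ops import memory_efficient_attention
#   out = memory_efficient_attention(
#       q.cuda().half(), k.cuda().half(), v.cuda().half())

# CPU-capable equivalent: PyTorch's built-in scaled dot-product attention.
# It expects (batch, num_heads, seq_len, head_dim), hence the transposes.
out = F.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
).transpose(1, 2)

print(tuple(out.shape))  # (16, 257, 16, 64)
```

In practice the simplest fix is to run the script on a CUDA-capable GPU; replacing the xformers call with the fallback above would require patching the model code.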
