pytorch to onnx #26
I have modified the ...
That problem is solved, but now there is a bug:

/home/aigroup/chenzx/ws_internImage/bin/python3.8 /home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py
...
2023-08-08 17:01:24,232 - mmcv - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'distribution': 'uniform', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
...
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/anchor_head.py:116: UserWarning: DeprecationWarning: ...
...
load checkpoint from local path: /home/aigroup/chenzx/ws_internImage/code/Co-DETR/model/co_dino_5scale_swin_large_3x_coco.pth
Here is my new problem:

/home/aigroup/chenzx/ws_internImage/bin/python3.8 /home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py
...
2023-08-08 17:02:20,044 - mmcv - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'distribution': 'uniform', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
...
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/anchor_head.py:116: UserWarning: DeprecationWarning: ...
...
load checkpoint from local path: /home/aigroup/chenzx/ws_internImage/code/Co-DETR/model/co_dino_5scale_swin_large_3x_coco.pth
missing keys in source state_dict: query_head.input_proj.weight, query_head.input_proj.bias, query_head.fc_cls.weight, query_head.fc_cls.bias, query_head.reg_ffn.layers.0.0.weight, query_head.reg_ffn.layers.0.0.bias, query_head.reg_ffn.layers.1.weight, query_head.reg_ffn.layers.1.bias, query_head.fc_reg.weight, query_head.fc_reg.bias
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:423: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
Is this problem solved?
Not solved yet. I see the project has been updated again, so I will find some time to update my copy and try again.
@TempleX98
@xueyingliu, @chenzx2, @jielanZhang, Hi, I am sorry that there are some unsolved torch2onnx issues. Our repo is implemented with an older version, mmdet v2.25, which no longer maintains model-export support. Co-DETR has recently been incorporated into the official mmdet v3.x repo, and you can use that official implementation together with MMDeploy for model export.
@TempleX98 I have tried your suggestion but encountered some problems. When checking the documentation in MMDeploy, I found that Co-DETR is not supported; here is the list of supported models: https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/03-benchmark/supported_models.md
Hello, any updates on exporting the model to ONNX, please?
@MarouaneMja Hi, I found that this PR (open-mmlab/mmdetection#10910) for mmdet v3.x supports ONNX export and hope this can help you.
Hi @TempleX98, thank you, I will look it up.
Any updates? Does MMDeploy support it already?
@Mayyyybe The inference architecture of Co-DINO is the same as DINO's, and MMDeploy supports model export for the DINO method.
Hi @TempleX98, I managed to export Co-DETR to ONNX using MMDeploy as you suggested; however, SoftNonMaxSuppression is not supported by ONNX Runtime.
Just remove the NMS operation from the config.
Thank you for your help @TempleX98, it worked. However, when I launch inference with Triton Inference Server, the ONNX model takes up more GPU memory than a simple Python backend, which is very strange.
@xinlin-xiao I am using the first config, https://github.com/RunningLeon/mmdetection/blob/support_dino_onnx/projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py, and for the deploy config I am using "detection_onnxruntime_dynamic.py" from MMDeploy.
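For anyone following along, here is a minimal export sketch under that setup, assuming MMDeploy v1.x's torch2onnx Python API; the image, work directory, and checkpoint paths below are placeholders rather than the exact files used above:

from mmdeploy.apis import torch2onnx

# Export Co-DINO to ONNX with MMDeploy (all paths below are placeholders).
torch2onnx(
    img='demo.jpg',  # example image used to trace the model
    work_dir='work_dir/codino_onnx',
    save_file='end2end.onnx',
    deploy_cfg='mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    model_cfg='projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py',
    model_checkpoint='checkpoints/co_dino_swin_l.pth',  # placeholder checkpoint path
    device='cuda:0')

The same export can also be driven through MMDeploy's tools/deploy.py script with the same pair of configs.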
Thank you for your help! But when I try to export Co-DETR to ONNX using MMDeploy, I get this: Process Process-2: ... Could you please tell me which checkpoint you use? I also found someone using model_checkpoint = 'checkpoints/co_dino_5scale_r50_lsj_8xb2_1x_coco-69a72d67.pth' in https://github.com/open-mmlab/mmdetection/issues/11011
Just remove the soft_nms from the config; you can add it back later.
Do you have any suggestions on how to fix this error?
Yes, DetDataSample is not supported by JIT; you have to convert the final output to a tuple format. Try running a simple DETR to get an idea of what the output looks like, then do the same.
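If it helps, here is a minimal sketch of that conversion (illustrative names, not an official API); it is essentially the same transformation as the trace.py patch shared further down this thread:

import torch

# Convert a list of mmdet 3.x DetDataSample results into the plain-tensor
# tuple that JIT tracing / ONNX export can handle.
def det_data_samples_to_tuple(results):
    sample = results[0]  # first (and usually only) image in the batch
    inst = sample.pred_instances
    # (x1, y1, x2, y2, score) boxes plus integer labels, each with a batch dim
    dets = torch.cat([inst.bboxes, inst.scores.unsqueeze(1)], dim=1)
    return dets.unsqueeze(0), inst.labels.long().unsqueeze(0)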
I tried to inspect the model output, and it is:
) at 0x7fb82c7c18e0>]
But I do not know what the right output should look like; I am still trying to use ...
Hello, in the config file I don't find the soft_nms.
Hi! I figured out how to convert Co-DINO to ONNX and would like to share it with you guys. I am using a model trained with mmdet v3.3.0, together with mmdeploy v1.3.1 and onnxruntime 1.16.3. I found that a model trained with the official Co-DETR repo (mmdet 2.25) requires a lot of tinkering, because the backbone (SwinTransformer v1 vs. v2) is a little different, and the preprocessing functions and some utility functions required for inference also differ. So, if you train a new model with the mmdet v3.3.0 repo, I think you will be able to export it to ONNX.

To solve this issue:

RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DetDataSample

I modified ...(your-env-path)/site-packages/torch/jit/trace.py:

try:
    result_ = self.inner(*trace_inputs)
    result = result_[0]
    scores, bboxes, labels = result.pred_instances.scores, result.pred_instances.bboxes, result.pred_instances.labels
    # Combine scores and bboxes into the DINO output format
    combined_tensor = torch.cat((bboxes, scores.unsqueeze(1)), dim=1)
    formatted_labels = labels.unsqueeze(0).long()  # retain integer format
    # Create a tuple with the combined tensor and the labels
    formatted_result = (combined_tensor.unsqueeze(0), formatted_labels)
    outs.append(formatted_result)
except:
    print(self.inner(*trace_inputs))
    outs.append(self.inner(*trace_inputs))

Also, we need to remove the soft_nms operation; just comment that section out in your config file, like below:

...
test_cfg=[
dict(
max_per_img=300,
# nms=dict(iou_threshold=0.8, type='soft_nms')
),
...

Hope this helps! Thank you! :)
Cool, I will try.
Can pytorch2torchscript also be handled by modifying ...(your-env-path)/site-packages/torch/jit/trace.py, to solve `RuntimeError: Tracer cannot infer type of [<DetDataSample(
) at 0x7f2ef4d99970>]`?
@xinlin-xiao Hmm, it seems it is returning a list, so just get the entry at the 0th index of that list and it should solve your issue.
Hi @bibekyess, could you please provide the entire ... Moreover, I'm using the implementation included in mmdetection v3.3.0, and I can confirm that the issue is present there as well.
Hi @sirolf-otrebla, I am on vacation now, so I will provide it later. But as far as I remember, line 125 includes this instruction ...
Hi @sirolf-otrebla, when using version 3.3.0 of mmdet, how should these two lines in mmdet/apis/inference.py be modified? Alternatively, could you tell me how to train with version 3.3.0? I'm not sure where I might be going wrong. Thank you very much!
Hi @bibekyess, when using version 3.3.0 of mmdet, how should these two lines in mmdet/apis/inference.py be modified? 'from mmcv.parallel import collate, scatter' I found that these modules have been removed in this version, but I haven't found any alternative modules. Could you please tell me how you trained with version 3.3.0? This issue has been bothering me for a long time, and I would greatly appreciate your help.
@Sunny20236 I am not sure if this answers your question because it's not clear to me what the issue you're having is. I can tell you that I'm using this ( https://github.com/open-mmlab/mmdetection/blob/main/tools/train.py ) script to train the network, directly from their repo. I didn't have to modify anything inside mmdetection itself.
Also, for future reference in case someone needs it: this git diff applies the patch @bibekyess suggested. It works with torch 2.0.0. For other versions of torch you can write a similar patch yourself, but I encountered other issues with more modern versions (hence the downgrade to 2.0.0).

index 4afe7349690..fb993b7371e 100644
--- ./jit/_trace.py
+++ ./jit/_trace.py
@@ -29,6 +29,14 @@ from torch.testing._comparison import default_tolerances
_flatten = torch._C._jit_flatten
_unflatten = torch._C._jit_unflatten
+def _unpack_mmdet_det_data_sample(sample):
+ try:
+ scores, bboxes, labels = sample.pred_instances.scores, sample.pred_instances.bboxes, sample.pred_instances.labels
+ bboxes_scores_cat_tensor = torch.cat([bboxes, scores.unsqueeze(1)], dim=1)
+ labels_long_tensor = labels.long()
+ return bboxes_scores_cat_tensor.unsqueeze(0), labels_long_tensor.unsqueeze(0)
+ except:
+ raise TypeError("not a mmdetection data sample")
def _create_interpreter_name_lookup_fn(frames_up=1):
def _get_interpreter_name_for_var(var):
@@ -115,7 +123,22 @@ class ONNXTracedModule(torch.nn.Module):
)
if self._return_inputs_states:
inputs_states.append(_unflatten(in_args, in_desc))
- outs.append(self.inner(*trace_inputs))
+ # ---------------------------------------------------------------------------------------------------------
+ # ------------------START OF PATCH-------------------------------------------------------------------------
+ # -------------------------------------------------------------------------------------------------------------
+
+ result_ = self.inner(*trace_inputs)
+ try:
+ if result_[0].__class__.__name__ == "DetDataSample":
+ result_ = _unpack_mmdet_det_data_sample(result_[0])
+ except Exception as e:
+ warnings.warn("Failed to unpack mmdet det data sample, using standard torch output tracing")
+ finally:
+ outs.append(result_)
+
+ # ---------------------------------------------------------------------------------------------------------
+ # ------------------END OF PATCH---------------------------------------------------------------------------
+ # ---------------------------------------------------------------------------------------------------------
if self._return_inputs_states:
inputs_states[0] = (inputs_states[0], trace_inputs)
out_vars, _ = _flatten(outs)
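In case it saves someone a lookup, this tiny snippet prints the path of the file that the diff above patches in the active environment (assuming a standard pip install of torch):

import os
import torch

# Locate torch/jit/_trace.py, i.e. the file targeted by the patch above.
print(os.path.join(os.path.dirname(torch.__file__), 'jit', '_trace.py'))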
@sirolf-otrebla I'm sorry for the interruption again. Initially, I encountered the error mentioned by @bibekyess. Then I made modifications according to the file you provided, which resulted in the following error: ...
Do you happen to know how to resolve this issue? I would greatly appreciate it.
@Sunny20236 Could you read the entire message I wrote for solving this issue? The solution to your soft_nms issue is already there. Thanks! :)
@Sunny20236 You have to remove the soft NMS module from your model config file. You are not going to need it for inference anyway, and you can always add it back later as a post-processing step. EDIT: when I say "model config file" I mean, for example: ...
Of course, the specific file depends on your specific case.
@sirolf-otrebla I'm sorry to bother you again. I encountered a new problem during the conversion process: the detection boxes drawn from the ONNX model are particularly far away from the target. Have you encountered this problem before, or do you know how to handle it? Thank you very much. The following is a set of warning messages: ...
@Sunny20236 I'm deploying against TensorRT and the model performs almost exactly the same as the pure PyTorch one.
Hi @Sunny20236! As far as I have understood, the example input passed during the torch-to-ONNX export is used to trace the model through the forward pass, and the traced graph is converted into the ONNX format. This results ...
Long story short, if you resize the inference image to the same size as the example image used for the torch2onnx conversion, it should work. In my case, it worked like that.
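To make that concrete, here is a minimal inference sketch, assuming an MMDeploy-style detection export named end2end.onnx that returns (dets, labels); the file names, input size, and normalization values are placeholders and should be checked against your own export and training config:

import cv2
import numpy as np
import onnxruntime as ort

ONNX_PATH = 'end2end.onnx'   # placeholder
IMAGE_PATH = 'demo.jpg'      # placeholder
EXPORT_SIZE = (1280, 800)    # (width, height) of the example image used at export time (assumption)

session = ort.InferenceSession(ONNX_PATH, providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name

# Resize the inference image to the same size as the export-time example image.
img = cv2.imread(IMAGE_PATH)
img = cv2.resize(img, EXPORT_SIZE)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)

# ImageNet-style normalization (values assumed to match the training config).
mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
std = np.array([58.395, 57.12, 57.375], dtype=np.float32)
img = (img - mean) / std

# HWC -> NCHW with a batch dimension.
blob = img.transpose(2, 0, 1)[None]

dets, labels = session.run(None, {input_name: blob})
keep = dets[0, :, 4] > 0.3  # filter by score; each det is (x1, y1, x2, y2, score)
print(dets[0][keep], labels[0][keep])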
Finally! Finally! Finally! I have finally achieved the result I wanted. Thank you for your response, and also thank you to everyone who answered my questions. Thank you very much! @bibekyess, @sirolf-otrebla
By the way, did you guys get any speedup using the ONNX model compared to torch? I converted Co-DINO to ONNX and found the performance to be very similar. Specifically, ...
I found a significant improvement when converting to TensorRT and running on GPU, but on CPU I didn't see any gain. So please let me know if you see an improvement when running on CPU.
Did anyone use the ONNX model with TensorRT? I get this error when using TVM: ...
@sirolf-otrebla The bbox is out of bounds; the drawn bbox may not be inside the image. I used the same image from the train dataset.
I have read through the above issue and comments and have been trying to follow the same steps. In my case, I trained a Co-DETR model (ViT backbone, which gives 66 AP on the COCO test dataset) using this repo. I am using ...
Could you please kindly let me know if you can spot something that's going wrong here? Thanks for your help. Note: ...
Hi @caraevangeline, hmm, it looks like a familiar dimension issue. I would recommend these things: ...
Hope it helps, and happy learning!! 🙂
Thanks for your quick reply, I will try it out.
Could I ask how to use Co-DETR's ONNX model for inference? @Sunny20236
To answer my own question: the following call creates both a .onnx and an .engine file, if some changes are made as described in #26 (comment): ...
However, I was not able to properly visualize the output; I am unsure whether it is an issue with the model or with the pre/post-processing of the data. Since I only want to investigate the inference speed, I don't care about this right now. FYI, on a laptop GPU (NVIDIA RTX 2000 Ada) I get ~300 ms/image, and the model size decreased from 2.64 GB to 898 MB (.onnx) and 868 MB (.engine).
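For what it's worth, a hedged sketch of sanity-checking such an exported backend model through MMDeploy's high-level inference_model API; the config and file names below are placeholders, not the exact call referenced above:

from mmdeploy.apis import inference_model

# Run one image through the exported backend model (TensorRT engine or ONNX file);
# all paths are placeholders and should match your own export.
result = inference_model(
    model_cfg='projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py',
    deploy_cfg='mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py',
    backend_files=['work_dir/end2end.engine'],
    img='demo.jpg',
    device='cuda:0')
print(result)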
I've implemented ONNX-based end-to-end TensorRT accelerated inference for Co-DINO: https://github.com/DataXujing/Co-DETR-TensorRT
/home/aigroup/chenzx/ws_internImage/bin/python3.8 /home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py:284: UserWarning: Arguments like `--mean`, `--std`, `--dataset` would be parsed directly from config file and are deprecated and will be removed in future releases.
warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/onnx/symbolic.py:481: UserWarning: DeprecationWarning: This function will be deprecated in future. Welcome to use the unified model deployment toolbox MMDeploy: https://github.com/open-mmlab/mmdeploy
warnings.warn(msg)
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
2023-08-08 16:30:39,106 - mmcv - INFO - initialize RPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2023-08-08 16:30:39,108 - mmcv - INFO -
rpn_conv.weight - torch.Size([256, 256, 3, 3]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,108 - mmcv - INFO -
rpn_conv.bias - torch.Size([256]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,108 - mmcv - INFO -
rpn_cls.weight - torch.Size([9, 256, 1, 1]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,108 - mmcv - INFO -
rpn_cls.bias - torch.Size([9]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,108 - mmcv - INFO -
rpn_reg.weight - torch.Size([36, 256, 1, 1]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,108 - mmcv - INFO -
rpn_reg.bias - torch.Size([36]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,185 - mmcv - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'distribution': 'uniform', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.fc_cls.weight - torch.Size([81, 1024]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.fc_cls.bias - torch.Size([81]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.fc_reg.weight - torch.Size([320, 1024]):
NormalInit: mean=0, std=0.001, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.fc_reg.bias - torch.Size([320]):
NormalInit: mean=0, std=0.001, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.shared_fcs.0.weight - torch.Size([1024, 12544]):
XavierInit: gain=1, distribution=uniform, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.shared_fcs.0.bias - torch.Size([1024]):
XavierInit: gain=1, distribution=uniform, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.shared_fcs.1.weight - torch.Size([1024, 1024]):
XavierInit: gain=1, distribution=uniform, bias=0
2023-08-08 16:30:39,236 - mmcv - INFO -
bbox_head.shared_fcs.1.bias - torch.Size([1024]):
XavierInit: gain=1, distribution=uniform, bias=0
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/anchor_head.py:116: UserWarning: DeprecationWarning: `num_anchors` is deprecated, for consistency or also use `num_base_priors` instead
warnings.warn('DeprecationWarning: `num_anchors` is deprecated, '
2023-08-08 16:30:39,248 - mmcv - INFO - initialize CoATSSHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'atss_cls', 'std': 0.01, 'bias_prob': 0.01}}
2023-08-08 16:30:39,255 - mmcv - INFO -
cls_convs.0.conv.weight - torch.Size([256, 256, 3, 3]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,255 - mmcv - INFO -
cls_convs.0.gn.weight - torch.Size([256]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
cls_convs.0.gn.bias - torch.Size([256]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
reg_convs.0.conv.weight - torch.Size([256, 256, 3, 3]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,255 - mmcv - INFO -
reg_convs.0.gn.weight - torch.Size([256]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
reg_convs.0.gn.bias - torch.Size([256]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
atss_cls.weight - torch.Size([80, 256, 3, 3]):
NormalInit: mean=0, std=0.01, bias=-4.59511985013459
2023-08-08 16:30:39,255 - mmcv - INFO -
atss_cls.bias - torch.Size([80]):
NormalInit: mean=0, std=0.01, bias=-4.59511985013459
2023-08-08 16:30:39,255 - mmcv - INFO -
atss_reg.weight - torch.Size([4, 256, 3, 3]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,255 - mmcv - INFO -
atss_reg.bias - torch.Size([4]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,255 - mmcv - INFO -
atss_centerness.weight - torch.Size([1, 256, 3, 3]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,255 - mmcv - INFO -
atss_centerness.bias - torch.Size([1]):
NormalInit: mean=0, std=0.01, bias=0
2023-08-08 16:30:39,255 - mmcv - INFO -
scales.0.scale - torch.Size([]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
scales.1.scale - torch.Size([]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
scales.2.scale - torch.Size([]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
scales.3.scale - torch.Size([]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
scales.4.scale - torch.Size([]):
The value is the same before and after calling `init_weights` of CoATSSHead
2023-08-08 16:30:39,255 - mmcv - INFO -
scales.5.scale - torch.Size([]):
The value is the same before and after calling `init_weights` of CoATSSHead
load checkpoint from local path: /home/aigroup/chenzx/ws_internImage/code/Co-DETR/model/co_dino_5scale_swin_large_3x_coco.pth
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:423: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if W % self.patch_size[1] != 0:
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:425: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if H % self.patch_size[0] != 0:
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:362: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
Hp = int(np.ceil(H / self.window_size)) * self.window_size
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:363: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
Wp = int(np.ceil(W / self.window_size)) * self.window_size
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:203: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert L == H * W, "input feature has wrong size"
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:66: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
B = int(windows.shape[0] / (H * W / window_size / window_size))
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:241: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if pad_r > 0 or pad_b > 0:
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:272: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert L == H * W, "input feature has wrong size"
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:277: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
pad_input = (H % 2 == 1) or (W % 2 == 1)
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:278: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if pad_input:
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py", line 320, in
pytorch2onnx(
File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py", line 90, in pytorch2onnx
torch.onnx.export(
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/jit/_trace.py", line 1274, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/jit/_trace.py", line 133, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/jit/_trace.py", line 124, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 119, in new_func
return old_func(*args, **kwargs)
File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/base.py", line 169, in forward
return self.onnx_export(img[0], img_metas[0])
File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/co_detr.py", line 382, in onnx_export
outs = self.query_head(x)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'img_metas'