convert yolact to ONNX #74

Open

sdimantsd opened this issue Jun 23, 2019 · 65 comments

@sdimantsd

sdimantsd commented Jun 23, 2019

Hello again,
I'm trying to convert yolact to ONNX with the following code:

weights_path = '/home/ws/DL/yolact/weights/yolact_im700_54_800000.pth'

import torch
import torch.onnx
import yolact
import torchvision

model = yolact.Yolact()

# state_dict = torch.load(weights_path)
# model.load_state_dict(state_dict)

model.load_weights(weights_path)

dummy_input = torch.randn(1, 3, 640, 480)

torch.onnx.export(model, dummy_input, "onnx_model_name.onnx")

error msg:

/home/ws/DL/yolact/yolact.py:256: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  for j, i in product(range(conv_h), range(conv_w)):
/home/ws/DL/yolact/yolact.py:279: TracerWarning: torch.Tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  self.priors = torch.Tensor(prior_data).view(-1, 4)
/home/ws/DL/yolact/yolact.py:279: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  self.priors = torch.Tensor(prior_data).view(-1, 4)
/home/ws/DL/yolact/layers/functions/detection.py:74: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  for batch_idx in range(batch_size):
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-2-a796dc0eef97> in <module>
     13 dummy_input = torch.randn(1, 3, 700, 700)
     14 
---> 15 torch.onnx.export(model, dummy_input, "onnx_model_name.onnx")

~/.local/lib/python3.6/site-packages/torch/onnx/__init__.py in export(*args, **kwargs)
     23 def export(*args, **kwargs):
     24     from torch.onnx import utils
---> 25     return utils.export(*args, **kwargs)
     26 
     27 

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
    129             operator_export_type=operator_export_type, opset_version=opset_version,
    130             _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
--> 131             strip_doc_string=strip_doc_string)
    132 
    133 

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
    361                                                         output_names, operator_export_type,
    362                                                         example_outputs, propagate,
--> 363                                                         _retain_param_name, do_constant_folding)
    364 
    365         # TODO: Don't allocate a in-memory string for the protobuf

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in _model_to_graph(model, args, verbose, training, input_names, output_names, operator_export_type, example_outputs, propagate, _retain_param_name, do_constant_folding, _disable_torch_constant_prop)
    264             model.graph, tuple(args), example_outputs, False, propagate)
    265     else:
--> 266         graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
    267         state_dict = _unique_state_dict(model)
    268         params = list(state_dict.values())

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in _trace_and_get_graph_from_model(model, args, training)
    223     # training mode was.)
    224     with set_training(model, training):
--> 225         trace, torch_out = torch.jit.get_trace_graph(model, args, _force_outplace=True)
    226 
    227     if orig_state_dict_keys != _unique_state_dict(model).keys():

~/.local/lib/python3.6/site-packages/torch/jit/__init__.py in get_trace_graph(f, args, kwargs, _force_outplace, return_inputs)
    229     if not isinstance(args, tuple):
    230         args = (args,)
--> 231     return LegacyTracedModule(f, _force_outplace, return_inputs)(*args, **kwargs)
    232 
    233 

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

~/.local/lib/python3.6/site-packages/torch/jit/__init__.py in forward(self, *args)
    292         try:
    293             trace_inputs = _unflatten(all_trace_inputs[:len(in_vars)], in_desc)
--> 294             out = self.inner(*trace_inputs)
    295             out_vars, _ = _flatten(out)
    296             torch._C._tracer_exit(tuple(out_vars))

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             hook(self, input)
    490         if torch._C._get_tracing_state():
--> 491             result = self._slow_forward(*input, **kwargs)
    492         else:
    493             result = self.forward(*input, **kwargs)

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
    479         tracing_state._traced_module_stack.append(self)
    480         try:
--> 481             result = self.forward(*input, **kwargs)
    482         finally:
    483             tracing_state.pop_scope()

~/DL/yolact/yolact.py in forward(self, x)
    615                 pred_outs['conf'] = F.softmax(pred_outs['conf'], -1)
    616 
--> 617             return self.detect(pred_outs)
    618 
    619 

~/DL/yolact/layers/functions/detection.py in __call__(self, predictions)
     73 
     74             for batch_idx in range(batch_size):
---> 75                 decoded_boxes = decode(loc_data[batch_idx], prior_data)
     76                 result = self.detect(batch_idx, conf_preds, decoded_boxes, mask_data, inst_data)
     77 

RuntimeError: isTensor() ASSERT FAILED at /pytorch/aten/src/ATen/core/ivalue.h:209, please report a bug to PyTorch. (toTensor at /pytorch/aten/src/ATen/core/ivalue.h:209)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f721e0ac441 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f721e0abd7a in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x979ad2 (0x7f721d130ad2 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #3: torch::jit::tracer::getNestedValueTrace(c10::IValue const&) + 0x41 (0x7f721d3939a1 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #4: <unknown function> + 0xa7651b (0x7f721d22d51b in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #5: <unknown function> + 0xa766db (0x7f721d22d6db in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #6: <unknown function> + 0x457942 (0x7f725d6d2942 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x130cfc (0x7f725d3abcfc in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #8: _PyCFunction_FastCallDict + 0x35c (0x56204c in /usr/bin/python3)
frame #9: /usr/bin/python3() [0x5a1501]
frame #10: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #11: /usr/bin/python3() [0x5136c6]
frame #12: _PyObject_FastCallKeywords + 0x19c (0x57ec0c in /usr/bin/python3)
frame #13: /usr/bin/python3() [0x4f88ba]
frame #14: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #15: _PyFunction_FastCallDict + 0xf5 (0x4f4065 in /usr/bin/python3)
frame #16: /usr/bin/python3() [0x5a1481]
frame #17: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #18: /usr/bin/python3() [0x513601]
frame #19: _PyObject_FastCallKeywords + 0x19c (0x57ec0c in /usr/bin/python3)
frame #20: /usr/bin/python3() [0x4f88ba]
frame #21: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #22: /usr/bin/python3() [0x4f6128]
frame #23: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #24: /usr/bin/python3() [0x5a1481]
frame #25: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #26: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #27: /usr/bin/python3() [0x4f6128]
frame #28: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #29: /usr/bin/python3() [0x5a1481]
frame #30: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #31: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #32: /usr/bin/python3() [0x4f6128]
frame #33: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #34: /usr/bin/python3() [0x5a1481]
frame #35: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #36: /usr/bin/python3() [0x513601]
frame #37: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #38: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #39: /usr/bin/python3() [0x4f6128]
frame #40: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #41: /usr/bin/python3() [0x5a1481]
frame #42: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #43: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #44: /usr/bin/python3() [0x4f6128]
frame #45: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #46: /usr/bin/python3() [0x5a1481]
frame #47: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #48: /usr/bin/python3() [0x513601]
frame #49: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #50: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #51: /usr/bin/python3() [0x4f6128]
frame #52: /usr/bin/python3() [0x4f7d60]
frame #53: /usr/bin/python3() [0x4f876d]
frame #54: _PyEval_EvalFrameDefault + 0x1260 (0x4fa6c0 in /usr/bin/python3)
frame #55: /usr/bin/python3() [0x4f7a28]
frame #56: /usr/bin/python3() [0x4f876d]
frame #57: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #58: /usr/bin/python3() [0x4f6128]
frame #59: /usr/bin/python3() [0x4f7d60]
frame #60: /usr/bin/python3() [0x4f876d]
frame #61: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #62: /usr/bin/python3() [0x4f6128]
frame #63: /usr/bin/python3() [0x4f7d60]
@dbolya
Owner

dbolya commented Jun 23, 2019

See #59. You'll have to put some elbow grease in if you want to get YOLACT traceable (i.e., exportable to ONNX) since I use a lot of pythonic code. I hear @Wilber529 was able to do it following these steps: #59 (comment). You have to rewrite how I pass around variables (dictionaries are not supported I think) and you'll have to rewrite anything after Yolact's forward function (starting with self.detect) in your target language because I wrote it in a super pythonic way to make the model faster.
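
(A minimal sketch of the kind of rewrite described above, assuming Yolact's forward has already been edited to return the raw prediction dict before self.detect is applied; the wrapper class and weight path are illustrative, not part of the repo.)

import torch
import yolact

class TraceableYolact(torch.nn.Module):
    # Hypothetical wrapper: return a flat tuple of tensors instead of the
    # pythonic dict, so the JIT tracer (and torch.onnx.export) can record it.
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        outs = self.net(x)  # assumed: raw head outputs, Detect not applied
        return outs['loc'], outs['conf'], outs['mask'], outs['priors'], outs['proto']

model = yolact.Yolact()
model.load_weights('weights/yolact_base_54_800000.pth')
model.eval()
torch.onnx.export(TraceableYolact(model), torch.randn(1, 3, 550, 550), "yolact.onnx")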

@sdimantsd
Author

Hi @dbolya, thanks for your answer!
I'm not sure I understood you. Can you please expand?

@dbolya
Owner

dbolya commented Jun 23, 2019

Yolact does not support conversion to ONNX, which is why you get an error. You'd need to change a lot of things to get conversion to ONNX to work, as outlined by @Wilber529 in #59 (comment). I'm not making these changes to the main branch because they'd make the Python version run slower and make it harder to develop.

@sdimantsd
Author

thx

@Ma-Dan

Ma-Dan commented Jul 12, 2019

I have converted yolact to onnx without the Detect part, and also modified some upsampling code.
https://github.com/Ma-Dan/yolact/tree/onnx
The onnx model outputs loc, conf, mask and proto; the detect process should be implemented by other means.
I also converted the onnx model to a CoreML model; 4 custom layers need to be implemented to make it work.
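
(A rough sketch of consuming such an export with onnxruntime, with detect/NMS kept outside the graph. The file name and the output order are assumptions; inspect them with sess.get_outputs() on your own model.)

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolact.onnx")
img = np.random.randn(1, 3, 550, 550).astype(np.float32)  # dummy input
outputs = sess.run(None, {sess.get_inputs()[0].name: img})
# e.g. loc, conf, mask, proto = outputs; the detect step (decode + NMS)
# then runs on these arrays in plain Python or in the target language.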

@abhigoku10

@Ma-Dan thanks for sharing the reference code, I shall look into this process and get back to you if I have queries.

@aweissen1

@Ma-Dan thank you very much for sharing your work.
I am wondering what needs to be implemented to execute the onnx model again.
What does this mean: "The onnx model outputs loc, conf, mask and proto; the detect process should be implemented by other means."?
Thanks for your help!

@ABlueLight

@Ma-Dan Thank you for your code! I converted the model to onnx, but the results differ from the pytorch outputs for loc, mask and proto; only conf is the same! Have you seen this problem?

@aweissen1

@abhigoku10 actually I just used the onnx branch from Ma-Dan to create an onnx file. Do you get an error while converting?

@abhigoku10

@aweissen1 I was facing some package issues; I shall look into it in more depth and solve it. Was there any difference in the output generated?

@ABlueLight

@Ma-Dan Hi, I converted to onnx successfully, but I found the results are not correct. Can you share the versions of pytorch and onnxruntime you are using? Thx

@sicarioakki

@Ma-Dan Can you give more information about the package dependencies for your Yolact-ONNX implementation?
And also, have you compared the results of Yolact with those of your Yolact-ONNX implementation? If so, please give us some insight.

@abhigoku10

abhigoku10 commented Jul 22, 2019

@ABlueLight and @aweissen1 should we use the base code given by @Ma-Dan and train the model, or just load the trained model with this code? What is the command to be used? Please share the process.
Can I run it on GPU? How much FPS are you getting?

@ABlueLight

ABlueLight commented Jul 22, 2019

I converted to onnx successfully today and the results are correct. Thx @Ma-Dan
@sicarioakki @abhigoku10 My package dependencies include pytorch 1.0.0, torchvision 0.2.1, onnx-tf 1.3.0, onnxruntime 0.4.0, onnx 1.5.0 and tensorflow-gpu 1.14.0.
Just using @Ma-Dan's code is ok; I didn't modify the code, just replaced my trained model.
Maybe the package versions are an important factor.

@abhigoku10

@ABlueLight after conversion to onnx, which platform are you going to deploy it on? And did you convert it to a tensorflow-based model?

@ABlueLight

@abhigoku10 TensorFlow, and it runs correctly

@Ma-Dan

Ma-Dan commented Jul 22, 2019

Sorry for the delayed reply, I just fixed the code in my repo to use the correct onnx output.
Ma-Dan@a064897
The previous version moved the prior constant output to a separate file to make the CoreML file correct, and I forgot to fix the onnx output index. Sorry again!
Also note that to make the conversion to onnx correct, I hard-coded sizes here:
https://github.com/Ma-Dan/yolact/blob/onnx/yolact.py#L344
So this code will not work correctly with the yolact_im700_54_800000.pth weights; you need to fix the sizes there.

@Ma-Dan

Ma-Dan commented Jul 22, 2019

The environment I used:
onnx 1.4.1
onnxruntime 0.4.0
torch 1.0.1
torchvision 0.2.1

Run
python eval.py --trained_model=weights/yolact_darknet53_54_800000.pth --score_threshold=0.3 --top_k=100 --cuda=False --image=dog.jpg
to generate the onnx file.
And run
python onnxeval.py --trained_model=weights/yolact_resnet50_54_800000.pth --score_threshold=0.3 --top_k=100 --cuda=False --image=dog.jpg
to evaluate with onnx.

@abhigoku10

abhigoku10 commented Jul 22, 2019

@Ma-Dan thanks for the response, I have a few queries:

  1. Can the onnx model you obtained be used with C++?
  2. Can I convert that model to another framework like tf or caffe?
  3. In the command you shared you mentioned --cuda=False; does it mean that it can run only on CPU and not on GPU? I wanted to run it on GPU.

@aweissen1

@Ma-Dan Thank you! Great job.

@ridasalam

@ABlueLight how did you import it to Tensorflow?

@sicarioakki

file_name= yolact_base_0_4000.onnx
params= ['yolact', 'base', '0', '4000']
model_name= yolact_base
epoch= 0
iteration= 4000
Config not specified. Parsed yolact_base_config from the file name.

Loading model...Traceback (most recent call last):
  File "onnxeval.py", line 1035, in <module>
    net.load_weights(args.trained_model)
  File "/home/aeye/yolact-onnx/yolact_onnx_1/yolact.py", line 469, in load_weights
    state_dict = torch.load(path, map_location='cpu')
  File "/home/aeye/yolact-onnx/Yolact_ONNX/lib/python3.6/site-packages/torch/serialization.py", line 368, in load
    return _load(f, map_location, pickle_module)
  File "/home/aeye/yolact-onnx/Yolact_ONNX/lib/python3.6/site-packages/torch/serialization.py", line 532, in _load
    magic_number = pickle_module.load(f)
_pickle.UnpicklingError: invalid load key, '\x08'.

I was able to convert the model to .onnx format.
But while inferencing, I am facing the above issue.

@ABlueLight

@ABlueLight how did you import it to Tensorflow?
https://github.com/onnx/onnx-tensorflow
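
(A minimal sketch of that route, assuming an onnx-tf 1.x install like the versions quoted earlier in this thread; file names are illustrative.)

import onnx
from onnx_tf.backend import prepare

model = onnx.load("yolact.onnx")
tf_rep = prepare(model)            # TensorFlow backend representation of the graph
tf_rep.export_graph("yolact.pb")   # write it out as a TensorFlow graph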

@abhigoku10

@aweissen1 @ABlueLight hi guys, I am facing the same issue as above in my inference after conversion:

[same log and "invalid load key" traceback as in @sicarioakki's comment above]

Any suggestions?

@sicarioakki

@Ma-Dan @aweissen1 @ABlueLight How are you guys able to load the ONNX model using the torch.load() function? Only onnx.load() can be used, right?

@ridasalam

ridasalam commented Jul 23, 2019

@ABlueLight, do you have a huge difference in inference speed?

I used @Ma-Dan's helpful work to generate yolact.onnx, and I load it through onnx.load and onnx_tf.backend's prepare. All other post-processing is still torch based. It takes 2 mins per image for inference (compared to a couple of seconds in Pytorch).

Also, were you able to convert it to pure Tensorflow? (i.e., use a Tensorflow pb file instead of onnx)

@ABlueLight

@ridasalam I converted it to pure tensorflow and it costs about 400~500 ms on an i5 CPU.
On GPU, pytorch and tensorflow run times are almost equal.

@ABlueLight

@sicarioakki the ONNX model should be loaded by onnx.load(), I think.
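
(That matches the traceback above: torch.load unpickles a .pth checkpoint, so handing it an .onnx protobuf fails on the first byte with exactly that "invalid load key" UnpicklingError. A small sketch of the distinction; file names are illustrative.)

import onnx
import torch

onnx_model = onnx.load("yolact_base_0_4000.onnx")     # .onnx protobuf -> onnx.load
onnx.checker.check_model(onnx_model)                  # optional sanity check
state_dict = torch.load("yolact_base_54_800000.pth",  # .pth pickle -> torch.load
                        map_location='cpu')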

@sdimantsd
Author

@ridasalam I converted it to pure tensorflow and it costs about 400~500 ms on an i5 CPU.
On GPU, pytorch and tensorflow run times are almost equal.

Can you share the project of tensorflow?

@saisubramani

[quoting @Ma-Dan's environment and eval/onnxeval commands above]

Hi, I am trying to start custom training; while starting the training it shows an error. I am running the script with all the dependencies you mentioned.
The error is:

/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py:134: UserWarning:
Found GPU0 GRID K520 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.

warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
[the same warning is repeated for GPU1, GPU2 and GPU3]
Traceback (most recent call last):
  File "train.py", line 382, in <module>
    train()
  File "train.py", line 143, in train
    yolact_net = Yolact()
  File "/home/ubuntu/efs_model/models/YOLACT/Modified_Yolact/yolact.py", line 395, in __init__
    self.backbone = construct_backbone(cfg.backbone)
  File "/home/ubuntu/efs_model/models/YOLACT/Modified_Yolact/backbone.py", line 437, in construct_backbone
    backbone = cfg.type(*cfg.args)
  File "/home/ubuntu/efs_model/models/YOLACT/Modified_Yolact/backbone.py", line 64, in __init__
    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 332, in __init__
    False, _pair(0), groups, bias, padding_mode)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 46, in __init__
    self.reset_parameters()
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 49, in reset_parameters
    init.kaiming_uniform_(self.weight, a=math.sqrt(5))
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/init.py", line 315, in kaiming_uniform_
    return tensor.uniform_(-bound, bound)
RuntimeError: CUDA error: no kernel image is available for execution on the device

Hi! The modification I made is only useful when converting a trained model to onnx; please use the original code for your custom training.

Can you please tell me the steps for converting the yolact .pth model to a .onnx model, and mention which script should be used for the conversion, so that it can be helpful to me? My idea is to convert the model to tensorRT, so I am trying to convert yolact to onnx to tensorRT.

@saisubramani

saisubramani commented Feb 29, 2020

[quoting the full exchange above: @Ma-Dan's environment and commands, the CUDA traceback, and @Ma-Dan's reply]

Hi, I found that you are using the eval.py script for converting the yolact model to an onnx model, and I have a doubt.
pred_outs = net(batch)
This gives a list of size 1, so how are you using the indices in

preds = detect({'loc': pred_outs[0], 'conf': pred_outs[1], 'mask':pred_outs[2], 'priors': pred_outs[3], 'proto': pred_outs[4]})

It shows

IndexError: list index out of range

So what I did is add a few lines:
pred_outs = dict(pred_outs[0]); pred_outs = pred_outs['detection']

Now it is in dictionary format, and by using the keys I can take the values of the detection; but when I cross-checked the detection (in dictionary format) it has the keys

('mask','class','score','proto','net')

What values can I assign for

pred_outs[0], pred_outs[1], pred_outs[2], pred_outs[3], pred_outs[4]

in

preds = detect({'loc': pred_outs[0], 'conf': pred_outs[1], 'mask':pred_outs[2], 'priors': pred_outs[3], 'proto': pred_outs[4]})

In my understanding 'conf' means score, 'mask' means mask and 'proto' means proto; what about 'loc' and 'priors'?

I tried it like this:
preds = detect({'loc': pred_outs['box'], 'conf': pred_outs['score'], 'mask':pred_outs['mask'], 'priors': pred_outs['class'], 'proto': pred_outs['proto']})
It shows the error:
TypeError: __call__() missing 1 required positional argument: 'net'
Can you help me to sort out this issue? If I am wrong, please tell me. @Ma-Dan

@AlexanderSlav

Hi @Ma-Dan, I'm trying to convert the yolact model to TensorRT and facing a number of issues.

Here is my working environment :

  • pytorch == 1.4.0

  • TensorRT == 7.0.0(official docker release 20.01)

There are links to the yolact model in onnx format.

With opset version == 9, the error is:
UNSUPPORTED_NODE: Assertion failed: scales_input.is_weights()

With opset version == 11, the error is:
INVALID_GRAPH: Assertion failed: ctx->tensors().count(inputName)

Thank you in advance for your help

@dzyjjpy

dzyjjpy commented Mar 12, 2020

[quoting @Ma-Dan's first comment above about the onnx branch and CoreML custom layers]

@Ma-Dan Thanks for your great work. I followed your code and converted to onnx successfully, but converting the onnx to coreml shows errors about the upsample layer (you mentioned you modified some upsampling code; could you please share the modification? Do you mean the function def _convert_upsample(builder, node, graph, err): in /home/jiapy/virtualEnv/py3.6torch1.2/lib/python3.6/site-packages/onnx_coreml/_operators.py?):
175/308: Converting Node Type Upsample
176/308: Converting Node Type Conv
177/308: Converting Node Type Add
178/308: Converting Node Type Upsample
Traceback (most recent call last):
  File "/home/jiapy/workspace/segmentation/yolact-coreml/onnx_to_coreml.py", line 15, in <module>
    minimum_ios_deployment_target='12' # TypeError: 'set' object is not callable
  File "/home/jiapy/virtualEnv/py3.6torch1.2/lib/python3.6/site-packages/onnx_coreml/converter.py", line 629, in convert
    _convert_node(builder, node, graph, err)
  File "/home/jiapy/virtualEnv/py3.6torch1.2/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 2017, in _convert_node
    return converter_fn(builder, node, graph, err)
  File "/home/jiapy/virtualEnv/py3.6torch1.2/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 1654, in _convert_upsample
    input_shape = graph.shape_dict[node.inputs[0]]
KeyError: '533'

Process finished with exit code 1

@bbico

bbico commented May 7, 2020

[quoting @Ma-Dan's first comment above]

Hi.
I'm pretty new to ML.
I followed Ma-Dan's work for a while, and finally I got a yolact.onnx model.
But my ultimate goal is to run a Unity program with yolact.
And now I get another error when dealing with the onnx model:

Unexpected error while evaluating model output 783. System.ArgumentException: Cannot reshape array of size 62208 into shape with multiple of 1024 elements
at Barracuda.TensorExtensions.Reshape

I used opset=9, input=(1,550,550,3), model=resnet50-54.
Someone posted similar issues about importing onnx, but I couldn't find the exact solution.
So, if anyone has had the same issue or solved it, please give me a tip.

@biyuehuang

[quoting @Ma-Dan's environment and eval/onnxeval commands above]

Hi @Ma-Dan, do you know how to convert yolact_plus_base_54_800000.pth to ONNX? I ran $ python eval.py --trained_model=weights/yolact_plus_base_54_800000.pth --score_threshold=0.3 --top_k=100 --cuda=False --image=dog.jpg and got an error:
Multiple GPUs detected! Turning off JIT.
Config not specified. Parsed yolact_plus_base_config from the file name.

Traceback (most recent call last):
  File "eval.py", line 980, in <module>
    set_cfg(args.config)
  File "/home/username/Document/yolact/yolact/data/config.py", line 676, in set_cfg
    cfg.replace(eval(config_name))
  File "<string>", line 1, in <module>
NameError: name 'yolact_plus_base_config' is not defined

@amitkumar-delhivery

[quoting @Ma-Dan's environment and eval/onnxeval commands above]

Thank you so much @Ma-Dan, have you tried converting it to tflite?

@amitkumar-delhivery

amitkumar-delhivery commented Jun 9, 2020

[quoting @Ma-Dan's environment/commands and @biyuehuang's yolact_plus_base question above]

Give the config file as an argument when executing eval.py, e.g. --config=custom_config, as you might have provided while running train.py!

@amitkumar-delhivery

[quoting @Ma-Dan's environment/commands and the tflite question above]

Done :)

@bmabir17

[quoting the same exchange above]

@amitkumar-delhivery were you able to convert it into tflite? If so, did you use it on mobile devices (Android)?

@Chase2816

@ridasalam I converted it to pure tensorflow and it costs about 400~500 ms on an i5 CPU.
On GPU, pytorch and tensorflow run times are almost equal.

Can you share the project that converts yolact to pure tensorflow?

@carlsummer

RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type Yolact

@h-aboutalebi

@ABlueLight you said you were successful in converting yolact to onnx and deploying it on TensorFlow. I was wondering if you could share your code? I am still trying to figure out how to convert Yolact to ONNX and then deploy it on TensorFlow. Thanks!

@areebsyed

areebsyed commented Oct 26, 2020

Hi all, I used @Ma-Dan's repo to convert the .pth file to a .onnx file. Now I want to run this using C++. What should I do next? Can someone point me in a good direction? What I needed initially was to convert a .pt file to a .pth file, but from this issue I realized that I can convert it to ONNX and then to C++. Am I thinking about this rightly?

Should I now look for a solution to convert .onnx back to .pth, or can .onnx be called from C++ (in a C++ implementation)?

@yaoyh

yaoyh commented Dec 9, 2020

Thank you very much for your work.
I have successfully converted to ONNX. However, when using onnxeval.py to verify the inference results,
I found that the starting and ending coordinates of all boxes are the same (x1=x2, y1=y2).
The drawn image is attached for reference.

Can anyone give me some advice?
@Ma-Dan
[image: 000000397133]

Setting the above problem aside, I also successfully converted the onnx to TensorRT
and quantized it to INT8.

@rbgreenway

Has anyone successfully converted a Yolact++ model to onnx? For example, yolact_plus_base_54_800000.pth to yolact_plus_base_54_800000.onnx? I'm not sure that @Ma-Dan has updated his repo to support the Yolact++ models. When I try to run eval.py as

python3 eval.py --trained_model=yolact_plus_base_54_800000.pth --score_threshold=0.3 --top_k=100 --cuda=False --image=image.jpg

I get:

Multiple GPUs detected! Turning off JIT.
Config not specified. Parsed yolact_plus_base_config from the file name.

Traceback (most recent call last):
  File "eval.py", line 990, in <module>
    set_cfg(args.config)
  File "yolact/data/config.py", line 676, in set_cfg
    cfg.replace(eval(config_name))
  File "<string>", line 1, in <module>
NameError: name 'yolact_plus_base_config' is not defined

@amitkumar-delhivery suggested above that for this error, add: --config=custom_config, but this leads to

NameError: name 'custom_config' is not defined

Any suggestions would be appreciated.

@sdimantsd
Author

@rbgreenway
It looks like you don't have those configs.
Take a look at data/config.py to see whether "yolact_plus_base_config" is in it.
Also, "custom_config" is just an example config name; you should use the name you have.

@rbgreenway

@sdimantsd
So, I basically took what I needed from the data/config.py in Ma-Dan's main branch, and put it in the onnx branch to eliminate the "yolact_plus_base_config" issue; however, now there is something very weird:

python3 eval.py --trained_model=./weights/yolact_plus_base_54_800000.pth --score_threshold=0.15 --top_k=15 --cuda=False --image=/home/bryan/Pictures/stef.jpg
Multiple GPUs detected! Turning off JIT.
Config not specified. Parsed yolact_plus_base_config from the file name.

Loading model...Traceback (most recent call last):
  File "eval.py", line 1023, in <module>
    net = Yolact()
  File "/home/bryan/python_projects/yolact_onnx/yolact/yolact.py", line 398, in __init__
    self.backbone = construct_backbone(cfg.backbone)
  File "/home/bryan/python_projects/yolact_onnx/yolact/backbone.py", line 437, in construct_backbone
    backbone = cfg.type(*cfg.args)
  File "/home/bryan/python_projects/yolact_onnx/yolact/backbone.py", line 69, in __init__
    self._make_layer(block, 64, layers[0])
  File "/home/bryan/python_projects/yolact_onnx/yolact/backbone.py", line 87, in _make_layer
    if stride != 1 or self.inplanes != planes * block.expansion:
AttributeError: 'int' object has no attribute 'expansion'

I thought it might be a version issue, but I've replicated what people have mentioned above with no change.
The environment I used:

onnx 1.4.1
onnxruntime 0.4.0
torch 1.0.1
torchvision 0.2.1

That seems like a pytorch problem, but I'm still digging....

@namnguyenphuong10

namnguyenphuong10 commented Feb 1, 2021

File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_converter.py", line 600, in convert graph = _prepare_onnx_graph(onnx_model.graph, transformers, onnx_model.ir_version) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_converter.py", line 464, in _prepare_onnx_graph graph_ = graph_.transformed(transformers) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_graph.py", line 232, in transformed return _apply_graph_transformations(graph, transformers) # type: ignore File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_graph.py", line 73, in _apply_graph_transformations graph = transformer(graph) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_transformers.py", line 842, in __call__ output = np.take(x, range(s, e), axis=a) # type: ignore File "<__array_function__ internals>", line 6, in take File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 191, in take return _wrapfunc(a, 'take', indices, axis=axis, out=out, mode=mode) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 58, in _wrapfunc return bound(*args, **kwds) MemoryError

My computer: 32 GB RAM, GeForce GTX 1060 6GB.
I get the above trouble when I attempt to convert the yolact.onnx model into yolact.mlmodel. Can anyone help me? Thanks a lot.

@chingi071

@Ma-Dan Thank you very much for your open source work and help. I used your github to convert the model to ncnn: https://github.com/Ma-Dan/yolact/tree/onnx.
I would like to ask whether you have tried to convert onnx to ncnn, because a segmentation fault (core dumped) appears when I execute ncnnoptimize.
I am wondering whether it is related to the warning that appears when converting to onnx. The following is my message. Thank you.

The environment I used: python 3.7.9, torch1.5.0, torchvision 0.6.0, onnx 1.8.1

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.last_img_size != (cfg._tmp_img_w, cfg._tmp_img_h):

@biyuehuang

It worked for me. Convert yolact to onnx following this guide: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT.html

@stereomatchingkiss

I converted the yolact model to onnx following the instructions at https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT.html and https://github.com/Ma-Dan/yolact/tree/onnx. Both of them are much slower compared with pytorch, because the output needs to be copied from cuda to cpu (125 ms vs 30 ms). I tried to perform nms with torchvision's nms and convert that to onnx, but the performance was even slower. Does anyone know how to speed things up when running yolact through onnx? Thanks
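
(One thing worth trying, sketched below: keep onnxruntime inference on the GPU with a CUDA-enabled build so the outputs are not bounced through host memory mid-pipeline. Note the providers argument exists in recent onnxruntime-gpu releases, not in the 0.4-era versions quoted earlier in this thread.)

import onnxruntime as ort

sess = ort.InferenceSession(
    "yolact.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # confirm the CUDA provider was actually loaded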

@rgkannan676

For Resnet101-FPN change yolact/yolact.py line (https://github.com/Ma-Dan/yolact/blob/onnx/yolact.py#L344 ) "sizes = [(69, 69), (35, 35)]" to "sizes = [(88, 88), (44, 44)]" to match the output shapes.
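
(Those pairs appear to be the feature-map sizes for the configured square input; a small sketch of the arithmetic, where the stride interpretation is an assumption:)

import math

def upsample_sizes(img_size, strides=(8, 16)):
    # assumed: ceil(img_size / stride) for the two hard-coded Upsample targets
    return [(math.ceil(img_size / s),) * 2 for s in strides]

print(upsample_sizes(550))  # [(69, 69), (35, 35)] -- the default in the onnx branch
print(upsample_sizes(700))  # [(88, 88), (44, 44)] -- the values suggested above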

@artes14

artes14 commented Apr 13, 2022

[quoting @saisubramani's comment above about mapping the pred_outs keys and the detect() TypeError]

I figured this came up due to a different yolact version, I guess?
In the line where detect() is called there is no 'net' argument involved, so
preds = detect({'loc': pred_outs['box'], 'conf': pred_outs['score'], 'mask': pred_outs['mask'], 'priors': pred_outs['proto'], 'proto': pred_outs['proto']})

should be changed to

preds = detect({'loc': pred_outs['box'], 'conf': pred_outs['score'], 'mask': pred_outs['mask'], 'priors': pred_outs['proto'], 'proto': pred_outs['proto']}, net)

But now I'm getting this problem; can anyone help?

File "D:/yolact-master/evalonnx.py", line 1055, in <module>  
    evaluate(net, dataset)  
  File "D:/yolact-master/evalonnx.py", line 832, in evaluate  
    evalimage(net, args.image)  
  File "D:/yolact-master/evalonnx.py", line 587, in evalimage  
    preds= detect({'loc': pred_outs['box'], 'conf': pred_outs['score'], 'mask': pred_outs['mask'], 'priors': pred_outs['proto'],  
  File "D:\yolact-master\layers\functions\detection.py", line 67, in __call__  
    conf_preds = conf_data.view(batch_size, num_priors, self.num_classes).transpose(2, 1).contiguous()  
RuntimeError: shape '[2, 112, 2]' is invalid for input of size 2

@apanand14

If I use onnxeval.py then I get this error:

  File "onnxeval.py", line 1065, in <module>
    evaluate(net, dataset)
  File "onnxeval.py", line 842, in evaluate
    evalimage(net, args.image)
  File "onnxeval.py", line 599, in evalimage
    preds = detect({'loc': torch.from_numpy(pred_onx[0]), 'conf': torch.from_numpy(pred_onx[1]),
TypeError: __call__() missing 1 required positional argument: 'net'

@apanand14

After converting to ONNX my model looks like this. I don't know whether it is correct or not; please help me out with it. Thank you.

7767517
240 273
Input input.1 0 1 input.1
MemoryData onnx::Add_528 0 1 onnx::Add_528 0=1
Convolution Conv_0 1 1 input.1 input.4 0=64 1=7 11=7 2=1 12=1 3=2 13=2 4=3 14=3 15=3 16=3 5=1 6=9408
ReLU Relu_1 1 1 input.4 onnx::MaxPool_357
Pooling MaxPool_2 1 1 onnx::MaxPool_357 input.8 0=0 1=3 11=3 2=2 12=2 3=1 13=1 14=1 15=1 5=1
Split splitncnn_0 1 2 input.8 input.8_splitncnn_0 input.8_splitncnn_1
Convolution Conv_3 1 1 input.8_splitncnn_1 input.16 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=4096
ReLU Relu_4 1 1 input.16 onnx::Conv_361
Convolution Conv_5 1 1 onnx::Conv_361 input.24 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
ReLU Relu_6 1 1 input.24 onnx::Conv_364
Convolution Conv_7 1 1 onnx::Conv_364 onnx::Add_775 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
Convolution Conv_8 1 1 input.8_splitncnn_0 onnx::Add_778 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
BinaryOp Add_9 2 1 onnx::Add_775 onnx::Add_778 onnx::Relu_369 0=0
ReLU Relu_10 1 1 onnx::Relu_369 input.36
Split splitncnn_1 1 2 input.36 input.36_splitncnn_0 input.36_splitncnn_1
Convolution Conv_11 1 1 input.36_splitncnn_1 input.44 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
ReLU Relu_12 1 1 input.44 onnx::Conv_373
Convolution Conv_13 1 1 onnx::Conv_373 input.52 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
ReLU Relu_14 1 1 input.52 onnx::Conv_376
Convolution Conv_15 1 1 onnx::Conv_376 onnx::Add_787 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
BinaryOp Add_16 2 1 onnx::Add_787 input.36_splitncnn_0 onnx::Relu_379 0=0
ReLU Relu_17 1 1 onnx::Relu_379 input.60
Split splitncnn_2 1 2 input.60 input.60_splitncnn_0 input.60_splitncnn_1
Convolution Conv_18 1 1 input.60_splitncnn_1 input.68 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
ReLU Relu_19 1 1 input.68 onnx::Conv_383
Convolution Conv_20 1 1 onnx::Conv_383 input.76 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
ReLU Relu_21 1 1 input.76 onnx::Conv_386
Convolution Conv_22 1 1 onnx::Conv_386 onnx::Add_796 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
BinaryOp Add_23 2 1 onnx::Add_796 input.60_splitncnn_0 onnx::Relu_389 0=0
ReLU Relu_24 1 1 onnx::Relu_389 input.84
Split splitncnn_3 1 2 input.84 input.84_splitncnn_0 input.84_splitncnn_1
Convolution Conv_25 1 1 input.84_splitncnn_1 input.92 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=32768
ReLU Relu_26 1 1 input.92 onnx::Conv_393
Convolution Conv_27 1 1 onnx::Conv_393 input.100 0=128 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=147456
ReLU Relu_28 1 1 input.100 onnx::Conv_396
Convolution Conv_29 1 1 onnx::Conv_396 onnx::Add_805 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
Convolution Conv_30 1 1 input.84_splitncnn_0 onnx::Add_808 0=512 1=1 11=1 2=1 12=1 3=2 13=2 4=0 14=0 15=0 16=0 5=1 6=131072
BinaryOp Add_31 2 1 onnx::Add_805 onnx::Add_808 onnx::Relu_401 0=0
ReLU Relu_32 1 1 onnx::Relu_401 input.112
Split splitncnn_4 1 2 input.112 input.112_splitncnn_0 input.112_splitncnn_1
Convolution Conv_33 1 1 input.112_splitncnn_1 input.120 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
ReLU Relu_34 1 1 input.120 onnx::Conv_405
Convolution Conv_35 1 1 onnx::Conv_405 input.128 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
ReLU Relu_36 1 1 input.128 onnx::Conv_408
Convolution Conv_37 1 1 onnx::Conv_408 onnx::Add_817 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
BinaryOp Add_38 2 1 onnx::Add_817 input.112_splitncnn_0 onnx::Relu_411 0=0
ReLU Relu_39 1 1 onnx::Relu_411 input.136
Split splitncnn_5 1 2 input.136 input.136_splitncnn_0 input.136_splitncnn_1
Convolution Conv_40 1 1 input.136_splitncnn_1 input.144 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
ReLU Relu_41 1 1 input.144 onnx::Conv_415
Convolution Conv_42 1 1 onnx::Conv_415 input.152 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
ReLU Relu_43 1 1 input.152 onnx::Conv_418
Convolution Conv_44 1 1 onnx::Conv_418 onnx::Add_826 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
BinaryOp Add_45 2 1 onnx::Add_826 input.136_splitncnn_0 onnx::Relu_421 0=0
ReLU Relu_46 1 1 onnx::Relu_421 input.160
Split splitncnn_6 1 2 input.160 input.160_splitncnn_0 input.160_splitncnn_1
Convolution Conv_47 1 1 input.160_splitncnn_1 input.168 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
ReLU Relu_48 1 1 input.168 onnx::Conv_425
Convolution Conv_49 1 1 onnx::Conv_425 input.176 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
ReLU Relu_50 1 1 input.176 onnx::Conv_428
Convolution Conv_51 1 1 onnx::Conv_428 onnx::Add_835 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
BinaryOp Add_52 2 1 onnx::Add_835 input.160_splitncnn_0 onnx::Relu_431 0=0
ReLU Relu_53 1 1 onnx::Relu_431 input.184
Split splitncnn_7 1 3 input.184 input.184_splitncnn_0 input.184_splitncnn_1 input.184_splitncnn_2
Convolution Conv_54 1 1 input.184_splitncnn_2 input.192 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=131072
ReLU Relu_55 1 1 input.192 onnx::Conv_435
Convolution Conv_56 1 1 onnx::Conv_435 input.200 0=256 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_57 1 1 input.200 onnx::Conv_438
Convolution Conv_58 1 1 onnx::Conv_438 onnx::Add_844 0=1024 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
Convolution Conv_59 1 1 input.184_splitncnn_1 onnx::Add_847 0=1024 1=1 11=1 2=1 12=1 3=2 13=2 4=0 14=0 15=0 16=0 5=1 6=524288
BinaryOp Add_60 2 1 onnx::Add_844 onnx::Add_847 onnx::Relu_443 0=0
ReLU Relu_61 1 1 onnx::Relu_443 input.212
Split splitncnn_8 1 2 input.212 input.212_splitncnn_0 input.212_splitncnn_1
Convolution Conv_62 1 1 input.212_splitncnn_1 input.220 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
ReLU Relu_63 1 1 input.220 onnx::Conv_447
Convolution Conv_64 1 1 onnx::Conv_447 input.228 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_65 1 1 input.228 onnx::Conv_450
Convolution Conv_66 1 1 onnx::Conv_450 onnx::Add_856 0=1024 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
BinaryOp Add_67 2 1 onnx::Add_856 input.212_splitncnn_0 onnx::Relu_453 0=0
ReLU Relu_68 1 1 onnx::Relu_453 input.236
Split splitncnn_9 1 2 input.236 input.236_splitncnn_0 input.236_splitncnn_1
Convolution Conv_69 1 1 input.236_splitncnn_1 input.244 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
ReLU Relu_70 1 1 input.244 onnx::Conv_457
Convolution Conv_71 1 1 onnx::Conv_457 input.252 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_72 1 1 input.252 onnx::Conv_460
Convolution Conv_73 1 1 onnx::Conv_460 onnx::Add_865 0=1024 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
BinaryOp Add_74 2 1 onnx::Add_865 input.236_splitncnn_0 onnx::Relu_463 0=0
ReLU Relu_75 1 1 onnx::Relu_463 input.260
Split splitncnn_10 1 2 input.260 input.260_splitncnn_0 input.260_splitncnn_1
Convolution Conv_76 1 1 input.260_splitncnn_1 input.268 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
ReLU Relu_77 1 1 input.268 onnx::Conv_467
Convolution Conv_78 1 1 onnx::Conv_467 input.276 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_79 1 1 input.276 onnx::Conv_470
Convolution Conv_80 1 1 onnx::Conv_470 onnx::Add_874 0=1024 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
BinaryOp Add_81 2 1 onnx::Add_874 input.260_splitncnn_0 onnx::Relu_473 0=0
ReLU Relu_82 1 1 onnx::Relu_473 input.284
Split splitncnn_11 1 2 input.284 input.284_splitncnn_0 input.284_splitncnn_1
Convolution Conv_83 1 1 input.284_splitncnn_1 input.292 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
ReLU Relu_84 1 1 input.292 onnx::Conv_477
Convolution Conv_85 1 1 onnx::Conv_477 input.300 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_86 1 1 input.300 onnx::Conv_480
Convolution Conv_87 1 1 onnx::Conv_480 onnx::Add_883 0=1024 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
BinaryOp Add_88 2 1 onnx::Add_883 input.284_splitncnn_0 onnx::Relu_483 0=0
ReLU Relu_89 1 1 onnx::Relu_483 input.308
Split splitncnn_12 1 2 input.308 input.308_splitncnn_0 input.308_splitncnn_1
Convolution Conv_90 1 1 input.308_splitncnn_1 input.316 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
ReLU Relu_91 1 1 input.316 onnx::Conv_487
Convolution Conv_92 1 1 onnx::Conv_487 input.324 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_93 1 1 input.324 onnx::Conv_490
Convolution Conv_94 1 1 onnx::Conv_490 onnx::Add_892 0=1024 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
BinaryOp Add_95 2 1 onnx::Add_892 input.308_splitncnn_0 onnx::Relu_493 0=0
ReLU Relu_96 1 1 onnx::Relu_493 input.332
Split splitncnn_13 1 3 input.332 input.332_splitncnn_0 input.332_splitncnn_1 input.332_splitncnn_2
Convolution Conv_97 1 1 input.332_splitncnn_2 input.340 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=524288
ReLU Relu_98 1 1 input.340 onnx::Conv_497
Convolution Conv_99 1 1 onnx::Conv_497 input.348 0=512 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU Relu_100 1 1 input.348 onnx::Conv_500
Convolution Conv_101 1 1 onnx::Conv_500 onnx::Add_901 0=2048 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1048576
Convolution Conv_102 1 1 input.332_splitncnn_1 onnx::Add_904 0=2048 1=1 11=1 2=1 12=1 3=2 13=2 4=0 14=0 15=0 16=0 5=1 6=2097152
BinaryOp Add_103 2 1 onnx::Add_901 onnx::Add_904 onnx::Relu_505 0=0
ReLU Relu_104 1 1 onnx::Relu_505 input.360
Split splitncnn_14 1 2 input.360 input.360_splitncnn_0 input.360_splitncnn_1
Convolution Conv_105 1 1 input.360_splitncnn_1 input.368 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1048576
ReLU Relu_106 1 1 input.368 onnx::Conv_509
Convolution Conv_107 1 1 onnx::Conv_509 input.376 0=512 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU Relu_108 1 1 input.376 onnx::Conv_512
Convolution Conv_109 1 1 onnx::Conv_512 onnx::Add_913 0=2048 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1048576
BinaryOp Add_110 2 1 onnx::Add_913 input.360_splitncnn_0 onnx::Relu_515 0=0
ReLU Relu_111 1 1 onnx::Relu_515 input.384
Split splitncnn_15 1 2 input.384 input.384_splitncnn_0 input.384_splitncnn_1
Convolution Conv_112 1 1 input.384_splitncnn_1 input.392 0=512 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1048576
ReLU Relu_113 1 1 input.392 onnx::Conv_519
Convolution Conv_114 1 1 onnx::Conv_519 input.400 0=512 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU Relu_115 1 1 input.400 onnx::Conv_522
Convolution Conv_116 1 1 onnx::Conv_522 onnx::Add_922 0=2048 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1048576
BinaryOp Add_117 2 1 onnx::Add_922 input.384_splitncnn_0 onnx::Relu_525 0=0
ReLU Relu_118 1 1 onnx::Relu_525 input.408
Convolution Conv_119 1 1 input.408 onnx::Add_527 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=524288
BinaryOp Add_121 2 1 onnx::Add_528 onnx::Add_527 x 0=0
Split splitncnn_16 1 2 x x_splitncnn_0 x_splitncnn_1
Interp Upsample_128 1 1 x_splitncnn_1 onnx::Add_542 0=2 1=1.944444e+00 2=1.944444e+00 6=0
Convolution Conv_129 1 1 input.332_splitncnn_0 onnx::Add_543 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=262144
BinaryOp Add_130 2 1 onnx::Add_542 onnx::Add_543 x.3 0=0
Split splitncnn_17 1 2 x.3 x.3_splitncnn_0 x.3_splitncnn_1
Interp Upsample_137 1 1 x.3_splitncnn_1 onnx::Add_557 0=2 1=1.971429e+00 2=1.971429e+00 6=0
Convolution Conv_138 1 1 input.184_splitncnn_0 onnx::Add_558 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=131072
BinaryOp Add_139 2 1 onnx::Add_557 onnx::Add_558 input.412 0=0
Convolution Conv_140 1 1 x_splitncnn_0 onnx::Relu_560 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_141 1 1 onnx::Relu_560 onnx::Conv_561
Split splitncnn_18 1 2 onnx::Conv_561 onnx::Conv_561_splitncnn_0 onnx::Conv_561_splitncnn_1
Convolution Conv_142 1 1 x.3_splitncnn_0 onnx::Relu_562 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_143 1 1 onnx::Relu_562 onnx::Conv_563
Convolution Conv_144 1 1 input.412 onnx::Relu_564 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_145 1 1 onnx::Relu_564 onnx::Conv_565
Split splitncnn_19 1 2 onnx::Conv_565 onnx::Conv_565_splitncnn_0 onnx::Conv_565_splitncnn_1
Convolution Conv_146 1 1 onnx::Conv_561_splitncnn_1 input.416 0=256 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=589824
Split splitncnn_20 1 2 input.416 input.416_splitncnn_0 input.416_splitncnn_1
Convolution Conv_147 1 1 input.416_splitncnn_1 input.420 0=256 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=589824
Convolution Conv_148 1 1 onnx::Conv_565_splitncnn_1 input.424 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_149 1 1 input.424 onnx::Conv_569
Convolution Conv_150 1 1 onnx::Conv_569 input.428 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_151 1 1 input.428 onnx::Conv_571
Convolution Conv_152 1 1 onnx::Conv_571 input.432 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_153 1 1 input.432 onnx::Upsample_573
Interp Upsample_154 1 1 onnx::Upsample_573 input.436 0=2 1=2.000000e+00 2=2.000000e+00 6=0
ReLU Relu_155 1 1 input.436 onnx::Conv_578
Convolution Conv_156 1 1 onnx::Conv_578 input.440 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_157 1 1 input.440 onnx::Conv_580
Convolution Conv_158 1 1 onnx::Conv_580 x.7 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=8192
ReLU Relu_159 1 1 x.7 onnx::Transpose_582
Permute Transpose_160 1 1 onnx::Transpose_582 583 0=3
Convolution Conv_161 1 1 onnx::Conv_565_splitncnn_0 input.444 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_162 1 1 input.444 onnx::Conv_585
Split splitncnn_21 1 3 onnx::Conv_585 onnx::Conv_585_splitncnn_0 onnx::Conv_585_splitncnn_1 onnx::Conv_585_splitncnn_2
Convolution Conv_163 1 1 onnx::Conv_585_splitncnn_2 onnx::Transpose_586 0=12 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=27648
Permute Transpose_164 1 1 onnx::Transpose_586 onnx::Reshape_587 0=3
Reshape Reshape_170 1 1 onnx::Reshape_587 onnx::Concat_597 0=4 1=-1
Convolution Conv_171 1 1 onnx::Conv_585_splitncnn_1 onnx::Transpose_598 0=9 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=20736
Permute Transpose_172 1 1 onnx::Transpose_598 onnx::Reshape_599 0=3
Reshape Reshape_178 1 1 onnx::Reshape_599 onnx::Concat_609 0=3 1=-1
Convolution Conv_179 1 1 onnx::Conv_585_splitncnn_0 onnx::Transpose_610 0=96 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=221184
Permute Transpose_180 1 1 onnx::Transpose_610 onnx::Reshape_611 0=3
Reshape Reshape_186 1 1 onnx::Reshape_611 onnx::Tanh_621 0=32 1=-1
UnaryOp Tanh_187 1 1 onnx::Tanh_621 onnx::Concat_622 0=16
Convolution Conv_188 1 1 onnx::Conv_563 input.448 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_189 1 1 input.448 onnx::Conv_624
Split splitncnn_22 1 3 onnx::Conv_624 onnx::Conv_624_splitncnn_0 onnx::Conv_624_splitncnn_1 onnx::Conv_624_splitncnn_2
Convolution Conv_190 1 1 onnx::Conv_624_splitncnn_2 onnx::Transpose_625 0=12 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=27648
Permute Transpose_191 1 1 onnx::Transpose_625 onnx::Reshape_626 0=3
Reshape Reshape_197 1 1 onnx::Reshape_626 onnx::Concat_636 0=4 1=-1
Convolution Conv_198 1 1 onnx::Conv_624_splitncnn_1 onnx::Transpose_637 0=9 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=20736
Permute Transpose_199 1 1 onnx::Transpose_637 onnx::Reshape_638 0=3
Reshape Reshape_205 1 1 onnx::Reshape_638 onnx::Concat_648 0=3 1=-1
Convolution Conv_206 1 1 onnx::Conv_624_splitncnn_0 onnx::Transpose_649 0=96 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=221184
Permute Transpose_207 1 1 onnx::Transpose_649 onnx::Reshape_650 0=3
Reshape Reshape_213 1 1 onnx::Reshape_650 onnx::Tanh_660 0=32 1=-1
UnaryOp Tanh_214 1 1 onnx::Tanh_660 onnx::Concat_661 0=16
Convolution Conv_215 1 1 onnx::Conv_561_splitncnn_0 input.452 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_216 1 1 input.452 onnx::Conv_663
Split splitncnn_23 1 3 onnx::Conv_663 onnx::Conv_663_splitncnn_0 onnx::Conv_663_splitncnn_1 onnx::Conv_663_splitncnn_2
Convolution Conv_217 1 1 onnx::Conv_663_splitncnn_2 onnx::Transpose_664 0=12 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=27648
Permute Transpose_218 1 1 onnx::Transpose_664 onnx::Reshape_665 0=3
Reshape Reshape_219 1 1 onnx::Reshape_665 onnx::Concat_673 0=4 1=-1
Convolution Conv_220 1 1 onnx::Conv_663_splitncnn_1 onnx::Transpose_674 0=9 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=20736
Permute Transpose_221 1 1 onnx::Transpose_674 onnx::Reshape_675 0=3
Reshape Reshape_222 1 1 onnx::Reshape_675 onnx::Concat_683 0=3 1=-1
Convolution Conv_223 1 1 onnx::Conv_663_splitncnn_0 onnx::Transpose_684 0=96 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=221184
Permute Transpose_224 1 1 onnx::Transpose_684 onnx::Reshape_685 0=3
Reshape Reshape_225 1 1 onnx::Reshape_685 onnx::Tanh_693 0=32 1=-1
UnaryOp Tanh_226 1 1 onnx::Tanh_693 onnx::Concat_694 0=16
Convolution Conv_227 1 1 input.416_splitncnn_0 input.456 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_228 1 1 input.456 onnx::Conv_696
Split splitncnn_24 1 3 onnx::Conv_696 onnx::Conv_696_splitncnn_0 onnx::Conv_696_splitncnn_1 onnx::Conv_696_splitncnn_2
Convolution Conv_229 1 1 onnx::Conv_696_splitncnn_2 onnx::Transpose_697 0=12 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=27648
Permute Transpose_230 1 1 onnx::Transpose_697 onnx::Reshape_698 0=3
Reshape Reshape_231 1 1 onnx::Reshape_698 onnx::Concat_706 0=4 1=-1
Convolution Conv_232 1 1 onnx::Conv_696_splitncnn_1 onnx::Transpose_707 0=9 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=20736
Permute Transpose_233 1 1 onnx::Transpose_707 onnx::Reshape_708 0=3
Reshape Reshape_234 1 1 onnx::Reshape_708 onnx::Concat_716 0=3 1=-1
Convolution Conv_235 1 1 onnx::Conv_696_splitncnn_0 onnx::Transpose_717 0=96 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=221184
Permute Transpose_236 1 1 onnx::Transpose_717 onnx::Reshape_718 0=3
Reshape Reshape_237 1 1 onnx::Reshape_718 onnx::Tanh_726 0=32 1=-1
UnaryOp Tanh_238 1 1 onnx::Tanh_726 onnx::Concat_727 0=16
Convolution Conv_239 1 1 input.420 input.460 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU Relu_240 1 1 input.460 onnx::Conv_729
Split splitncnn_25 1 3 onnx::Conv_729 onnx::Conv_729_splitncnn_0 onnx::Conv_729_splitncnn_1 onnx::Conv_729_splitncnn_2
Convolution Conv_241 1 1 onnx::Conv_729_splitncnn_2 onnx::Transpose_730 0=12 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=27648
Permute Transpose_242 1 1 onnx::Transpose_730 onnx::Reshape_731 0=3
Reshape Reshape_243 1 1 onnx::Reshape_731 onnx::Concat_739 0=4 1=-1
Convolution Conv_244 1 1 onnx::Conv_729_splitncnn_1 onnx::Transpose_740 0=9 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=20736
Permute Transpose_245 1 1 onnx::Transpose_740 onnx::Reshape_741 0=3
Reshape Reshape_246 1 1 onnx::Reshape_741 onnx::Concat_749 0=3 1=-1
Convolution Conv_247 1 1 onnx::Conv_729_splitncnn_0 onnx::Transpose_750 0=96 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=221184
Permute Transpose_248 1 1 onnx::Transpose_750 onnx::Reshape_751 0=3
Reshape Reshape_249 1 1 onnx::Reshape_751 onnx::Tanh_759 0=32 1=-1
UnaryOp Tanh_250 1 1 onnx::Tanh_759 onnx::Concat_760 0=16
Concat Concat_251 5 1 onnx::Concat_597 onnx::Concat_636 onnx::Concat_673 onnx::Concat_706 onnx::Concat_739 761 0=-2
Concat Concat_252 5 1 onnx::Concat_609 onnx::Concat_648 onnx::Concat_683 onnx::Concat_716 onnx::Concat_749 onnx::Softmax_762 0=-2
Concat Concat_253 5 1 onnx::Concat_622 onnx::Concat_661 onnx::Concat_694 onnx::Concat_727 onnx::Concat_760 763 0=-2
Softmax Softmax_255 1 1 onnx::Softmax_762 765 0=1 1=1
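
The tail of the converted .param file above already names the blobs a consumer needs: 761 is the concatenated box regression (reshaped to -1x4), 765 the class confidences after Softmax_255, 763 the tanh-activated mask coefficients, and 583 the permuted prototype output of the protonet branch. A minimal pyncnn sketch for pulling those outputs is below. Assumptions: the ncnn Python bindings are installed, the file names are yolact.param/yolact.bin, the input blob is named "input.1" (check the top of your own .param file), the input size is 550x550 (the Interp scales 18->35->69 in the dump match yolact's 550 config), and the normalization constants are yolact's defaults.

import ncnn
import numpy as np

net = ncnn.Net()
net.load_param("yolact.param")   # file names are assumptions; use your converted files
net.load_model("yolact.bin")

img = np.zeros((550, 550, 3), dtype=np.uint8)  # stand-in for a real BGR image
mat = ncnn.Mat.from_pixels(img, ncnn.Mat.PixelType.PIXEL_BGR, 550, 550)
# yolact's default BGR means/stds (assumed); ncnn takes norm as 1/std
mat.substract_mean_normalize([103.94, 116.78, 123.68], [1/57.38, 1/57.12, 1/58.40])

ex = net.create_extractor()
ex.input("input.1", mat)        # input blob name is an assumption
_, loc = ex.extract("761")      # box regression, (-1, 4)
_, conf = ex.extract("765")     # class scores after softmax
_, maskc = ex.extract("763")    # mask coefficients, (-1, 32)
_, proto = ex.extract("583")    # prototype masks from the protonet head

Decoding the boxes against the priors and combining maskc with proto still has to be reimplemented on the consumer side, since Detect was cut out of the exported graph.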

@apanand14

@Ma-Dan Thank you very much for your open-source work and help. I used your fork to convert the model to ncnn: https://github.com/Ma-Dan/yolact/tree/onnx. May I ask whether you have tried converting the ONNX model to ncnn yourself? I get a segmentation fault (core dumped) when I execute ncnnoptimize, and I suspect it is related to the warning that appears while exporting to ONNX. My error message is below. Sorry to trouble you again, and thank you.

The environment I used: Python 3.7.9, torch 1.5.0, torchvision 0.6.0, onnx 1.8.1

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.last_img_size != (cfg._tmp_img_w, cfg._tmp_img_h):

I'm facing the same error. Were you able to solve it, @chingi071? If so, please help me out. Thank you in advance.
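
A workaround that is often suggested for this pair of symptoms (a sketch, assuming onnx and onnx-simplifier are installed; not verified against this exact setup) is to run the exported graph through onnx-simplifier before onnx2ncnn: constant-folding the shape logic that triggers these TracerWarnings removes many of the dangling nodes that commonly make ncnnoptimize crash. Note that the warning also means the exported graph bakes in the export-time input size, so inference must use the same resolution.

import onnx
from onnxsim import simplify

model = onnx.load("yolact.onnx")          # hypothetical file name
model_simp, ok = simplify(model)          # folds constants and prunes dead branches
assert ok, "simplified model failed the ONNX checker"
onnx.save(model_simp, "yolact_sim.onnx")  # feed this to onnx2ncnn instead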

@TommyW427

File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_converter.py", line 600, in convert graph = _prepare_onnx_graph(onnx_model.graph, transformers, onnx_model.ir_version) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_converter.py", line 464, in _prepare_onnx_graph graph_ = graph_.transformed(transformers) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_graph.py", line 232, in transformed return _apply_graph_transformations(graph, transformers) # type: ignore File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_graph.py", line 73, in _apply_graph_transformations graph = transformer(graph) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/coremltools/converters/onnx/_transformers.py", line 842, in __call__ output = np.take(x, range(s, e), axis=a) # type: ignore File "<__array_function__ internals>", line 6, in take File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 191, in take return _wrapfunc(a, 'take', indices, axis=axis, out=out, mode=mode) File "/home/anlab/anaconda3/envs/yolact-env/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 58, in _wrapfunc return bound(*args, **kwds) MemoryError

My computer: 32 GB RAM, GeForce GTX 1060 6GB. I run into the above error when I attempt to convert the yolact.onnx model into yolact.mlmodel. Can anyone help? Thanks a lot.
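
The traceback dies in coremltools' Slice transformer while it materializes np.take(x, range(s, e)); a plausible culprit (an assumption here, not confirmed from this log) is a Slice whose ends is INT64_MAX (2**63 - 1), which makes range(s, e) astronomically large regardless of how much RAM the machine has. A small diagnostic sketch that lists every Slice in the exported graph so an oversized ends can be spotted and clamped before conversion (older opsets carry starts/ends as node attributes; newer ones pass them as constant inputs):

import onnx
from onnx import helper, numpy_helper

model = onnx.load("yolact.onnx")  # hypothetical file name
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}
for node in model.graph.node:
    if node.op_type != "Slice":
        continue
    # opset < 10: starts/ends/axes live in attributes
    attrs = {a.name: helper.get_attribute_value(a) for a in node.attribute}
    # opset >= 10: they arrive as extra inputs, constant ones sit in initializers
    extra = [inits.get(name) for name in node.input[1:]]
    print(node.name or node.output[0], attrs or extra)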

@PhuowngNam I had the same issue. Were you ever able to resolve it?
