Official Retinaface mnet25 models conversion #4
Can you show me the links for retinaface_mnet025_v1 and retinaface_mnet025_v2?
Absolute links: And just in case, the original R50 model: BTW, would you mind if I include part of your code in my InsightFace-REST repo for MXNet->ONNX->TensorRT conversion?
I will try these models. Just cite them.
I found a bug.
Thanks! I'll try to fix it. |
Still can't find how to fix it. Should I work with resulting ONNX, or somehow change gamma weights in MXNet graph? |
I'll try to fix this.
I have uploaded fixed BN gamma models. @SthPhoenix
Great work! Now both nets work as intended. |
Read the params file and change the value of gamma to 1.
Oh, I get it now, thanks! |
Here's my code to fix the MXNet model before export:

```python
import json

import mxnet as mx


def mxnet_model_fix(input_symbol, input_params, write=True):
    names = []
    fix_gamma_layers = []
    with open(input_symbol, 'r') as _input_symbol:
        fixed_sym = json.load(_input_symbol)
    for e in fixed_sym['nodes']:
        # ONNX export expects 'softmax' instead of 'SoftmaxActivation'
        if e['op'] == 'SoftmaxActivation':
            e['op'] = 'softmax'
            e['attrs'] = {"axis": "1"}
        # Fix for "Graph must be in single static assignment (SSA) form"
        if e['name'] in names:
            e['name'] = f"{e['name']}_1"
        names.append(e['name'])
        if e.get('attrs', {}).get('fix_gamma') == 'True' and e['name'].endswith('_gamma'):
            fix_gamma_layers.append(e['name'])
    fixed_params = mxnet_fixgamma_params(input_params, layers=fix_gamma_layers)
    if write:
        mx.nd.save(input_params, fixed_params)
        with open(input_symbol, 'w') as sym_temp:
            json.dump(fixed_sym, sym_temp, indent=2)


def mxnet_fixgamma_params(input_param, layers):
    net_param = mx.nd.load(input_param)
    for layer in layers:
        name = f'arg:{layer}'
        # reset gamma to all ones (mirrors fix_gamma=True behaviour)
        gamma = net_param[name].asnumpy()
        gamma[:] = 1
        net_param[name] = mx.nd.array(gamma)
    return net_param
```
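For illustration, the SSA renaming step from the code above can be isolated into a small standalone helper (toy node names, no MXNet required). Like the original, it handles a single duplicate per name:

```python
import json


def dedup_names(nodes):
    """Rename a duplicate node name by appending '_1' so the graph
    satisfies ONNX's single-static-assignment (SSA) requirement."""
    seen = []
    for node in nodes:
        if node['name'] in seen:
            node['name'] = f"{node['name']}_1"
        seen.append(node['name'])
    return nodes


nodes = [{'name': 'relu0'}, {'name': 'relu0'}, {'name': 'conv1'}]
dedup_names(nodes)
# nodes[1]['name'] is now 'relu0_1'
```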
Nice!
Hi, thank you for the shared code and converted models, but the mnet025_fix_gamma_v2.onnx model gives wrong landmarks and a different head rectangle than gamma_v1 (I used the same postprocessing code for gamma_v1 and gamma_v2). Please check the resulting model; maybe I did something wrong.
The v1 and v2 models are different and give different results even in MXNet; v2 seems to have worse accuracy in tests.
Oh sorry, I just tested the original mnet25_v2 in MXNet, and the landmarks also go haywire and the head rectangle is different.
Hi, I've tested the original mnet25_v1 and the converted mnet025_fix_gamma_v1.onnx model, but they give different accuracy. The original model was able to find all faces in the photo, but the converted model missed some people. P.S. Pre- and postprocessing are the same as in the original retinaface.py (except that we changed the scales for each axis to draw outputs on the original image).
I haven't tested the models provided by @zheshipinyinMc; I converted the models a bit differently. Also ensure that in both cases the images have the right channel order, either BGR or RGB; I'm not sure which one Retina uses. I once lost a lot of time debugging after missing this.
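Checking the channel order is cheap: reversing the last axis of an HWC array swaps BGR and RGB in place. A minimal NumPy sketch (the image values are illustrative):

```python
import numpy as np


def bgr_to_rgb(img):
    """Reverse the channel axis of an HWC image.
    The same call also converts RGB back to BGR."""
    return img[..., ::-1]


bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255  # fill the blue channel
rgb = bgr_to_rgb(bgr)
# the 255 values are now in rgb[..., 2]
```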
I have compared the outputs of the ONNX models in this repo; the ONNX output is almost the same as MXNet's.
Hi, in the original version of RetinaFace the input shape is (3, 112, 112), but here it is (3, 640, 640). Can you provide an ONNX version with input (3, 112, 112)? I got an error when I ran the ONNX conversion.
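Since the exported graph has a fixed 640x640 input, one common workaround (not from this thread; a sketch assuming NumPy and an HWC uint8 image no larger than 640 pixels per side) is to pad the image onto a 640x640 canvas and convert it to NCHW before inference:

```python
import numpy as np


def letterbox(img, size=640):
    """Pad an HWC uint8 image onto a (size, size) canvas, top-left
    aligned, and return an NCHW float32 blob. A minimal sketch: real
    pipelines usually scale the image first and keep the scale factor
    so detections can be mapped back to original coordinates."""
    h, w, c = img.shape
    canvas = np.zeros((size, size, c), dtype=img.dtype)
    canvas[:h, :w] = img
    # HWC -> CHW, add batch dim -> (1, c, size, size)
    return canvas.transpose(2, 0, 1)[None].astype(np.float32)
```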
Hi! Great work converting the RetinaFace mnet25 model!
I am trying to convert the models from the official InsightFace Python package model zoo (retinaface_mnet025_v1, retinaface_mnet025_v2) using your code. The conversion completes without errors, but the inference output is totally different from the MXNet model's, making it incompatible with the subsequent postprocessing.
The converted original @yangfly model works as intended.