
perf and #param #6

Closed
junhu-svd opened this issue Feb 24, 2021 · 3 comments

Comments

@junhu-svd

Hi, thanks for the great work. I have a question about perf and #param mentioned in the paper (Table 7). I used torchsummary to generate the GMACs and #param for MPRNet (2-stage), and it turns out that #param is 53.66 MB (far larger than the 11.3 MB in the paper); on the GMACs side, it is about 10x more than DMPHN (1-2-4). Although torchsummary isn't perfect and there is a delta between real performance and GMACs, I wonder if you have any hint/idea to explain the gap?

@adityac8
Collaborator

Hi @junhu-svd
We use the following snippet to calculate the model parameters.

import numpy as np

def params_count(model):
    """Computes the number of parameters."""
    return np.sum([p.numel() for p in model.parameters()]).item()
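As a quick sanity check, the same counting logic can be exercised without PyTorch using a hypothetical stand-in object (`TinyModel` below is not a real model; NumPy's `.size` stands in for torch's `.numel()`):

```python
import numpy as np

class TinyModel:
    """Hypothetical stand-in mimicking the torch.nn.Module parameters() interface."""
    def parameters(self):
        # Two "weight" arrays: a 3x3 matrix and a 5-element bias -> 9 + 5 = 14 params.
        return [np.zeros((3, 3)), np.zeros(5)]

def params_count(model):
    """Same summing logic as the snippet above, with .size in place of .numel()."""
    return np.sum([np.asarray(p).size for p in model.parameters()]).item()

print(params_count(TinyModel()))  # 14
```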

For our 2-stage MPRNet deblurring model, the number of trainable parameters is 11.3 million. For DMPHN, it is 21.7 million.

Thanks

@junhu-svd
Author

Thanks, adityac8,

I can almost reproduce your numbers after some corrections. In my latest profiler run, MPRNet (2-stage) uses 11.8 million parameters (not far from 11.3M), i.e. ~45 MB if saved as float32.

FYI, DMPHN is 5.4 million parameters, i.e. ~21.7 MB if saved as float32, which is consistent with the numbers reported in the DMPHN paper.
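The size conversion above is just parameter count × 4 bytes for float32; the small gap between ~45 MB and the raw byte count likely comes from decimal MB vs. binary MiB. A quick check:

```python
def fp32_megabytes(num_params, binary=False):
    """Approximate float32 checkpoint size: 4 bytes per parameter."""
    divisor = 2**20 if binary else 1e6  # binary MiB vs. decimal MB
    return num_params * 4 / divisor

print(fp32_megabytes(5.4e6))                           # 21.6 decimal MB, matching DMPHN above
print(round(fp32_megabytes(11.8e6, binary=True), 1))   # 45.0 MiB, matching the ~45 MB above
```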

On the perf side, do you have any official statistics for MPRNet (2-stage) in terms of GMACs? What I get from torch-summary is 231.54G mult-adds for a 256x256 input, which seems very high.
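For reference, the standard MAC count for a single 2-D convolution is H_out × W_out × C_out × C_in × k × k; a minimal sketch for one hypothetical 3×3 layer (illustrative only, not the full MPRNet graph):

```python
def conv2d_macs(h_out, w_out, c_in, c_out, k):
    """Multiply-accumulates for one 2-D conv (stride/padding folded into h_out/w_out)."""
    return h_out * w_out * c_out * c_in * k * k

# One 3x3 conv with 96 channels in/out on a 256x256 feature map:
macs = conv2d_macs(256, 256, 96, 96, 3)
print(macs / 1e9)  # ~5.44 GMACs for this single layer
```

Even one wide layer at full resolution is in the multi-GMAC range, so a deep multi-stage model accumulating many such layers can plausibly reach hundreds of GMACs.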

@swz30
Owner

swz30 commented Mar 1, 2021

Hi @junhu-svd

Here is a table from the DMPHN paper.

For evaluation, we use the Stack(4)-DMPHN version, which is publicly released by the original authors and gives their best results. This Stack(4)-DMPHN has 21.7 million parameters, i.e. ~86.8 MB.

If you are looking for a model that is more efficient in terms of compute and parameters, you can retrain our MPRNet model by setting
n_feat=40, scale_unetfeats=40, scale_orsnetfeats=32
in the constructor:

def __init__(self, in_c=3, out_c=3, n_feat=96, scale_unetfeats=48, scale_orsnetfeats=32, num_cab=8, kernel_size=3, reduction=4, bias=False):

This will give you a 6 million parameter model that yields ~31.8 dB PSNR on the GoPro test set, as shown in Figure 1 of our paper.
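Since most weights sit in k×k convolutions between feature maps, parameter count scales roughly with n_feat²; a back-of-envelope sketch for a single hypothetical 3×3 layer (illustrative only, not the actual MPRNet layer graph):

```python
def conv_params(c_in, c_out, k=3, bias=False):
    """Weights (+ optional bias) of a single k x k convolution."""
    return c_in * c_out * k * k + (c_out if bias else 0)

# Rough n_feat^2 scaling: one 3x3 conv at the default vs. reduced width.
wide = conv_params(96, 96)   # 82944 params at n_feat=96
slim = conv_params(40, 40)   # 14400 params at n_feat=40
print(wide / slim)           # 5.76x fewer params per such layer
```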

Thanks
