Train with PACT but the clipping value for weights and activations (denoted alpha) does not seem to change #72

Closed
jianyin2016 opened this issue Apr 7, 2022 · 7 comments

@jianyin2016

The clipping value for weights and activations, denoted alpha, is initialized to 6.0. In my opinion this value should be updated during training, but I found that it is not. I am training with the imagenet_example, just adding the following config to make PACT work:

if args.quant:
    # Enable PACT fake quantization for both weights and activations.
    extra_params = {
        'extra_qconfig_dict': {
            'w_observer': "MinMaxObserver",
            'a_observer': "EMAMinMaxObserver",
            'w_fakequantize': "PACTFakeQuantize",
            'a_fakequantize': "PACTFakeQuantize",
            'a_fakeq_params': {},
            'w_qscheme': {
                'bit': 8,
                'symmetry': True,
                'per_channel': False,
                'pot_scale': False
            },
            'a_qscheme': {
                'bit': 8,
                'symmetry': True,
                'per_channel': False,
                'pot_scale': False
            }
        },
        'extra_quantizer_dict': {},
        'preserve_attr': {},
        'concrete_args': {},
        'extra_fuse_dict': {}
    }
    print("==> config with extra params", extra_params)
    model = prepare_by_platform(model, args.backend, extra_params)
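
One thing worth ruling out when alpha stays at its initial 6.0: in the imagenet_example script the optimizer must be created after prepare_by_platform, since PyTorch optimizers only update the parameters they were given at construction, and the PACT modules (with their alpha parameters) are inserted by that call. A minimal check, assuming the clipping values are registered as parameters whose names contain "alpha":

# Sketch: confirm the PACT clipping values are trainable parameters.
# Assumption: the parameter name contains "alpha".
for name, param in model.named_parameters():
    if 'alpha' in name:
        print(name, param.item(), param.requires_grad)

# The optimizer must be constructed *after* prepare_by_platform,
# otherwise the newly inserted alpha parameters are not in its
# parameter groups and will never be updated.
optimizer = torch.optim.SGD(model.parameters(), args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay)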
@Tracin added the bug label Apr 7, 2022
@zhiwei-dong
Contributor

Can you give us a minimal reproducible code tar file?

@jianyin2016
Author

Can you give us a minimal reproducible code tar file?

I am afraid not, due to our company's information security policy.
But the issue is not hard to reproduce with the following steps:

  • (necessary) In application/imagenet_example/main.py, around line 167, before the call to prepare_by_platform, add the PACT config shown above.
  • (optional) To observe alpha more directly, print it in PACTFakeQuantize.forward (or use a forward hook, as sketched after this list, to avoid editing the library source).
  • (optional, and maybe wrong) In mqbench/deploy/deploy_linear.py, change line 99 from if scale.shape[0] > 1: to if len(scale.shape) != 0 and scale.shape[0] > 1:
  • (necessary) python setup.py install
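
For the alpha check, a forward hook avoids patching the library source. A minimal sketch, assuming PACTFakeQuantize lives at mqbench.fake_quantize.pact and stores its clipping value in an attribute named alpha (the initialization to 6.0 described above suggests it does):

# Sketch: print PACT's clipping value on every forward pass
# without editing PACTFakeQuantize.forward itself.
from mqbench.fake_quantize.pact import PACTFakeQuantize  # assumed path

def watch_alpha(module, inputs, output):
    # Runs after each forward pass of the hooked module.
    print(f'{module.__class__.__name__}: alpha = {module.alpha.item():.4f}')

for name, module in model.named_modules():
    if isinstance(module, PACTFakeQuantize):
        module.register_forward_hook(watch_alpha)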

@zhiwei-dong
Contributor

OK, I get it.

@Tracin
Contributor

Tracin commented Apr 12, 2022

@jianyin2016 Is the problem fixed? Was there anything wrong in the code or not?

@jianyin2016
Author

I think there is something wrong with the PACT code, but I have to validate the fix first.

jianyin2016 reopened this Apr 12, 2022
@jianyin2016
Author

Closing: I finally found that the reason the performance was not as expected was that I had used some of the configs incorrectly.

@feizi

feizi commented Oct 31, 2022

I got the same error.
@jianyin2016, how did you fix it?
