
Conversation


@z-a-f z-a-f commented Jun 25, 2021

Stack from ghstack:

This is a wrapper that implements a common convert function for use with quantization and sparsity.
Currently, only the whole model/module conversion is supported.

Test Plan:

python test/test_ao_sparsity.py

Differential Revision: D29465899
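
For context, a minimal sketch of how such a wrapper might be invoked. This is hypothetical: the import path and the config contents are assumptions based on the convert signature discussed in the review below, not the PR's confirmed API.

```
import torch.nn as nn
# Hypothetical import path for the wrapper introduced in this PR.
from torch.ao.sparsity import convert

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
# Whole-model conversion only; per-submodule conversion is not yet supported.
converted = convert(model, mapping=None, config={'quantized': {}})
```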

@facebook-github-bot
Contributor

facebook-github-bot commented Jun 25, 2021

💊 CI failures summary and remediations

As of commit 0b6ca79 (more details on the Dr. CI page and at hud.pytorch.org/pr/60728):


  • 6/6 failures possibly* introduced in this PR
    • 1/6 non-scanned failure(s)

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build Windows CI (pytorch-win-vs2019-cpu-py3) / test (default, 1, 2, windows.4xlarge) (1/3)

Step: "Run test scripts" (full log | diagnosis details | 🔁 rerun)

2021-07-02T20:37:48.6165351Z Note that --use_env is set by default in torch.distributed.run.
2021-07-02T20:37:48.6165906Z If your script expects `--local_rank` argument to be set, please
2021-07-02T20:37:48.6166409Z change it to read from `os.environ('LOCAL_RANK')` instead. See 
2021-07-02T20:37:48.6167037Z https://pytorch.org/docs/stable/distributed.html#launch-utility for 
2021-07-02T20:37:48.6167547Z further instructions
2021-07-02T20:37:48.6167764Z 
2021-07-02T20:37:48.6168028Z   warnings.warn(
2021-07-02T20:37:48.6168297Z ERROR (0.205s)
2021-07-02T20:37:48.6437511Z 
2021-07-02T20:37:48.6437860Z ======================================================================
2021-07-02T20:37:48.6438318Z ERROR [0.205s]: test_launch_user_script (__main__.TestDistirbutedLaunch)
2021-07-02T20:37:48.6438873Z ----------------------------------------------------------------------
2021-07-02T20:37:48.6439320Z Traceback (most recent call last):
2021-07-02T20:37:48.6440792Z   File "distributed/test_launcher.py", line 49, in test_launch_user_script
2021-07-02T20:37:48.6441254Z     launch.main(args)
2021-07-02T20:37:48.6441960Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\distributed\launch.py", line 187, in main
2021-07-02T20:37:48.6442504Z     launch(args)
2021-07-02T20:37:48.6443192Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\distributed\launch.py", line 173, in launch
2021-07-02T20:37:48.6443749Z     run(args)
2021-07-02T20:37:48.6444401Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\distributed\run.py", line 688, in run
2021-07-02T20:37:48.6444927Z     elastic_launch(

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (2/3)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

HEAD is now at 0b6ca7958e Update on "[quant][sparsity] Generic convert function"
+ git reset --hard 0b6ca7958e1b760ee883b3dc412c8cfd92f6e722
HEAD is now at 0b6ca7958e Update on "[quant][sparsity] Generic convert function"
+ git merge --allow-unrelated-histories --no-edit --no-ff 80cab10534e1a9ee6d2493113f169a2570979651
Auto-merging torch/ao/sparsity/__init__.py
Auto-merging test/test_ao_sparsity.py
CONFLICT (content): Merge conflict in test/test_ao_sparsity.py
CONFLICT (add/add): Merge conflict in test/ao/sparsity/test_parametrization.py
Auto-merging test/ao/sparsity/test_parametrization.py
Removing .github/workflows/quantization_triage.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (3/3)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

HEAD is now at 0b6ca7958e Update on "[quant][sparsity] Generic convert function"
+ git reset --hard 0b6ca7958e1b760ee883b3dc412c8cfd92f6e722
HEAD is now at 0b6ca7958e Update on "[quant][sparsity] Generic convert function"
+ git merge --allow-unrelated-histories --no-edit --no-ff 80cab10534e1a9ee6d2493113f169a2570979651
Auto-merging torch/ao/sparsity/__init__.py
Auto-merging test/test_ao_sparsity.py
CONFLICT (content): Merge conflict in test/test_ao_sparsity.py
CONFLICT (add/add): Merge conflict in test/ao/sparsity/test_parametrization.py
Auto-merging test/ao/sparsity/test_parametrization.py
Removing .github/workflows/quantization_triage.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1


2 failures not recognized by patterns:

Job | Step
GitHub Actions Lint / quick-checks | Ensure correct trailing newlines
GitHub Actions Lint / flake8-py3 | Fail if there were any warnings

Preview docs built from this PR


z-a-f pushed a commit that referenced this pull request Jun 25, 2021
ghstack-source-id: 2e29d21
Pull Request resolved: #60728
@z-a-f z-a-f changed the title from "[quant, sparsity] Generic convert function" to "[quant][sparsity] Generic convert function" Jun 28, 2021
z-a-f pushed a commit that referenced this pull request Jun 28, 2021
ghstack-source-id: 991f31b
Pull Request resolved: #60728
z-a-f pushed a commit that referenced this pull request Jun 28, 2021
ghstack-source-id: c8f38b2
Pull Request resolved: #60728
return model


def convert(model: nn.Module, mapping: dict = None, config: dict = None) -> nn.Module:
Contributor

@raghuramank100 raghuramank100 Jun 29, 2021

For eager mode, the API should take only the sparsity config; the qconfig is an attribute of the module. The config is identical to the config arg that is passed to the sparsifier's prepare function. This way, the arguments to convert do not need a joint sparsity+quantization config:

config = {
    'seq.0.linear1': {
        'zero_block_shape': (1, 4)
    },
    'seq.0.linear2': {
        'zero_block_shape': (1, 4)
    },
}
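
A hedged sketch of this point, for illustration: quantization intent travels on the module as a qconfig attribute, so convert only needs the sparsity config. The module names, config shape, and the commented-out convert call are assumptions, not the PR's confirmed API.

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
# Quantization settings are an attribute of the module, not a convert() argument.
model[0].qconfig = torch.quantization.default_qconfig
# The sparsity config mirrors the one passed to the sparsifier's prepare function.
config = {'0': {'zero_block_shape': (1, 4)}}
# convert(model, config=config)  # hypothetical call: no joint quant+sparsity config
```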

raise AttributeError('FP sparse convert is not yet supported')

quant_config = config.get('quantized', dict())
for mode in config.keys():
Contributor

Let's do this instead: we can reuse the logic in swap_module for eager-mode quantization and keep the configs for sparsity and quantization separate.
def convert(model: nn.Module, mapping: dict = None, config: dict = None,
            convert_custom_config_dict: dict = None) -> nn.Module:
    if mapping is None:
        mapping = dict()
    if config is None:
        raise AttributeError('Currently, you have to specify the convert config')
    mapping.setdefault('quantized', torch.quantization.get_static_quant_module_class())
    mapping.setdefault(
        'sparse_quantized',
        {
            'static': {nn.Linear: torch.ao.nn.sparse.quantized.Linear},
            'dynamic': {nn.Linear: torch.ao.nn.sparse.quantized.dynamic.Linear},
        })
    if 'sparse' in config:
        raise AttributeError('FP sparse convert is not yet supported')

    if convert_custom_config_dict is None:
        convert_custom_config_dict = {}
    custom_module_class_mapping = convert_custom_config_dict.get(
        "observed_to_quantized_custom_module_class", {})

    return _joint_convert(model, mapping, config, custom_module_class_mapping)


def _joint_convert(module, mapping, config, custom_module_class_mapping):
    reassign = {}
    for name, mod in module.named_children():
        # Both fused modules and observed custom modules are
        # swapped as one unit.
        if not isinstance(mod, _FusedModule) and \
                type(mod) not in custom_module_class_mapping:
            _joint_convert(mod, mapping, config, custom_module_class_mapping)
        # If mod has a qconfig attribute, pick the mapping: modules that also
        # appear in the sparsifier config get the sparse-quantized swap.
        if hasattr(mod, 'qconfig'):
            if _module_to_path(mod) in config:
                module_mapping = mapping['sparse_quantized']
            else:
                module_mapping = mapping['quantized']
            # The custom module class mapping will not work with sparsity;
            # for now we could set it to None.
            reassign[name] = swap_module(mod, module_mapping, custom_module_class_mapping)

    for key, value in reassign.items():
        module._modules[key] = value
    return module
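
For illustration, a hedged usage sketch of the suggestion above; the prepared model, module path, and config contents are assumptions.

```
# Hypothetical usage of the convert() sketch above.
prepared = ...  # eager-mode model with observers inserted and qconfig attributes set
sparsity_config = {'seq.0.linear1': {'zero_block_shape': (1, 4)}}
converted = convert(prepared, mapping=None, config=sparsity_config)
```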

Contributor

@raghuramank100 raghuramank100 left a comment

Please see comments

@z-a-f
Author

z-a-f commented Jun 29, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

z-a-f pushed a commit that referenced this pull request Jun 29, 2021
ghstack-source-id: 5e7e51c
Pull Request resolved: #60728
@z-a-f
Author

z-a-f commented Jun 29, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

z-a-f pushed a commit that referenced this pull request Jul 1, 2021
ghstack-source-id: 881ffd5
Pull Request resolved: #60728
@z-a-f
Author

z-a-f commented Jul 1, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

z-a-f pushed a commit that referenced this pull request Jul 1, 2021
ghstack-source-id: 881ffd5
Pull Request resolved: #60728
@z-a-f
Author

z-a-f commented Jul 1, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

z-a-f pushed a commit that referenced this pull request Jul 1, 2021
ghstack-source-id: 0d42055
Pull Request resolved: #60728
@z-a-f
Author

z-a-f commented Jul 1, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

z-a-f pushed a commit that referenced this pull request Jul 1, 2021
ghstack-source-id: e7cd227
Pull Request resolved: #60728
@z-a-f
Author

z-a-f commented Jul 2, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

z-a-f pushed a commit that referenced this pull request Jul 2, 2021
ghstack-source-id: 235c9c7
Pull Request resolved: #60728
@z-a-f
Author

z-a-f commented Jul 2, 2021

@zafartahirov has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@github-actions
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Apr 13, 2022
@github-actions github-actions bot closed this May 13, 2022
@facebook-github-bot facebook-github-bot deleted the gh/z-a-f/112/head branch June 12, 2022 14:24