@SS-JIA SS-JIA commented Oct 24, 2024

Stack from ghstack (oldest at bottom):

## Changes

Move the following files to the root directory of the Vulkan backend:

* `backends/vulkan/partitioner/supported_ops.py` -> `backends/vulkan/op_registry.py`
* `backends/vulkan/_passes/custom_ops_defs.py` -> `backends/vulkan/custom_ops_lib.py`

In the new `op_registry.py` file, the way operator features are specified is reworked to provide much more detail about each operator's Vulkan implementation. See the new `OpFeatures` class for details. An example of registering a new operator with the export flow:

```
@update_features(
    [
        exir_ops.edge.aten._log_softmax.default,
        exir_ops.edge.aten._softmax.default,
        exir_ops.edge.aten.mean.dim,
        exir_ops.edge.aten.sum.dim_IntList,
        exir_ops.edge.aten.amax.default,
        exir_ops.edge.aten.amin.default,
    ]
)
def register_reduce_op(features: OpFeatures):
    features.texture_impl = TextureImplFeatures(
        uses_packed_dim=True,
    )
    features.resize_fn = True

    def check_reduce_node(node: torch.fx.Node) -> bool:
        dim_list = node.args[1]
        assert isinstance(dim_list, list)
        if len(dim_list) != 1:
            return False

        keepdim = node.args[2]
        assert isinstance(keepdim, bool)
        if not keepdim:
            return False

        return True

    features.check_node_fn = check_reduce_node
    return features
```
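To make the registration pattern above concrete, here is a minimal sketch of what the machinery behind an `@update_features` decorator could look like. This is an assumption for illustration only: the names `OpFeatures`, `TextureImplFeatures`, and `vulkan_supported_ops` are placeholders for whatever the actual definitions in `op_registry.py` are, and the real class carries more fields.

```python
# Hypothetical sketch of the registry behind @update_features.
# All names here are illustrative, not the actual op_registry.py API.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class TextureImplFeatures:
    uses_packed_dim: bool = False


@dataclass
class OpFeatures:
    texture_impl: Optional[TextureImplFeatures] = None
    resize_fn: bool = False
    check_node_fn: Optional[Callable] = None


# Central table: one OpFeatures entry per supported operator.
vulkan_supported_ops: Dict[str, OpFeatures] = {}


def update_features(ops: List[str]):
    """Decorator: apply the wrapped function to each op's feature entry."""
    def decorator(fn: Callable[[OpFeatures], OpFeatures]):
        for op in ops:
            # Start from an existing entry if the op was registered before.
            features = vulkan_supported_ops.get(op, OpFeatures())
            vulkan_supported_ops[op] = fn(features)
        return fn
    return decorator
```

The key design point is that the decorated function mutates and returns a per-op `OpFeatures` record, so one registration function can describe a whole family of related operators at once.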

## Rationale

The purpose of these changes is to centralize operator definitions so that there is a single source of truth about the capabilities of each operator's Vulkan implementation. This way, the partitioner does not have to implement ad-hoc checks for specific operators (e.g. `is_valid_to_copy`), and graph transforms do not have to maintain their own operator metadata (e.g. `USES_WEIGHTS` in `insert_prepack_nodes`).
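With a central registry, the partitioner's per-node decision can collapse to a lookup plus the op's own check function. The sketch below is a hypothetical illustration of that idea, not the actual partitioner code; `is_node_supported` and the registry shape are assumed names.

```python
# Hypothetical sketch: partitioning as a registry lookup (names assumed).
def is_node_supported(registry, node) -> bool:
    # The registry maps an op identifier to its OpFeatures entry.
    features = registry.get(node.target)
    if features is None:
        # Op was never registered: not supported by the Vulkan backend.
        return False
    if features.check_node_fn is not None:
        # Op-specific constraints (e.g. reduce ops requiring keepdim=True).
        return features.check_node_fn(node)
    return True
```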

Differential Revision: D64915640


pytorch-bot bot commented Oct 24, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6488

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit aa0d67f with merge base 16b633b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 24, 2024
SS-JIA added a commit that referenced this pull request Oct 24, 2024
@facebook-github-bot
This pull request was exported from Phabricator. Differential Revision: D64915640


SS-JIA added a commit that referenced this pull request Oct 25, 2024
SS-JIA added a commit that referenced this pull request Oct 25, 2024
SS-JIA added a commit that referenced this pull request Oct 25, 2024
SS-JIA added a commit that referenced this pull request Oct 25, 2024
SS-JIA added a commit that referenced this pull request Oct 25, 2024

SS-JIA added a commit that referenced this pull request Oct 25, 2024
SS-JIA added a commit that referenced this pull request Oct 26, 2024
Pull Request resolved: #6488

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D64915640

@facebook-github-bot facebook-github-bot merged commit 610dac2 into gh/SS-JIA/128/base Oct 27, 2024
45 of 48 checks passed
@facebook-github-bot facebook-github-bot deleted the gh/SS-JIA/128/head branch October 27, 2024 01:11
SS-JIA added a commit that referenced this pull request Oct 28, 2024
Pull Request resolved: #6488

Co-authored-by: Stephen Jia <ssjia@meta.com>
Labels: CLA Signed, fb-exported