
Conversation

@davidberard98
Contributor

@davidberard98 davidberard98 commented Feb 7, 2022

Stack from ghstack:

`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

Differential Revision: D34057522


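The equivalence claimed above can be illustrated numerically. The sketch below is not from the PR; it is a hypothetical stand-alone model in which "reduced precision" (bfloat16) is approximated by truncating the low 16 bits of a float32 bit pattern. Real conversions round rather than truncate, but the point stands: the autocast ops are, semantically, just dtype casts like `aten::to`.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Hypothetical sketch (not PyTorch code): model a float32 -> bfloat16 ->
// float32 round trip by keeping only the sign, exponent, and top 7 mantissa
// bits. Truncation is a simplification of the real rounding conversion.
float to_bf16_and_back(float x) {
  std::uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  bits &= 0xFFFF0000u;  // drop the low 16 mantissa bits
  float y;
  std::memcpy(&y, &bits, sizeof(y));
  return y;
}
```

Values exactly representable in bfloat16 (e.g. 1.0f) survive the round trip unchanged; others lose low-order mantissa bits, which is exactly the behavior a `to(bfloat16)` cast would have.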
@pytorch-bot

pytorch-bot bot commented Feb 7, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/59b4fc231e2b68758ae4e0a87645afd1213ad1a7/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default
Add ciflow labels to this PR to trigger more builds:

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
linux-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-manywheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk, ciflow/xla ✅ triggered
linux-bionic-rocm4.5-py3.7 ciflow/all, ciflow/default, ciflow/linux, ciflow/rocm, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
windows-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped

@facebook-github-bot
Contributor

facebook-github-bot commented Feb 7, 2022

🔗 Helpful links

💊 CI failures summary and remediations

As of commit bed4c84 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



davidberard98 added a commit that referenced this pull request Feb 7, 2022
`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

ghstack-source-id: a843358
Pull Request resolved: #72478
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Feb 7, 2022
@davidberard98
Contributor Author

@davidberard98 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

davidberard98 added a commit that referenced this pull request Feb 8, 2022
`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

ghstack-source-id: a04d942
Pull Request resolved: #72478
@davidberard98
Contributor Author

@davidberard98 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@davidberard98
Contributor Author

RUN_TORCHBENCH: ALL

davidberard98 added a commit that referenced this pull request Feb 9, 2022
`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

ghstack-source-id: 2abac42
Pull Request resolved: #72478
@davidberard98
Contributor Author

@davidberard98 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@davidberard98 davidberard98 marked this pull request as ready for review February 9, 2022 18:03
Contributor

@eellison eellison left a comment


LGTM with one change

return false;
}
}
// can't support non-constant pin_memory or pin_memory = True
Contributor


Let's add the autocast nodes here:

if (n->kind() != aten::to) {
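The suggestion amounts to widening this cast-node check so the two autocast ops are treated exactly like `aten::to`. A hypothetical stand-alone sketch of the predicate (the function name and the string-based kinds are illustrative; the real fuser compares `c10::Symbol` node kinds, not strings):

```cpp
#include <string>

// Illustrative predicate mirroring the suggested change: a node is a
// fusible cast if it is aten::to or one of the two autocast casts,
// since all three are semantically dtype conversions.
bool isFusibleCastKind(const std::string& kind) {
  return kind == "aten::to" ||
         kind == "aten::_autocast_to_reduced_precision" ||
         kind == "aten::_autocast_to_full_precision";
}
```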

@davidberard98
Contributor Author

@davidberard98 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

davidberard98 added a commit that referenced this pull request Feb 9, 2022
`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

ghstack-source-id: bbd0b5f
Pull Request resolved: #72478
@davidberard98
Contributor Author

@davidberard98 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Feb 10, 2022
Summary:
Pull Request resolved: #72478

`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D34057522

Pulled By: davidberard98

fbshipit-source-id: f3b53641415702a4ac56460587801b9c76d81b3c
@github-actions
Contributor

Hey @davidberard98.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@davidberard98 davidberard98 added release notes: jit release notes category topic: performance topic category labels Feb 10, 2022
@facebook-github-bot facebook-github-bot deleted the gh/davidberard98/43/head branch February 14, 2022 15:16
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 15, 2022
Summary:
Pull Request resolved: pytorch/pytorch#72478

`aten::_autocast_to_reduced_precision` and `aten::_autocast_to_full_precision` are essentially just `aten::to` operations, so they can be fused the same way `aten::to` is fused.

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D34057522

Pulled By: davidberard98

fbshipit-source-id: f3b53641415702a4ac56460587801b9c76d81b3c
(cherry picked from commit 838ce5542ebd1f95be1ab57ed496f16db4fb7d68)
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 15, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 16, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 16, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 20, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 20, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 20, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 21, 2022

Labels

cla signed oncall: jit Add this issue/PR to JIT oncall triage queue release notes: jit release notes category topic: performance topic category
