adding fused uint4x2_mixed_mm to inductor #106516
Commits on Aug 3, 2023
- 7892259: Summary: Test Plan: Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
- 590d45c: Summary: Test Plan: Reviewers: Subscribers: Tasks: Tags: cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 ipiszy ngimel yf225 chenyang78 kadeng muchulee8 aakhundov [ghstack-poisoned]
- 59ad453: Update on "adding fused uint4x2_mixed_mm to inductor"
  Summary: this is needed for int4 weight-only quantization. We match on the specific unpack operation that unpacks the uint4x2 into int4s so we can emit a fused kernel for it. Note that even if the user isn't specifically doing this, the two operations are mathematically equivalent, so the rewrite won't cause issues. Ideally, full prologue fusion for the mm arguments would eventually handle this chain, but until then this type of kernel is needed.
  Test Plan:
  python test/inductor/test_pattern_matcher.py -k "uint4x2"
  python test/inductor/test_torchinductor.py -k "uint4x2"
  Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
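The unpack that the pattern matcher targets splits each packed uint8 into its two 4-bit halves. Below is a minimal pure-Python sketch of that nibble unpack; the function name and the low-nibble-first ordering are illustrative assumptions, not the exact pattern inductor matches on.

```python
def unpack_uint4x2(packed):
    """Unpack uint8 values, each of which holds two 4-bit ints.

    Emits the low nibble first, then the high nibble, mirroring the
    (byte & 0xF, byte >> 4) pair of ops a fused kernel can absorb.
    """
    out = []
    for byte in packed:
        out.append(byte & 0xF)  # low 4 bits
        out.append(byte >> 4)   # high 4 bits
    return out

# 0x21 packs the values 1 (low) and 2 (high); 0x43 packs 3 and 4.
print(unpack_uint4x2([0x21, 0x43]))  # → [1, 2, 3, 4]
```

Fusing this into the mm prologue avoids materializing the unpacked int4 tensor in memory before the matmul reads it.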
Commits on Aug 10, 2023
- 9d43739: Update on "adding fused uint4x2_mixed_mm to inductor"
  Summary: this is needed for int4 weight-only quantization. We match on the specific unpack operation that unpacks the uint4x2 into int4s so we can emit a fused kernel for it. Note that even if the user isn't specifically doing this, the two operations are mathematically equivalent, so the rewrite won't cause issues (the one exception: for some reason, int8 bitwise logic in Triton and PyTorch doesn't match). Ideally, full prologue fusion for the mm arguments would eventually handle this chain, but until then this type of kernel is needed.
  Test Plan:
  python test/inductor/test_pattern_matcher.py -k "uint4x2"
  python test/inductor/test_torchinductor.py -k "uint4x2"
  Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
- 546a286: Update on "adding fused uint4x2_mixed_mm to inductor" (message identical to 9d43739)
Commits on Aug 11, 2023
- 55797a6: Update on "adding fused uint4x2_mixed_mm to inductor" (message identical to 9d43739)
Commits on Aug 14, 2023
- df08d2c: Update on "adding fused uint4x2_mixed_mm to inductor" (message identical to 9d43739)
- aae1aad: Update on "adding fused uint4x2_mixed_mm to inductor" (message identical to 9d43739)
- a49ff9d: Update on "adding fused uint4x2_mixed_mm to inductor" (message identical to 9d43739)