Revert D56685840: Multisect successfully blamed "D56685840: [fbgemm] Change model transform fp8 linear op to fbgemm quantize ops" for one test failure

Summary:
This diff reverts D56685840.
D56685840 ([fbgemm] Change model transform fp8 linear op to fbgemm quantize ops, by jianyuh) causes the following test failure:

Tests affected:
- [cogwheel:cogwheel_gpu_ait_lowering_latency_regression_test#main](https://www.internalfb.com/intern/test/281475067301657/)

Here's the Multisect link:
https://www.internalfb.com/multisect/4966282
Here are the tasks that are relevant to this breakage:
T174133180: 10+ tests unhealthy for oncall_model_processing_components_infra

The backout may land if someone accepts it.

If this diff has been generated in error, you can Commandeer and Abandon it.

Reviewed By: jianyuh

Differential Revision: D56714397
Dark Knight authored and facebook-github-bot committed Apr 29, 2024
1 parent dff5bc2 commit 275dfcb
Showing 1 changed file with 1 addition and 1 deletion.
fbgemm_gpu/experimental/gen_ai/src/quantize/quantize.cpp: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ at::Tensor get_fp8_per_tensor_scale(
     c10::optional<at::Tensor> bs,
     c10::optional<at::Tensor> scale_ub); // scale upperbound

-TORCH_LIBRARY_FRAGMENT(fbgemm, m) {
+TORCH_LIBRARY(fbgemm, m) {
 #ifndef USE_ROCM
   // TODO: on AMD this throws "Undefined symbol" when loading
   // quantize_ops with
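For context, the reverted line controls how the `fbgemm` operator namespace is registered with PyTorch's dispatcher. The sketch below (not part of this commit; `my_op` and `my_other_op` are hypothetical schemas) illustrates the distinction: `TORCH_LIBRARY` may appear only once per namespace in the whole program and defines that namespace, while `TORCH_LIBRARY_FRAGMENT` lets additional translation units append operators to a namespace defined elsewhere.

#include <torch/library.h>

// Sole definition site for the namespace: at most one
// TORCH_LIBRARY block per namespace across the whole binary.
TORCH_LIBRARY(fbgemm, m) {
  m.def("my_op(Tensor x) -> Tensor"); // hypothetical schema
}

// In a different translation unit, a fragment appends more
// operators to the already-defined namespace without
// redefining it.
TORCH_LIBRARY_FRAGMENT(fbgemm, m) {
  m.def("my_other_op(Tensor x) -> Tensor"); // hypothetical schema
}

Reverting back to `TORCH_LIBRARY` suggests this file is treated as the sole definition site for the `fbgemm` namespace in the affected build; switching it to a fragment (as D56685840 did) presumably interacted badly with wherever else the namespace is, or is not, defined.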
