
Aarch64 and X64 have different behaviour in u8s8 reorder for convolution #2412

Open
@robert-hardwick

Description

Summary

This issue was identified from the PyTorch unit-test failures in pytorch/pytorch#144770. It appears to me that the failures are a result of weight prepacking leaving the weights at unexpected memory locations.

https://github.com/pytorch/pytorch/blob/dc8692b0eb093d5af150ae0f3a29a0957c3e4c0d/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp#L437-L458

The code relating to the test failure is linked above, and I can confirm that the tests pass if this line is commented out:

https://github.com/pytorch/pytorch/blob/dc8692b0eb093d5af150ae0f3a29a0957c3e4c0d/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp#L415

```cpp
op_attr.set_zero_points_mask(DNNL_ARG_SRC, /* zero_points_mask= */ 0);
```


A reproducer is in the comments below.

It appears that a reorder with reversed memory format (ab -> ba) of a 135x2 int8 matrix leaves zeros in part of the resulting memory on Aarch64, but not on x64.

```cpp
memory::dims dims = {135, 2};

dnnl::memory::desc src_desc(dims, dnnl::memory::data_type::s8, dnnl::memory::format_tag::ab);
dnnl::memory src_tensor(src_desc, engine, (void *)input_data.data());

dnnl::memory::desc weights_desc(dims, dnnl::memory::data_type::s8, dnnl::memory::format_tag::ba);
weights_desc.get()->extra.flags |= dnnl::impl::memory_extra_flags::compensation_conv_asymmetric_src;
weights_desc.get()->extra.asymm_compensation_mask = (1 << 0);
dnnl::memory dst_tensor(weights_desc, engine);
```

followed by

```cpp
dnnl::primitive_attr op_attr = dnnl::primitive_attr();
op_attr.set_scales_mask(DNNL_ARG_DST, 0);
op_attr.set_scratchpad_mode(dnnl::scratchpad_mode::user);
auto pd = dnnl::reorder::primitive_desc(src_tensor, dst_tensor, op_attr);

dnnl::memory scratchpad(pd.scratchpad_desc(), engine);
std::unordered_map<int, memory> args;
args.insert({DNNL_ARG_FROM, src_tensor});
args.insert({DNNL_ARG_TO, dst_tensor});
args.insert({DNNL_ARG_SCRATCHPAD, scratchpad});

// add scales argument
float scale = 1.f;
dnnl::memory scale_tensor(dnnl::memory::desc({1}, dnnl::memory::data_type::f32, dnnl::memory::format_tag::a), engine, (void *)&scale);
args.insert({DNNL_ARG_ATTR_SCALES | DNNL_ARG_DST, scale_tensor});

dnnl::reorder(pd).execute(engine_stream, args);
```

See Observed behaviour below

Note: this path seems to go through the reorder (jit_uni_reorder) and convolution (gemm_x8s8s32x_convolution_fwd_t) primitives.

Version

Ideep = 77f18a4a44492e79b8c738e4fcd2698699b1acba
oneDNN = 66f0cb9 ( v3.5.3 )

Environment

```
$ gcc --version
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
$ lscpu
Architecture:         aarch64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               48
On-line CPU(s) list:  0-47
Vendor ID:            ARM
Model:                1
Thread(s) per core:   1
Core(s) per socket:   48
Socket(s):            1
Stepping:             r1p1
BogoMIPS:             2100.00
Flags:                fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
```

Steps to reproduce

See comment below with reproducer attached.

Observed behavior

Tensor dimensions are: 135 (27 groups x 5 input channels), 2, 1, 1

```
src tensor @ 0 = -2
src tensor @ 1 = 3
src tensor @ 2 = -4
src tensor @ 3 = 1
dst_tensor @ 0 = -2
dst_tensor @ 1 = -4
dst_tensor @ 2 = -4
dst_tensor @ 3 = -1
dst_tensor @ 135 = 0   <---- THIS IS UNEXPECTED, should be the same as src tensor @ 1
dst_tensor @ 136 = 0   <---- THIS IS UNEXPECTED, should be the same as src tensor @ 3
```

Expected behavior


```
dst_tensor @ 0 = -2
dst_tensor @ 1 = -4
dst_tensor @ 2 = -4
dst_tensor @ 3 = -1
dst_tensor @ 135 = 3   <---- THIS IS THE EXPECTED OUTPUT
dst_tensor @ 136 = 1
```

If you comment out these lines https://github.com/oneapi-src/oneDNN/blob/381134e4fdbdcb1099fc584291ee2805fafdb5f0/src/cpu/cpu_convolution_list.cpp#L592C1-L612C82 to force x86 to use gemm_x8s8s32x_convolution_fwd_t, we still get the correct output on x86.
