softshrink lowering #105603

Closed
wants to merge 4 commits into from

Conversation

@pytorch-bot

pytorch-bot bot commented Jul 19, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/105603

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d635b44:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@msaroufim msaroufim changed the title msaroufim/softshrink softshrink lowering Jul 19, 2023
@msaroufim
Member Author

@XiaobingSuper just an FYI

I'm hitting the failure below with this PR if I change this line to self.common(fn, (torch.randn(1), grad_output, lambd),)

In my PR I changed it to torch.randn(10); I only see this failure with the cpp backend.
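For context, a minimal sketch of the test shape being discussed. The fn body and the grad_output/lambd values here are assumptions inferred from the fused cos/softshrink_backward kernel in the output below, not copied from test_torchinductor.py:

import torch

# Hypothetical reconstruction of the test under discussion; the exact fn body,
# grad_output, and lambd values are assumptions, not the real test source.
def fn(x, grad_output, lambd):
    # Inductor fuses the cos() with the softshrink_backward lowering into a single kernel,
    # which is why the generated kernel below is named cpp_fused_cos_softshrink_backward_0.
    return torch.ops.aten.softshrink_backward(grad_output, torch.cos(x), lambd)

lambd = 0.5                    # matches the -0.5 / 0.5 constants in the generated kernel
grad_output = torch.randn(1)   # assumed shape, matching the (1,) strides asserted below

# failing variant discussed above:
#   self.common(fn, (torch.randn(1), grad_output, lambd),)
# variant used in the PR:
#   self.common(fn, (torch.randn(10), grad_output, lambd),)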

(sourcetorch) ubuntu@ip-172-31-1-136:~/pytorch$ python test/inductor/test_torchinductor_codegen_dynamic_shapes.py -k test_softshrink_backward_dynamic_shapes_cpu
/home/ubuntu/.conda/envs/sourcetorch/lib/python3.10/site-packages/numpy/core/getlimits.py:518: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/ubuntu/.conda/envs/sourcetorch/lib/python3.10/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
F
======================================================================
FAIL: test_softshrink_backward_dynamic_shapes_cpu (__main__.DynamicShapesCodegenCpuTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/ubuntu/pytorch/torch/testing/_internal/common_utils.py", line 2356, in wrapper
    method(*args, **kwargs)
  File "/home/ubuntu/pytorch/test/inductor/test_torchinductor.py", line 6767, in new_test
    return value(self)
  File "/home/ubuntu/pytorch/torch/_dynamo/testing.py", line 312, in _fn
    return fn(*args, **kwargs)
  File "/home/ubuntu/pytorch/test/inductor/test_torchinductor.py", line 2026, in test_softshrink_backward
    self.common(fn, (torch.randn(1), grad_output, lambd),)
  File "/home/ubuntu/pytorch/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 305, in common
    return check_codegen(
  File "/home/ubuntu/pytorch/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 92, in check_codegen
    self.assertTrue(
AssertionError: False is not true : Failed to find dynamic for loop variable
Output code written to: /tmp/torchinductor_ubuntu/gc/cgc5rihs3dxlu37yo3xyzu2mrnss6iv2bti6mqwyrdgz4zmjroya.py
Output code: 

from ctypes import c_void_p, c_long
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile

from torch import empty_strided, as_strided, device
from torch._inductor.codecache import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels

aten = torch.ops.aten
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
async_compile = AsyncCompile()


cpp_fused_cos_softshrink_backward_0 = async_compile.cpp('''
#include "/tmp/torchinductor_ubuntu/zr/czrrhd67iy62iqdam5uwroq4ibq3i5oo4yzl6euetoa7k25vfk35.h"
extern "C" void kernel(const float* in_ptr0,
                       const float* in_ptr1,
                       float* out_ptr0)
{
    {
        auto tmp0 = in_ptr0[static_cast<long>(0L)];
        auto tmp7 = in_ptr1[static_cast<long>(0L)];
        auto tmp1 = std::cos(tmp0);
        auto tmp2 = static_cast<float>(-0.5);
        auto tmp3 = tmp1 >= tmp2;
        auto tmp4 = static_cast<float>(0.5);
        auto tmp5 = tmp1 <= tmp4;
        auto tmp6 = decltype(tmp3)(tmp3 & tmp5);
        auto tmp8 = static_cast<float>(0.0);
        auto tmp9 = tmp6 ? tmp8 : tmp7;
        out_ptr0[static_cast<long>(0L)] = tmp9;
    }
}
''')


async_compile.wait(globals())
del async_compile

def call(args):
    arg0_1, arg1_1 = args
    args.clear()
    assert_size_stride(arg0_1, (1, ), (1, ))
    assert_size_stride(arg1_1, (1, ), (1, ))
    buf0 = empty_strided((1, ), (1, ), device='cpu', dtype=torch.float32)
    cpp_fused_cos_softshrink_backward_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()))
    del arg0_1
    del arg1_1
    return (buf0, )


def benchmark_compiled_module(times=10, repeat=10):
    from torch._dynamo.testing import rand_strided
    from torch._inductor.utils import print_performance
    arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32)
    arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32)
    return print_performance(lambda: call([arg0_1, arg1_1]), times=times, repeat=repeat)


if __name__ == "__main__":
    from torch._inductor.utils import compiled_module_main
    compiled_module_main('None', benchmark_compiled_module)



To execute this test, run the following from the base repo dir:
     python test/inductor/test_torchinductor_codegen_dynamic_shapes.py -k test_softshrink_backward_dynamic_shapes_cpu

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0

----------------------------------------------------------------------
Ran 1 test in 1.557s

FAILED (failures=1)

It goes away if I remove the 2 asserts, but that's probably not safe to do, so I'm curious.
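My read, as an assumption not verified against check_codegen: with torch.randn(1) the input has a single element, so the cpp backend emits the scalar kernel above with no for loop at all, and the dynamic-shapes codegen check then has no dynamic loop variable to find. For reference, a rough eager-mode sketch of what softshrink_backward computes; this just mirrors the ternary in the generated C++ kernel above and is not the actual Inductor lowering added by this PR:

import torch

def softshrink_backward_ref(grad_output, self_, lambd=0.5):
    # Gradient of softshrink: zero inside [-lambd, lambd], pass-through outside,
    # matching the (tmp3 & tmp5) ? 0 : grad_output ternary in the kernel above.
    inside = (self_ >= -lambd) & (self_ <= lambd)
    return torch.where(inside, torch.zeros_like(grad_output), grad_output)

# Quick sanity check against the eager aten op (boundary values are measure-zero
# for randn inputs, so allclose should hold).
x = torch.randn(10)
print(torch.allclose(
    softshrink_backward_ref(torch.ones_like(x), x),
    torch.ops.aten.softshrink_backward(torch.ones_like(x), x, 0.5),
))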

@msaroufim
Member Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk (Trigger trunk jobs on your pull request) label Jul 20, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here


Successfully merging this pull request may close these issues.

aten.softshrink_backward