
Fix auto exponent issue for torch.pow #47024

Closed. Wanted to merge 11 commits.

Conversation

@anjali411 (Contributor) commented Oct 28, 2020:

Fixes #46936

Stack from ghstack:

Differential Revision: D24698027

anjali411 added a commit that referenced this pull request Oct 28, 2020
ghstack-source-id: e57f2011fed8add87a0f2ba818f7cbf1d790390f
Pull Request resolved: #47024
@dr-ci bot commented Oct 28, 2020:

💊 CI failures summary and remediations

As of commit cf7edfa (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_py3_clang5_asan_test2 (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Nov 15 07:41:07 SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/build/aten/src/ATen/native/cpu/PowKernel.cpp.AVX2.cpp:41:5 in
Nov 15 07:41:07     #33 0x55d1ce2b1d4a in testing::Test::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c1bd4a) 
Nov 15 07:41:07     #34 0x55d1ce2b4692 in testing::TestInfo::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c1e692) 
Nov 15 07:41:07     #35 0x55d1ce2b65ba in testing::TestCase::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c205ba) 
Nov 15 07:41:07     #36 0x55d1ce2d89c4 in testing::internal::UnitTestImpl::RunAllTests() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c429c4) 
Nov 15 07:41:07     #37 0x55d1ce308591 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/var/lib/jenkins/workspace/build/bin/test_api+0x1c72591) 
Nov 15 07:41:07     #38 0x55d1ce2d7469 in testing::UnitTest::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c41469) 
Nov 15 07:41:07     #39 0x55d1cd174da7 in main (/var/lib/jenkins/workspace/build/bin/test_api+0xadeda7) 
Nov 15 07:41:07     #40 0x7f4aef08383f in __libc_start_main /build/glibc-e6zv40/glibc-2.23/csu/../csu/libc-start.c:291 
Nov 15 07:41:07     #41 0x55d1cd174758 in _start (/var/lib/jenkins/workspace/build/bin/test_api+0xade758) 
Nov 15 07:41:07  
Nov 15 07:41:07 SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/build/aten/src/ATen/native/cpu/PowKernel.cpp.AVX2.cpp:41:5 in  
Nov 15 07:41:07 + cleanup 
Nov 15 07:41:07 + retcode=1 
Nov 15 07:41:07 + set +x 
Nov 15 07:41:07 =================== sccache compilation log =================== 
Nov 15 07:41:07 ERROR 2020-11-15T05:40:07Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "/var/lib/jenkins/.cache/torch_extensions/test_compilation_error_formatting/main.cpp: In function \'int main()\':\n/var/lib/jenkins/.cache/torch_extensions/test_compilation_error_formatting/main.cpp:2:23: error: expected \';\' before \'}\' token\n int main() { return 0 }\n                       ^\n" } 
Nov 15 07:41:07  
Nov 15 07:41:07 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Nov 15 07:41:07 Compile requests                      0 
Nov 15 07:41:07 Compile requests executed             0 
Nov 15 07:41:07 Cache hits                            0 

This comment was automatically generated by Dr. CI and has been revised 81 times.

@ailzhang (Contributor) left a comment:

Thanks!

Fixes #46936

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#47024 Fix auto exponent issue for torch.pow**

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request Oct 29, 2020
ghstack-source-id: e664eaa9159db964a980f8386d532f9dbf7c15df
Pull Request resolved: #47024
anjali411 added a commit that referenced this pull request Oct 29, 2020
ghstack-source-id: 3cfe280c6cdeff1cb7984ab79fb6e415a197e2df
Pull Request resolved: #47024
@anjali411 requested a review from @ezyang on October 29, 2020.
@@ -88,6 +88,17 @@ class C10_API Scalar {

Scalar operator-() const;

template<typename T>
bool equal(T num) const {
Contributor (Author):

cc. @ezyang to verify if it's ok to add equal for Scalar.

Contributor:

This seems fine.
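To illustrate the idea under discussion, here is a minimal standalone sketch of a templated equal() on a Scalar-like tagged type. MiniScalar and its members are hypothetical names for illustration only, not the actual c10::Scalar implementation:

```cpp
#include <cassert>
#include <complex>

// Hypothetical Scalar-like class: holds either a real or a complex value.
class MiniScalar {
 public:
  explicit MiniScalar(double v) : is_complex_(false), d_(v) {}
  explicit MiniScalar(std::complex<double> v) : is_complex_(true), z_(v) {}

  // Compare against any arithmetic value; a complex scalar compares the
  // promoted value, a real scalar compares the double representation.
  template <typename T>
  bool equal(T num) const {
    if (is_complex_) {
      return z_ == std::complex<double>(num);
    }
    return d_ == static_cast<double>(num);
  }

  // Overload for complex arguments: a real scalar can only equal a complex
  // number whose imaginary part is zero.
  bool equal(std::complex<double> num) const {
    if (is_complex_) {
      return z_ == num;
    }
    return num.imag() == 0.0 && d_ == num.real();
  }

 private:
  bool is_complex_;
  double d_{};
  std::complex<double> z_{};
};
```

The non-template overload wins resolution for complex arguments, so mixed real/complex comparisons stay well-defined.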

anjali411 added a commit that referenced this pull request Oct 30, 2020
ghstack-source-id: 62abe07430dfca7524ff6ca6e47e4ce899714c68
Pull Request resolved: #47024
auto grad_lambda = [](Tensor a, Scalar b) {
  return AT_DISPATCH_DOUBLE_COMPLEXDOUBLE(b.type(), "scalar_val", ([&] {
    scalar_t val = b.to<scalar_t>();
    return (a * std::log(val)).conj();
Contributor:

This seems a little goofy. Why not also add a log() method on Scalar?

Contributor:

Should Scalar have more operations defined on it? For example, in pow_backward we would need to define an operator- for Scalar to avoid the dispatch.

Contributor (Author):

@ezyang added log.
I think it makes sense to support more math operations for Scalar. For now I added log and equal in Scalar.h, but we should add a new file ScalarMath.{h,cpp} if we plan to add more ops for Scalar.

anjali411 added a commit that referenced this pull request Nov 3, 2020
ghstack-source-id: 0300183c1941c0287ba4334524c8b79a4df20a95
Pull Request resolved: #47024
  return at::zeros_like(self, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
} else {
  auto out = grad * (exponent * self.pow(exponent - 1)).conj();
  auto grad_lambda = [&](auto exp) { return grad * (exp * self.pow(exp - 1)).conj(); };
Contributor:

Time to read up on auto type deduction...

Contributor:

According to https://en.cppreference.com/w/cpp/language/lambda, grad_lambda is a generic lambda, which behaves like a template with one parameter. So this should work.

It's too bad that we can't actually write a test for this.
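A standalone illustration of the generic-lambda point, with torch types replaced by plain values (names are illustrative): the closure's operator() is a template, so one definition is instantiated separately for double and for std::complex<double>:

```cpp
#include <cmath>
#include <complex>

// d/dx of x^p is p * x^(p-1); decltype(p)(1) keeps the subtraction in p's
// type, so the same body compiles for real and complex exponents.
auto backward_factor = [](auto x, auto p) {
  return p * std::pow(x, p - decltype(p)(1));
};
```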

anjali411 added a commit that referenced this pull request Nov 4, 2020
ghstack-source-id: 6a7db7a10a0a0debf9762892a6b14ee70f85982d
Pull Request resolved: #47024
anjali411 added a commit that referenced this pull request Nov 11, 2020
ghstack-source-id: 5f99cd686fd30520ce16d8e5d1b6599169402dd4
Pull Request resolved: #47024
anjali411 added a commit that referenced this pull request Nov 13, 2020
ghstack-source-id: f6df8f5fc1f841490eef810e0e413524dc2f1824
Pull Request resolved: #47024
// auto y = x.pow(1.5);
// auto gr =
// grad({y}, {x}, {}, /*retain_graph=*/true, /*create_backward=*/true);
// ASSERT_THROWS_WITH(grad({gr[0]}, {x});, "returned nan");
Collaborator:

Can you try adding the grad_output as torch::tensor({0.0}) here? That should make it NaN as you expect.

Contributor (Author):

Yup, will do.
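A plain-double sketch of why a zero grad_output exposes the NaN (a stand-in for the autograd computation, not the actual kernel code): for y = x^1.5 the second derivative is 0.75 * x^(-0.5), which is +inf at x = 0, and IEEE arithmetic gives 0 * inf = NaN:

```cpp
#include <cmath>

// Second derivative of x^1.5. std::pow(0.0, -0.5) is +inf per IEEE/C rules,
// so multiplying by a zero grad_output yields NaN.
double second_deriv_pow_1p5(double x) {
  return 0.75 * std::pow(x, -0.5);
}
```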

anjali411 added a commit that referenced this pull request Nov 15, 2020
ghstack-source-id: b68bb0477d6bf582852136b497d012ec001dc583
Pull Request resolved: #47024
@facebook-github-bot (Contributor):

@anjali411 merged this pull request in 8ef7ccd.

@mruberry (Collaborator):

Unlanding. This appears to have broken pytorch_linux_xenial_py3_clang5_asan_test2. Relevant snippet:

Nov 15 08:59:54 /var/lib/jenkins/workspace/build/aten/src/ATen/native/cpu/PowKernel.cpp.AVX2.cpp:41:5: runtime error: division by zero
Nov 15 08:59:55     #0 0x7f18784fe544 in void at::native::(anonymous namespace)::basic_loop<at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const::{lambda()#2}::operator()() const::{lambda(float)#4}>(char* restrict*, long const*, long, long, at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const::{lambda()#2}::operator()() const::{lambda(float)#4}&&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x15241544)
Nov 15 08:59:55     #1 0x7f18784fdcad in void at::native::(anonymous namespace)::vectorized_loop<at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const::{lambda()#2}::operator()() const::{lambda(float)#4}, at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const::{lambda()#2}::operator()() const::{lambda(at::vec256::(anonymous namespace)::Vec256<float>)#4}>(char**, long, long, at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const::{lambda()#2}::operator()() const::{lambda(float)#4}&&, at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const::{lambda()#2}::operator()() const::{lambda(at::vec256::(anonymous namespace)::Vec256<float>)#4}&&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x15240cad)
Nov 15 08:59:55     #2 0x7f186f71de0e in void c10::function_ref<void (char**, long const*, long, long)>::callback_fn<at::TensorIterator::for_each(c10::function_ref<void (char**, long const*, long)>, long)::$_5>(long, char**, long const*, long, long) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xc460e0e)
Nov 15 08:59:55     #3 0x7f186f70d580 in at::TensorIterator::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xc450580)
Nov 15 08:59:55     #4 0x7f186f70c71a in at::TensorIterator::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xc44f71a)
Nov 15 08:59:55     #5 0x7f186f70c4b3 in at::TensorIterator::for_each(c10::function_ref<void (char**, long const*, long)>, long) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xc44f4b3)
Nov 15 08:59:55     #6 0x7f18784e3ee4 in at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar)::$_2::operator()() const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x15226ee4)
Nov 15 08:59:55     #7 0x7f18784dfce0 in at::native::(anonymous namespace)::pow_tensor_scalar_kernel(at::TensorIterator&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x15222ce0)
Nov 15 08:59:55     #8 0x7f186f29fb84 in void at::native::DispatchStub<void (*)(at::TensorIterator&, c10::Scalar), at::native::pow_tensor_scalar_stub>::operator()<at::TensorIterator&, c10::Scalar&>(c10::DeviceType, at::TensorIterator&, c10::Scalar&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xbfe2b84)
Nov 15 08:59:55     #9 0x7f186f29cc58 in at::native::pow_out(at::Tensor&, at::Tensor const&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xbfdfc58)
Nov 15 08:59:55     #10 0x7f186f29e461 in at::native::pow(at::Tensor const&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xbfe1461)
Nov 15 08:59:55     #11 0x7f18703e0b4a in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::Scalar), &at::(anonymous namespace)::pow_Tensor_Scalar>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::Scalar> >, at::Tensor (at::Tensor const&, c10::Scalar)>::call(c10::OperatorKernel*, at::Tensor const&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xd123b4a)
Nov 15 08:59:55     #12 0x7f186fd9c2fc in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::Scalar>(c10::OperatorHandle const&, at::Tensor const&, c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xcadf2fc)
Nov 15 08:59:56     #13 0x7f186fd9b906 in at::Tensor c10::Dispatcher::callWithDispatchKey<at::Tensor, at::Tensor const&, c10::Scalar>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::Scalar)> const&, c10::DispatchKey, at::Tensor const&, c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xcade906)
Nov 15 08:59:56     #14 0x7f186fd9b4ef in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::Scalar>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::Scalar)> const&, at::Tensor const&, c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xcade4ef)
Nov 15 08:59:56     #15 0x7f186fc0ce83 in at::pow(at::Tensor const&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xc94fe83)
Nov 15 08:59:56     #16 0x7f18749b9629 in torch::autograd::VariableType::(anonymous namespace)::pow_Tensor_Scalar(at::Tensor const&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x116fc629)
Nov 15 08:59:56     #17 0x7f18749b8f7a in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::Scalar), &torch::autograd::VariableType::(anonymous namespace)::pow_Tensor_Scalar>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::Scalar> >, at::Tensor (at::Tensor const&, c10::Scalar)>::call(c10::OperatorKernel*, at::Tensor const&, c10::Scalar) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x116fbf7a)
Nov 15 08:59:56     #18 0x7f186fd9c2fc in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::Scalar>(c10::OperatorHandle const&, at::Tensor const&, c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xcadf2fc)
Nov 15 08:59:56     #19 0x7f186fd9b906 in at::Tensor c10::Dispatcher::callWithDispatchKey<at::Tensor, at::Tensor const&, c10::Scalar>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::Scalar)> const&, c10::DispatchKey, at::Tensor const&, c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xcade906)
Nov 15 08:59:56     #20 0x7f186fd9b4ef in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::Scalar>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::Scalar)> const&, at::Tensor const&, c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xcade4ef)
Nov 15 08:59:56     #21 0x7f18709773ca in at::Tensor::pow(c10::Scalar) const (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0xd6ba3ca)
Nov 15 08:59:56     #22 0x7f187643da1e in torch::autograd::generated::details::pow_backward(at::Tensor, at::Tensor const&, c10::Scalar const&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x13180a1e)
Nov 15 08:59:56     #23 0x7f18743c1267 in torch::autograd::generated::PowBackward0::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x11104267)
Nov 15 08:59:56     #24 0x7f1875636bf8 in torch::autograd::Node::operator()(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x12379bf8)
Nov 15 08:59:56     #25 0x7f187561e83b in torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x1236183b)
Nov 15 08:59:56     #26 0x7f187561c86f in torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x1235f86f)
Nov 15 08:59:56     #27 0x7f187562e527 in torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x12371527)
Nov 15 08:59:56     #28 0x7f187562a134 in torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x1236d134)
Nov 15 08:59:56     #29 0x7f187560ad6a in torch::autograd::run_backward(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x1234dd6a)
Nov 15 08:59:56     #30 0x7f187560c0b1 in torch::autograd::grad(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, c10::optional<bool>, bool, bool) (/var/lib/jenkins/workspace/build/lib/libtorch_cpu.so+0x1234f0b1)
Nov 15 08:59:56     #31 0x55651c46d554 in AutogradAPITests_AnomalyMode_Test::TestBody() (/var/lib/jenkins/workspace/build/bin/test_api+0xaf1554)
Nov 15 08:59:56     #32 0x55651d5e906a in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/var/lib/jenkins/workspace/build/bin/test_api+0x1c6d06a)
Nov 15 08:59:56     #33 0x55651d597d4a in testing::Test::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c1bd4a)
Nov 15 08:59:56     #34 0x55651d59a692 in testing::TestInfo::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c1e692)
Nov 15 08:59:56     #35 0x55651d59c5ba in testing::TestCase::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c205ba)
Nov 15 08:59:56     #36 0x55651d5be9c4 in testing::internal::UnitTestImpl::RunAllTests() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c429c4)
Nov 15 08:59:56     #37 0x55651d5ee591 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/var/lib/jenkins/workspace/build/bin/test_api+0x1c72591)
Nov 15 08:59:56     #38 0x55651d5bd469 in testing::UnitTest::Run() (/var/lib/jenkins/workspace/build/bin/test_api+0x1c41469)
Nov 15 08:59:56     #39 0x55651c45ada7 in main (/var/lib/jenkins/workspace/build/bin/test_api+0xadeda7)
Nov 15 08:59:56     #40 0x7f18594b683f in __libc_start_main /build/glibc-e6zv40/glibc-2.23/csu/../csu/libc-start.c:291
Nov 15 08:59:56     #41 0x55651c45a758 in _start (/var/lib/jenkins/workspace/build/bin/test_api+0xade758)
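For context on the sanitizer report: a negative exponent is often lowered to a reciprocal (an assumed simplification for illustration, not necessarily what PowKernel.cpp does). IEEE arithmetic defines float division by zero as +inf, but UBSan's float-divide-by-zero check still flags the operation, which is consistent with the runtime error above:

```cpp
#include <cmath>

// x^-1 lowered to 1/x: well-defined as +inf for x == 0 under IEEE floats,
// yet reported by -fsanitize=float-divide-by-zero.
float reciprocal(float x) {
  return 1.0f / x;
}
```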

@facebook-github-bot (Contributor):

This pull request has been reverted by 013e6a3.

@facebook-github-bot facebook-github-bot deleted the gh/anjali411/70/head branch November 18, 2020 15:15
7 participants