
Compilation breaks with clang8: no matching constructor for initialization of 'fbgemm::ReQuantizeOutput<false, fbgemm::QuantizationGranularity::TENSOR, float>' #28337

Closed
yurivict opened this issue Oct 19, 2019 · 4 comments
Labels
oncall: quantization (Quantization support in PyTorch) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

yurivict commented Oct 19, 2019

/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/native/quantized/cpu/qlinear.cpp:155:15: error: no matching constructor for initialization of 'fbgemm::ReQuantizeOutput<false, fbgemm::QuantizationGranularity::TENSOR, float>'
              outputProcObj(
              ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/native/quantized/cpu/qlinear.cpp:337:14: note: in instantiation of member function 'at::native::(anonymous namespace)::QLinearInt8<false>::fbgemm_linear' requested here
      return fbgemm_linear(
             ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/core/boxing/kernel_functor.h:194:12: note: in instantiation of member function 'at::native::(anonymous namespace)::QLinearInt8<false>::operator()' requested here
    return (*functor)(ivalue_to_arg<guts::remove_cv_t<guts::remove_reference_t<guts::typelist::element_t<ivalue_arg_indices, IValueArgTypes>>>, AllowDeprecatedTypes>(
           ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/core/boxing/kernel_functor.h:202:12: note: in instantiation of function template specialization 'c10::detail::call_functor_with_args_from_stack_<at::native::(anonymous namespace)::QLinearInt8<false>, false, 0, 1, 2, 3>' requested here
    return call_functor_with_args_from_stack_<Functor, AllowDeprecatedTypes>(functor, stack, guts::make_index_sequence<num_ivalue_args>());
           ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/core/boxing/kernel_functor.h:234:21: note: in instantiation of function template specialization 'c10::detail::call_functor_with_args_from_stack<at::native::(anonymous namespace)::QLinearInt8<false>, false>' requested here
      auto output = call_functor_with_args_from_stack<KernelFunctor, AllowDeprecatedTypes>(functor_, stack);
                    ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/core/boxing/KernelFunction.h:205:76: note: in instantiation of member function 'c10::detail::wrap_kernel_functor_boxed<at::native::(anonymous namespace)::QLinearInt8<false>, false, void>::call' requested here
      &detail::wrap_kernel_functor_boxed<KernelFunctor, AllowLegacyTypes>::call,
                                                                           ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/core/op_registration/op_registration.h:154:25: note: in instantiation of function template specialization 'c10::KernelFunction::makeFromUnboxedFunctorFactory<at::native::(anonymous namespace)::QLinearInt8<false>, false>' requested here
        KernelFunction::makeFromUnboxedFunctorFactory<KernelFunctor>(detail::KernelFactory<KernelFunctor, guts::decay_t<ConstructorParameters>...>(std::forward<ConstructorParameters>(constructorParameters)...)),
                        ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/aten/src/ATen/native/quantized/cpu/qlinear.cpp:357:49: note: in instantiation of function template specialization 'c10::RegisterOperators::Options::kernel<at::native::(anonymous namespace)::QLinearInt8<false>>' requested here
            torch::RegisterOperators::options().kernel<QLinearInt8<false>>(
                                                ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/third_party/fbgemm/include/fbgemm/Fbgemm.h:1158:3: note: candidate constructor not viable: requires at most 10 arguments, but 11 were provided
  ReQuantizeOutput(
  ^
/usr/ports/science/py-pytorch/work/pytorch-1.3.0/third_party/fbgemm/include/fbgemm/Fbgemm.h:1134:18: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 11 were provided
class FBGEMM_API ReQuantizeOutput {
                 ^
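
For context, the key part is the last two candidate notes: the call in qlinear.cpp passes 11 arguments, but the ReQuantizeOutput constructor declared in the bundled fbgemm header accepts at most 10, which usually means the caller and the third_party/fbgemm header come from mismatched revisions. A minimal, self-contained C++ sketch (hypothetical names, not the real fbgemm API) that triggers the same class of diagnostic:

// arity_mismatch.cpp -- hypothetical example, not the fbgemm API
struct ReQuant {
  // The header declares a constructor accepting at most 2 arguments
  // (the second one is defaulted)...
  ReQuant(float scale, int zero_point = 0) {}
};

int main() {
  // ...but the caller was written against a newer header that added a
  // third parameter, so overload resolution fails with diagnostics like:
  //   error: no matching constructor for initialization of 'ReQuant'
  //   note: candidate constructor not viable: requires at most 2
  //         arguments, but 3 were provided
  //   note: candidate constructor (the implicit copy constructor) not
  //         viable: requires 1 argument, but 3 were provided
  ReQuant r(0.5f, /*zero_point=*/0, /*bit_width=*/8);
  return 0;
}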

Version: 1.3.0
Compiler: clang 8
OS: FreeBSD 12

cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100

lly-zero-one added the oncall: quantization and triaged labels on Oct 21, 2019
@lly-zero-one (Contributor)

@dskhudia @jianyuh Could you help take a look? This seems to be a build issue.

@yurivict (Author)

This issue stops me from creating a FreeBSD port for PyTorch.

jianyuh (Member) commented Oct 25, 2019

@lly-zero-one @dskhudia We currently don't have a FreeBSD 12 environment to reproduce this, so it might take some time to look into. @ezyang Do we have OSS CI coverage for FreeBSD?

For now, you can disable FBGEMM when building PyTorch on FreeBSD:
USE_FBGEMM=0 python setup.py install
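
Note (not from this thread, but a general caveat with PyTorch's CMake-based build): if a previous build already configured with FBGEMM enabled, the cached configuration may override the environment flag, so cleaning first helps:

python setup.py clean
USE_FBGEMM=0 python setup.py install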

ezyang (Contributor) commented Oct 25, 2019

We don't have any FreeBSD CI, and there are no plans in our roadmap for it.

jianyuh closed this as completed Feb 5, 2020