
🐛 [Bug] Encountered bug when using Torch-TensorRT #791

Closed
ruoqianguo opened this issue Dec 30, 2021 · 0 comments · Fixed by #792
Labels
bug Something isn't working

Comments

ruoqianguo commented Dec 30, 2021

Bug Description

When using "aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor)", the results of Torch-TensorRT and PyTorch are not equal.

To Reproduce

Steps to reproduce the behavior:

  1. Add a unit test in test_pooling.cpp. The test below exercises the GlobalPoolingConverter function.
TEST(Converters, ATenAdaptiveAvgPool1DGlobalPoolingConvertsCorrectly) {
  const auto graph =
      R"IR(
      graph(%0 : Tensor):
        %2 : int = prim::Constant[value=1]()
        %6 : int[] = prim::ListConstruct(%2)
        %10 : Tensor = aten::adaptive_avg_pool1d(%0, %6)
        return (%10))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, g.get());

  // PyTorch adaptive_avg_pool1d needs a 3D input or a 2D input
  auto in = at::randint(-5, 5, {3, 16}, at::kCUDA);

  auto jit_in = at::clone(in);
  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(in);
  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0], 2e-6));
}

[Screenshot: the Torch-TensorRT and PyTorch results differ for the test above.]
aten::adaptive_avg_pool1d's input can be (N, C, L) or (C, L), so the reduceAxes variable should be updated to handle both input ranks.

  2. Add another unit test in test_pooling.cpp. This test does not use the GlobalPoolingConverter function; it uses the Interpolate plugin instead.
TEST(Converters, ATenAdaptiveAvgPool1DUsingPluginConvertsCorrectly) {
  const auto graph =
      R"IR(
      graph(%0 : Tensor):
        %2 : int = prim::Constant[value=3]()
        %6 : int[] = prim::ListConstruct(%2)
        %10 : Tensor = aten::adaptive_avg_pool1d(%0, %6)
        return (%10))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, g.get());

  // PyTorch adaptive_avg_pool1d needs a 3D input or a 2D input
  auto in = at::randint(-5, 5, {1, 3, 16}, at::kCUDA);

  auto jit_in = at::clone(in);
  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(in);
  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0], 2e-6));
}

[Screenshot: shape mismatch between the Torch-TensorRT and PyTorch outputs.]
The Torch-TensorRT output shape doesn't match the PyTorch output shape.

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

@ruoqianguo ruoqianguo added the bug Something isn't working label Dec 30, 2021
ruoqianguo added a commit to ruoqianguo/TRTorch that referenced this issue Dec 31, 2021
…h#791

Signed-off-by: Ruoqian Guo <ruoqiang@nvidia.com>
narendasan added a commit that referenced this issue Jan 31, 2022
Feat: support aten::adaptive_max_pool1d, aten::adaptive_avg_pool3d and aten::adaptive_max_pool3d operators and fix issue #791
bowang007 pushed a commit that referenced this issue Apr 5, 2022
Signed-off-by: Ruoqian Guo <ruoqiang@nvidia.com>