When using "aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor)", the results of Torch-TensorRT and PyTorch are not equal.
To Reproduce

Steps to reproduce the behavior:

1. When output_size = 1, the op is converted with the GlobalPoolingConverter function:
```cpp
TEST(Converters, ATenAdaptiveAvgPool1DGlobalPoolingConvertsCorrectly) {
  const auto graph = R"IR(
      graph(%0 : Tensor):
        %2 : int = prim::Constant[value=1]()
        %6 : int[] = prim::ListConstruct(%2)
        %10 : Tensor = aten::adaptive_avg_pool1d(%0, %6)
        return (%10))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, g.get());

  // PyTorch adaptive_avg_pool1d needs a 3D input or a 2D input
  auto in = at::randint(-5, 5, {3, 16}, at::kCUDA);

  auto jit_in = at::clone(in);
  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(in);
  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0], 2e-6));
}
```
aten::adaptive_avg_pool1d's input can be (N, C, L) or (C, L), so we should update the reduceAxes variable to handle both ranks (a sketch of the idea follows this step).
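A minimal sketch of that fix, assuming the converter emits a TensorRT reduce layer; network and in_tensor are illustrative placeholders, not the converter's actual variable names:

```cpp
// Illustrative sketch only: derive the reduction axis from the input rank so
// global average pooling always reduces the last (length) dimension.
nvinfer1::Dims dims = in_tensor->getDimensions();
// Bitmask selecting axis 1 for (C, L) inputs, axis 2 for (N, C, L) inputs.
uint32_t reduceAxes = 1u << (dims.nbDims - 1);
auto* pool = network->addReduce(
    *in_tensor, nvinfer1::ReduceOperation::kAVG, reduceAxes, /*keepDimensions=*/true);
```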
2. When output_size != 1, the op will not use the GlobalPoolingConverter function, but the Interpolate plugin:
```cpp
TEST(Converters, ATenAdaptiveAvgPool1DUsingPluginConvertsCorrectly) {
  const auto graph = R"IR(
      graph(%0 : Tensor):
        %2 : int = prim::Constant[value=3]()
        %6 : int[] = prim::ListConstruct(%2)
        %10 : Tensor = aten::adaptive_avg_pool1d(%0, %6)
        return (%10))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, g.get());

  // PyTorch adaptive_avg_pool1d needs a 3D input or a 2D input
  auto in = at::randint(-5, 5, {1, 3, 16}, at::kCUDA);

  auto jit_in = at::clone(in);
  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(in);
  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0], 2e-6));
}
```
The Torch-TensorRT output shape doesn't match the PyTorch output shape; the snippet below shows the shape PyTorch is expected to produce.
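For reference, a standalone ATen snippet (an illustration, not part of the original report) showing the shape PyTorch produces for this test case:

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  auto in = at::randn({1, 3, 16});
  auto out = at::adaptive_avg_pool1d(in, {3});
  // PyTorch keeps N and C and only resizes the length: prints [1, 3, 3]
  std::cout << out.sizes() << "\n";
}
```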
Expected behavior

The results and output shapes of Torch-TensorRT and PyTorch should match.

Environment

Build information about Torch-TensorRT can be found by turning on debug messages.
How you installed PyTorch (conda, pip, libtorch, source):

Additional context
fix: add at::adaptive_avg_pool1d in interpolate plugin and fix pytorch#791
deb9f74
Signed-off-by: Ruoqian Guo <ruoqiang@nvidia.com>
Merge pull request #792 from guoruoqian/fix_pooling
0ac503e
Feat: support aten::adaptive_max_pool1d, aten::adaptive_avg_pool3d and aten::adaptive_max_pool3d operators and fix issue #791
fix: add at::adaptive_avg_pool1d in interpolate plugin and fix #791
be51dad
Signed-off-by: Ruoqian Guo <ruoqiang@nvidia.com>