[FX] Changes done internally at Facebook #1204

Merged
merged 1 commit into from
Jul 25, 2022
Conversation

frank-wei
Contributor

@frank-wei frank-wei commented Jul 25, 2022


Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified
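The checklist above points contributors at the project linters. As a minimal sketch of a local style check (an assumption for illustration: the exact lint entry point this repo's CI uses may differ, but the lint bot's diff on this page matches clang-format-style rewrapping):

```shell
# Hypothetical local pre-push style check. Using clang-format directly is an
# assumption; this project's CI may invoke it through its own lint wrapper.
sample=/tmp/trt_style_sample.cpp
printf 'int f(){\nreturn 0;\n}\n' > "$sample"

if command -v clang-format >/dev/null 2>&1; then
  clang-format -i "$sample"                  # rewrite the file in place
  clang-format --dry-run --Werror "$sample"  # exits non-zero if reformatting is still needed
else
  echo "clang-format not installed; skipping style check"
fi
```

Running the formatter before pushing avoids round-trips with the style bot shown later in this thread.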

6703b98dff0695d91026f057b951dba1355825fa Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.prod
c822345d6d673e1653c2208435e34ab400bada3d Jason Park <jasonjk@fb.com> Add support for generic torch ops to be used in training.
e5758602a0592d6c2b71d6d66a0398c4dd9b5e20 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for repeat interleave
c13c633f04df162500eed477c0569eb2b81eb070 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for reduce ops
863476cf43b210922b88585b8f196dd84fbebb56 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_op.convolution
68dff39793e5c30c20010919a855bb3d984015d7 Ruichao Xiao <xiaoruichao@fb.com> [fbcode][GPU][DHEN]fuse split squeeze cat as reshape
f8b920769507ebd2ff02419b4aece25451298a95 Ruichao Xiao <xiaoruichao@fb.com> [fbcode][DHEN][GPU] reorder and merge cats whose input is a sublist of another cat
5b6a8d2d6be979983a52ac96225fefb510c3817c Andrew Or <andrewor@fb.com> [Quant][fx] Rename convert_to_reference to convert_to_reference_fx
996a0e080b8a8bc0b292a7c2ac92f41f6db33a2e Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_op.expand
084631fe74b304fbb9481ca15fd452a3714fb1b8 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_op.to_dtype
b3195e76329ccddbb5c4640cfa884d0e457d2d34 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for std
a5d964e62bdf769cf8c2e67321138b33e1f524a7 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_op.tile
3d33d45b2fc7f10f25c22946ba474b227e4b6529 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for squeeze
09085abf63d7e7732e2cd66e600e8afc6d58964f Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_op.topk
65edc7ea12899e9bd2af42c890a64de853d9b7fe Huamin Li <huaminli@fb.com> temporarily skip gelu tests
d11e521f9b90554ca86912a49920afa4406bb40d Shirong Wu <shirong@fb.com> Suppress accuracy check for remove_reshape_with_batch_size_change
6d948298b2327d229e010a34f1c221b11d2eb504 Ankur Singla <ankursingla@fb.com> [GPULowering] Suppress accuracy check for fuse_unsqueeze_cat_sum
e780b647fc9571b77d9f41c963041a6ac3d66f33 Janet Yang <qxy11@fb.com> Lower xrayvideo2022 to fx2trt
433c7207fef16b1fdff985546ea969c39fa83e7c generatedunixname89002005287564 <generatedunixname89002005287564@fb.com> [Codemod][Remove @noautodeps and @autodeps-skip tags] deeplearning/trt 1/2
66fdb65cffa925660c77b4758388399db3cbfe48 Scott Wolchok <swolchok@fb.com> [fx2ait] Minor Python cleanup in acc_ops_getitem
188132ecb2c19bcbf83cb2dc381f6e3798629f87 generatedunixname89002005324833 <generatedunixname89002005324833@fb.com> [AutoAccept][Codemod][FBSourceBuckFormatLinter] Daily `arc lint --take BUCKFORMAT`
4536bae4686dd01f2149541ea7fb330e178a4969 Wei Wei <wwei6@fb.com> [fx2trt] support sub
064602e666f86c110d931cd90a8536112a19b4ad Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.interpolate
9dfd0ee0cecb1975e3f53c44de237d67ca443ec5 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for unary_ops
39b9efad8d5d82463a2016d135c0cf277de1c3c6 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for unsqueeze
2bb17667d1dabc95391950426fc1f921eb3d0959 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.split
64dfb7b096686cb2fd33197340dc72f30d525456 Shirong Wu <shirong@fb.com> Group LN trt plugin
438f670e28df59b0734baa092a514fba3d75eb4f Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.avgpool
df0fe32dae4343827bd9b37b72daae761b02f228 Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops masked fill
44fe735d3493ea2d05a56b49093e4a23dd63a98e Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.pad
4f931acca706d8ce79045ceafef2ea0486609149 Wei Wei <wwei6@fb.com> [fx2trt] torch.max dynamic shape test
bf6f6cbe217d26a95ca9122574adf7de3966db9e Shreyansh Prajapati <shreyanshp@fb.com> Change the name of the test from full_reduce to dim_reduce
1c5680ed107d9206f3514eff4069a3f6c870ba8c Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.type_as
33e4c175a4f5fec78ac0b1c8eb262ca777c7aaba Shreyansh Prajapati <shreyanshp@fb.com> Test dynamic shape support for acc_ops.min
f37be34bcef9716080b8bafbd1f4ad72e412c44c Wei Wei <wwei6@fb.com> [fx2trt] plugin for grid_sample
57b5cc6a0f4839686ae360361a3a13b424794ee7 generatedunixname89002005367269 <generatedunixname89002005367269@fb.com> [AutoAccept][Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
eb741cc5e5a7babdc94e72d411670905f54da3e0 Shreyansh Prajapati <shreyanshp@fb.com> Updated the dynamic shape support for narrow op
521c36b96a14741ae89d7af6cbb658120bcec2ea Shreyansh Prajapati <shreyanshp@fb.com> Removing the comment for 4 dims dynamic shape support after analysis
e947343375967fe9efb0a16fdb9f63bff1449328 Shreyansh Prajapati <shreyanshp@fb.com> Updated the pad test for dynamic batch for analysis
3d64087014e91bc301a315eae43683b1aa2b66bc Oleg Khabinov <khabinov@fb.com> [trt_bc] Some improvements
dfd937a56fa01aca88a89b46176befdac4c202c4 Shreyansh Prajapati <shreyanshp@fb.com> Updated the test for as_strided op for analysis
11d76d0420dcaa4bb8890dcdeb86b6e534af831c Bangsheng Tang <bangsheng@fb.com> [gpu][infer] replace fx2trt_layer_norm with fbgemm layer_norm
932046ff6ea6dead114c0222b23ca3854690cffa Wei Wei <wwei6@fb.com> [fx2trt] bridge the dynamic batch and fixed shape
f911463393d8a671cfee6de6d1b5ef4d4f3991a6 Shirong Wu <shirong@fb.com> group swish LN plugin
ea65970f23dd7a468e5bc43240f2a9bfa07c9b3b Shirong Wu <shirong@fb.com> Create backend specific lower pass
38183e4a724e5514db2be7193cf4897b59759252 Alex Beloi <alexbeloi@fb.com> [fx] run acc_linter.lint in acc_tracer.trace
d5e749f9bef8157f33fa36ce59b7e1693fdff942 Wei Wei <wwei6@fb.com> "(uncommitted/untracked changes)"
292bba27ebe69c1d3e05f6a3130c810035508118 Wei Wei <wwei6@fb.com> [self] kefei test
9a26bab1bb87a3895613e6de4175537ac1ec1447 Wei Wei <wwei6@fb.com> [self] test kefei 2
6656b13fccf5ae24a167144896b015e6b8c9137d wwei6 <wwei6@fb.com> [self] modify mts benchmark
731e93868617ca9521f85a5cc37cdb47fb4ca0bc wwei6 <wwei6@fb.com> verify on benchmark
@frank-wei frank-wei changed the title Changes done internally at Facebook [FX] Changes done internally at Facebook Jul 25, 2022

@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/py/torch_tensorrt/csrc/tensorrt_classes.cpp b/tmp/changes.txt
index 5aeac3b..775c71d 100644
--- a/workspace/py/torch_tensorrt/csrc/tensorrt_classes.cpp
+++ b/tmp/changes.txt
@@ -225,11 +225,17 @@ core::CompileSpec CompileSpec::toInternalCompileSpec() {
  info.convert_info.engine_settings.num_avg_timing_iters = num_avg_timing_iters;
  TORCHTRT_CHECK(workspace_size >= 0, "workspace_size must be 0 or greater");
  info.convert_info.engine_settings.workspace_size = workspace_size;
-  TORCHTRT_CHECK(dla_sram_size >= 4096, "DLA managed SRAM size must be at least 4 KiB and must be a power of 2. This defaults to 1 MiB");
+  TORCHTRT_CHECK(
+      dla_sram_size >= 4096,
+      "DLA managed SRAM size must be at least 4 KiB and must be a power of 2. This defaults to 1 MiB");
  info.convert_info.engine_settings.dla_sram_size = dla_sram_size;
-  TORCHTRT_CHECK(dla_local_dram_size >= 4096, "DLA Local DRAM size must be at least 4 KiB and must be a power of 2. This defaults to 1 GiB");
+  TORCHTRT_CHECK(
+      dla_local_dram_size >= 4096,
+      "DLA Local DRAM size must be at least 4 KiB and must be a power of 2. This defaults to 1 GiB");
  info.convert_info.engine_settings.dla_local_dram_size = dla_local_dram_size;
-  TORCHTRT_CHECK(dla_global_dram_size >= 4096, "DLA Global DRAM size must be at least 4 KiB and must be a power of 2. This defaults to 512 MiB");
+  TORCHTRT_CHECK(
+      dla_global_dram_size >= 4096,
+      "DLA Global DRAM size must be at least 4 KiB and must be a power of 2. This defaults to 512 MiB");
  info.convert_info.engine_settings.dla_global_dram_size = dla_global_dram_size;
  return info;
}
diff --git a/workspace/py/torch_tensorrt/csrc/register_tensorrt_classes.cpp b/tmp/changes.txt
index 9165b21..ba2e168 100644
--- a/workspace/py/torch_tensorrt/csrc/register_tensorrt_classes.cpp
+++ b/tmp/changes.txt
@@ -65,7 +65,8 @@ void RegisterTRTCompileSpec() {
  ADD_FIELD_GET_SET_REGISTRATION(TRTCompileSpecTSRegistration, torch_tensorrt::pyapi::CompileSpec, workspace_size);
  ADD_FIELD_GET_SET_REGISTRATION(TRTCompileSpecTSRegistration, torch_tensorrt::pyapi::CompileSpec, dla_sram_size);
  ADD_FIELD_GET_SET_REGISTRATION(TRTCompileSpecTSRegistration, torch_tensorrt::pyapi::CompileSpec, dla_local_dram_size);
-  ADD_FIELD_GET_SET_REGISTRATION(TRTCompileSpecTSRegistration, torch_tensorrt::pyapi::CompileSpec, dla_global_dram_size);
+  ADD_FIELD_GET_SET_REGISTRATION(
+      TRTCompileSpecTSRegistration, torch_tensorrt::pyapi::CompileSpec, dla_global_dram_size);
  ADD_FIELD_GET_SET_REGISTRATION(
      TRTCompileSpecTSRegistration, torch_tensorrt::pyapi::CompileSpec, truncate_long_and_double);
}
diff --git a/workspace/core/conversion/conversionctx/ConversionCtx.cpp b/tmp/changes.txt
index a24a159..71159eb 100644
--- a/workspace/core/conversion/conversionctx/ConversionCtx.cpp
+++ b/tmp/changes.txt
@@ -107,7 +107,7 @@ ConversionCtx::ConversionCtx(BuilderSettings build_settings)
  }

  cfg->setAvgTimingIterations(settings.num_avg_timing_iters);
-  if (settings.workspace_size != 0){
+  if (settings.workspace_size != 0) {
    cfg->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, settings.workspace_size);
  }

@@ -124,13 +124,13 @@ ConversionCtx::ConversionCtx(BuilderSettings build_settings)
        settings.enabled_precisions.find(nvinfer1::DataType::kFLOAT) == settings.enabled_precisions.end(),
        "DLA supports only fp16 or int8 precision");
    cfg->setDLACore(settings.device.dla_core);
-    if (settings.dla_sram_size != 1048576){
+    if (settings.dla_sram_size != 1048576) {
      cfg->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kDLA_MANAGED_SRAM, settings.dla_sram_size);
    }
-    if (settings.dla_local_dram_size != 1073741824){
+    if (settings.dla_local_dram_size != 1073741824) {
      cfg->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kDLA_LOCAL_DRAM, settings.dla_local_dram_size);
    }
-    if (settings.dla_global_dram_size != 536870912){
+    if (settings.dla_global_dram_size != 536870912) {
      cfg->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kDLA_GLOBAL_DRAM, settings.dla_global_dram_size);
    }
  }
diff --git a/workspace/core/conversion/converters/converter_util.cpp b/tmp/changes.txt
index a6a2bbd..7452615 100644
--- a/workspace/core/conversion/converters/converter_util.cpp
+++ b/tmp/changes.txt
@@ -207,13 +207,13 @@ nvinfer1::ITensor* clamp(
    nvinfer1::ITensor* lower_bound,
    nvinfer1::ITensor* upper_bound,
    std::string const& name) {
-
  auto max_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kMAX, x, lower_bound, "max layer for " + name);
  TORCHTRT_CHECK(max_layer, "Unable to create max layer for clamp");
  LOG_DEBUG(ctx->logger, "Create " << max_layer->getName() << " for clamp");
  auto max_itensor = max_layer->getOutput(0);

-  auto min_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kMIN, max_itensor, upper_bound, "min layer for " + name);
+  auto min_layer =
+      add_elementwise(ctx, nvinfer1::ElementWiseOperation::kMIN, max_itensor, upper_bound, "min layer for " + name);
  TORCHTRT_CHECK(min_layer, "Unable to create min layer for clamp");
  LOG_DEBUG(ctx->logger, "Create " << min_layer->getName() << " for clamp");
  auto min_itensor = min_layer->getOutput(0);
@@ -227,13 +227,13 @@ nvinfer1::ITensor* clamp_to_input_dim(
    nvinfer1::ITensor* input_dim,
    int nbdims,
    std::string const& name) {
-
  auto zero = torch::zeros({nbdims}).to(torch::kI32);
  auto zero_itensor = tensor_to_const(ctx, zero);
  auto one = torch::ones({nbdims}).to(torch::kI32);
  auto one_itensor = tensor_to_const(ctx, one);

-  auto upper_bound_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kSUB, input_dim, one_itensor, "sub layer for " + name);
+  auto upper_bound_layer =
+      add_elementwise(ctx, nvinfer1::ElementWiseOperation::kSUB, input_dim, one_itensor, "sub layer for " + name);
  TORCHTRT_CHECK(upper_bound_layer, "Unable to create sub layer for clamp to inputDim");
  LOG_DEBUG(ctx->logger, "Create " << upper_bound_layer->getName() << " for clamp to inputDim");
  auto upper_bound = upper_bound_layer->getOutput(0);
@@ -243,7 +243,8 @@ nvinfer1::ITensor* clamp_to_input_dim(
  LOG_DEBUG(ctx->logger, "Create " << max_layer->getName() << " for clamp to inputDim");
  auto max_itensor = max_layer->getOutput(0);

-  auto min_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kMIN, max_itensor, upper_bound, "min layer for " + name);
+  auto min_layer =
+      add_elementwise(ctx, nvinfer1::ElementWiseOperation::kMIN, max_itensor, upper_bound, "min layer for " + name);
  TORCHTRT_CHECK(min_layer, "Unable to create min_layer for clamp to inputDim");
  LOG_DEBUG(ctx->logger, "Create " << min_layer->getName() << " for clamp to inputDim");
  auto min_itensor = min_layer->getOutput(0);
@@ -257,7 +258,6 @@ nvinfer1::ITensor* normalize_indices(
    nvinfer1::ITensor* indices,
    int nbdims,
    std::string const& name) {
-
  auto zero = torch::zeros({nbdims}).to(torch::kI32);
  auto neg = -torch::ones({nbdims}).to(torch::kI32);
  auto zero_itensor = tensor_to_const(ctx, zero);
@@ -307,17 +307,20 @@ nvinfer1::ITensor* get_slice_size(
  at::Tensor one_tensor = torch::ones({nbdims}).to(torch::kI32);
  auto one_itensor = tensor_to_const(ctx, one_tensor);

-  auto sub_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kSUB, end, start, "get_slice_size sub layer for " + name);
+  auto sub_layer =
+      add_elementwise(ctx, nvinfer1::ElementWiseOperation::kSUB, end, start, "get_slice_size sub layer for " + name);
  TORCHTRT_CHECK(sub_layer, "Unable to create sub layer in calculate_output_size");
  LOG_DEBUG(ctx->logger, "Create " << sub_layer->getName() << " for calculate_output_size");
  auto sub_itensor = sub_layer->getOutput(0);

-  auto div_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kDIV, sub_itensor, stride, "get_slice_size div layer for " + name);
+  auto div_layer = add_elementwise(
+      ctx, nvinfer1::ElementWiseOperation::kDIV, sub_itensor, stride, "get_slice_size div layer for " + name);
  TORCHTRT_CHECK(div_layer, "Unable to create div layer in calculate_output_size");
  LOG_DEBUG(ctx->logger, "Create " << div_layer->getName() << " for calculate_output_size");
  auto div_itensor = div_layer->getOutput(0);

-  auto add_layer = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kSUM, div_itensor, one_itensor, "get_slice_size sum layer for " + name);
+  auto add_layer = add_elementwise(
+      ctx, nvinfer1::ElementWiseOperation::kSUM, div_itensor, one_itensor, "get_slice_size sum layer for " + name);
  TORCHTRT_CHECK(add_layer, "Unable to create add layer in calculate_output_size");
  LOG_DEBUG(ctx->logger, "Create " << add_layer->getName() << " for calculate_output_size");
  auto size_itensor = add_layer->getOutput(0);
diff --git a/workspace/core/conversion/converters/impl/select.cpp b/tmp/changes.txt
index 3599ab9..d33f09a 100644
--- a/workspace/core/conversion/converters/impl/select.cpp
+++ b/tmp/changes.txt
@@ -103,121 +103,118 @@ nvinfer1::ITensor* roll(

auto select_registrations TORCHTRT_UNUSED =
    RegisterNodeConversionPatterns()
-        .pattern(
-            {"aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a))",
-             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
-               auto in = args[0].ITensorOrFreeze(ctx);
-               auto maxDim = static_cast<int64_t>(in->getDimensions().nbDims);
-               auto dim = args[1].unwrapToInt();
-               // Handle negative axis by refering to nbDims of input Tensor
-               dim = dim < 0 ? dim + maxDim : dim;
-               auto ind = (int32_t)args[2].unwrapToInt();
-               // Along the specified dimension, handle negative index by subtracting along length of dimension.
-               ind = ind < 0 ? ind + in->getDimensions().d[dim] : ind;
-               LOG_DEBUG("Gather input dimensions: " << in->getDimensions());
-               LOG_DEBUG("Dimension to select: " << dim);
-               LOG_DEBUG("Index: " << ind);
-
-               // index to access needs to be an at::Tensor
-               at::Tensor indices = torch::tensor({ind}).to(torch::kI32);
-               auto const_out = tensor_to_const(ctx, indices);
-
-               // IGatherLayer takes in input tensor, the indices, and the axis
-               // of input tensor to take indices from
-               auto gather_layer = ctx->net->addGather(*in, *const_out, dim);
-               TORCHTRT_CHECK(gather_layer, "Unable to create gather layer from node: " << *n);
-               auto out = gather_layer->getOutput(0);
+        .pattern({"aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a))",
+                  [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+                    auto in = args[0].ITensorOrFreeze(ctx);
+                    auto maxDim = static_cast<int64_t>(in->getDimensions().nbDims);
+                    auto dim = args[1].unwrapToInt();
+                    // Handle negative axis by refering to nbDims of input Tensor
+                    dim = dim < 0 ? dim + maxDim : dim;
+                    auto ind = (int32_t)args[2].unwrapToInt();
+                    // Along the specified dimension, handle negative index by subtracting along length of dimension.
+                    ind = ind < 0 ? ind + in->getDimensions().d[dim] : ind;
+                    LOG_DEBUG("Gather input dimensions: " << in->getDimensions());
+                    LOG_DEBUG("Dimension to select: " << dim);
+                    LOG_DEBUG("Index: " << ind);
+
+                    // index to access needs to be an at::Tensor
+                    at::Tensor indices = torch::tensor({ind}).to(torch::kI32);
+                    auto const_out = tensor_to_const(ctx, indices);
+
+                    // IGatherLayer takes in input tensor, the indices, and the axis
+                    // of input tensor to take indices from
+                    auto gather_layer = ctx->net->addGather(*in, *const_out, dim);
+                    TORCHTRT_CHECK(gather_layer, "Unable to create gather layer from node: " << *n);
+                    auto out = gather_layer->getOutput(0);
+
+                    LOG_DEBUG("Gather tensor shape: " << out->getDimensions());
+
+                    if (out->getDimensions().nbDims != 1) {
+                      // IShuffleLayer removes redundant dimensions
+                      auto shuffle_layer = ctx->net->addShuffle(*out);
+                      TORCHTRT_CHECK(shuffle_layer, "Unable to create shuffle layer from node: " << *n);
+                      shuffle_layer->setReshapeDimensions(util::squeezeDims(out->getDimensions(), dim));
+                      shuffle_layer->setName(util::node_info(n).c_str());
+                      out = shuffle_layer->getOutput(0);
+                    }
+
+                    out = ctx->AssociateValueAndTensor(n->outputs()[0], out);
+
+                    LOG_DEBUG("Output tensor shape: " << out->getDimensions());

-               LOG_DEBUG("Gather tensor shape: " << out->getDimensions());
+                    return true;
+                  }})
+        .pattern({"aten::narrow(Tensor(a) self, int dim, int start, int length) -> Tensor(a)",
+                  [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+                    auto in = args[0].ITensor();
+                    auto axis = args[1].unwrapToInt();
+                    auto start = (int32_t)args[2].unwrapToInt();
+                    auto length = (int32_t)args[3].unwrapToInt();

-               if (out->getDimensions().nbDims != 1) {
-                 // IShuffleLayer removes redundant dimensions
-                 auto shuffle_layer = ctx->net->addShuffle(*out);
-                 TORCHTRT_CHECK(shuffle_layer, "Unable to create shuffle layer from node: " << *n);
-                 shuffle_layer->setReshapeDimensions(util::squeezeDims(out->getDimensions(), dim));
-                 shuffle_layer->setName(util::node_info(n).c_str());
-                 out = shuffle_layer->getOutput(0);
-               }
+                    // index to access needs to be an at::Tensor
+                    at::Tensor indices = torch::arange(start, start + length, 1).to(torch::kI32);
+                    auto weights = Weights(ctx, indices);

-               out = ctx->AssociateValueAndTensor(n->outputs()[0], out);
+                    // IConstantLayer to convert indices from Weights to ITensor
+                    auto const_layer = ctx->net->addConstant(weights.shape, weights.data);
+                    TORCHTRT_CHECK(const_layer, "Unable to create constant layer from node: " << *n);
+                    auto const_out = const_layer->getOutput(0);

-               LOG_DEBUG("Output tensor shape: " << out->getDimensions());
+                    // IGatherLayer takes in input tensor, the indices, and the axis
+                    // of input tensor to take indices from
+                    auto gather_layer = ctx->net->addGather(*in, *const_out, axis);
+                    TORCHTRT_CHECK(gather_layer, "Unable to create gather layer from node: " << *n);
+                    auto gather_out = gather_layer->getOutput(0);

-               return true;
-             }})
-        .pattern(
-            {"aten::narrow(Tensor(a) self, int dim, int start, int length) -> Tensor(a)",
-             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
-               auto in = args[0].ITensor();
-               auto axis = args[1].unwrapToInt();
-               auto start = (int32_t)args[2].unwrapToInt();
-               auto length = (int32_t)args[3].unwrapToInt();
-
-               // index to access needs to be an at::Tensor
-               at::Tensor indices = torch::arange(start, start + length, 1).to(torch::kI32);
-               auto weights = Weights(ctx, indices);
-
-               // IConstantLayer to convert indices from Weights to ITensor
-               auto const_layer = ctx->net->addConstant(weights.shape, weights.data);
-               TORCHTRT_CHECK(const_layer, "Unable to create constant layer from node: " << *n);
-               auto const_out = const_layer->getOutput(0);
-
-               // IGatherLayer takes in input tensor, the indices, and the axis
-               // of input tensor to take indices from
-               auto gather_layer = ctx->net->addGather(*in, *const_out, axis);
-               TORCHTRT_CHECK(gather_layer, "Unable to create gather layer from node: " << *n);
-               auto gather_out = gather_layer->getOutput(0);
-
-               // IShuffleLayer removes redundant dimensions
-               auto shuffle_layer = ctx->net->addShuffle(*gather_out);
-               TORCHTRT_CHECK(shuffle_layer, "Unable to create shuffle layer from node: " << *n);
-               shuffle_layer->setReshapeDimensions(util::unpadDims(gather_out->getDimensions()));
-               shuffle_layer->setName(util::node_info(n).c_str());
-               auto shuffle_out = shuffle_layer->getOutput(0);
+                    // IShuffleLayer removes redundant dimensions
+                    auto shuffle_layer = ctx->net->addShuffle(*gather_out);
+                    TORCHTRT_CHECK(shuffle_layer, "Unable to create shuffle layer from node: " << *n);
+                    shuffle_layer->setReshapeDimensions(util::unpadDims(gather_out->getDimensions()));
+                    shuffle_layer->setName(util::node_info(n).c_str());
+                    auto shuffle_out = shuffle_layer->getOutput(0);

-               auto out = ctx->AssociateValueAndTensor(n->outputs()[0], shuffle_out);
+                    auto out = ctx->AssociateValueAndTensor(n->outputs()[0], shuffle_out);

-               LOG_DEBUG("Output tensor shape: " << out->getDimensions());
+                    LOG_DEBUG("Output tensor shape: " << out->getDimensions());

-               return true;
-             }})
-        .pattern(
-            {"aten::narrow.Tensor(Tensor(a) self, int dim, Tensor start, int length) -> Tensor(a)",
-             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
-               auto in = args[0].ITensor();
-               auto axis = args[1].unwrapToInt();
-               torch::Tensor start = args[2].IValue()->toTensor().to(torch::kI32);
-               int32_t startIdx = start.item().to<int32_t>();
-               auto length = (int32_t)args[3].unwrapToInt();
-
-               // index to access needs to be an at::Tensor
-               at::Tensor indices = torch::arange(startIdx, startIdx + length, 1).to(torch::kI32);
-               auto weights = Weights(ctx, indices);
-
-               // IConstantLayer to convert indices from Weights to ITensor
-               auto const_layer = ctx->net->addConstant(weights.shape, weights.data);
-               TORCHTRT_CHECK(const_layer, "Unable to create constant layer from node: " << *n);
-               auto const_out = const_layer->getOutput(0);
-
-               // IGatherLayer takes in input tensor, the indices, and the axis
-               // of input tensor to take indices from
-               auto gather_layer = ctx->net->addGather(*in, *const_out, axis);
-               TORCHTRT_CHECK(gather_layer, "Unable to create gather layer from node: " << *n);
-               auto gather_out = gather_layer->getOutput(0);
-
-               // IShuffleLayer removes redundant dimensions
-               auto shuffle_layer = ctx->net->addShuffle(*gather_out);
-               TORCHTRT_CHECK(shuffle_layer, "Unable to create shuffle layer from node: " << *n);
-               shuffle_layer->setReshapeDimensions(util::unpadDims(gather_out->getDimensions()));
-               shuffle_layer->setName(util::node_info(n).c_str());
-               auto shuffle_out = shuffle_layer->getOutput(0);
-
-               auto out = ctx->AssociateValueAndTensor(n->outputs()[0], shuffle_out);
-
-               LOG_DEBUG("Output tensor shape: " << out->getDimensions());
+                    return true;
+                  }})
+        .pattern({"aten::narrow.Tensor(Tensor(a) self, int dim, Tensor start, int length) -> Tensor(a)",
+                  [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+                    auto in = args[0].ITensor();
+                    auto axis = args[1].unwrapToInt();
+                    torch::Tensor start = args[2].IValue()->toTensor().to(torch::kI32);
+                    int32_t startIdx = start.item().to<int32_t>();
+                    auto length = (int32_t)args[3].unwrapToInt();
+
+                    // index to access needs to be an at::Tensor
+                    at::Tensor indices = torch::arange(startIdx, startIdx + length, 1).to(torch::kI32);
+                    auto weights = Weights(ctx, indices);
+
+                    // IConstantLayer to convert indices from Weights to ITensor
+                    auto const_layer = ctx->net->addConstant(weights.shape, weights.data);
+                    TORCHTRT_CHECK(const_layer, "Unable to create constant layer from node: " << *n);
+                    auto const_out = const_layer->getOutput(0);
+
+                    // IGatherLayer takes in input tensor, the indices, and the axis
+                    // of input tensor to take indices from
+                    auto gather_layer = ctx->net->addGather(*in, *const_out, axis);
+                    TORCHTRT_CHECK(gather_layer, "Unable to create gather layer from node: " << *n);
+                    auto gather_out = gather_layer->getOutput(0);
+
+                    // IShuffleLayer removes redundant dimensions
+                    auto shuffle_layer = ctx->net->addShuffle(*gather_out);
+                    TORCHTRT_CHECK(shuffle_layer, "Unable to create shuffle layer from node: " << *n);
+                    shuffle_layer->setReshapeDimensions(util::unpadDims(gather_out->getDimensions()));
+                    shuffle_layer->setName(util::node_info(n).c_str());
+                    auto shuffle_out = shuffle_layer->getOutput(0);
+
+                    auto out = ctx->AssociateValueAndTensor(n->outputs()[0], shuffle_out);
+
+                    LOG_DEBUG("Output tensor shape: " << out->getDimensions());

-               return true;
-             }})
+                    return true;
+                  }})
        .pattern(
            {"aten::embedding(Tensor weight, Tensor indices, int padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> (Tensor)",
             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
@@ -239,30 +236,29 @@ auto select_registrations TORCHTRT_UNUSED =

               return true;
             }})
-        .pattern(
-            {"aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor)",
-             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
-               auto in = args[0].ITensor();
-               auto shifts = args[1].unwrapToIntList().vec();
-               auto dims = args[2].unwrapToIntList().vec();
-
-               TORCHTRT_CHECK(dims.size() == shifts.size(), "dims.size() should be equal to shifts.size()");
-               if (ctx->input_is_dynamic) {
-                 TORCHTRT_THROW_ERROR("aten::roll is currently not support in dynamic input shape compilation");
-               } else {
-                 auto in_shape = util::toVec(in->getDimensions());
-                 for (size_t i = 0; i < dims.size(); i++) {
-                   auto dim = dims[i] < 0 ? (in_shape.size() + dims[i]) : dims[i];
-                   TORCHTRT_CHECK(dim < in_shape.size(), "Dimension out of range");
-                   in = roll(ctx, in, shifts[i], dim, in_shape);
-                 }
-                 auto out = ctx->AssociateValueAndTensor(n->outputs()[0], in);
-
-                 LOG_DEBUG("Output tensor shape: " << out->getDimensions());
-
-                 return true;
-               }
-             }})
+        .pattern({"aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor)",
+                  [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+                    auto in = args[0].ITensor();
+                    auto shifts = args[1].unwrapToIntList().vec();
+                    auto dims = args[2].unwrapToIntList().vec();
+
+                    TORCHTRT_CHECK(dims.size() == shifts.size(), "dims.size() should be equal to shifts.size()");
+                    if (ctx->input_is_dynamic) {
+                      TORCHTRT_THROW_ERROR("aten::roll is currently not support in dynamic input shape compilation");
+                    } else {
+                      auto in_shape = util::toVec(in->getDimensions());
+                      for (size_t i = 0; i < dims.size(); i++) {
+                        auto dim = dims[i] < 0 ? (in_shape.size() + dims[i]) : dims[i];
+                        TORCHTRT_CHECK(dim < in_shape.size(), "Dimension out of range");
+                        in = roll(ctx, in, shifts[i], dim, in_shape);
+                      }
+                      auto out = ctx->AssociateValueAndTensor(n->outputs()[0], in);
+
+                      LOG_DEBUG("Output tensor shape: " << out->getDimensions());
+
+                      return true;
+                    }
+                  }})
        .pattern(
            {"aten::index.Tensor(Tensor self, Tensor?[] indices) -> (Tensor)",
             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
@@ -319,7 +315,8 @@ auto select_registrations TORCHTRT_UNUSED =
               int startIdx = 0;
               auto startIdxIVal = args[2].IValue();
               if (!startIdxIVal->isNone()) {
-                 startIdx = startIdxIVal->toInt() > std::numeric_limits<int32_t>::max() ? maxDim : startIdxIVal->toInt();
+                 startIdx =
+                     startIdxIVal->toInt() > std::numeric_limits<int32_t>::max() ? maxDim : startIdxIVal->toInt();
                 startIdx = maxDim == -1 ? startIdx : std::min(startIdx, maxDim);
               }
               // Handle case when given tensor index is negative
@@ -331,7 +328,8 @@ auto select_registrations TORCHTRT_UNUSED =
               int endIdx = maxDim; // -1 for dynamic shape
               auto endIdxIVal = args[3].IValue();
               if (!endIdxIVal->isNone()) {
-                 int truncate_value = endIdxIVal->toInt() > std::numeric_limits<int32_t>::max() ? maxDim : endIdxIVal->toInt();
+                 int truncate_value =
+                     endIdxIVal->toInt() > std::numeric_limits<int32_t>::max() ? maxDim : endIdxIVal->toInt();
                 endIdx = maxDim == -1 ? truncate_value : std::min(truncate_value, maxDim);
               }
               if (maxDim > 0) {
@@ -385,7 +383,8 @@ auto select_registrations TORCHTRT_UNUSED =
                 // update start and end
                 nvinfer1::ITensor* out_start;
                 nvinfer1::ITensor* out_end;
-                 auto start_end = normalize_start_and_end(ctx, ishape_tensor, start_itensor, end_itensor, nbdims, node_name);
+                 auto start_end =
+                     normalize_start_and_end(ctx, ishape_tensor, start_itensor, end_itensor, nbdims, node_name);
                 out_start = start_end[0];
                 out_end = start_end[1];

@@ -397,7 +396,7 @@ auto select_registrations TORCHTRT_UNUSED =
                 slice_layer->setInput(2, *size_itensor); // size, must be set if input is dynamic
               }
               auto slice_out = slice_layer->getOutput(0);
-               
+
               auto out = ctx->AssociateValueAndTensor(n->outputs()[0], slice_out);
               LOG_DEBUG("Slice layer output shape: " << out->getDimensions());

diff --git a/workspace/core/conversion/converters/converter_util.h b/tmp/changes.txt
index cdf2ee5..b155499 100644
--- a/workspace/core/conversion/converters/converter_util.h
+++ b/tmp/changes.txt
@@ -1,8 +1,8 @@
#pragma once

+#include <limits>
#include <map>
#include <string>
-#include <limits>

#include "core/conversion/conversionctx/ConversionCtx.h"
#include "core/conversion/converters/Weights.h"
diff --git a/workspace/tests/core/conversion/converters/test_cast.cpp b/tmp/changes.txt
index 092cdb3..d26c7a0 100644
--- a/workspace/tests/core/conversion/converters/test_cast.cpp
+++ b/tmp/changes.txt
@@ -135,7 +135,6 @@ TEST(Converters, ATenBoolToINT32TensorConvertsCorrectly) {
  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt, 2e-6));
}

-
TEST(Converters, ATenToSingleConvertsCorrectly) {
  const auto graph = R"IR(
    graph(%y.1 : Tensor):
@@ -164,7 +163,6 @@ TEST(Converters, ATenToSingleConvertsCorrectly) {
  ASSERT_TRUE(torch_tensorrt::tests::util::almostEqual(jit_results[0], trt, 2e-6));
}

-
TEST(Converters, ATenTypeAsConvertsCorrectly) {
  const auto graph = R"IR(
      graph(%0 : Tensor,
diff --git a/workspace/cpp/bin/torchtrtc/main.cpp b/tmp/changes.txt
index 6c207d7..51ec2c5 100644
--- a/workspace/cpp/bin/torchtrtc/main.cpp
+++ b/tmp/changes.txt
@@ -117,8 +117,7 @@ int main(int argc, char** argv) {
      parser, "num_iters", "Number of averaging timing iterations used to select kernels", {"num-avg-timing-iters"});
  args::ValueFlag<uint64_t> workspace_size(
      parser, "workspace_size", "Maximum size of workspace given to TensorRT", {"workspace-size"});
-  args::ValueFlag<uint64_t> dla_sram_size(
-      parser, "dla_sram_size", "DLA managed SRAM size", {"dla-sram-size"});
+  args::ValueFlag<uint64_t> dla_sram_size(parser, "dla_sram_size", "DLA managed SRAM size", {"dla-sram-size"});
  args::ValueFlag<uint64_t> dla_local_dram_size(
      parser, "dla_local_dram_size", "DLA Local DRAM size", {"dla-local-dram-size"});
  args::ValueFlag<uint64_t> dla_global_dram_size(
ERROR: Some files do not conform to style guidelines

@yinghai left a comment

LGTM. @narendasan the C++ lint failure seems to be quite noisy. Is there a clang-format command that we can use to fix it?
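A minimal sketch of the kind of clang-format invocation being asked about, assuming a `.clang-format` config at the repository root (the repo's actual lint script, if any, is not shown in this PR). The runnable part formats a throwaway file so it degrades to a no-op on machines without `clang-format` installed:

```shell
# Sketch: reformat a C++ file in place with clang-format, guarding against
# the binary being absent so the script never hard-fails.
workdir="$(mktemp -d)"
printf 'int  main( ){  return  0; }\n' > "$workdir/demo.cpp"

if command -v clang-format >/dev/null 2>&1; then
  # -i edits in place; --style=file would pick up the repo's .clang-format.
  clang-format -i --style=LLVM "$workdir/demo.cpp"
fi

cat "$workdir/demo.cpp"

# Against the repo itself, one would typically run something like:
#   git ls-files -- '*.cpp' '*.h' | xargs clang-format -i
```

The `git ls-files | xargs clang-format -i` pattern at the end is the usual repo-wide fix for style-only CI failures like the one reported above; the exact file globs to pass are an assumption here.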

@frank-wei frank-wei merged commit 5cb5947 into master Jul 25, 2022
@frank-wei frank-wei deleted the fb-sync-wwei6 branch July 27, 2022 16:38