
feat: Optimize hub.py download #1022

Merged
merged 10 commits on May 20, 2022

Conversation

andi4191
Contributor

@andi4191 andi4191 commented May 4, 2022

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>

Description

Optimizes hub.py to download the model repository only when required: models are downloaded and deserialized only when the model_snapshot file is missing (first run) OR the recorded version differs from the current one.

This PR reduces the turnaround time of CI pipeline jobs.
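The skip/re-download decision described above can be sketched as follows. This is a minimal sketch, not the PR's exact code; the manifest filename and JSON layout (a top-level "version" key) are assumptions for illustration, inferred from the diffs quoted later in this thread:

```python
import json
import os


def should_redownload(manifest_file: str, current_torch_version: str) -> bool:
    """Return True when models must be (re)downloaded: the manifest is
    missing or empty (first run), or it was written under a different
    torch version than the one currently installed."""
    if not os.path.exists(manifest_file) or os.stat(manifest_file).st_size == 0:
        return True
    with open(manifest_file, "r") as f:
        manifest = json.load(f)
    return manifest.get("version") != current_torch_version
```

On a CI runner with a warm cache and an unchanged torch version this returns False, so the expensive download and TorchScript serialization steps are skipped entirely.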

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

@andi4191 andi4191 requested a review from narendasan May 4, 2022 03:37
@andi4191 andi4191 self-assigned this May 4, 2022
@github-actions github-actions bot added the "component: tests" label May 4, 2022
.gitignore (outdated review thread, resolved)

@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
    /// Op marks a Tensor to be conveted from an Torch Tensor
    /// to a TRT constant Tensor
-    Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+    Operator(
+        "trt::const(Tensor val) -> Tensor",
+        [](Stack& stack) { /*noop*/ },
+        aliasAnalysisFromSchema()),
});

} // namespace jit
ERROR: Some files do not conform to style guidelines



@github-actions github-actions bot left a comment


Code conforms to Python style guidelines


.gitignore (outdated review thread, resolved)
snapshot_file = 'model_snapshot.txt'
skip_download = False

# If model repository already setup
Collaborator


I feel like this should be a list of models that have been downloaded, and we check whether the model we are downloading is in that list, because we can always add more. This would also require someone to delete the file to re-download.
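The suggested list-based check could look roughly like this. A sketch only, assuming a manifest dict that maps each model name to the files serialized for it (the mapping shape is an assumption, not the PR's final code):

```python
import os


def needs_download(name: str, manifest: dict) -> bool:
    """A model is skipped only if the manifest records it AND every
    serialized file it points to still exists on disk."""
    files = manifest.get(name, [])
    return not files or not all(os.path.exists(f) for f in files)
```

Deleting the manifest file (or any serialized model it references) forces a re-download of just the affected entries, while new models can be appended to the manifest at any time.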

Contributor Author


Updated the code.

tests/modules/hub.py (review thread, resolved)
@andi4191
Contributor Author

andi4191 commented May 5, 2022

@narendasan: I refactored the model download script and added tracking of downloaded files.


@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/hub.py	(original)
+++ /workspace/tests/modules/hub.py	(reformatted)
@@ -88,6 +88,7 @@
    def forward(self, x):
        return F.adaptive_avg_pool2d(x, (5, 5))

+
# Sample Nested Module (for module-level fallback testing)
class ModuleFallbackSub(nn.Module):

@@ -98,6 +99,7 @@

    def forward(self, x):
        return self.relu(self.conv(x))
+

class ModuleFallbackMain(nn.Module):

@@ -110,6 +112,7 @@
    def forward(self, x):
        return self.relu(self.conv(self.layer1(x)))

+
# Sample Looping Modules (for loop fallback testing)
class LoopFallbackEval(nn.Module):

@@ -122,6 +125,7 @@
            add_list = torch.cat((add_list, torch.tensor([x.shape[1]]).to(x.device)), 0)
        return x + add_list

+
class LoopFallbackNoEval(nn.Module):

    def __init__(self):
@@ -131,6 +135,7 @@
        for _ in range(x.shape[1]):
            x = x + torch.ones_like(x)
        return x
+

# Sample Conditional Model (for testing partitioning and fallback in conditionals)
class FallbackIf(torch.nn.Module):
@@ -156,21 +161,23 @@
        x = self.conv1(x)
        return x

+
class ModelManifest:
+
    def __init__(self):
        self.version_matches = False
        if not os.path.exists(MANIFEST_FILE) or os.stat(MANIFEST_FILE).st_size == 0:
            self.manifest = {}
-            self.manifest.update({'version' : torch_version})
+            self.manifest.update({'version': torch_version})
        else:
            with open(MANIFEST_FILE, 'r') as f:
                self.manifest = json.load(f)
                if self.manifest['version'] == torch_version:
                    self.version_matches = True
                else:
-                    print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(torch_version, self.manifest['version']))
+                    print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(
+                        torch_version, self.manifest['version']))
                    self.manifest["version"] = torch_version
-        

    def download(self, models):
        if self.version_matches:
@@ -194,13 +201,13 @@
            record = json.dumps(manifest_record)
            f.write(record)
            f.truncate()
-    
+
    def get_manifest(self):
        return self.manifest
-    
+
    def if_version_matches(self):
        return self.version_matches
-    
+
    def get(self, n, m):
        print("Downloading {}".format(n))
        m["model"] = m["model"].eval().cuda()
@@ -214,8 +221,9 @@
        if m["path"] == "both" or m["path"] == "script":
            script_model = torch.jit.script(m["model"])
            torch.jit.save(script_model, script_filename)
-        
-        self.manifest.update({n : [traced_filename, script_filename]})
+
+        self.manifest.update({n: [traced_filename, script_filename]})
+

def export_model(model, model_name, version_matches):
    if version_matches and os.path.exists(model_name):
@@ -225,7 +233,7 @@
        torch.jit.save(model, model_name)


-def generate_custom_models(manifest, matches = False):
+def generate_custom_models(manifest, matches=False):
    # Pool
    model = Pool().eval().cuda()
    x = torch.ones([1, 3, 10, 10]).cuda()
@@ -252,7 +260,8 @@
    loop_fallback_no_eval_script_model = torch.jit.script(loop_fallback_no_eval_model)
    scripted_loop_fallback_no_eval_name = "loop_fallback_no_eval_scripted.jit.pt"
    export_model(loop_fallback_no_eval_script_model, scripted_loop_fallback_no_eval_name, matches)
-    manifest.update({"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})
+    manifest.update(
+        {"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})

    # Conditional
    conditional_model = FallbackIf().eval().cuda()
@@ -289,7 +298,7 @@
    traced_bert_uncased_name = "bert_case_uncased_traced.jit.pt"
    traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
    export_model(traced_model, traced_bert_uncased_name, matches)
-    manifest.update({"torchtrt_bert_case_uncased" : [traced_bert_uncased_name]})
+    manifest.update({"torchtrt_bert_case_uncased": [traced_bert_uncased_name]})


manifest = ModelManifest()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
    /// Op marks a Tensor to be conveted from an Torch Tensor
    /// to a TRT constant Tensor
-    Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+    Operator(
+        "trt::const(Tensor val) -> Tensor",
+        [](Stack& stack) { /*noop*/ },
+        aliasAnalysisFromSchema()),
});

} // namespace jit
ERROR: Some files do not conform to style guidelines

tests/modules/hub.py (outdated review thread, resolved)
@github-actions github-actions bot added the "documentation" label May 5, 2022

@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
    /// Op marks a Tensor to be conveted from an Torch Tensor
    /// to a TRT constant Tensor
-    Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+    Operator(
+        "trt::const(Tensor val) -> Tensor",
+        [](Stack& stack) { /*noop*/ },
+        aliasAnalysisFromSchema()),
});

} // namespace jit
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/hub.py	(original)
+++ /workspace/tests/modules/hub.py	(reformatted)
@@ -88,6 +88,7 @@
    def forward(self, x):
        return F.adaptive_avg_pool2d(x, (5, 5))

+
# Sample Nested Module (for module-level fallback testing)
class ModuleFallbackSub(nn.Module):

@@ -98,6 +99,7 @@

    def forward(self, x):
        return self.relu(self.conv(x))
+

class ModuleFallbackMain(nn.Module):

@@ -110,6 +112,7 @@
    def forward(self, x):
        return self.relu(self.conv(self.layer1(x)))

+
# Sample Looping Modules (for loop fallback testing)
class LoopFallbackEval(nn.Module):

@@ -122,6 +125,7 @@
            add_list = torch.cat((add_list, torch.tensor([x.shape[1]]).to(x.device)), 0)
        return x + add_list

+
class LoopFallbackNoEval(nn.Module):

    def __init__(self):
@@ -131,6 +135,7 @@
        for _ in range(x.shape[1]):
            x = x + torch.ones_like(x)
        return x
+

# Sample Conditional Model (for testing partitioning and fallback in conditionals)
class FallbackIf(torch.nn.Module):
@@ -156,21 +161,23 @@
        x = self.conv1(x)
        return x

+
class ModelManifest:
+
    def __init__(self):
        self.version_matches = False
        if not os.path.exists(MANIFEST_FILE) or os.stat(MANIFEST_FILE).st_size == 0:
            self.manifest = {}
-            self.manifest.update({'version' : torch_version})
+            self.manifest.update({'version': torch_version})
        else:
            with open(MANIFEST_FILE, 'r') as f:
                self.manifest = json.load(f)
                if self.manifest['version'] == torch_version:
                    self.version_matches = True
                else:
-                    print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(torch_version, self.manifest['version']))
+                    print("Torch version: {} mismatches with manifest's version: {}. Re-downloading all models".format(
+                        torch_version, self.manifest['version']))
                    self.manifest["version"] = torch_version
-        

    def download(self, models):
        if self.version_matches:
@@ -194,13 +201,13 @@
            record = json.dumps(manifest_record)
            f.write(record)
            f.truncate()
-    
+
    def get_manifest(self):
        return self.manifest
-    
+
    def if_version_matches(self):
        return self.version_matches
-    
+
    def get(self, n, m):
        print("Downloading {}".format(n))
        m["model"] = m["model"].eval().cuda()
@@ -214,10 +221,11 @@
        if m["path"] == "both" or m["path"] == "script":
            script_model = torch.jit.script(m["model"])
            torch.jit.save(script_model, script_filename)
-        
-        self.manifest.update({n : [traced_filename, script_filename]})
-
-def generate_custom_models(manifest, version_matches = False):
+
+        self.manifest.update({n: [traced_filename, script_filename]})
+
+
+def generate_custom_models(manifest, version_matches=False):
    # Pool
    traced_pool_name = "pooling_traced.jit.pt"
    if not (version_matches and os.path.exists(traced_pool_name)):
@@ -248,7 +256,8 @@
        loop_fallback_no_eval_model = LoopFallbackNoEval().eval().cuda()
        loop_fallback_no_eval_script_model = torch.jit.script(loop_fallback_no_eval_model)
        torch.jit.save(loop_fallback_no_eval_script_model, scripted_loop_fallback_no_eval_name)
-    manifest.update({"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})
+    manifest.update(
+        {"torchtrt_loop_fallback_no_eval": [scripted_loop_fallback_name, scripted_loop_fallback_no_eval_name]})

    # Conditional
    scripted_conditional_name = "conditional_scripted.jit.pt"
@@ -287,7 +296,7 @@

        traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
        torch.jit.save(traced_model, traced_bert_uncased_name)
-    manifest.update({"torchtrt_bert_case_uncased" : [traced_bert_uncased_name]})
+    manifest.update({"torchtrt_bert_case_uncased": [traced_bert_uncased_name]})


manifest = ModelManifest()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
ERROR: Some files do not conform to style guidelines

@andi4191 andi4191 force-pushed the anuragd/optimize_model_hub branch from 08b9853 to 6d149bc on May 6, 2022 17:23

@github-actions github-actions bot left a comment


Code conforms to Python style guidelines


@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/core/lowering/register_trt_placeholder_ops.cpp b/tmp/changes.txt
index 5ba8171..17d7d3f 100644
--- a/workspace/core/lowering/register_trt_placeholder_ops.cpp
+++ b/tmp/changes.txt
@@ -10,7 +10,10 @@ c10::AliasAnalysisKind aliasAnalysisFromSchema() {
RegisterOperators trt_placeholder_ops_reg({
    /// Op marks a Tensor to be conveted from an Torch Tensor
    /// to a TRT constant Tensor
-    Operator("trt::const(Tensor val) -> Tensor", [](Stack& stack) { /*noop*/ }, aliasAnalysisFromSchema()),
+    Operator(
+        "trt::const(Tensor val) -> Tensor",
+        [](Stack& stack) { /*noop*/ },
+        aliasAnalysisFromSchema()),
});

} // namespace jit
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot added the "component: core" and "component: lowering" labels May 6, 2022

@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


Code conforms to Python style guidelines

@andi4191
Contributor Author

andi4191 commented May 6, 2022

/blossom-ci

@andi4191
Contributor Author

andi4191 commented May 6, 2022

/blossom-ci

@github-actions

github-actions bot commented May 6, 2022

👎 Promotion blocked, new vulnerability found

Vulnerability report

  • The FreeType Project, CVE-2020-15999 (MEDIUM): Heap buffer overflow in FreeType in Google Chrome prior to 86.0.4240.111 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page.
  • The FreeType Project, CVE-2022-27404 (CRITICAL): FreeType commit 1e2eb65048f75c64b68708efed6ce904c31f3b2f was discovered to contain a heap buffer overflow via the function sfnt_init_face.
  • The FreeType Project, CVE-2022-27405 (HIGH): FreeType commit 53dfdcd8198d2b3201a23c4bad9190519ba918db was discovered to contain a segmentation violation via the function FNT_Size_Request.
  • The FreeType Project, CVE-2022-27406 (HIGH): FreeType commit 22a0cccb4d9d002f33c1ba7a4b36812c7d4f46b5 was discovered to contain a segmentation violation via the function FT_Request_Size.

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
…file doesn't exists

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
@andi4191 andi4191 force-pushed the anuragd/optimize_model_hub branch from eb6cf1a to 2e1764a on May 10, 2022 02:15

@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/custom_models.py	(original)
+++ /workspace/tests/modules/custom_models.py	(reformatted)
@@ -2,6 +2,7 @@
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BertConfig
import torch.nn.functional as F
+

# Sample Pool Model (for testing plugin serialization)
class Pool(nn.Module):
@@ -84,5 +85,3 @@
            x = self.log_sig(x)
        x = self.conv1(x)
        return x
-
-   
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/modules/custom_models.py
--- /workspace/tests/modules/hub.py	(original)
+++ /workspace/tests/modules/hub.py	(reformatted)
@@ -126,35 +126,36 @@
        name = m["model"]

        config = BertConfig(
-                        vocab_size_or_config_json_file=32000,
-                        hidden_size=768,
-                        num_hidden_layers=12,
-                        num_attention_heads=12,
-                        intermediate_size=3072,
-                        torchscript=True,
-                        )
+            vocab_size_or_config_json_file=32000,
+            hidden_size=768,
+            num_hidden_layers=12,
+            num_attention_heads=12,
+            intermediate_size=3072,
+            torchscript=True,
+        )
        m["model"] = BertModel(config)
        m["model"].eval()
        m["model"] = BertModel.from_pretrained(name, torchscript=True)
        traced_model = torch.jit.trace(m["model"], x)
        torch.jit.save(traced_model, traced_filename)
-        manifest.update({n : [traced_filename]})
+        manifest.update({n: [traced_filename]})
    else:
        m["model"] = m["model"].eval().cuda()
        if m["path"] == "both" or m["path"] == "trace":
            trace_model = torch.jit.trace(m["model"], [x])
            torch.jit.save(trace_model, traced_filename)
-            manifest.update({n : [traced_filename]})
+            manifest.update({n: [traced_filename]})
        if m["path"] == "both" or m["path"] == "script":
            script_model = torch.jit.script(m["model"])
            torch.jit.save(script_model, script_filename)
            if n in manifest.keys():
                files = list(manifest[n]) if type(manifest[n]) != list else manifest[n]
                files.append(script_filename)
-                manifest.update({n : files})
+                manifest.update({n: files})
            else:
                manifest.update({n: [script_filename]})
    return manifest
+

def download_models(version_matches, manifest):
    # Download all models if torch version is different than model version
@@ -169,8 +170,8 @@
            if (m["path"] == "both" and os.path.exists(scripted_filename) and os.path.exists(traced_filename)) or \
               (m["path"] == "script" and os.path.exists(scripted_filename)) or \
               (m["path"] == "trace" and os.path.exists(traced_filename)):
-                   print("Skipping {} ".format(n))
-                   continue
+                print("Skipping {} ".format(n))
+                continue
            manifest = get(n, m, manifest)


@@ -208,4 +209,5 @@
        f.write(record)
        f.truncate()

+
main()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


Code conforms to C++ style guidelines

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>

@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/hub.py	(original)
+++ /workspace/tests/modules/hub.py	(reformatted)
@@ -111,23 +111,24 @@
    if n == "bert-base-uncased":
        traced_model = m["model"]
        torch.jit.save(traced_model, traced_filename)
-        manifest.update({n : [traced_filename]})
+        manifest.update({n: [traced_filename]})
    else:
        m["model"] = m["model"].eval().cuda()
        if m["path"] == "both" or m["path"] == "trace":
            trace_model = torch.jit.trace(m["model"], [x])
            torch.jit.save(trace_model, traced_filename)
-            manifest.update({n : [traced_filename]})
+            manifest.update({n: [traced_filename]})
        if m["path"] == "both" or m["path"] == "script":
            script_model = torch.jit.script(m["model"])
            torch.jit.save(script_model, script_filename)
            if n in manifest.keys():
                files = list(manifest[n]) if type(manifest[n]) != list else manifest[n]
                files.append(script_filename)
-                manifest.update({n : files})
+                manifest.update({n: files})
            else:
                manifest.update({n: [script_filename]})
    return manifest
+

def download_models(version_matches, manifest):
    # Download all models if torch version is different than model version
@@ -142,8 +143,8 @@
            if (m["path"] == "both" and os.path.exists(scripted_filename) and os.path.exists(traced_filename)) or \
               (m["path"] == "script" and os.path.exists(scripted_filename)) or \
               (m["path"] == "trace" and os.path.exists(traced_filename)):
-                   print("Skipping {} ".format(n))
-                   continue
+                print("Skipping {} ".format(n))
+                continue
            manifest = get(n, m, manifest)


@@ -184,4 +185,5 @@
        f.write(record)
        f.truncate()

+
main()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
--- /workspace/tests/modules/custom_models.py	(original)
+++ /workspace/tests/modules/custom_models.py	(reformatted)
@@ -2,6 +2,7 @@
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BertConfig
import torch.nn.functional as F
+

# Sample Pool Model (for testing plugin serialization)
class Pool(nn.Module):
@@ -98,16 +99,15 @@
    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensors = torch.tensor([segments_ids])
    config = BertConfig(
-                vocab_size_or_config_json_file=32000,
-                hidden_size=768,
-                num_hidden_layers=12,
-                num_attention_heads=12,
-                intermediate_size=3072,
-                torchscript=True,
-                )
+        vocab_size_or_config_json_file=32000,
+        hidden_size=768,
+        num_hidden_layers=12,
+        num_attention_heads=12,
+        intermediate_size=3072,
+        torchscript=True,
+    )
    model = BertModel(config)
    model.eval()
    model = BertModel.from_pretrained(model_name, torchscript=True)
    traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
    return traced_model
-
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/modules/custom_models.py
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@andi4191 andi4191 requested a review from narendasan May 11, 2022 19:52
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
--- /workspace/tests/modules/custom_models.py	(original)
+++ /workspace/tests/modules/custom_models.py	(reformatted)
@@ -2,6 +2,7 @@
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BertConfig
import torch.nn.functional as F
+

# Sample Pool Model (for testing plugin serialization)
class Pool(nn.Module):
@@ -98,16 +99,15 @@
    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensors = torch.tensor([segments_ids])
    config = BertConfig(
-                vocab_size_or_config_json_file=32000,
-                hidden_size=768,
-                num_hidden_layers=12,
-                num_attention_heads=12,
-                intermediate_size=3072,
-                torchscript=True,
-                )
+        vocab_size_or_config_json_file=32000,
+        hidden_size=768,
+        num_hidden_layers=12,
+        num_attention_heads=12,
+        intermediate_size=3072,
+        torchscript=True,
+    )
    model = BertModel(config)
    model.eval()
    model = BertModel.from_pretrained(model_name, torchscript=True)
    traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
    return traced_model
-
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/modules/custom_models.py
--- /workspace/tests/modules/hub.py	(original)
+++ /workspace/tests/modules/hub.py	(reformatted)
@@ -111,23 +111,24 @@
    if n == "bert-base-uncased":
        traced_model = m["model"]
        torch.jit.save(traced_model, traced_filename)
-        manifest.update({n : [traced_filename]})
+        manifest.update({n: [traced_filename]})
    else:
        m["model"] = m["model"].eval().cuda()
        if m["path"] == "both" or m["path"] == "trace":
            trace_model = torch.jit.trace(m["model"], [x])
            torch.jit.save(trace_model, traced_filename)
-            manifest.update({n : [traced_filename]})
+            manifest.update({n: [traced_filename]})
        if m["path"] == "both" or m["path"] == "script":
            script_model = torch.jit.script(m["model"])
            torch.jit.save(script_model, script_filename)
            if n in manifest.keys():
                files = list(manifest[n]) if type(manifest[n]) != list else manifest[n]
                files.append(script_filename)
-                manifest.update({n : files})
+                manifest.update({n: files})
            else:
                manifest.update({n: [script_filename]})
    return manifest
+

def download_models(version_matches, manifest):
    # Download all models if torch version is different than model version
@@ -142,8 +143,8 @@
            if (m["path"] == "both" and os.path.exists(scripted_filename) and os.path.exists(traced_filename)) or \
               (m["path"] == "script" and os.path.exists(scripted_filename)) or \
               (m["path"] == "trace" and os.path.exists(traced_filename)):
-                   print("Skipping {} ".format(n))
-                   continue
+                print("Skipping {} ".format(n))
+                continue
            manifest = get(n, m, manifest)


@@ -184,4 +185,5 @@
        f.write(record)
        f.truncate()

+
main()
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/modules/hub.py
ERROR: Some files do not conform to style guidelines
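The `download_models` logic being reformatted above is the heart of this PR: models are re-downloaded and re-serialized only when the snapshot file is missing (first run) or the recorded torch version differs. A minimal self-contained sketch of that caching strategy follows; the snapshot filename, record format, and model filenames here are assumptions for illustration, not the repository's actual ones.

```python
import os

SNAPSHOT_FILE = "model_snapshot.txt"  # hypothetical name for the version record


def torch_version_matches(current_version: str) -> bool:
    """Return True if the snapshot file records the same torch version."""
    if not os.path.exists(SNAPSHOT_FILE):
        return False  # first run: nothing has been cached yet
    with open(SNAPSHOT_FILE) as f:
        return f.read().strip() == current_version


def download_models(models: dict, current_version: str) -> list:
    """Return the subset of model names that actually need downloading."""
    if not torch_version_matches(current_version):
        # Version changed (or first run): refresh every model.
        return list(models)
    to_download = []
    for name, m in models.items():
        traced = name + "_traced.jit.pt"
        scripted = name + "_scripted.jit.pt"
        # Skip a model only when every artifact its "path" mode needs exists.
        have_all = (
            (m["path"] == "both" and os.path.exists(scripted) and os.path.exists(traced))
            or (m["path"] == "script" and os.path.exists(scripted))
            or (m["path"] == "trace" and os.path.exists(traced))
        )
        if have_all:
            print("Skipping {}".format(name))
        else:
            to_download.append(name)
    return to_download


def record_snapshot(current_version: str) -> None:
    """Persist the torch version so later runs can skip downloads."""
    with open(SNAPSHOT_FILE, "w") as f:
        f.write(current_version)
```

In a CI pipeline this turns the model-preparation step into a no-op on cache hits, which is where the turnaround-time saving claimed in the PR description comes from.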

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_trt_intercompatibility.py
Reformatting /workspace/tests/modules/custom_models.py
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
--- /workspace/py/setup.py	(original)
+++ /workspace/py/setup.py	(reformatted)
@@ -242,8 +242,7 @@
            dir_path + "/../bazel-TRTorch/external/tensorrt/include",
            dir_path + "/../bazel-Torch-TensorRT/external/tensorrt/include",
            dir_path + "/../bazel-TensorRT/external/tensorrt/include",
-            dir_path + "/../bazel-tensorrt/external/tensorrt/include",
-            dir_path + "/../"
+            dir_path + "/../bazel-tensorrt/external/tensorrt/include", dir_path + "/../"
        ],
        extra_compile_args=[
            "-Wno-deprecated",
Reformatting /workspace/py/setup.py
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot added the component: api [Python] Issues re: Python API label May 20, 2022
Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
@github-actions github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/tests/core/lowering/test_module_fallback_passes.cpp b/tmp/changes.txt
index d57b8c9..d2ea9dc 100644
--- a/workspace/tests/core/lowering/test_module_fallback_passes.cpp
+++ b/tmp/changes.txt
@@ -20,7 +20,6 @@ TEST(Lowering, NotateModuleForFallbackWorksCorrectly) {
  std::unordered_set<std::string> mods_to_mark;
  mods_to_mark.insert("custom_models.ModuleFallbackSub");

-
  torch_tensorrt::core::lowering::passes::NotateModuleForFallback(mod, "", "forward", mods_to_mark);

  auto g = mod.get_method("forward").graph();
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/tests/core/lowering/test_module_fallback_passes.cpp b/tmp/changes.txt
index d57b8c9..d2ea9dc 100644
--- a/workspace/tests/core/lowering/test_module_fallback_passes.cpp
+++ b/tmp/changes.txt
@@ -20,7 +20,6 @@ TEST(Lowering, NotateModuleForFallbackWorksCorrectly) {
  std::unordered_set<std::string> mods_to_mark;
  mods_to_mark.insert("custom_models.ModuleFallbackSub");

-
  torch_tensorrt::core::lowering::passes::NotateModuleForFallback(mod, "", "forward", mods_to_mark);

  auto g = mod.get_method("forward").graph();
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@narendasan narendasan merged commit dcf19cc into master May 20, 2022
@narendasan narendasan deleted the anuragd/optimize_model_hub branch May 20, 2022 03:09
Labels: cla signed, component: api [Python], component: core, component: lowering, component: tests, documentation