AttributeError: 'Parameter' object has no attribute '_trt' #565

Open
maronuu opened this issue Jun 1, 2021 · 4 comments
maronuu commented Jun 1, 2021

Environment

Ubuntu 20.04
Python 3.8.8
torch 1.8.1
torch2trt 0.2.0
cuda 11.1
cudnn 8.1.1
TensorRT 7.2.2

Issue

I'm trying to convert YOLOR (https://github.com/WongKinYiu/yolor), implemented in PyTorch, to TensorRT.

There are two operations used in YOLOR that torch2trt does not currently support:

  • torch.nn.functional.silu
  • torch.Tensor.expand_as

silu

Thanks to #527, there is no problem here.

expand_as

Thanks to #487, a converter for torch.Tensor.expand is provided.
Since tensor.expand(other.size()) is equivalent to tensor.expand_as(other) (https://pytorch.org/docs/stable/tensors.html),
I replaced every expand_as(other) with expand(other.size()) in yolor/utils/layers.py.
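
A quick sanity check of that equivalence in plain PyTorch (shapes chosen arbitrarily for illustration):

```python
import torch

a = torch.randn(1, 256, 1, 1)    # e.g. an "implicit" parameter-shaped tensor
x = torch.randn(1, 256, 64, 64)  # e.g. a feature map

# expand_as(other) is documented as shorthand for expand(other.size())
assert torch.equal(a.expand_as(x), a.expand(x.size()))
```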

I made a script for conversion:

yolor/torch2trt_conversion.py:

```python
import argparse
import os
import sys
import time

import torch
import torch.jit
from torch2trt import torch2trt

from models.models import Darknet
from utils.torch_utils import select_device


def main():
    out_path = opt.output
    weights = opt.weights
    img_size = opt.img_size
    cfg = opt.cfg
    device = select_device(opt.device)
    # str2prec = {'int8': torch.int8, 'fp16': torch.float16, 'fp32': torch.float32}
    precision = opt.precision

    # load model
    print('BEGIN loading weights')
    model = Darknet(cfg, img_size).to(device)
    model.load_state_dict(torch.load(weights, map_location=device)['model'])
    model = model.eval()
    print('END loading weights')
    print('BEGIN conversion')
    input_data = torch.randn((1, 3, img_size, img_size)).to(device)
    if precision == 'int8':
        model_trt = torch2trt(model, [input_data], int8_mode=True)
    elif precision == 'fp16':
        model_trt = torch2trt(model, [input_data], fp16_mode=True)
    elif precision == 'fp32':
        model_trt = torch2trt(model, [input_data])
    else:
        raise ValueError("Invalid precision")

    print('END conversion')
    

    # save: run the converted model once on the float32 example input,
    # then store its state dict (note: torch.empty's dtype expects a
    # torch.dtype, not a string like 'fp16')
    result = model_trt(input_data)
    torch.save(model_trt.state_dict(), out_path)
    print('Successfully saved')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--output', type=str, default='trt_darknet.pth', help='path to converted model')
    parser.add_argument('--weights', nargs='+', type=str, default='yolor_p6.pt', help='model.pt path(s)')
    parser.add_argument('--img-size', type=int, default=1280, help='inference size (pixels)')
    parser.add_argument('--cfg', type=str, default='cfg/yolor_p6.cfg', help='*.cfg path')
    parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--precision', type=str, default='fp16', 
        help='precision of inference for the output model [int8, fp16, fp32]')
    opt = parser.parse_args()
    print(opt)

    main()
```

In the end, I got:

```
/workspace/yolor# python torch2trt_conversion.py --img-size 640
Namespace(cfg='cfg/yolor_p6.cfg', device='0', img_size=640, output='trt_darknet.pth', precision='fp16', weights='yolor_p6.pt')
BEGIN loading weights
END loading weights
BEGIN conversion
Traceback (most recent call last):
  File "torch2trt_conversion.py", line 62, in <module>
    main()
  File "torch2trt_conversion.py", line 34, in main
    model_trt = torch2trt(model, [input_data], fp16_mode=True)
  File "/opt/conda/lib/python3.8/site-packages/torch2trt-0.2.0-py3.8-linux-x86_64.egg/torch2trt/torch2trt.py", line 542, in torch2trt
    outputs = module(*inputs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/workspace/yolor/models/models.py", line 550, in forward
    return self.forward_once(x)
  File "/workspace/yolor/models/models.py", line 601, in forward_once
    x = module(x, out)  # WeightedFeatureFusion(), FeatureConcat()
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/workspace/yolor/utils/layers.py", line 384, in forward
    return a.expand(x.size()) + x
  File "/opt/conda/lib/python3.8/site-packages/torch2trt-0.2.0-py3.8-linux-x86_64.egg/torch2trt/torch2trt.py", line 289, in wrapper
    converter["converter"](ctx)
  File "/opt/conda/lib/python3.8/site-packages/torch2trt-0.2.0-py3.8-linux-x86_64.egg/torch2trt/converters/expand.py", line 17, in convert_expand
    layer = ctx.network.add_slice(input._trt, start, shape, stride)
AttributeError: 'Parameter' object has no attribute '_trt'
```

Here, 'Parameter' means torch.nn.parameter.Parameter, which stores its values in the .data attribute (a torch.Tensor). So I tried replacing input with input.data inside the converter, but it did not work.

What is the problem?
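
One direction I have not tried yet: torch2trt ships a helper, add_missing_trt_tensors, which registers tensors that have no _trt attribute (such as a bare nn.Parameter) as TensorRT constants. Below is a rough, untested sketch of a patched expand converter; the shape/stride handling is my reconstruction from the traceback above, not copied from the installed converters/expand.py:

```python
from torch2trt import tensorrt_converter, add_missing_trt_tensors

@tensorrt_converter('torch.Tensor.expand')
def convert_expand(ctx):
    input = ctx.method_args[0]
    output = ctx.method_return

    # torch2trt 0.2.0 excludes the batch dimension from layer shapes
    in_shape = tuple(input.shape)[1:]
    out_shape = tuple(output.shape)[1:]
    start = tuple([0] * len(out_shape))
    # stride 1 copies a matching axis through; stride 0 broadcasts a singleton axis
    stride = tuple([int(i == o) for i, o in zip(in_shape, out_shape)])

    # key change: create a TRT constant for inputs that were never
    # produced by a traced op, instead of dereferencing input._trt
    input_trt = add_missing_trt_tensors(ctx.network, [input])[0]

    layer = ctx.network.add_slice(input_trt, start, out_shape, stride)
    output._trt = layer.get_output(0)
```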

@TheConstant3

Hi, have you solved this problem?
I have the same issue.


maronuu commented Jun 25, 2021

Unfortunately, the problem is not solved yet...
The issue on the yolor repository that you mentioned may be helpful.
I'll investigate further. Thanks.


rodja commented Aug 17, 2021

Here is a patch against the yolor paper branch that wires torch2trt into detect.py:

yolor.patch:

```diff
diff --git a/detect.py b/detect.py
index f2d9f36..21a4f84 100644
--- a/detect.py
+++ b/detect.py
@@ -7,6 +7,10 @@ import torch
 import torch.backends.cudnn as cudnn
 from numpy import random
 
+import logging
+
+from torch2trt import torch2trt, tensorrt_converter, get_arg, trt, add_missing_trt_tensors
+
 from models.experimental import attempt_load
 from utils.datasets import LoadStreams, LoadImages
 from utils.general import check_img_size, non_max_suppression, apply_classifier, scale_coords, xyxy2xywh, \
@@ -14,6 +18,20 @@ from utils.general import check_img_size, non_max_suppression, apply_classifier,
 from utils.plots import plot_one_box
 from utils.torch_utils import select_device, load_classifier, time_synchronized
 
+# REGISTER NEW CONVERTERS
+
+@tensorrt_converter('torch.nn.functional.silu')
+def convert_silu(ctx):
+    input = get_arg(ctx, 'input', pos=0, default=None)
+    output = ctx.method_return
+    input_trt = add_missing_trt_tensors(ctx.network, [input])[0]
+    
+    layer = ctx.network.add_activation(input_trt, trt.ActivationType.SIGMOID)
+    layer = ctx.network.add_elementwise(input_trt, layer.get_output(0), trt.ElementWiseOperation.PROD)
+    
+    output._trt = layer.get_output(0)
+
+# missing: @tensorrt_converter('torch.nn.parameter.Parameter')
 
 def detect(save_img=False):
     source, weights, view_img, save_txt, imgsz = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size
@@ -32,6 +50,14 @@ def detect(save_img=False):
     # Load model
     model = attempt_load(weights, map_location=device)  # load FP32 model
     imgsz = check_img_size(imgsz, s=model.stride.max())  # check img_size
+
+    x = torch.ones((1, 3, imgsz, imgsz), device=device)
+    try:
+        model_trt = torch2trt(model, [x], fp16_mode=True)
+    except:
+        logging.exception('could not create tensorRT model')
+        exit()
+
     if half:
         model.half()  # to FP16
 
@@ -59,6 +85,7 @@ def detect(save_img=False):
     t0 = time.time()
     img = torch.zeros((1, 3, imgsz, imgsz), device=device)  # init img
     _ = model(img.half() if half else img) if device.type != 'cpu' else None  # run once
+    _ = model_trt(img) if device.type != 'cpu' else None  # run once
     for path, img, im0s, vid_cap in dataset:
         img = torch.from_numpy(img).to(device)
         img = img.half() if half else img.float()  # uint8 to fp16/32
@@ -67,12 +94,17 @@ def detect(save_img=False):
             img = img.unsqueeze(0)
 
         # Inference
+        t_trt = time_synchronized()
+        pred = model_trt(img)[0]
+        print('trt pred (%.3fs)' % (time_synchronized() - t_trt))
+
         t1 = time_synchronized()
         pred = model(img, augment=opt.augment)[0]
+        t2 = time_synchronized()
+        print('torch pred. (%.3fs)' % (t2 - t1))
 
         # Apply NMS
         pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
-        t2 = time_synchronized()
 
         # Apply Classifier
         if classify:
diff --git a/models/common.py b/models/common.py
index 0a1e7e3..d991fcf 100644
--- a/models/common.py
+++ b/models/common.py
@@ -41,7 +41,7 @@ class ImplicitA(nn.Module):
         nn.init.normal_(self.implicit, std=.02)
 
     def forward(self, x):
-        return self.implicit.expand_as(x) + x
+        return self.implicit.expand(x.size()) + x
 
 
 class ImplicitM(nn.Module):
@@ -52,7 +52,7 @@ class ImplicitM(nn.Module):
         nn.init.normal_(self.implicit, mean=1., std=.02)
 
     def forward(self, x):
-        return self.implicit.expand_as(x) * x
+        return self.implicit.expand(x.size()) * x
     
     
 class ReOrg(nn.Module):
@@ -236,7 +236,7 @@ class BottleneckCSPSE(nn.Module):
         self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
 
     def forward(self, x):
-        x = x * self.cvsig(self.cs(self.avg_pool(x))).expand_as(x)
+        x = x * self.cvsig(self.cs(self.avg_pool(x))).expand(x.size())
         y1 = self.cv3(self.m(self.cv1(x)))
         y2 = self.cv2(x)
         return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
@@ -259,7 +259,7 @@ class BottleneckCSPSEA(nn.Module):
         self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
 
     def forward(self, x):
-        x = x + x * self.cvsig(self.cs(self.avg_pool(x))).expand_as(x)
+        x = x + x * self.cvsig(self.cs(self.avg_pool(x))).expand(x.size())
         y1 = self.cv3(self.m(self.cv1(x)))
         y2 = self.cv2(x)
         return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
```

When executing

```bash
python3 detect.py --weights model.pt --source /tmp/example_images --conf 0.25 --img-size 800 --device 0
```

I'm getting the same AttributeError: 'Parameter' object has no attribute '_trt' as @maronuu.
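
My guess is that the stock expand converter still dereferences input._trt directly, which a bare nn.Parameter never has. Building on the same add_missing_trt_tensors pattern as convert_silu above, a converter for expand_as itself might avoid the models/common.py edits entirely. Untested sketch; the shape/stride logic is assumed rather than taken from torch2trt:

```python
from torch2trt import tensorrt_converter, add_missing_trt_tensors  # as in the patch above

@tensorrt_converter('torch.Tensor.expand_as')
def convert_expand_as(ctx):
    input = ctx.method_args[0]   # the (possibly Parameter) tensor being expanded
    other = ctx.method_args[1]   # the tensor whose shape we expand to
    output = ctx.method_return

    # register a TRT constant for tensors that no traced op produced
    input_trt = add_missing_trt_tensors(ctx.network, [input])[0]

    in_shape = tuple(input.shape)[1:]   # batch dimension excluded
    out_shape = tuple(other.shape)[1:]
    start = tuple([0] * len(out_shape))
    stride = tuple([int(i == o) for i, o in zip(in_shape, out_shape)])  # 0 broadcasts

    layer = ctx.network.add_slice(input_trt, start, out_shape, stride)
    output._trt = layer.get_output(0)
```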


X-WhyY commented Jan 25, 2022

Hi, have you solved this problem?
I'm running into the same issue too.
