I have installed with plugins successfully, but I cannot use the 'interpolate' operation #17
Comments
Could you provide an example code sample of using the
Hi sonack,

Thanks for reaching out! If installed and imported correctly, interpolate should appear in the list of registered converters. Could you share the following information?
For a full example, the image segmentation notebook converts a model that contains interpolate layers: https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/notebooks/image_segmentation/conversion.ipynb

However, in interpolate.py we also register several module tests, which would probably be the easiest thing to try first. You can execute them by calling:

python3 -m torch2trt.test --name=interpolate

This will print information such as the throughput and latency, as well as the max absolute error compared to PyTorch. Please let me know if you have any other questions!

Best,
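As background on what "registered converters" means here: torch2trt keeps its converters in a dictionary populated by a registration decorator, so a successfully imported plugin shows up as a dictionary key. A toy sketch of that pattern (illustrative only; the names are simplified, not torch2trt's actual code):

```python
# Minimal sketch of a converter registry keyed by the name of the
# intercepted function. This mimics how torch2trt tracks converters;
# it is not the real implementation.
CONVERTERS = {}

def register_converter(name):
    def decorator(fn):
        CONVERTERS[name] = fn
        return fn
    return decorator

@register_converter('torch.nn.functional.interpolate')
def convert_interpolate(ctx):
    # A real converter would add a TensorRT layer here.
    pass

# If the plugin module imported cleanly, its name appears in the registry:
print('torch.nn.functional.interpolate' in CONVERTERS)  # True
```

This is why checking the keys of the converter dictionary is a quick sanity test: if the plugin module failed to import, the decorator never ran and the key is simply absent.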
Hi John,
The TensorRT install package I used:
In order to compile torch2trt without error, I have made some fixes in
Besides, I have installed protobuf-3.9.0 and libtorch. The full compilation and install log is below, without any complaints:
After installation, the directory would be:

There is nothing wrong when running the alexnet demo:

import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet
# create some regular pytorch model...
model = alexnet(pretrained=True).eval().cuda()
# create example data
x = torch.ones((1, 3, 224, 224)).cuda()
# convert to TensorRT feeding sample data as input
model_trt = torch2trt(model, [x])
y = model(x)
y_trt = model_trt(x)
# check the output against PyTorch
print(torch.max(torch.abs(y - y_trt)))

As for whether I had installed torch2trt without plugins before: I think I might have installed torch2trt without plugins on my very first try, but I uninstalled it before installing the with-plugins one. Does it matter?

But when I run my ResNet50-based segmentation model, it gives
One more word here: the model was trained using

Below is the network definition:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class convolution(nn.Module):
def __init__(self, k, inp_dim, out_dim, stride=1, with_bn=True):
super(convolution, self).__init__()
pad = (k - 1) // 2
self.conv = nn.Conv2d(inp_dim, out_dim, (k, k), padding=(pad, pad), stride=(stride, stride), bias=not with_bn)
self.bn = nn.BatchNorm2d(out_dim) if with_bn else nn.Sequential()
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
conv = self.conv(x)
bn = self.bn(conv)
relu = self.relu(bn)
return relu
def make_cnv_layer(inp_dim, out_dim):
return convolution(3, inp_dim, out_dim)
def make_kp_layer(cnv_dim, curr_dim, out_dim):
return nn.Sequential(
convolution(3, cnv_dim, curr_dim, with_bn=False),
nn.Conv2d(curr_dim, out_dim, (1, 1))
)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, scale=1):
self.inplanes = 48
super(ResNet, self).__init__()
self.conv1 = nn.Conv2d(3, 48, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(48)
self.relu1 = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 48, layers[0])
self.layer2 = self._make_layer(block, 64, layers[1], stride=2)
self.layer3 = self._make_layer(block, 128, layers[2], stride=2)
self.layer4 = self._make_layer(block, 256, layers[3], stride=2)
# self.avgpool = nn.AvgPool2d(7, stride=1)
# self.fc = nn.Linear(512 * block.expansion, num_classes)
# Top layer
self.toplayer = nn.Conv2d(1024, 128, kernel_size=1, stride=1, padding=0) # Reduce channels
self.toplayer_bn = nn.BatchNorm2d(128)
self.toplayer_relu = nn.ReLU(inplace=True)
# Smooth layers
self.smooth1 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
self.smooth1_bn = nn.BatchNorm2d(128)
self.smooth1_relu = nn.ReLU(inplace=True)
self.smooth2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
self.smooth2_bn = nn.BatchNorm2d(128)
self.smooth2_relu = nn.ReLU(inplace=True)
self.smooth3 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
self.smooth3_bn = nn.BatchNorm2d(128)
self.smooth3_relu = nn.ReLU(inplace=True)
# Lateral layers
self.latlayer1 = nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0)
self.latlayer1_bn = nn.BatchNorm2d(128)
self.latlayer1_relu = nn.ReLU(inplace=True)
self.latlayer2 = nn.Conv2d(256, 128, kernel_size=1, stride=1, padding=0)
self.latlayer2_bn = nn.BatchNorm2d(128)
self.latlayer2_relu = nn.ReLU(inplace=True)
##layer1*blockexpansion
self.latlayer3 = nn.Conv2d(48*4, 128, kernel_size=1, stride=1, padding=0)
self.latlayer3_bn = nn.BatchNorm2d(128)
self.latlayer3_relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(512, 128, kernel_size=3, stride=1, padding=1)
self.bn2 = nn.BatchNorm2d(128)
self.relu2 = nn.ReLU(inplace=True)
self.conv_label = make_cnv_layer(128, 64)
self.conv_center = make_cnv_layer(128, 64)
self.conv_up = make_cnv_layer(128, 64)
self.conv_down = make_cnv_layer(128, 64)
self.conv_label_last = make_kp_layer(64, 16, 3)
self.heat_center = make_kp_layer(64, 16, 1)
self.heat_up = make_kp_layer(64, 16, 1)
self.heat_down = make_kp_layer(64, 16, 1)
self.scale = scale
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def _upsample(self, x, y, scale=1):
_, _, H, W = y.size()
# return F.upsample(x, size=(H // scale, W // scale), mode='bilinear')
# print('interpolate')
return F.interpolate(x, size=(H // scale, W // scale), mode='bilinear')
def _upsample_add(self, x, y):
_, _, H, W = y.size()
# return F.upsample(x, size=(H, W), mode='bilinear') + y
# print('interpolate add')
return F.interpolate(x, size=(H, W), mode='bilinear') + y
def forward(self, x):
h = x
h = self.conv1(h)
h = self.bn1(h)
h = self.relu1(h)
h = self.maxpool(h)
h = self.layer1(h)
c2 = h
h = self.layer2(h)
c3 = h
h = self.layer3(h)
c4 = h
h = self.layer4(h)
c5 = h
# Top-down
p5 = self.toplayer(c5)
p5 = self.toplayer_relu(self.toplayer_bn(p5))
c4 = self.latlayer1(c4)
c4 = self.latlayer1_relu(self.latlayer1_bn(c4))
p4 = self._upsample_add(p5, c4)
p4 = self.smooth1(p4)
p4 = self.smooth1_relu(self.smooth1_bn(p4))
c3 = self.latlayer2(c3)
c3 = self.latlayer2_relu(self.latlayer2_bn(c3))
p3 = self._upsample_add(p4, c3)
p3 = self.smooth2(p3)
p3 = self.smooth2_relu(self.smooth2_bn(p3))
c2 = self.latlayer3(c2)
c2 = self.latlayer3_relu(self.latlayer3_bn(c2))
p2 = self._upsample_add(p3, c2)
p2 = self.smooth3(p2)
p2 = self.smooth3_relu(self.smooth3_bn(p2))
p3 = self._upsample(p3, p2)
p4 = self._upsample(p4, p2)
p5 = self._upsample(p5, p2)
out = torch.cat((p2, p3, p4, p5), 1)
out = self.conv2(out)
out = self.relu2(self.bn2(out))
out_label = self.conv_label(out)
out_label = self.conv_label_last(out_label)
out_label = self._upsample(out_label, x, scale=self.scale)
out_label = F.softmax(out_label, dim=1)
out_center = self.conv_center(out)
out_center = self.heat_center(out_center)
out_center = self._upsample(out_center, x, scale=self.scale)
out_up = self.conv_up(out)
out_up = self.heat_up(out_up)
out_up = self._upsample(out_up, x, scale=self.scale)
out_down = self.conv_down(out)
out_down = self.heat_down(out_down)
out_down = self._upsample(out_down, x, scale=self.scale)
# pdb.set_trace()
# return torch.cat([out_label, out_center, out_up, out_down], 1)
return out_label, out_center, out_up, out_down
def resnet18(**kwargs):
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
return model
def resnet34(**kwargs):
model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
return model
# I used this
def resnet50(**kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
return model
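For context on where interpolate enters this model: the `_upsample` helpers resize coarse FPN feature maps back to a reference size with `F.interpolate`. The size arithmetic can be sanity-checked without any framework; a sketch assuming a 512x512 input (any size divisible by 32 works the same way):

```python
# Sketch of the spatial sizes flowing through the ResNet above,
# assuming a 512x512 input (illustrative; the actual input size
# used for training is not stated in the thread).
H = 512
# conv1 (stride 2) and maxpool (stride 2) each halve the resolution;
# layer2..layer4 halve it again via their stride-2 first blocks.
c2 = H // 4    # after conv1 + maxpool + layer1
c3 = H // 8    # after layer2
c4 = H // 16   # after layer3
c5 = H // 32   # after layer4
# _upsample_add(p5, c4) interpolates the c5-sized map up to c4's size,
# and so on, until p2..p5 all sit at the c2 resolution before torch.cat.
print(c2, c3, c4, c5)  # 128 64 32 16
```

Every one of those resizes goes through `F.interpolate`, which is why the conversion stops at the first `_upsample_add` call when the interpolate converter is not registered.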
Thanks, I will dive into it.
I really want to use this awesome repository; thanks for your help!!! If anything is unclear, please contact me, thanks! Best,
Do you have any ideas? Thanks!
When I test the 'interpolate' plugin and run notebooks/image_segmentation/conversion.ipynb, I get this error
Hi All, @sonack @binzh93

Sorry for the delay in getting back. It is unusual that the build would pass, yet the interpolate test cases would not run. If interpolate.py is imported, we should at least see the test cases fail. Could you try removing the try/except statement around where interpolate is included and reinstalling

https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/torch2trt/converters/__init__.py#L31

to see if / what error is thrown? This should probably be made more verbose.

@binzh93 Could you share which version of torchvision you're using? The repository is under active development; it's possible they've made a breaking change since I tested. Ie.

Thanks for your patience, all.

Best,
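To illustrate why removing the try/except helps: a bare except around an import swallows the real failure, so the module (and every converter it would register) silently goes missing. A standalone sketch, where `interpolate_plugin` is a placeholder module name, not the real one:

```python
# A try/except around an import hides the real failure: the module
# and anything it would register silently go missing.
try:
    import interpolate_plugin  # placeholder name; fails if the extension didn't build
except Exception:
    pass  # error swallowed -- no trace of why registration failed

# Removing the guard (or narrowing it) surfaces the underlying error instead:
try:
    import interpolate_plugin
except ImportError as e:
    print('real error:', e)
```

This is the generic pattern behind "build passes but the converter is missing": the build and the import are separate steps, and only the import error tells you which shared library or symbol is actually broken.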
OK! Thanks very much! I have solved the old issue, but a new problem has appeared. I pulled the latest repository and commented out the

Compilation log:
But when I import
so I installed with

However, when I test this installation on the [notebook](https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/notebooks/image_segmentation/conversion.ipynb), it complains:

The full error log trace is
The easiest way to reproduce this error is

How can I solve it? Thank you very much!
I forgot to put my self-compiled protobuf before the system's default protobuf library, which caused the error.
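For anyone hitting the same thing, a sketch of the fix (the protobuf path below is a placeholder for your own install prefix): prepend the self-built library directory so the dynamic linker searches it before the system copy.

```shell
# Prepend the self-built protobuf lib dir (placeholder path) so the
# dynamic linker searches it before the system libprotobuf.
PROTOBUF_LIB=/opt/protobuf-3.9.0/lib   # placeholder: your install prefix
export LD_LIBRARY_PATH="${PROTOBUF_LIB}:${LD_LIBRARY_PATH}"
# The first entry wins the lookup order:
echo "${LD_LIBRARY_PATH%%:*}"
```

The order matters because `ld.so` searches `LD_LIBRARY_PATH` entries left to right before the default system directories.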
I compiled the latest version on GitHub, 3.9.0.
I successfully installed the interpolate plugin and imported torch2trt with no bug, but when I ran "python3 -m torch2trt.test --name=interpolate", an "IndexError: list index out of range" error occurred. My protobuf version is 3.0.0.
My protobuf version is 3.9.0. One thing to note is to add build.py

Note: replace {{}} with your own path!!!!

setup.py
#!/usr/bin/env bash
python setup.py install --plugins --cuda-dir=/usr/local/cuda-10.0 --trt-inc-dir=/data00/xxx/TensorRT/1404/TensorRT-5.1.5.0/include --trt-lib-dir=/data00/xxx/TensorRT/1404/TensorRT-5.1.5.0/lib

Replace these with your own paths.
I have installed this repo with plugins, but when I issue

torch2trt.CONVERTERS.keys()

there is no interpolate layer, and when I convert a model with interpolate, it complains 'AttributeError: 'Tensor' object has no attribute '_trt'' before the 'interpolate' operation. So what is wrong here?
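For background, this AttributeError has a simple mechanical cause: during conversion, each converter stores the TensorRT tensor on its output as `output._trt`. An op with no registered converter leaves its output without `_trt`, so the next converter in the chain fails exactly as described. A toy illustration of that propagation (not torch2trt's actual code):

```python
# Toy sketch: each "converter" reads input._trt and writes output._trt.
# If a layer is skipped (no converter registered), the chain breaks with
# the same AttributeError reported in this thread.
class T:
    """Stand-in for a tensor that may carry a _trt handle."""
    pass

def converted_op(x):
    y = T()
    y._trt = ('layer', x._trt)  # depends on the input's _trt existing
    return y

def unconverted_op(x):
    return T()  # no converter ran: the output has no _trt attribute

inp = T()
inp._trt = 'input'
ok = converted_op(inp)       # fine: inp._trt exists
broken = unconverted_op(ok)  # _trt silently dropped here
try:
    converted_op(broken)     # raises AttributeError: no attribute '_trt'
except AttributeError as e:
    print(e)
```

So the error appearing "before" the interpolate operation is consistent with the interpolate converter being absent: the failure surfaces at whichever later converter first touches the orphaned tensor.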