
AssertionError while annotating deeplabv3 model from pytorch #33

Closed
abdulazizm opened this issue Mar 22, 2021 · 35 comments

Comments


abdulazizm commented Mar 22, 2021

assert i == axis or len(check) == 1

I am trying to build deeplabv3 from PyTorch and deploy it on a Xilinx edge device (ZCU104), but I am hitting an issue while annotating the model. Code snippet:

# imports assumed from the standard TVM Vitis-AI flow (not shown in the original snippet)
import tvm
from tvm import relay
from tvm.relay.build_module import bind_params_by_name
from tvm.relay.op.contrib.vitis_ai import annotation

mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
mod["main"] = bind_params_by_name(mod["main"], params)
# convert conv2d layouts to NHWC (as required by the DPU flow), then fold constants
desired_layouts = {'nn.conv2d': ['NHWC', 'default']}
seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
                                relay.transform.ConvertLayout(desired_layouts),
                                relay.transform.FoldConstant()])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
mod = annotation(mod, params, target)  # AssertionError is raised here

Two things in the output look suspicious:

/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l10_temporary.py:64: UserWarning: Convert Relay Adaptive Avg pool2d layer into normal average pool2d layer
warnings.warn("Convert Relay Adaptive Avg pool2d layer into normal"

  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l1_basic.py", line 414, in concatenate
    X = px.ops.concat(op_name, data_layers, axis, relay_id=relay_idx)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/graph/ops/l1_basic_nn.py", line 167, in concat
    assert i == axis or len(check) == 1
AssertionError

While printing/debugging some values near the referenced line, I noticed a TODO tag there. Is anyone working on this implementation? (FYI: it seems i=2, axis=1, and check={132, 108, 156, 86, 28} when the assertion fails.)
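
For context, the failing assertion enforces the standard concatenation rule: inputs may differ only along the concatenation axis, and every other dimension must match across all inputs. A minimal, hypothetical illustration of that rule (not the actual pyxir code, and the shapes below are made up):

# concat over axis=1 (NCHW-style shapes); the spatial dims deliberately disagree
shapes = [(1, 256, 28, 28), (1, 256, 26, 26)]
axis = 1
for i in range(len(shapes[0])):
    check = set(shape[i] for shape in shapes)
    # same rule as the assert in pyxir's l1_basic_nn.py concat handler
    assert i == axis or len(check) == 1

This raises the same AssertionError at i=2, because the inputs disagree on a non-concatenation dimension; the check={132, 108, 156, 86, 28} reported above indicates the same kind of mismatch across the concat inputs.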


jtuyls commented Mar 22, 2021

@abdulazizm This line checks whether the dimensions that are not concatenated over have the same value for all inputs. Could you share the Relay representation of the model (print(mod['main'])) after running an InferType pass (mod = tvm.relay.transform.InferType()(mod))? That way I can see what the input layers to this concatenation are.
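
Spelled out, that is something like the following (a minimal sketch using the standard Relay API):

import tvm
mod = tvm.relay.transform.InferType()(mod)  # re-run type inference so tensor shapes are annotated
print(mod["main"])                          # dump the Relay IR of the main function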

abdulazizm (Author) commented

> @abdulazizm This line checks whether the dimensions that are not concatenated over have the same value for all inputs. Could you share the Relay representation of the model (print(mod['main'])) after running an InferType pass (mod = tvm.relay.transform.InferType()(mod))? That way I can see what the input layers to this concatenation are.

@jtuyls Thanks for the reply. Here are the requested details:

(vitis-ai-pytorch) Vitis-AI /workspace/python/compile > python3 compile_pytorch_deeplab.py 2021-03-22 11:34:26.871043: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/xilinx/xrt/lib:/usr/lib:/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib 2021-03-22 11:34:26.871072: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/quantization/decent_quantizer.py:50: UserWarning: Could not import decent_q module. Please check if installed. warnings.warn("Could not import decent_q module. Please check" File /home/vitis-ai-user/.tvm_test_data/data/cat.png exists, skip. input img size (224, 224) transform_image_torchvision torch.Size([3, 224, 224]) (1, 3, 224, 224) <class 'tuple'> WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 WARNING:root:Untyped Tensor found, assume it is float32 fn (%data: Tensor[(1, 3, 224, 224), float32], %model.backbone.conv1.weight: Tensor[(64, 3, 7, 7), float32], %model.backbone.bn1.weight: Tensor[(64), float32], %model.backbone.bn1.bias: Tensor[(64), float32], %model.backbone.bn1.running_mean: Tensor[(64), float32], %model.backbone.bn1.running_var: Tensor[(64), float32], %model.backbone.layer1.0.conv1.weight: Tensor[(64, 64, 1, 1), float32], %model.backbone.layer1.0.bn1.weight: Tensor[(64), float32], %model.backbone.layer1.0.bn1.bias: Tensor[(64), float32], %model.backbone.layer1.0.bn1.running_mean: Tensor[(64), float32], %model.backbone.layer1.0.bn1.running_var: Tensor[(64), float32], %model.backbone.layer1.0.conv2.weight: Tensor[(64, 64, 3, 3), float32], %model.backbone.layer1.0.bn2.weight: Tensor[(64), float32], %model.backbone.layer1.0.bn2.bias: Tensor[(64), float32], %model.backbone.layer1.0.bn2.running_mean: Tensor[(64), float32], %model.backbone.layer1.0.bn2.running_var: Tensor[(64), float32], %model.backbone.layer1.0.conv3.weight: Tensor[(256, 64, 1, 1), float32], %model.backbone.layer1.0.bn3.weight: Tensor[(256), float32], %model.backbone.layer1.0.bn3.bias: Tensor[(256), float32], %model.backbone.layer1.0.bn3.running_mean: Tensor[(256), float32], %model.backbone.layer1.0.bn3.running_var: Tensor[(256), float32], %model.backbone.layer1.0.downsample.0.weight: Tensor[(256, 64, 1, 1), float32], %model.backbone.layer1.0.downsample.1.weight: Tensor[(256), float32], %model.backbone.layer1.0.downsample.1.bias: Tensor[(256), float32], %model.backbone.layer1.0.downsample.1.running_mean: Tensor[(256), float32], %model.backbone.layer1.0.downsample.1.running_var: Tensor[(256), float32], %model.backbone.layer1.1.conv1.weight: Tensor[(64, 256, 1, 1), float32], %model.backbone.layer1.1.bn1.weight: Tensor[(64), float32], %model.backbone.layer1.1.bn1.bias: Tensor[(64), float32], %model.backbone.layer1.1.bn1.running_mean: Tensor[(64), float32], %model.backbone.layer1.1.bn1.running_var: Tensor[(64), float32], %model.backbone.layer1.1.conv2.weight: Tensor[(64, 64, 3, 
3), float32], %model.backbone.layer1.1.bn2.weight: Tensor[(64), float32], %model.backbone.layer1.1.bn2.bias: Tensor[(64), float32], %model.backbone.layer1.1.bn2.running_mean: Tensor[(64), float32], %model.backbone.layer1.1.bn2.running_var: Tensor[(64), float32], %model.backbone.layer1.1.conv3.weight: Tensor[(256, 64, 1, 1), float32], %model.backbone.layer1.1.bn3.weight: Tensor[(256), float32], %model.backbone.layer1.1.bn3.bias: Tensor[(256), float32], %model.backbone.layer1.1.bn3.running_mean: Tensor[(256), float32], %model.backbone.layer1.1.bn3.running_var: Tensor[(256), float32], %model.backbone.layer1.2.conv1.weight: Tensor[(64, 256, 1, 1), float32], %model.backbone.layer1.2.bn1.weight: Tensor[(64), float32], %model.backbone.layer1.2.bn1.bias: Tensor[(64), float32], %model.backbone.layer1.2.bn1.running_mean: Tensor[(64), float32], %model.backbone.layer1.2.bn1.running_var: Tensor[(64), float32], %model.backbone.layer1.2.conv2.weight: Tensor[(64, 64, 3, 3), float32], %model.backbone.layer1.2.bn2.weight: Tensor[(64), float32], %model.backbone.layer1.2.bn2.bias: Tensor[(64), float32], %model.backbone.layer1.2.bn2.running_mean: Tensor[(64), float32], %model.backbone.layer1.2.bn2.running_var: Tensor[(64), float32], %model.backbone.layer1.2.conv3.weight: Tensor[(256, 64, 1, 1), float32], %model.backbone.layer1.2.bn3.weight: Tensor[(256), float32], %model.backbone.layer1.2.bn3.bias: Tensor[(256), float32], %model.backbone.layer1.2.bn3.running_mean: Tensor[(256), float32], %model.backbone.layer1.2.bn3.running_var: Tensor[(256), float32], %model.backbone.layer2.0.conv1.weight: Tensor[(128, 256, 1, 1), float32], %model.backbone.layer2.0.bn1.weight: Tensor[(128), float32], %model.backbone.layer2.0.bn1.bias: Tensor[(128), float32], %model.backbone.layer2.0.bn1.running_mean: Tensor[(128), float32], %model.backbone.layer2.0.bn1.running_var: Tensor[(128), float32], %model.backbone.layer2.0.conv2.weight: Tensor[(128, 128, 3, 3), float32], %model.backbone.layer2.0.bn2.weight: Tensor[(128), float32], %model.backbone.layer2.0.bn2.bias: Tensor[(128), float32], %model.backbone.layer2.0.bn2.running_mean: Tensor[(128), float32], %model.backbone.layer2.0.bn2.running_var: Tensor[(128), float32], %model.backbone.layer2.0.conv3.weight: Tensor[(512, 128, 1, 1), float32], %model.backbone.layer2.0.bn3.weight: Tensor[(512), float32], %model.backbone.layer2.0.bn3.bias: Tensor[(512), float32], %model.backbone.layer2.0.bn3.running_mean: Tensor[(512), float32], %model.backbone.layer2.0.bn3.running_var: Tensor[(512), float32], %model.backbone.layer2.0.downsample.0.weight: Tensor[(512, 256, 1, 1), float32], %model.backbone.layer2.0.downsample.1.weight: Tensor[(512), float32], %model.backbone.layer2.0.downsample.1.bias: Tensor[(512), float32], %model.backbone.layer2.0.downsample.1.running_mean: Tensor[(512), float32], %model.backbone.layer2.0.downsample.1.running_var: Tensor[(512), float32], %model.backbone.layer2.1.conv1.weight: Tensor[(128, 512, 1, 1), float32], %model.backbone.layer2.1.bn1.weight: Tensor[(128), float32], %model.backbone.layer2.1.bn1.bias: Tensor[(128), float32], %model.backbone.layer2.1.bn1.running_mean: Tensor[(128), float32], %model.backbone.layer2.1.bn1.running_var: Tensor[(128), float32], %model.backbone.layer2.1.conv2.weight: Tensor[(128, 128, 3, 3), float32], %model.backbone.layer2.1.bn2.weight: Tensor[(128), float32], %model.backbone.layer2.1.bn2.bias: Tensor[(128), float32], %model.backbone.layer2.1.bn2.running_mean: Tensor[(128), float32], %model.backbone.layer2.1.bn2.running_var: Tensor[(128), 
float32], %model.backbone.layer2.1.conv3.weight: Tensor[(512, 128, 1, 1), float32], %model.backbone.layer2.1.bn3.weight: Tensor[(512), float32], %model.backbone.layer2.1.bn3.bias: Tensor[(512), float32], %model.backbone.layer2.1.bn3.running_mean: Tensor[(512), float32], %model.backbone.layer2.1.bn3.running_var: Tensor[(512), float32], %model.backbone.layer2.2.conv1.weight: Tensor[(128, 512, 1, 1), float32], %model.backbone.layer2.2.bn1.weight: Tensor[(128), float32], %model.backbone.layer2.2.bn1.bias: Tensor[(128), float32], %model.backbone.layer2.2.bn1.running_mean: Tensor[(128), float32], %model.backbone.layer2.2.bn1.running_var: Tensor[(128), float32], %model.backbone.layer2.2.conv2.weight: Tensor[(128, 128, 3, 3), float32], %model.backbone.layer2.2.bn2.weight: Tensor[(128), float32], %model.backbone.layer2.2.bn2.bias: Tensor[(128), float32], %model.backbone.layer2.2.bn2.running_mean: Tensor[(128), float32], %model.backbone.layer2.2.bn2.running_var: Tensor[(128), float32], %model.backbone.layer2.2.conv3.weight: Tensor[(512, 128, 1, 1), float32], %model.backbone.layer2.2.bn3.weight: Tensor[(512), float32], %model.backbone.layer2.2.bn3.bias: Tensor[(512), float32], %model.backbone.layer2.2.bn3.running_mean: Tensor[(512), float32], %model.backbone.layer2.2.bn3.running_var: Tensor[(512), float32], %model.backbone.layer2.3.conv1.weight: Tensor[(128, 512, 1, 1), float32], %model.backbone.layer2.3.bn1.weight: Tensor[(128), float32], %model.backbone.layer2.3.bn1.bias: Tensor[(128), float32], %model.backbone.layer2.3.bn1.running_mean: Tensor[(128), float32], %model.backbone.layer2.3.bn1.running_var: Tensor[(128), float32], %model.backbone.layer2.3.conv2.weight: Tensor[(128, 128, 3, 3), float32], %model.backbone.layer2.3.bn2.weight: Tensor[(128), float32], %model.backbone.layer2.3.bn2.bias: Tensor[(128), float32], %model.backbone.layer2.3.bn2.running_mean: Tensor[(128), float32], %model.backbone.layer2.3.bn2.running_var: Tensor[(128), float32], %model.backbone.layer2.3.conv3.weight: Tensor[(512, 128, 1, 1), float32], %model.backbone.layer2.3.bn3.weight: Tensor[(512), float32], %model.backbone.layer2.3.bn3.bias: Tensor[(512), float32], %model.backbone.layer2.3.bn3.running_mean: Tensor[(512), float32], %model.backbone.layer2.3.bn3.running_var: Tensor[(512), float32], %model.backbone.layer3.0.conv1.weight: Tensor[(256, 512, 1, 1), float32], %model.backbone.layer3.0.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.0.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.0.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.0.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.0.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.0.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.0.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.0.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.0.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.0.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.0.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.0.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.0.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.0.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.0.downsample.0.weight: Tensor[(1024, 512, 1, 1), float32], %model.backbone.layer3.0.downsample.1.weight: Tensor[(1024), float32], %model.backbone.layer3.0.downsample.1.bias: Tensor[(1024), float32], 
%model.backbone.layer3.0.downsample.1.running_mean: Tensor[(1024), float32], %model.backbone.layer3.0.downsample.1.running_var: Tensor[(1024), float32], %model.backbone.layer3.1.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.1.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.1.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.1.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.1.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.1.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.1.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.1.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.1.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.1.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.1.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.1.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.1.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.1.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.1.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.2.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.2.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.2.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.2.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.2.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.2.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.2.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.2.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.2.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.2.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.2.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.2.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.2.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.2.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.2.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.3.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.3.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.3.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.3.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.3.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.3.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.3.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.3.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.3.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.3.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.3.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.3.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.3.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.3.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.3.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.4.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.4.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.4.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.4.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.4.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.4.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.4.bn2.weight: 
Tensor[(256), float32], %model.backbone.layer3.4.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.4.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.4.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.4.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.4.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.4.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.4.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.4.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.5.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.5.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.5.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.5.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.5.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.5.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.5.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.5.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.5.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.5.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.5.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.5.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.5.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.5.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.5.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.6.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.6.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.6.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.6.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.6.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.6.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.6.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.6.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.6.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.6.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.6.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.6.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.6.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.6.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.6.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.7.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.7.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.7.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.7.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.7.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.7.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.7.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.7.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.7.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.7.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.7.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.7.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.7.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.7.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.7.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.8.conv1.weight: 
Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.8.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.8.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.8.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.8.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.8.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.8.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.8.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.8.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.8.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.8.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.8.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.8.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.8.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.8.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.9.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.9.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.9.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.9.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.9.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.9.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.9.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.9.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.9.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.9.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.9.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.9.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.9.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.9.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.9.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.10.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.10.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.10.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.10.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.10.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.10.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.10.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.10.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.10.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.10.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.10.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.10.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.10.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.10.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.10.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.11.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.11.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.11.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.11.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.11.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.11.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.11.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.11.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.11.bn2.running_mean: Tensor[(256), float32], 
%model.backbone.layer3.11.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.11.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.11.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.11.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.11.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.11.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.12.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.12.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.12.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.12.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.12.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.12.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.12.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.12.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.12.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.12.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.12.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.12.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.12.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.12.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.12.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.13.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.13.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.13.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.13.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.13.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.13.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.13.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.13.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.13.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.13.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.13.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.13.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.13.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.13.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.13.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.14.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.14.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.14.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.14.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.14.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.14.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.14.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.14.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.14.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.14.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.14.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.14.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.14.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.14.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.14.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.15.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.15.bn1.weight: Tensor[(256), float32], 
%model.backbone.layer3.15.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.15.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.15.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.15.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.15.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.15.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.15.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.15.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.15.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.15.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.15.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.15.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.15.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.16.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.16.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.16.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.16.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.16.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.16.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.16.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.16.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.16.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.16.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.16.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.16.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.16.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.16.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.16.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.17.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.17.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.17.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.17.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.17.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.17.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.17.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.17.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.17.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.17.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.17.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.17.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.17.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.17.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.17.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.18.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.18.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.18.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.18.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.18.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.18.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.18.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.18.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.18.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.18.bn2.running_var: Tensor[(256), float32], 
%model.backbone.layer3.18.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.18.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.18.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.18.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.18.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.19.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.19.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.19.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.19.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.19.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.19.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.19.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.19.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.19.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.19.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.19.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.19.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.19.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.19.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.19.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.20.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.20.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.20.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.20.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.20.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.20.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.20.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.20.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.20.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.20.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.20.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.20.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.20.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.20.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.20.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.21.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.21.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.21.bn1.bias: Tensor[(256), float32], %model.backbone.layer3.21.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.21.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.21.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.21.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.21.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.21.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.21.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.21.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.21.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.21.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.21.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.21.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer3.22.conv1.weight: Tensor[(256, 1024, 1, 1), float32], %model.backbone.layer3.22.bn1.weight: Tensor[(256), float32], %model.backbone.layer3.22.bn1.bias: Tensor[(256), float32], 
%model.backbone.layer3.22.bn1.running_mean: Tensor[(256), float32], %model.backbone.layer3.22.bn1.running_var: Tensor[(256), float32], %model.backbone.layer3.22.conv2.weight: Tensor[(256, 256, 3, 3), float32], %model.backbone.layer3.22.bn2.weight: Tensor[(256), float32], %model.backbone.layer3.22.bn2.bias: Tensor[(256), float32], %model.backbone.layer3.22.bn2.running_mean: Tensor[(256), float32], %model.backbone.layer3.22.bn2.running_var: Tensor[(256), float32], %model.backbone.layer3.22.conv3.weight: Tensor[(1024, 256, 1, 1), float32], %model.backbone.layer3.22.bn3.weight: Tensor[(1024), float32], %model.backbone.layer3.22.bn3.bias: Tensor[(1024), float32], %model.backbone.layer3.22.bn3.running_mean: Tensor[(1024), float32], %model.backbone.layer3.22.bn3.running_var: Tensor[(1024), float32], %model.backbone.layer4.0.conv1.weight: Tensor[(512, 1024, 1, 1), float32], %model.backbone.layer4.0.bn1.weight: Tensor[(512), float32], %model.backbone.layer4.0.bn1.bias: Tensor[(512), float32], %model.backbone.layer4.0.bn1.running_mean: Tensor[(512), float32], %model.backbone.layer4.0.bn1.running_var: Tensor[(512), float32], %model.backbone.layer4.0.conv2.weight: Tensor[(512, 512, 3, 3), float32], %model.backbone.layer4.0.bn2.weight: Tensor[(512), float32], %model.backbone.layer4.0.bn2.bias: Tensor[(512), float32], %model.backbone.layer4.0.bn2.running_mean: Tensor[(512), float32], %model.backbone.layer4.0.bn2.running_var: Tensor[(512), float32], %model.backbone.layer4.0.conv3.weight: Tensor[(2048, 512, 1, 1), float32], %model.backbone.layer4.0.bn3.weight: Tensor[(2048), float32], %model.backbone.layer4.0.bn3.bias: Tensor[(2048), float32], %model.backbone.layer4.0.bn3.running_mean: Tensor[(2048), float32], %model.backbone.layer4.0.bn3.running_var: Tensor[(2048), float32], %model.backbone.layer4.0.downsample.0.weight: Tensor[(2048, 1024, 1, 1), float32], %model.backbone.layer4.0.downsample.1.weight: Tensor[(2048), float32], %model.backbone.layer4.0.downsample.1.bias: Tensor[(2048), float32], %model.backbone.layer4.0.downsample.1.running_mean: Tensor[(2048), float32], %model.backbone.layer4.0.downsample.1.running_var: Tensor[(2048), float32], %model.backbone.layer4.1.conv1.weight: Tensor[(512, 2048, 1, 1), float32], %model.backbone.layer4.1.bn1.weight: Tensor[(512), float32], %model.backbone.layer4.1.bn1.bias: Tensor[(512), float32], %model.backbone.layer4.1.bn1.running_mean: Tensor[(512), float32], %model.backbone.layer4.1.bn1.running_var: Tensor[(512), float32], %model.backbone.layer4.1.conv2.weight: Tensor[(512, 512, 3, 3), float32], %model.backbone.layer4.1.bn2.weight: Tensor[(512), float32], %model.backbone.layer4.1.bn2.bias: Tensor[(512), float32], %model.backbone.layer4.1.bn2.running_mean: Tensor[(512), float32], %model.backbone.layer4.1.bn2.running_var: Tensor[(512), float32], %model.backbone.layer4.1.conv3.weight: Tensor[(2048, 512, 1, 1), float32], %model.backbone.layer4.1.bn3.weight: Tensor[(2048), float32], %model.backbone.layer4.1.bn3.bias: Tensor[(2048), float32], %model.backbone.layer4.1.bn3.running_mean: Tensor[(2048), float32], %model.backbone.layer4.1.bn3.running_var: Tensor[(2048), float32], %model.backbone.layer4.2.conv1.weight: Tensor[(512, 2048, 1, 1), float32], %model.backbone.layer4.2.bn1.weight: Tensor[(512), float32], %model.backbone.layer4.2.bn1.bias: Tensor[(512), float32], %model.backbone.layer4.2.bn1.running_mean: Tensor[(512), float32], %model.backbone.layer4.2.bn1.running_var: Tensor[(512), float32], %model.backbone.layer4.2.conv2.weight: Tensor[(512, 512, 3, 3), 
float32], %model.backbone.layer4.2.bn2.weight: Tensor[(512), float32], %model.backbone.layer4.2.bn2.bias: Tensor[(512), float32], %model.backbone.layer4.2.bn2.running_mean: Tensor[(512), float32], %model.backbone.layer4.2.bn2.running_var: Tensor[(512), float32], %model.backbone.layer4.2.conv3.weight: Tensor[(2048, 512, 1, 1), float32], %model.backbone.layer4.2.bn3.weight: Tensor[(2048), float32], %model.backbone.layer4.2.bn3.bias: Tensor[(2048), float32], %model.backbone.layer4.2.bn3.running_mean: Tensor[(2048), float32], %model.backbone.layer4.2.bn3.running_var: Tensor[(2048), float32], %model.aux_classifier.0.weight: Tensor[(256, 1024, 3, 3), float32], %model.aux_classifier.1.weight: Tensor[(256), float32], %model.aux_classifier.1.bias: Tensor[(256), float32], %model.aux_classifier.1.running_mean: Tensor[(256), float32], %model.aux_classifier.1.running_var: Tensor[(256), float32], %model.aux_classifier.4.weight: Tensor[(21, 256, 1, 1), float32], %model.aux_classifier.4.bias: Tensor[(21), float32], %model.classifier.0.convs.0.0.weight: Tensor[(256, 2048, 1, 1), float32], %model.classifier.0.convs.0.1.weight: Tensor[(256), float32], %model.classifier.0.convs.0.1.bias: Tensor[(256), float32], %model.classifier.0.convs.0.1.running_mean: Tensor[(256), float32], %model.classifier.0.convs.0.1.running_var: Tensor[(256), float32], %model.classifier.0.convs.1.0.weight: Tensor[(256, 2048, 3, 3), float32], %model.classifier.0.convs.1.1.weight: Tensor[(256), float32], %model.classifier.0.convs.1.1.bias: Tensor[(256), float32], %model.classifier.0.convs.1.1.running_mean: Tensor[(256), float32], %model.classifier.0.convs.1.1.running_var: Tensor[(256), float32], %model.classifier.0.convs.2.0.weight: Tensor[(256, 2048, 3, 3), float32], %model.classifier.0.convs.2.1.weight: Tensor[(256), float32], %model.classifier.0.convs.2.1.bias: Tensor[(256), float32], %model.classifier.0.convs.2.1.running_mean: Tensor[(256), float32], %model.classifier.0.convs.2.1.running_var: Tensor[(256), float32], %model.classifier.0.convs.3.0.weight: Tensor[(256, 2048, 3, 3), float32], %model.classifier.0.convs.3.1.weight: Tensor[(256), float32], %model.classifier.0.convs.3.1.bias: Tensor[(256), float32], %model.classifier.0.convs.3.1.running_mean: Tensor[(256), float32], %model.classifier.0.convs.3.1.running_var: Tensor[(256), float32], %model.classifier.0.convs.4.1.weight: Tensor[(256, 2048, 1, 1), float32], %model.classifier.0.convs.4.2.weight: Tensor[(256), float32], %model.classifier.0.convs.4.2.bias: Tensor[(256), float32], %model.classifier.0.convs.4.2.running_mean: Tensor[(256), float32], %model.classifier.0.convs.4.2.running_var: Tensor[(256), float32], %model.classifier.0.project.0.weight: Tensor[(256, 1280, 1, 1), float32], %model.classifier.0.project.1.weight: Tensor[(256), float32], %model.classifier.0.project.1.bias: Tensor[(256), float32], %model.classifier.0.project.1.running_mean: Tensor[(256), float32], %model.classifier.0.project.1.running_var: Tensor[(256), float32], %model.classifier.1.weight: Tensor[(256, 256, 3, 3), float32], %model.classifier.2.weight: Tensor[(256), float32], %model.classifier.2.bias: Tensor[(256), float32], %model.classifier.2.running_mean: Tensor[(256), float32], %model.classifier.2.running_var: Tensor[(256), float32], %model.classifier.4.weight: Tensor[(21, 256, 1, 1), float32], %model.classifier.4.bias: Tensor[(21), float32]) -> (Tensor[(1, 21, 224, 224), float32], Tensor[(1, 21, 224, 224), float32]) { %0 = nn.conv2d(%data, %model.backbone.conv1.weight, strides=[2, 2], padding=[3, 3, 
3, 3], channels=64, kernel_size=[7, 7]) /* ty=Tensor[(1, 64, 112, 112), float32] */; %1 = nn.batch_norm(%0, %model.backbone.bn1.weight, %model.backbone.bn1.bias, %model.backbone.bn1.running_mean, %model.backbone.bn1.running_var) /* ty=(Tensor[(1, 64, 112, 112), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %2 = %1.0; %3 = nn.relu(%2) /* ty=Tensor[(1, 64, 112, 112), float32] */; %4 = nn.max_pool2d(%3, pool_size=[3, 3], strides=[2, 2], padding=[1, 1, 1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %5 = nn.conv2d(%4, %model.backbone.layer1.0.conv1.weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %6 = nn.batch_norm(%5, %model.backbone.layer1.0.bn1.weight, %model.backbone.layer1.0.bn1.bias, %model.backbone.layer1.0.bn1.running_mean, %model.backbone.layer1.0.bn1.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %7 = %6.0; %8 = nn.relu(%7) /* ty=Tensor[(1, 64, 56, 56), float32] */; %9 = nn.conv2d(%8, %model.backbone.layer1.0.conv2.weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %10 = nn.batch_norm(%9, %model.backbone.layer1.0.bn2.weight, %model.backbone.layer1.0.bn2.bias, %model.backbone.layer1.0.bn2.running_mean, %model.backbone.layer1.0.bn2.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %11 = %10.0; %12 = nn.relu(%11) /* ty=Tensor[(1, 64, 56, 56), float32] */; %13 = nn.conv2d(%12, %model.backbone.layer1.0.conv3.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */; %14 = nn.batch_norm(%13, %model.backbone.layer1.0.bn3.weight, %model.backbone.layer1.0.bn3.bias, %model.backbone.layer1.0.bn3.running_mean, %model.backbone.layer1.0.bn3.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %15 = %14.0; %16 = nn.conv2d(%4, %model.backbone.layer1.0.downsample.0.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */; %17 = nn.batch_norm(%16, %model.backbone.layer1.0.downsample.1.weight, %model.backbone.layer1.0.downsample.1.bias, %model.backbone.layer1.0.downsample.1.running_mean, %model.backbone.layer1.0.downsample.1.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %18 = %17.0; %19 = add(%15, %18) /* ty=Tensor[(1, 256, 56, 56), float32] */; %20 = nn.relu(%19) /* ty=Tensor[(1, 256, 56, 56), float32] */; %21 = nn.conv2d(%20, %model.backbone.layer1.1.conv1.weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %22 = nn.batch_norm(%21, %model.backbone.layer1.1.bn1.weight, %model.backbone.layer1.1.bn1.bias, %model.backbone.layer1.1.bn1.running_mean, %model.backbone.layer1.1.bn1.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %23 = %22.0; %24 = nn.relu(%23) /* ty=Tensor[(1, 64, 56, 56), float32] */; %25 = nn.conv2d(%24, %model.backbone.layer1.1.conv2.weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %26 = nn.batch_norm(%25, %model.backbone.layer1.1.bn2.weight, %model.backbone.layer1.1.bn2.bias, %model.backbone.layer1.1.bn2.running_mean, %model.backbone.layer1.1.bn2.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %27 = %26.0; %28 = nn.relu(%27) 
/* ty=Tensor[(1, 64, 56, 56), float32] */; %29 = nn.conv2d(%28, %model.backbone.layer1.1.conv3.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */; %30 = nn.batch_norm(%29, %model.backbone.layer1.1.bn3.weight, %model.backbone.layer1.1.bn3.bias, %model.backbone.layer1.1.bn3.running_mean, %model.backbone.layer1.1.bn3.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %31 = %30.0; %32 = add(%31, %20) /* ty=Tensor[(1, 256, 56, 56), float32] */; %33 = nn.relu(%32) /* ty=Tensor[(1, 256, 56, 56), float32] */; %34 = nn.conv2d(%33, %model.backbone.layer1.2.conv1.weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %35 = nn.batch_norm(%34, %model.backbone.layer1.2.bn1.weight, %model.backbone.layer1.2.bn1.bias, %model.backbone.layer1.2.bn1.running_mean, %model.backbone.layer1.2.bn1.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %36 = %35.0; %37 = nn.relu(%36) /* ty=Tensor[(1, 64, 56, 56), float32] */; %38 = nn.conv2d(%37, %model.backbone.layer1.2.conv2.weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */; %39 = nn.batch_norm(%38, %model.backbone.layer1.2.bn2.weight, %model.backbone.layer1.2.bn2.bias, %model.backbone.layer1.2.bn2.running_mean, %model.backbone.layer1.2.bn2.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */; %40 = %39.0; %41 = nn.relu(%40) /* ty=Tensor[(1, 64, 56, 56), float32] */; %42 = nn.conv2d(%41, %model.backbone.layer1.2.conv3.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */; %43 = nn.batch_norm(%42, %model.backbone.layer1.2.bn3.weight, %model.backbone.layer1.2.bn3.bias, %model.backbone.layer1.2.bn3.running_mean, %model.backbone.layer1.2.bn3.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %44 = %43.0; %45 = add(%44, %33) /* ty=Tensor[(1, 256, 56, 56), float32] */; %46 = nn.relu(%45) /* ty=Tensor[(1, 256, 56, 56), float32] */; %47 = nn.conv2d(%46, %model.backbone.layer2.0.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 56, 56), float32] */; %48 = nn.batch_norm(%47, %model.backbone.layer2.0.bn1.weight, %model.backbone.layer2.0.bn1.bias, %model.backbone.layer2.0.bn1.running_mean, %model.backbone.layer2.0.bn1.running_var) /* ty=(Tensor[(1, 128, 56, 56), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %49 = %48.0; %50 = nn.relu(%49) /* ty=Tensor[(1, 128, 56, 56), float32] */; %51 = nn.conv2d(%50, %model.backbone.layer2.0.conv2.weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %52 = nn.batch_norm(%51, %model.backbone.layer2.0.bn2.weight, %model.backbone.layer2.0.bn2.bias, %model.backbone.layer2.0.bn2.running_mean, %model.backbone.layer2.0.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %53 = %52.0; %54 = nn.relu(%53) /* ty=Tensor[(1, 128, 28, 28), float32] */; %55 = nn.conv2d(%54, %model.backbone.layer2.0.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %56 = nn.batch_norm(%55, %model.backbone.layer2.0.bn3.weight, %model.backbone.layer2.0.bn3.bias, %model.backbone.layer2.0.bn3.running_mean, 
%model.backbone.layer2.0.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %57 = %56.0; %58 = nn.conv2d(%46, %model.backbone.layer2.0.downsample.0.weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %59 = nn.batch_norm(%58, %model.backbone.layer2.0.downsample.1.weight, %model.backbone.layer2.0.downsample.1.bias, %model.backbone.layer2.0.downsample.1.running_mean, %model.backbone.layer2.0.downsample.1.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %60 = %59.0; %61 = add(%57, %60) /* ty=Tensor[(1, 512, 28, 28), float32] */; %62 = nn.relu(%61) /* ty=Tensor[(1, 512, 28, 28), float32] */; %63 = nn.conv2d(%62, %model.backbone.layer2.1.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %64 = nn.batch_norm(%63, %model.backbone.layer2.1.bn1.weight, %model.backbone.layer2.1.bn1.bias, %model.backbone.layer2.1.bn1.running_mean, %model.backbone.layer2.1.bn1.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %65 = %64.0; %66 = nn.relu(%65) /* ty=Tensor[(1, 128, 28, 28), float32] */; %67 = nn.conv2d(%66, %model.backbone.layer2.1.conv2.weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %68 = nn.batch_norm(%67, %model.backbone.layer2.1.bn2.weight, %model.backbone.layer2.1.bn2.bias, %model.backbone.layer2.1.bn2.running_mean, %model.backbone.layer2.1.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %69 = %68.0; %70 = nn.relu(%69) /* ty=Tensor[(1, 128, 28, 28), float32] */; %71 = nn.conv2d(%70, %model.backbone.layer2.1.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %72 = nn.batch_norm(%71, %model.backbone.layer2.1.bn3.weight, %model.backbone.layer2.1.bn3.bias, %model.backbone.layer2.1.bn3.running_mean, %model.backbone.layer2.1.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %73 = %72.0; %74 = add(%73, %62) /* ty=Tensor[(1, 512, 28, 28), float32] */; %75 = nn.relu(%74) /* ty=Tensor[(1, 512, 28, 28), float32] */; %76 = nn.conv2d(%75, %model.backbone.layer2.2.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %77 = nn.batch_norm(%76, %model.backbone.layer2.2.bn1.weight, %model.backbone.layer2.2.bn1.bias, %model.backbone.layer2.2.bn1.running_mean, %model.backbone.layer2.2.bn1.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %78 = %77.0; %79 = nn.relu(%78) /* ty=Tensor[(1, 128, 28, 28), float32] */; %80 = nn.conv2d(%79, %model.backbone.layer2.2.conv2.weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %81 = nn.batch_norm(%80, %model.backbone.layer2.2.bn2.weight, %model.backbone.layer2.2.bn2.bias, %model.backbone.layer2.2.bn2.running_mean, %model.backbone.layer2.2.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %82 = %81.0; %83 = nn.relu(%82) /* ty=Tensor[(1, 128, 28, 28), float32] */; %84 = nn.conv2d(%83, %model.backbone.layer2.2.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), 
float32] */; %85 = nn.batch_norm(%84, %model.backbone.layer2.2.bn3.weight, %model.backbone.layer2.2.bn3.bias, %model.backbone.layer2.2.bn3.running_mean, %model.backbone.layer2.2.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %86 = %85.0; %87 = add(%86, %75) /* ty=Tensor[(1, 512, 28, 28), float32] */; %88 = nn.relu(%87) /* ty=Tensor[(1, 512, 28, 28), float32] */; %89 = nn.conv2d(%88, %model.backbone.layer2.3.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %90 = nn.batch_norm(%89, %model.backbone.layer2.3.bn1.weight, %model.backbone.layer2.3.bn1.bias, %model.backbone.layer2.3.bn1.running_mean, %model.backbone.layer2.3.bn1.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %91 = %90.0; %92 = nn.relu(%91) /* ty=Tensor[(1, 128, 28, 28), float32] */; %93 = nn.conv2d(%92, %model.backbone.layer2.3.conv2.weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */; %94 = nn.batch_norm(%93, %model.backbone.layer2.3.bn2.weight, %model.backbone.layer2.3.bn2.bias, %model.backbone.layer2.3.bn2.running_mean, %model.backbone.layer2.3.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */; %95 = %94.0; %96 = nn.relu(%95) /* ty=Tensor[(1, 128, 28, 28), float32] */; %97 = nn.conv2d(%96, %model.backbone.layer2.3.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %98 = nn.batch_norm(%97, %model.backbone.layer2.3.bn3.weight, %model.backbone.layer2.3.bn3.bias, %model.backbone.layer2.3.bn3.running_mean, %model.backbone.layer2.3.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %99 = %98.0; %100 = add(%99, %88) /* ty=Tensor[(1, 512, 28, 28), float32] */; %101 = nn.relu(%100) /* ty=Tensor[(1, 512, 28, 28), float32] */; %102 = nn.conv2d(%101, %model.backbone.layer3.0.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %103 = nn.batch_norm(%102, %model.backbone.layer3.0.bn1.weight, %model.backbone.layer3.0.bn1.bias, %model.backbone.layer3.0.bn1.running_mean, %model.backbone.layer3.0.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %104 = %103.0; %105 = nn.relu(%104) /* ty=Tensor[(1, 256, 28, 28), float32] */; %106 = nn.conv2d(%105, %model.backbone.layer3.0.conv2.weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %107 = nn.batch_norm(%106, %model.backbone.layer3.0.bn2.weight, %model.backbone.layer3.0.bn2.bias, %model.backbone.layer3.0.bn2.running_mean, %model.backbone.layer3.0.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %108 = %107.0; %109 = nn.relu(%108) /* ty=Tensor[(1, 256, 28, 28), float32] */; %110 = nn.conv2d(%109, %model.backbone.layer3.0.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %111 = nn.batch_norm(%110, %model.backbone.layer3.0.bn3.weight, %model.backbone.layer3.0.bn3.bias, %model.backbone.layer3.0.bn3.running_mean, %model.backbone.layer3.0.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %112 = %111.0; %113 = nn.conv2d(%101, 
%model.backbone.layer3.0.downsample.0.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %114 = nn.batch_norm(%113, %model.backbone.layer3.0.downsample.1.weight, %model.backbone.layer3.0.downsample.1.bias, %model.backbone.layer3.0.downsample.1.running_mean, %model.backbone.layer3.0.downsample.1.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %115 = %114.0; %116 = add(%112, %115) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %117 = nn.relu(%116) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %118 = nn.conv2d(%117, %model.backbone.layer3.1.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %119 = nn.batch_norm(%118, %model.backbone.layer3.1.bn1.weight, %model.backbone.layer3.1.bn1.bias, %model.backbone.layer3.1.bn1.running_mean, %model.backbone.layer3.1.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %120 = %119.0; %121 = nn.relu(%120) /* ty=Tensor[(1, 256, 28, 28), float32] */; %122 = nn.conv2d(%121, %model.backbone.layer3.1.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %123 = nn.batch_norm(%122, %model.backbone.layer3.1.bn2.weight, %model.backbone.layer3.1.bn2.bias, %model.backbone.layer3.1.bn2.running_mean, %model.backbone.layer3.1.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %124 = %123.0; %125 = nn.relu(%124) /* ty=Tensor[(1, 256, 28, 28), float32] */; %126 = nn.conv2d(%125, %model.backbone.layer3.1.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %127 = nn.batch_norm(%126, %model.backbone.layer3.1.bn3.weight, %model.backbone.layer3.1.bn3.bias, %model.backbone.layer3.1.bn3.running_mean, %model.backbone.layer3.1.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %128 = %127.0; %129 = add(%128, %117) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %130 = nn.relu(%129) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %131 = nn.conv2d(%130, %model.backbone.layer3.2.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %132 = nn.batch_norm(%131, %model.backbone.layer3.2.bn1.weight, %model.backbone.layer3.2.bn1.bias, %model.backbone.layer3.2.bn1.running_mean, %model.backbone.layer3.2.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %133 = %132.0; %134 = nn.relu(%133) /* ty=Tensor[(1, 256, 28, 28), float32] */; %135 = nn.conv2d(%134, %model.backbone.layer3.2.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %136 = nn.batch_norm(%135, %model.backbone.layer3.2.bn2.weight, %model.backbone.layer3.2.bn2.bias, %model.backbone.layer3.2.bn2.running_mean, %model.backbone.layer3.2.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %137 = %136.0; %138 = nn.relu(%137) /* ty=Tensor[(1, 256, 28, 28), float32] */; %139 = nn.conv2d(%138, %model.backbone.layer3.2.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %140 = nn.batch_norm(%139, %model.backbone.layer3.2.bn3.weight, 
%model.backbone.layer3.2.bn3.bias, %model.backbone.layer3.2.bn3.running_mean, %model.backbone.layer3.2.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %141 = %140.0; %142 = add(%141, %130) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %143 = nn.relu(%142) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %144 = nn.conv2d(%143, %model.backbone.layer3.3.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %145 = nn.batch_norm(%144, %model.backbone.layer3.3.bn1.weight, %model.backbone.layer3.3.bn1.bias, %model.backbone.layer3.3.bn1.running_mean, %model.backbone.layer3.3.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %146 = %145.0; %147 = nn.relu(%146) /* ty=Tensor[(1, 256, 28, 28), float32] */; %148 = nn.conv2d(%147, %model.backbone.layer3.3.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %149 = nn.batch_norm(%148, %model.backbone.layer3.3.bn2.weight, %model.backbone.layer3.3.bn2.bias, %model.backbone.layer3.3.bn2.running_mean, %model.backbone.layer3.3.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %150 = %149.0; %151 = nn.relu(%150) /* ty=Tensor[(1, 256, 28, 28), float32] */; %152 = nn.conv2d(%151, %model.backbone.layer3.3.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %153 = nn.batch_norm(%152, %model.backbone.layer3.3.bn3.weight, %model.backbone.layer3.3.bn3.bias, %model.backbone.layer3.3.bn3.running_mean, %model.backbone.layer3.3.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %154 = %153.0; %155 = add(%154, %143) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %156 = nn.relu(%155) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %157 = nn.conv2d(%156, %model.backbone.layer3.4.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %158 = nn.batch_norm(%157, %model.backbone.layer3.4.bn1.weight, %model.backbone.layer3.4.bn1.bias, %model.backbone.layer3.4.bn1.running_mean, %model.backbone.layer3.4.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %159 = %158.0; %160 = nn.relu(%159) /* ty=Tensor[(1, 256, 28, 28), float32] */; %161 = nn.conv2d(%160, %model.backbone.layer3.4.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %162 = nn.batch_norm(%161, %model.backbone.layer3.4.bn2.weight, %model.backbone.layer3.4.bn2.bias, %model.backbone.layer3.4.bn2.running_mean, %model.backbone.layer3.4.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %163 = %162.0; %164 = nn.relu(%163) /* ty=Tensor[(1, 256, 28, 28), float32] */; %165 = nn.conv2d(%164, %model.backbone.layer3.4.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %166 = nn.batch_norm(%165, %model.backbone.layer3.4.bn3.weight, %model.backbone.layer3.4.bn3.bias, %model.backbone.layer3.4.bn3.running_mean, %model.backbone.layer3.4.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %167 = %166.0; %168 = add(%167, 
%156) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %169 = nn.relu(%168) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %170 = nn.conv2d(%169, %model.backbone.layer3.5.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %171 = nn.batch_norm(%170, %model.backbone.layer3.5.bn1.weight, %model.backbone.layer3.5.bn1.bias, %model.backbone.layer3.5.bn1.running_mean, %model.backbone.layer3.5.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %172 = %171.0; %173 = nn.relu(%172) /* ty=Tensor[(1, 256, 28, 28), float32] */; %174 = nn.conv2d(%173, %model.backbone.layer3.5.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %175 = nn.batch_norm(%174, %model.backbone.layer3.5.bn2.weight, %model.backbone.layer3.5.bn2.bias, %model.backbone.layer3.5.bn2.running_mean, %model.backbone.layer3.5.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %176 = %175.0; %177 = nn.relu(%176) /* ty=Tensor[(1, 256, 28, 28), float32] */; %178 = nn.conv2d(%177, %model.backbone.layer3.5.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %179 = nn.batch_norm(%178, %model.backbone.layer3.5.bn3.weight, %model.backbone.layer3.5.bn3.bias, %model.backbone.layer3.5.bn3.running_mean, %model.backbone.layer3.5.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %180 = %179.0; %181 = add(%180, %169) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %182 = nn.relu(%181) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %183 = nn.conv2d(%182, %model.backbone.layer3.6.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %184 = nn.batch_norm(%183, %model.backbone.layer3.6.bn1.weight, %model.backbone.layer3.6.bn1.bias, %model.backbone.layer3.6.bn1.running_mean, %model.backbone.layer3.6.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %185 = %184.0; %186 = nn.relu(%185) /* ty=Tensor[(1, 256, 28, 28), float32] */; %187 = nn.conv2d(%186, %model.backbone.layer3.6.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %188 = nn.batch_norm(%187, %model.backbone.layer3.6.bn2.weight, %model.backbone.layer3.6.bn2.bias, %model.backbone.layer3.6.bn2.running_mean, %model.backbone.layer3.6.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %189 = %188.0; %190 = nn.relu(%189) /* ty=Tensor[(1, 256, 28, 28), float32] */; %191 = nn.conv2d(%190, %model.backbone.layer3.6.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %192 = nn.batch_norm(%191, %model.backbone.layer3.6.bn3.weight, %model.backbone.layer3.6.bn3.bias, %model.backbone.layer3.6.bn3.running_mean, %model.backbone.layer3.6.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %193 = %192.0; %194 = add(%193, %182) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %195 = nn.relu(%194) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %196 = nn.conv2d(%195, %model.backbone.layer3.7.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 
256, 28, 28), float32] */; %197 = nn.batch_norm(%196, %model.backbone.layer3.7.bn1.weight, %model.backbone.layer3.7.bn1.bias, %model.backbone.layer3.7.bn1.running_mean, %model.backbone.layer3.7.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %198 = %197.0; %199 = nn.relu(%198) /* ty=Tensor[(1, 256, 28, 28), float32] */; %200 = nn.conv2d(%199, %model.backbone.layer3.7.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %201 = nn.batch_norm(%200, %model.backbone.layer3.7.bn2.weight, %model.backbone.layer3.7.bn2.bias, %model.backbone.layer3.7.bn2.running_mean, %model.backbone.layer3.7.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %202 = %201.0; %203 = nn.relu(%202) /* ty=Tensor[(1, 256, 28, 28), float32] */; %204 = nn.conv2d(%203, %model.backbone.layer3.7.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %205 = nn.batch_norm(%204, %model.backbone.layer3.7.bn3.weight, %model.backbone.layer3.7.bn3.bias, %model.backbone.layer3.7.bn3.running_mean, %model.backbone.layer3.7.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %206 = %205.0; %207 = add(%206, %195) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %208 = nn.relu(%207) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %209 = nn.conv2d(%208, %model.backbone.layer3.8.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %210 = nn.batch_norm(%209, %model.backbone.layer3.8.bn1.weight, %model.backbone.layer3.8.bn1.bias, %model.backbone.layer3.8.bn1.running_mean, %model.backbone.layer3.8.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %211 = %210.0; %212 = nn.relu(%211) /* ty=Tensor[(1, 256, 28, 28), float32] */; %213 = nn.conv2d(%212, %model.backbone.layer3.8.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %214 = nn.batch_norm(%213, %model.backbone.layer3.8.bn2.weight, %model.backbone.layer3.8.bn2.bias, %model.backbone.layer3.8.bn2.running_mean, %model.backbone.layer3.8.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %215 = %214.0; %216 = nn.relu(%215) /* ty=Tensor[(1, 256, 28, 28), float32] */; %217 = nn.conv2d(%216, %model.backbone.layer3.8.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %218 = nn.batch_norm(%217, %model.backbone.layer3.8.bn3.weight, %model.backbone.layer3.8.bn3.bias, %model.backbone.layer3.8.bn3.running_mean, %model.backbone.layer3.8.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %219 = %218.0; %220 = add(%219, %208) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %221 = nn.relu(%220) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %222 = nn.conv2d(%221, %model.backbone.layer3.9.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %223 = nn.batch_norm(%222, %model.backbone.layer3.9.bn1.weight, %model.backbone.layer3.9.bn1.bias, %model.backbone.layer3.9.bn1.running_mean, %model.backbone.layer3.9.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], 
Tensor[(256), float32], Tensor[(256), float32]) */; %224 = %223.0; %225 = nn.relu(%224) /* ty=Tensor[(1, 256, 28, 28), float32] */; %226 = nn.conv2d(%225, %model.backbone.layer3.9.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %227 = nn.batch_norm(%226, %model.backbone.layer3.9.bn2.weight, %model.backbone.layer3.9.bn2.bias, %model.backbone.layer3.9.bn2.running_mean, %model.backbone.layer3.9.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %228 = %227.0; %229 = nn.relu(%228) /* ty=Tensor[(1, 256, 28, 28), float32] */; %230 = nn.conv2d(%229, %model.backbone.layer3.9.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %231 = nn.batch_norm(%230, %model.backbone.layer3.9.bn3.weight, %model.backbone.layer3.9.bn3.bias, %model.backbone.layer3.9.bn3.running_mean, %model.backbone.layer3.9.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %232 = %231.0; %233 = add(%232, %221) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %234 = nn.relu(%233) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %235 = nn.conv2d(%234, %model.backbone.layer3.10.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %236 = nn.batch_norm(%235, %model.backbone.layer3.10.bn1.weight, %model.backbone.layer3.10.bn1.bias, %model.backbone.layer3.10.bn1.running_mean, %model.backbone.layer3.10.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %237 = %236.0; %238 = nn.relu(%237) /* ty=Tensor[(1, 256, 28, 28), float32] */; %239 = nn.conv2d(%238, %model.backbone.layer3.10.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %240 = nn.batch_norm(%239, %model.backbone.layer3.10.bn2.weight, %model.backbone.layer3.10.bn2.bias, %model.backbone.layer3.10.bn2.running_mean, %model.backbone.layer3.10.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %241 = %240.0; %242 = nn.relu(%241) /* ty=Tensor[(1, 256, 28, 28), float32] */; %243 = nn.conv2d(%242, %model.backbone.layer3.10.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %244 = nn.batch_norm(%243, %model.backbone.layer3.10.bn3.weight, %model.backbone.layer3.10.bn3.bias, %model.backbone.layer3.10.bn3.running_mean, %model.backbone.layer3.10.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %245 = %244.0; %246 = add(%245, %234) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %247 = nn.relu(%246) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %248 = nn.conv2d(%247, %model.backbone.layer3.11.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %249 = nn.batch_norm(%248, %model.backbone.layer3.11.bn1.weight, %model.backbone.layer3.11.bn1.bias, %model.backbone.layer3.11.bn1.running_mean, %model.backbone.layer3.11.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %250 = %249.0; %251 = nn.relu(%250) /* ty=Tensor[(1, 256, 28, 28), float32] */; %252 = nn.conv2d(%251, %model.backbone.layer3.11.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 
2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %253 = nn.batch_norm(%252, %model.backbone.layer3.11.bn2.weight, %model.backbone.layer3.11.bn2.bias, %model.backbone.layer3.11.bn2.running_mean, %model.backbone.layer3.11.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %254 = %253.0; %255 = nn.relu(%254) /* ty=Tensor[(1, 256, 28, 28), float32] */; %256 = nn.conv2d(%255, %model.backbone.layer3.11.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %257 = nn.batch_norm(%256, %model.backbone.layer3.11.bn3.weight, %model.backbone.layer3.11.bn3.bias, %model.backbone.layer3.11.bn3.running_mean, %model.backbone.layer3.11.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %258 = %257.0; %259 = add(%258, %247) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %260 = nn.relu(%259) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %261 = nn.conv2d(%260, %model.backbone.layer3.12.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %262 = nn.batch_norm(%261, %model.backbone.layer3.12.bn1.weight, %model.backbone.layer3.12.bn1.bias, %model.backbone.layer3.12.bn1.running_mean, %model.backbone.layer3.12.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %263 = %262.0; %264 = nn.relu(%263) /* ty=Tensor[(1, 256, 28, 28), float32] */; %265 = nn.conv2d(%264, %model.backbone.layer3.12.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %266 = nn.batch_norm(%265, %model.backbone.layer3.12.bn2.weight, %model.backbone.layer3.12.bn2.bias, %model.backbone.layer3.12.bn2.running_mean, %model.backbone.layer3.12.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %267 = %266.0; %268 = nn.relu(%267) /* ty=Tensor[(1, 256, 28, 28), float32] */; %269 = nn.conv2d(%268, %model.backbone.layer3.12.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %270 = nn.batch_norm(%269, %model.backbone.layer3.12.bn3.weight, %model.backbone.layer3.12.bn3.bias, %model.backbone.layer3.12.bn3.running_mean, %model.backbone.layer3.12.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %271 = %270.0; %272 = add(%271, %260) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %273 = nn.relu(%272) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %274 = nn.conv2d(%273, %model.backbone.layer3.13.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %275 = nn.batch_norm(%274, %model.backbone.layer3.13.bn1.weight, %model.backbone.layer3.13.bn1.bias, %model.backbone.layer3.13.bn1.running_mean, %model.backbone.layer3.13.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %276 = %275.0; %277 = nn.relu(%276) /* ty=Tensor[(1, 256, 28, 28), float32] */; %278 = nn.conv2d(%277, %model.backbone.layer3.13.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %279 = nn.batch_norm(%278, %model.backbone.layer3.13.bn2.weight, %model.backbone.layer3.13.bn2.bias, 
%model.backbone.layer3.13.bn2.running_mean, %model.backbone.layer3.13.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %280 = %279.0; %281 = nn.relu(%280) /* ty=Tensor[(1, 256, 28, 28), float32] */; %282 = nn.conv2d(%281, %model.backbone.layer3.13.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %283 = nn.batch_norm(%282, %model.backbone.layer3.13.bn3.weight, %model.backbone.layer3.13.bn3.bias, %model.backbone.layer3.13.bn3.running_mean, %model.backbone.layer3.13.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %284 = %283.0; %285 = add(%284, %273) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %286 = nn.relu(%285) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %287 = nn.conv2d(%286, %model.backbone.layer3.14.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %288 = nn.batch_norm(%287, %model.backbone.layer3.14.bn1.weight, %model.backbone.layer3.14.bn1.bias, %model.backbone.layer3.14.bn1.running_mean, %model.backbone.layer3.14.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %289 = %288.0; %290 = nn.relu(%289) /* ty=Tensor[(1, 256, 28, 28), float32] */; %291 = nn.conv2d(%290, %model.backbone.layer3.14.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %292 = nn.batch_norm(%291, %model.backbone.layer3.14.bn2.weight, %model.backbone.layer3.14.bn2.bias, %model.backbone.layer3.14.bn2.running_mean, %model.backbone.layer3.14.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %293 = %292.0; %294 = nn.relu(%293) /* ty=Tensor[(1, 256, 28, 28), float32] */; %295 = nn.conv2d(%294, %model.backbone.layer3.14.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %296 = nn.batch_norm(%295, %model.backbone.layer3.14.bn3.weight, %model.backbone.layer3.14.bn3.bias, %model.backbone.layer3.14.bn3.running_mean, %model.backbone.layer3.14.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %297 = %296.0; %298 = add(%297, %286) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %299 = nn.relu(%298) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %300 = nn.conv2d(%299, %model.backbone.layer3.15.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %301 = nn.batch_norm(%300, %model.backbone.layer3.15.bn1.weight, %model.backbone.layer3.15.bn1.bias, %model.backbone.layer3.15.bn1.running_mean, %model.backbone.layer3.15.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %302 = %301.0; %303 = nn.relu(%302) /* ty=Tensor[(1, 256, 28, 28), float32] */; %304 = nn.conv2d(%303, %model.backbone.layer3.15.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %305 = nn.batch_norm(%304, %model.backbone.layer3.15.bn2.weight, %model.backbone.layer3.15.bn2.bias, %model.backbone.layer3.15.bn2.running_mean, %model.backbone.layer3.15.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %306 = %305.0; %307 = nn.relu(%306) /* 
ty=Tensor[(1, 256, 28, 28), float32] */; %308 = nn.conv2d(%307, %model.backbone.layer3.15.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %309 = nn.batch_norm(%308, %model.backbone.layer3.15.bn3.weight, %model.backbone.layer3.15.bn3.bias, %model.backbone.layer3.15.bn3.running_mean, %model.backbone.layer3.15.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %310 = %309.0; %311 = add(%310, %299) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %312 = nn.relu(%311) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %313 = nn.conv2d(%312, %model.backbone.layer3.16.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %314 = nn.batch_norm(%313, %model.backbone.layer3.16.bn1.weight, %model.backbone.layer3.16.bn1.bias, %model.backbone.layer3.16.bn1.running_mean, %model.backbone.layer3.16.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %315 = %314.0; %316 = nn.relu(%315) /* ty=Tensor[(1, 256, 28, 28), float32] */; %317 = nn.conv2d(%316, %model.backbone.layer3.16.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %318 = nn.batch_norm(%317, %model.backbone.layer3.16.bn2.weight, %model.backbone.layer3.16.bn2.bias, %model.backbone.layer3.16.bn2.running_mean, %model.backbone.layer3.16.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %319 = %318.0; %320 = nn.relu(%319) /* ty=Tensor[(1, 256, 28, 28), float32] */; %321 = nn.conv2d(%320, %model.backbone.layer3.16.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %322 = nn.batch_norm(%321, %model.backbone.layer3.16.bn3.weight, %model.backbone.layer3.16.bn3.bias, %model.backbone.layer3.16.bn3.running_mean, %model.backbone.layer3.16.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %323 = %322.0; %324 = add(%323, %312) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %325 = nn.relu(%324) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %326 = nn.conv2d(%325, %model.backbone.layer3.17.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %327 = nn.batch_norm(%326, %model.backbone.layer3.17.bn1.weight, %model.backbone.layer3.17.bn1.bias, %model.backbone.layer3.17.bn1.running_mean, %model.backbone.layer3.17.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %328 = %327.0; %329 = nn.relu(%328) /* ty=Tensor[(1, 256, 28, 28), float32] */; %330 = nn.conv2d(%329, %model.backbone.layer3.17.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %331 = nn.batch_norm(%330, %model.backbone.layer3.17.bn2.weight, %model.backbone.layer3.17.bn2.bias, %model.backbone.layer3.17.bn2.running_mean, %model.backbone.layer3.17.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %332 = %331.0; %333 = nn.relu(%332) /* ty=Tensor[(1, 256, 28, 28), float32] */; %334 = nn.conv2d(%333, %model.backbone.layer3.17.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %335 = 
nn.batch_norm(%334, %model.backbone.layer3.17.bn3.weight, %model.backbone.layer3.17.bn3.bias, %model.backbone.layer3.17.bn3.running_mean, %model.backbone.layer3.17.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %336 = %335.0; %337 = add(%336, %325) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %338 = nn.relu(%337) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %339 = nn.conv2d(%338, %model.backbone.layer3.18.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %340 = nn.batch_norm(%339, %model.backbone.layer3.18.bn1.weight, %model.backbone.layer3.18.bn1.bias, %model.backbone.layer3.18.bn1.running_mean, %model.backbone.layer3.18.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %341 = %340.0; %342 = nn.relu(%341) /* ty=Tensor[(1, 256, 28, 28), float32] */; %343 = nn.conv2d(%342, %model.backbone.layer3.18.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %344 = nn.batch_norm(%343, %model.backbone.layer3.18.bn2.weight, %model.backbone.layer3.18.bn2.bias, %model.backbone.layer3.18.bn2.running_mean, %model.backbone.layer3.18.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %345 = %344.0; %346 = nn.relu(%345) /* ty=Tensor[(1, 256, 28, 28), float32] */; %347 = nn.conv2d(%346, %model.backbone.layer3.18.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %348 = nn.batch_norm(%347, %model.backbone.layer3.18.bn3.weight, %model.backbone.layer3.18.bn3.bias, %model.backbone.layer3.18.bn3.running_mean, %model.backbone.layer3.18.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %349 = %348.0; %350 = add(%349, %338) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %351 = nn.relu(%350) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %352 = nn.conv2d(%351, %model.backbone.layer3.19.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %353 = nn.batch_norm(%352, %model.backbone.layer3.19.bn1.weight, %model.backbone.layer3.19.bn1.bias, %model.backbone.layer3.19.bn1.running_mean, %model.backbone.layer3.19.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %354 = %353.0; %355 = nn.relu(%354) /* ty=Tensor[(1, 256, 28, 28), float32] */; %356 = nn.conv2d(%355, %model.backbone.layer3.19.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %357 = nn.batch_norm(%356, %model.backbone.layer3.19.bn2.weight, %model.backbone.layer3.19.bn2.bias, %model.backbone.layer3.19.bn2.running_mean, %model.backbone.layer3.19.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %358 = %357.0; %359 = nn.relu(%358) /* ty=Tensor[(1, 256, 28, 28), float32] */; %360 = nn.conv2d(%359, %model.backbone.layer3.19.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %361 = nn.batch_norm(%360, %model.backbone.layer3.19.bn3.weight, %model.backbone.layer3.19.bn3.bias, %model.backbone.layer3.19.bn3.running_mean, %model.backbone.layer3.19.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), 
float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %362 = %361.0; %363 = add(%362, %351) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %364 = nn.relu(%363) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %365 = nn.conv2d(%364, %model.backbone.layer3.20.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %366 = nn.batch_norm(%365, %model.backbone.layer3.20.bn1.weight, %model.backbone.layer3.20.bn1.bias, %model.backbone.layer3.20.bn1.running_mean, %model.backbone.layer3.20.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %367 = %366.0; %368 = nn.relu(%367) /* ty=Tensor[(1, 256, 28, 28), float32] */; %369 = nn.conv2d(%368, %model.backbone.layer3.20.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %370 = nn.batch_norm(%369, %model.backbone.layer3.20.bn2.weight, %model.backbone.layer3.20.bn2.bias, %model.backbone.layer3.20.bn2.running_mean, %model.backbone.layer3.20.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %371 = %370.0; %372 = nn.relu(%371) /* ty=Tensor[(1, 256, 28, 28), float32] */; %373 = nn.conv2d(%372, %model.backbone.layer3.20.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %374 = nn.batch_norm(%373, %model.backbone.layer3.20.bn3.weight, %model.backbone.layer3.20.bn3.bias, %model.backbone.layer3.20.bn3.running_mean, %model.backbone.layer3.20.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %375 = %374.0; %376 = add(%375, %364) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %377 = nn.relu(%376) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %378 = nn.conv2d(%377, %model.backbone.layer3.21.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %379 = nn.batch_norm(%378, %model.backbone.layer3.21.bn1.weight, %model.backbone.layer3.21.bn1.bias, %model.backbone.layer3.21.bn1.running_mean, %model.backbone.layer3.21.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %380 = %379.0; %381 = nn.relu(%380) /* ty=Tensor[(1, 256, 28, 28), float32] */; %382 = nn.conv2d(%381, %model.backbone.layer3.21.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %383 = nn.batch_norm(%382, %model.backbone.layer3.21.bn2.weight, %model.backbone.layer3.21.bn2.bias, %model.backbone.layer3.21.bn2.running_mean, %model.backbone.layer3.21.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %384 = %383.0; %385 = nn.relu(%384) /* ty=Tensor[(1, 256, 28, 28), float32] */; %386 = nn.conv2d(%385, %model.backbone.layer3.21.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %387 = nn.batch_norm(%386, %model.backbone.layer3.21.bn3.weight, %model.backbone.layer3.21.bn3.bias, %model.backbone.layer3.21.bn3.running_mean, %model.backbone.layer3.21.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %388 = %387.0; %389 = add(%388, %377) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %390 = nn.relu(%389) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %391 = 
nn.conv2d(%390, %model.backbone.layer3.22.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %392 = nn.batch_norm(%391, %model.backbone.layer3.22.bn1.weight, %model.backbone.layer3.22.bn1.bias, %model.backbone.layer3.22.bn1.running_mean, %model.backbone.layer3.22.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %393 = %392.0; %394 = nn.relu(%393) /* ty=Tensor[(1, 256, 28, 28), float32] */; %395 = nn.conv2d(%394, %model.backbone.layer3.22.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %396 = nn.batch_norm(%395, %model.backbone.layer3.22.bn2.weight, %model.backbone.layer3.22.bn2.bias, %model.backbone.layer3.22.bn2.running_mean, %model.backbone.layer3.22.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %397 = %396.0; %398 = nn.relu(%397) /* ty=Tensor[(1, 256, 28, 28), float32] */; %399 = nn.conv2d(%398, %model.backbone.layer3.22.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %400 = nn.batch_norm(%399, %model.backbone.layer3.22.bn3.weight, %model.backbone.layer3.22.bn3.bias, %model.backbone.layer3.22.bn3.running_mean, %model.backbone.layer3.22.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */; %401 = %400.0; %402 = add(%401, %390) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %403 = nn.relu(%402) /* ty=Tensor[(1, 1024, 28, 28), float32] */; %404 = nn.conv2d(%403, %model.backbone.layer4.0.conv1.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %405 = nn.batch_norm(%404, %model.backbone.layer4.0.bn1.weight, %model.backbone.layer4.0.bn1.bias, %model.backbone.layer4.0.bn1.running_mean, %model.backbone.layer4.0.bn1.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %406 = %405.0; %407 = nn.relu(%406) /* ty=Tensor[(1, 512, 28, 28), float32] */; %408 = nn.conv2d(%407, %model.backbone.layer4.0.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %409 = nn.batch_norm(%408, %model.backbone.layer4.0.bn2.weight, %model.backbone.layer4.0.bn2.bias, %model.backbone.layer4.0.bn2.running_mean, %model.backbone.layer4.0.bn2.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %410 = %409.0; %411 = nn.relu(%410) /* ty=Tensor[(1, 512, 28, 28), float32] */; %412 = nn.conv2d(%411, %model.backbone.layer4.0.conv3.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %413 = nn.batch_norm(%412, %model.backbone.layer4.0.bn3.weight, %model.backbone.layer4.0.bn3.bias, %model.backbone.layer4.0.bn3.running_mean, %model.backbone.layer4.0.bn3.running_var) /* ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) */; %414 = %413.0; %415 = nn.conv2d(%403, %model.backbone.layer4.0.downsample.0.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %416 = nn.batch_norm(%415, %model.backbone.layer4.0.downsample.1.weight, %model.backbone.layer4.0.downsample.1.bias, %model.backbone.layer4.0.downsample.1.running_mean, 
%model.backbone.layer4.0.downsample.1.running_var) /* ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) */; %417 = %416.0; %418 = add(%414, %417) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %419 = nn.relu(%418) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %420 = nn.conv2d(%419, %model.backbone.layer4.1.conv1.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %421 = nn.batch_norm(%420, %model.backbone.layer4.1.bn1.weight, %model.backbone.layer4.1.bn1.bias, %model.backbone.layer4.1.bn1.running_mean, %model.backbone.layer4.1.bn1.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %422 = %421.0; %423 = nn.relu(%422) /* ty=Tensor[(1, 512, 28, 28), float32] */; %424 = nn.conv2d(%423, %model.backbone.layer4.1.conv2.weight, padding=[4, 4, 4, 4], dilation=[4, 4], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %425 = nn.batch_norm(%424, %model.backbone.layer4.1.bn2.weight, %model.backbone.layer4.1.bn2.bias, %model.backbone.layer4.1.bn2.running_mean, %model.backbone.layer4.1.bn2.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %426 = %425.0; %427 = nn.relu(%426) /* ty=Tensor[(1, 512, 28, 28), float32] */; %428 = nn.conv2d(%427, %model.backbone.layer4.1.conv3.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %429 = nn.batch_norm(%428, %model.backbone.layer4.1.bn3.weight, %model.backbone.layer4.1.bn3.bias, %model.backbone.layer4.1.bn3.running_mean, %model.backbone.layer4.1.bn3.running_var) /* ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) */; %430 = %429.0; %431 = add(%430, %419) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %432 = nn.relu(%431) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %433 = nn.conv2d(%432, %model.backbone.layer4.2.conv1.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %434 = nn.batch_norm(%433, %model.backbone.layer4.2.bn1.weight, %model.backbone.layer4.2.bn1.bias, %model.backbone.layer4.2.bn1.running_mean, %model.backbone.layer4.2.bn1.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %435 = %434.0; %436 = nn.relu(%435) /* ty=Tensor[(1, 512, 28, 28), float32] */; %437 = nn.conv2d(%436, %model.backbone.layer4.2.conv2.weight, padding=[4, 4, 4, 4], dilation=[4, 4], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 28, 28), float32] */; %438 = nn.batch_norm(%437, %model.backbone.layer4.2.bn2.weight, %model.backbone.layer4.2.bn2.bias, %model.backbone.layer4.2.bn2.running_mean, %model.backbone.layer4.2.bn2.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */; %439 = %438.0; %440 = nn.relu(%439) /* ty=Tensor[(1, 512, 28, 28), float32] */; %441 = nn.conv2d(%440, %model.backbone.layer4.2.conv3.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %442 = nn.batch_norm(%441, %model.backbone.layer4.2.bn3.weight, %model.backbone.layer4.2.bn3.bias, %model.backbone.layer4.2.bn3.running_mean, %model.backbone.layer4.2.bn3.running_var) /* ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) */; %443 = %442.0; %444 = add(%443, %432) /* ty=Tensor[(1, 2048, 28, 28), float32] */; %445 = nn.relu(%444) 
/* ty=Tensor[(1, 2048, 28, 28), float32] */; %446 = (%445, %403); %447 = %446.1; %448 = nn.conv2d(%447, %model.aux_classifier.0.weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %449 = nn.batch_norm(%448, %model.aux_classifier.1.weight, %model.aux_classifier.1.bias, %model.aux_classifier.1.running_mean, %model.aux_classifier.1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %450 = %449.0; %451 = nn.relu(%450) /* ty=Tensor[(1, 256, 28, 28), float32] */; %452 = nn.dropout(%451, rate=0.1f) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(1, 256, 28, 28), float32]) */; %453 = %452.0; %454 = nn.conv2d(%453, %model.aux_classifier.4.weight, padding=[0, 0, 0, 0], channels=21, kernel_size=[1, 1]) /* ty=Tensor[(1, 21, 28, 28), float32] */; %455 = nn.bias_add(%454, %model.aux_classifier.4.bias) /* ty=Tensor[(1, 21, 28, 28), float32] */; %456 = image.resize(%455, size=[224, 224]) /* ty=Tensor[(1, 21, 224, 224), float32] */; %457 = %446.0; %458 = nn.conv2d(%457, %model.classifier.0.convs.0.0.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %459 = nn.batch_norm(%458, %model.classifier.0.convs.0.1.weight, %model.classifier.0.convs.0.1.bias, %model.classifier.0.convs.0.1.running_mean, %model.classifier.0.convs.0.1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %460 = %459.0; %461 = nn.relu(%460) /* ty=Tensor[(1, 256, 28, 28), float32] */; %462 = nn.conv2d(%457, %model.classifier.0.convs.1.0.weight, padding=[12, 12, 12, 12], dilation=[12, 12], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %463 = nn.batch_norm(%462, %model.classifier.0.convs.1.1.weight, %model.classifier.0.convs.1.1.bias, %model.classifier.0.convs.1.1.running_mean, %model.classifier.0.convs.1.1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %464 = %463.0; %465 = nn.relu(%464) /* ty=Tensor[(1, 256, 28, 28), float32] */; %466 = nn.conv2d(%457, %model.classifier.0.convs.2.0.weight, padding=[24, 24, 24, 24], dilation=[24, 24], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %467 = nn.batch_norm(%466, %model.classifier.0.convs.2.1.weight, %model.classifier.0.convs.2.1.bias, %model.classifier.0.convs.2.1.running_mean, %model.classifier.0.convs.2.1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %468 = %467.0; %469 = nn.relu(%468) /* ty=Tensor[(1, 256, 28, 28), float32] */; %470 = nn.conv2d(%457, %model.classifier.0.convs.3.0.weight, padding=[36, 36, 36, 36], dilation=[36, 36], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %471 = nn.batch_norm(%470, %model.classifier.0.convs.3.1.weight, %model.classifier.0.convs.3.1.bias, %model.classifier.0.convs.3.1.running_mean, %model.classifier.0.convs.3.1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %472 = %471.0; %473 = nn.relu(%472) /* ty=Tensor[(1, 256, 28, 28), float32] */; %474 = nn.adaptive_avg_pool2d(%457, output_size=[1, 1]) /* ty=Tensor[(1, 2048, 1, 1), float32] */; %475 = nn.conv2d(%474, %model.classifier.0.convs.4.1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 1, 1), float32] */; %476 = nn.batch_norm(%475, %model.classifier.0.convs.4.2.weight, 
%model.classifier.0.convs.4.2.bias, %model.classifier.0.convs.4.2.running_mean, %model.classifier.0.convs.4.2.running_var) /* ty=(Tensor[(1, 256, 1, 1), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %477 = %476.0; %478 = nn.relu(%477) /* ty=Tensor[(1, 256, 1, 1), float32] */; %479 = image.resize(%478, size=[28, 28]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %480 = (%461, %465, %469, %473, %479); %481 = concatenate(%480, axis=1) /* ty=Tensor[(1, 1280, 28, 28), float32] */; %482 = nn.conv2d(%481, %model.classifier.0.project.0.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %483 = nn.batch_norm(%482, %model.classifier.0.project.1.weight, %model.classifier.0.project.1.bias, %model.classifier.0.project.1.running_mean, %model.classifier.0.project.1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %484 = %483.0; %485 = nn.relu(%484) /* ty=Tensor[(1, 256, 28, 28), float32] */; %486 = nn.dropout(%485) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(1, 256, 28, 28), float32]) */; %487 = %486.0; %488 = nn.conv2d(%487, %model.classifier.1.weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */; %489 = nn.batch_norm(%488, %model.classifier.2.weight, %model.classifier.2.bias, %model.classifier.2.running_mean, %model.classifier.2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */; %490 = %489.0; %491 = nn.relu(%490) /* ty=Tensor[(1, 256, 28, 28), float32] */; %492 = nn.conv2d(%491, %model.classifier.4.weight, padding=[0, 0, 0, 0], channels=21, kernel_size=[1, 1]) /* ty=Tensor[(1, 21, 28, 28), float32] */; %493 = nn.bias_add(%492, %model.classifier.4.bias) /* ty=Tensor[(1, 21, 28, 28), float32] */; %494 = image.resize(%493, size=[224, 224]) /* ty=Tensor[(1, 21, 224, 224), float32] */; %495 = (%456, %494); %496 = %495.0; %497 = %495.1; (%496, %497) }

Please let me know if anything else is needed.
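
For reference, a quick way to cross-check the shapes feeding each concatenate in this module (a minimal sketch using plain TVM Relay APIs; show_concat_shapes is just a throwaway helper name, and mod is assumed to be the module right before the annotation call):

import tvm
from tvm import relay

# Throwaway helper (not part of the failing script): print the input shapes of every
# concatenate in the typed module, so they can be compared with the values the
# pyxir concat check reports.
def show_concat_shapes(mod):
    mod = relay.transform.InferType()(mod)
    def visit(expr):
        if isinstance(expr, relay.Call) and isinstance(expr.op, tvm.ir.Op) \
                and expr.op.name == "concatenate":
            tup_ty = expr.args[0].checked_type  # TupleType holding the concatenated tensors
            shapes = [tuple(int(d) for d in t.shape) for t in tup_ty.fields]
            print("concatenate axis:", int(expr.attrs.axis), "input shapes:", shapes)
    relay.analysis.post_order_visit(mod["main"], visit)

show_concat_shapes(mod)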

@abdulazizm

abdulazizm commented Mar 22, 2021

%0 = nn.conv2d(%data, %model.backbone.conv1.weight, strides=[2, 2], padding=[3, 3, 3, 3], channels=64, kernel_size=[7, 7]) /* ty=Tensor[(1, 64, 112, 112), float32] */;
%1 = nn.batch_norm(%0, %model.backbone.bn1.weight, %model.backbone.bn1.bias, %model.backbone.bn1.running_mean, %model.backbone.bn1.running_var) /* ty=(Tensor[(1, 64, 112, 112), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%2 = %1.0;
%3 = nn.relu(%2) /* ty=Tensor[(1, 64, 112, 112), float32] */;
%4 = nn.max_pool2d(%3, pool_size=[3, 3], strides=[2, 2], padding=[1, 1, 1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%5 = nn.conv2d(%4, %model.backbone.layer1.0.conv1.weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%6 = nn.batch_norm(%5, %model.backbone.layer1.0.bn1.weight, %model.backbone.layer1.0.bn1.bias, %model.backbone.layer1.0.bn1.running_mean, %model.backbone.layer1.0.bn1.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%7 = %6.0;
%8 = nn.relu(%7) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%9 = nn.conv2d(%8, %model.backbone.layer1.0.conv2.weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%10 = nn.batch_norm(%9, %model.backbone.layer1.0.bn2.weight, %model.backbone.layer1.0.bn2.bias, %model.backbone.layer1.0.bn2.running_mean, %model.backbone.layer1.0.bn2.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%11 = %10.0;
%12 = nn.relu(%11) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%13 = nn.conv2d(%12, %model.backbone.layer1.0.conv3.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%14 = nn.batch_norm(%13, %model.backbone.layer1.0.bn3.weight, %model.backbone.layer1.0.bn3.bias, %model.backbone.layer1.0.bn3.running_mean, %model.backbone.layer1.0.bn3.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%15 = %14.0;
%16 = nn.conv2d(%4, %model.backbone.layer1.0.downsample.0.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%17 = nn.batch_norm(%16, %model.backbone.layer1.0.downsample.1.weight, %model.backbone.layer1.0.downsample.1.bias, %model.backbone.layer1.0.downsample.1.running_mean, %model.backbone.layer1.0.downsample.1.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%18 = %17.0;
%19 = add(%15, %18) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%20 = nn.relu(%19) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%21 = nn.conv2d(%20, %model.backbone.layer1.1.conv1.weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%22 = nn.batch_norm(%21, %model.backbone.layer1.1.bn1.weight, %model.backbone.layer1.1.bn1.bias, %model.backbone.layer1.1.bn1.running_mean, %model.backbone.layer1.1.bn1.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%23 = %22.0;
%24 = nn.relu(%23) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%25 = nn.conv2d(%24, %model.backbone.layer1.1.conv2.weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%26 = nn.batch_norm(%25, %model.backbone.layer1.1.bn2.weight, %model.backbone.layer1.1.bn2.bias, %model.backbone.layer1.1.bn2.running_mean, %model.backbone.layer1.1.bn2.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%27 = %26.0;
%28 = nn.relu(%27) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%29 = nn.conv2d(%28, %model.backbone.layer1.1.conv3.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%30 = nn.batch_norm(%29, %model.backbone.layer1.1.bn3.weight, %model.backbone.layer1.1.bn3.bias, %model.backbone.layer1.1.bn3.running_mean, %model.backbone.layer1.1.bn3.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%31 = %30.0;
%32 = add(%31, %20) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%33 = nn.relu(%32) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%34 = nn.conv2d(%33, %model.backbone.layer1.2.conv1.weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%35 = nn.batch_norm(%34, %model.backbone.layer1.2.bn1.weight, %model.backbone.layer1.2.bn1.bias, %model.backbone.layer1.2.bn1.running_mean, %model.backbone.layer1.2.bn1.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%36 = %35.0;
%37 = nn.relu(%36) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%38 = nn.conv2d(%37, %model.backbone.layer1.2.conv2.weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%39 = nn.batch_norm(%38, %model.backbone.layer1.2.bn2.weight, %model.backbone.layer1.2.bn2.bias, %model.backbone.layer1.2.bn2.running_mean, %model.backbone.layer1.2.bn2.running_var) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
%40 = %39.0;
%41 = nn.relu(%40) /* ty=Tensor[(1, 64, 56, 56), float32] */;
%42 = nn.conv2d(%41, %model.backbone.layer1.2.conv3.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%43 = nn.batch_norm(%42, %model.backbone.layer1.2.bn3.weight, %model.backbone.layer1.2.bn3.bias, %model.backbone.layer1.2.bn3.running_mean, %model.backbone.layer1.2.bn3.running_var) /* ty=(Tensor[(1, 256, 56, 56), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%44 = %43.0;
%45 = add(%44, %33) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%46 = nn.relu(%45) /* ty=Tensor[(1, 256, 56, 56), float32] */;
%47 = nn.conv2d(%46, %model.backbone.layer2.0.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 56, 56), float32] */;
%48 = nn.batch_norm(%47, %model.backbone.layer2.0.bn1.weight, %model.backbone.layer2.0.bn1.bias, %model.backbone.layer2.0.bn1.running_mean, %model.backbone.layer2.0.bn1.running_var) /* ty=(Tensor[(1, 128, 56, 56), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%49 = %48.0;
%50 = nn.relu(%49) /* ty=Tensor[(1, 128, 56, 56), float32] */;
%51 = nn.conv2d(%50, %model.backbone.layer2.0.conv2.weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%52 = nn.batch_norm(%51, %model.backbone.layer2.0.bn2.weight, %model.backbone.layer2.0.bn2.bias, %model.backbone.layer2.0.bn2.running_mean, %model.backbone.layer2.0.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%53 = %52.0;
%54 = nn.relu(%53) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%55 = nn.conv2d(%54, %model.backbone.layer2.0.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%56 = nn.batch_norm(%55, %model.backbone.layer2.0.bn3.weight, %model.backbone.layer2.0.bn3.bias, %model.backbone.layer2.0.bn3.running_mean, %model.backbone.layer2.0.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
%57 = %56.0;
%58 = nn.conv2d(%46, %model.backbone.layer2.0.downsample.0.weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%59 = nn.batch_norm(%58, %model.backbone.layer2.0.downsample.1.weight, %model.backbone.layer2.0.downsample.1.bias, %model.backbone.layer2.0.downsample.1.running_mean, %model.backbone.layer2.0.downsample.1.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
%60 = %59.0;
%61 = add(%57, %60) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%62 = nn.relu(%61) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%63 = nn.conv2d(%62, %model.backbone.layer2.1.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%64 = nn.batch_norm(%63, %model.backbone.layer2.1.bn1.weight, %model.backbone.layer2.1.bn1.bias, %model.backbone.layer2.1.bn1.running_mean, %model.backbone.layer2.1.bn1.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%65 = %64.0;
%66 = nn.relu(%65) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%67 = nn.conv2d(%66, %model.backbone.layer2.1.conv2.weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%68 = nn.batch_norm(%67, %model.backbone.layer2.1.bn2.weight, %model.backbone.layer2.1.bn2.bias, %model.backbone.layer2.1.bn2.running_mean, %model.backbone.layer2.1.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%69 = %68.0;
%70 = nn.relu(%69) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%71 = nn.conv2d(%70, %model.backbone.layer2.1.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%72 = nn.batch_norm(%71, %model.backbone.layer2.1.bn3.weight, %model.backbone.layer2.1.bn3.bias, %model.backbone.layer2.1.bn3.running_mean, %model.backbone.layer2.1.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
%73 = %72.0;
%74 = add(%73, %62) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%75 = nn.relu(%74) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%76 = nn.conv2d(%75, %model.backbone.layer2.2.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%77 = nn.batch_norm(%76, %model.backbone.layer2.2.bn1.weight, %model.backbone.layer2.2.bn1.bias, %model.backbone.layer2.2.bn1.running_mean, %model.backbone.layer2.2.bn1.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%78 = %77.0;
%79 = nn.relu(%78) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%80 = nn.conv2d(%79, %model.backbone.layer2.2.conv2.weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%81 = nn.batch_norm(%80, %model.backbone.layer2.2.bn2.weight, %model.backbone.layer2.2.bn2.bias, %model.backbone.layer2.2.bn2.running_mean, %model.backbone.layer2.2.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%82 = %81.0;
%83 = nn.relu(%82) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%84 = nn.conv2d(%83, %model.backbone.layer2.2.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%85 = nn.batch_norm(%84, %model.backbone.layer2.2.bn3.weight, %model.backbone.layer2.2.bn3.bias, %model.backbone.layer2.2.bn3.running_mean, %model.backbone.layer2.2.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
%86 = %85.0;
%87 = add(%86, %75) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%88 = nn.relu(%87) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%89 = nn.conv2d(%88, %model.backbone.layer2.3.conv1.weight, padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%90 = nn.batch_norm(%89, %model.backbone.layer2.3.bn1.weight, %model.backbone.layer2.3.bn1.bias, %model.backbone.layer2.3.bn1.running_mean, %model.backbone.layer2.3.bn1.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%91 = %90.0;
%92 = nn.relu(%91) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%93 = nn.conv2d(%92, %model.backbone.layer2.3.conv2.weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%94 = nn.batch_norm(%93, %model.backbone.layer2.3.bn2.weight, %model.backbone.layer2.3.bn2.bias, %model.backbone.layer2.3.bn2.running_mean, %model.backbone.layer2.3.bn2.running_var) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
%95 = %94.0;
%96 = nn.relu(%95) /* ty=Tensor[(1, 128, 28, 28), float32] */;
%97 = nn.conv2d(%96, %model.backbone.layer2.3.conv3.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%98 = nn.batch_norm(%97, %model.backbone.layer2.3.bn3.weight, %model.backbone.layer2.3.bn3.bias, %model.backbone.layer2.3.bn3.running_mean, %model.backbone.layer2.3.bn3.running_var) /* ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
%99 = %98.0;
%100 = add(%99, %88) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%101 = nn.relu(%100) /* ty=Tensor[(1, 512, 28, 28), float32] */;
%102 = nn.conv2d(%101, %model.backbone.layer3.0.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%103 = nn.batch_norm(%102, %model.backbone.layer3.0.bn1.weight, %model.backbone.layer3.0.bn1.bias, %model.backbone.layer3.0.bn1.running_mean, %model.backbone.layer3.0.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%104 = %103.0;
%105 = nn.relu(%104) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%106 = nn.conv2d(%105, %model.backbone.layer3.0.conv2.weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%107 = nn.batch_norm(%106, %model.backbone.layer3.0.bn2.weight, %model.backbone.layer3.0.bn2.bias, %model.backbone.layer3.0.bn2.running_mean, %model.backbone.layer3.0.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%108 = %107.0;
%109 = nn.relu(%108) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%110 = nn.conv2d(%109, %model.backbone.layer3.0.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */;
%111 = nn.batch_norm(%110, %model.backbone.layer3.0.bn3.weight, %model.backbone.layer3.0.bn3.bias, %model.backbone.layer3.0.bn3.running_mean, %model.backbone.layer3.0.bn3.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */;
%112 = %111.0;
%113 = nn.conv2d(%101, %model.backbone.layer3.0.downsample.0.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */;
%114 = nn.batch_norm(%113, %model.backbone.layer3.0.downsample.1.weight, %model.backbone.layer3.0.downsample.1.bias, %model.backbone.layer3.0.downsample.1.running_mean, %model.backbone.layer3.0.downsample.1.running_var) /* ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) */;
%115 = %114.0;
%116 = add(%112, %115) /* ty=Tensor[(1, 1024, 28, 28), float32] */;
%117 = nn.relu(%116) /* ty=Tensor[(1, 1024, 28, 28), float32] */;
%118 = nn.conv2d(%117, %model.backbone.layer3.1.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%119 = nn.batch_norm(%118, %model.backbone.layer3.1.bn1.weight, %model.backbone.layer3.1.bn1.bias, %model.backbone.layer3.1.bn1.running_mean, %model.backbone.layer3.1.bn1.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%120 = %119.0;
%121 = nn.relu(%120) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%122 = nn.conv2d(%121, %model.backbone.layer3.1.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%123 = nn.batch_norm(%122, %model.backbone.layer3.1.bn2.weight, %model.backbone.layer3.1.bn2.bias, %model.backbone.layer3.1.bn2.running_mean, %model.backbone.layer3.1.bn2.running_var) /* ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
%124 = %123.0;
%125 = nn.relu(%124) /* ty=Tensor[(1, 256, 28, 28), float32] */;
%126 = nn.conv2d(%125, %model.backbone.layer3.1.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /* ty=Tensor[(1, 1024, 28, 28), float32] */;
%127 = nn.batch_norm(%126, %model.backbone.layer3.1.bn3.weight, %model.backbone.layer3.1.bn3.bias, %model.backbone.layer3.1.bn3.running_mean, %model.backbone.layer3.1.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%128 = %127.0;
%129 = add(%128, %117) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%130 = nn.relu(%129) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%131 = nn.conv2d(%130, %model.backbone.layer3.2.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%132 = nn.batch_norm(%131, %model.backbone.layer3.2.bn1.weight, %model.backbone.layer3.2.bn1.bias, %model.backbone.layer3.2.bn1.running_mean, %model.backbone.layer3.2.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%133 = %132.0;
%134 = nn.relu(%133) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%135 = nn.conv2d(%134, %model.backbone.layer3.2.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%136 = nn.batch_norm(%135, %model.backbone.layer3.2.bn2.weight, %model.backbone.layer3.2.bn2.bias, %model.backbone.layer3.2.bn2.running_mean, %model.backbone.layer3.2.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%137 = %136.0;
%138 = nn.relu(%137) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%139 = nn.conv2d(%138, %model.backbone.layer3.2.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%140 = nn.batch_norm(%139, %model.backbone.layer3.2.bn3.weight, %model.backbone.layer3.2.bn3.bias, %model.backbone.layer3.2.bn3.running_mean, %model.backbone.layer3.2.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%141 = %140.0;
%142 = add(%141, %130) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%143 = nn.relu(%142) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%144 = nn.conv2d(%143, %model.backbone.layer3.3.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%145 = nn.batch_norm(%144, %model.backbone.layer3.3.bn1.weight, %model.backbone.layer3.3.bn1.bias, %model.backbone.layer3.3.bn1.running_mean, %model.backbone.layer3.3.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%146 = %145.0;
%147 = nn.relu(%146) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%148 = nn.conv2d(%147, %model.backbone.layer3.3.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%149 = nn.batch_norm(%148, %model.backbone.layer3.3.bn2.weight, %model.backbone.layer3.3.bn2.bias, %model.backbone.layer3.3.bn2.running_mean, %model.backbone.layer3.3.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%150 = %149.0;
%151 = nn.relu(%150) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%152 = nn.conv2d(%151, %model.backbone.layer3.3.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%153 = nn.batch_norm(%152, %model.backbone.layer3.3.bn3.weight, %model.backbone.layer3.3.bn3.bias, %model.backbone.layer3.3.bn3.running_mean, %model.backbone.layer3.3.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%154 = %153.0;
%155 = add(%154, %143) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%156 = nn.relu(%155) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%157 = nn.conv2d(%156, %model.backbone.layer3.4.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%158 = nn.batch_norm(%157, %model.backbone.layer3.4.bn1.weight, %model.backbone.layer3.4.bn1.bias, %model.backbone.layer3.4.bn1.running_mean, %model.backbone.layer3.4.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%159 = %158.0;
%160 = nn.relu(%159) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%161 = nn.conv2d(%160, %model.backbone.layer3.4.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%162 = nn.batch_norm(%161, %model.backbone.layer3.4.bn2.weight, %model.backbone.layer3.4.bn2.bias, %model.backbone.layer3.4.bn2.running_mean, %model.backbone.layer3.4.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%163 = %162.0;
%164 = nn.relu(%163) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%165 = nn.conv2d(%164, %model.backbone.layer3.4.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%166 = nn.batch_norm(%165, %model.backbone.layer3.4.bn3.weight, %model.backbone.layer3.4.bn3.bias, %model.backbone.layer3.4.bn3.running_mean, %model.backbone.layer3.4.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%167 = %166.0;
%168 = add(%167, %156) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%169 = nn.relu(%168) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%170 = nn.conv2d(%169, %model.backbone.layer3.5.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%171 = nn.batch_norm(%170, %model.backbone.layer3.5.bn1.weight, %model.backbone.layer3.5.bn1.bias, %model.backbone.layer3.5.bn1.running_mean, %model.backbone.layer3.5.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%172 = %171.0;
%173 = nn.relu(%172) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%174 = nn.conv2d(%173, %model.backbone.layer3.5.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%175 = nn.batch_norm(%174, %model.backbone.layer3.5.bn2.weight, %model.backbone.layer3.5.bn2.bias, %model.backbone.layer3.5.bn2.running_mean, %model.backbone.layer3.5.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%176 = %175.0;
%177 = nn.relu(%176) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%178 = nn.conv2d(%177, %model.backbone.layer3.5.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%179 = nn.batch_norm(%178, %model.backbone.layer3.5.bn3.weight, %model.backbone.layer3.5.bn3.bias, %model.backbone.layer3.5.bn3.running_mean, %model.backbone.layer3.5.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%180 = %179.0;
%181 = add(%180, %169) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%182 = nn.relu(%181) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%183 = nn.conv2d(%182, %model.backbone.layer3.6.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%184 = nn.batch_norm(%183, %model.backbone.layer3.6.bn1.weight, %model.backbone.layer3.6.bn1.bias, %model.backbone.layer3.6.bn1.running_mean, %model.backbone.layer3.6.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%185 = %184.0;
%186 = nn.relu(%185) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%187 = nn.conv2d(%186, %model.backbone.layer3.6.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%188 = nn.batch_norm(%187, %model.backbone.layer3.6.bn2.weight, %model.backbone.layer3.6.bn2.bias, %model.backbone.layer3.6.bn2.running_mean, %model.backbone.layer3.6.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%189 = %188.0;
%190 = nn.relu(%189) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%191 = nn.conv2d(%190, %model.backbone.layer3.6.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%192 = nn.batch_norm(%191, %model.backbone.layer3.6.bn3.weight, %model.backbone.layer3.6.bn3.bias, %model.backbone.layer3.6.bn3.running_mean, %model.backbone.layer3.6.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%193 = %192.0;
%194 = add(%193, %182) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%195 = nn.relu(%194) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%196 = nn.conv2d(%195, %model.backbone.layer3.7.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%197 = nn.batch_norm(%196, %model.backbone.layer3.7.bn1.weight, %model.backbone.layer3.7.bn1.bias, %model.backbone.layer3.7.bn1.running_mean, %model.backbone.layer3.7.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%198 = %197.0;
%199 = nn.relu(%198) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%200 = nn.conv2d(%199, %model.backbone.layer3.7.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%201 = nn.batch_norm(%200, %model.backbone.layer3.7.bn2.weight, %model.backbone.layer3.7.bn2.bias, %model.backbone.layer3.7.bn2.running_mean, %model.backbone.layer3.7.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%202 = %201.0;
%203 = nn.relu(%202) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%204 = nn.conv2d(%203, %model.backbone.layer3.7.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%205 = nn.batch_norm(%204, %model.backbone.layer3.7.bn3.weight, %model.backbone.layer3.7.bn3.bias, %model.backbone.layer3.7.bn3.running_mean, %model.backbone.layer3.7.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%206 = %205.0;
%207 = add(%206, %195) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%208 = nn.relu(%207) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%209 = nn.conv2d(%208, %model.backbone.layer3.8.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%210 = nn.batch_norm(%209, %model.backbone.layer3.8.bn1.weight, %model.backbone.layer3.8.bn1.bias, %model.backbone.layer3.8.bn1.running_mean, %model.backbone.layer3.8.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%211 = %210.0;
%212 = nn.relu(%211) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%213 = nn.conv2d(%212, %model.backbone.layer3.8.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%214 = nn.batch_norm(%213, %model.backbone.layer3.8.bn2.weight, %model.backbone.layer3.8.bn2.bias, %model.backbone.layer3.8.bn2.running_mean, %model.backbone.layer3.8.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%215 = %214.0;
%216 = nn.relu(%215) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%217 = nn.conv2d(%216, %model.backbone.layer3.8.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%218 = nn.batch_norm(%217, %model.backbone.layer3.8.bn3.weight, %model.backbone.layer3.8.bn3.bias, %model.backbone.layer3.8.bn3.running_mean, %model.backbone.layer3.8.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%219 = %218.0;
%220 = add(%219, %208) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%221 = nn.relu(%220) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%222 = nn.conv2d(%221, %model.backbone.layer3.9.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%223 = nn.batch_norm(%222, %model.backbone.layer3.9.bn1.weight, %model.backbone.layer3.9.bn1.bias, %model.backbone.layer3.9.bn1.running_mean, %model.backbone.layer3.9.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%224 = %223.0;
%225 = nn.relu(%224) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%226 = nn.conv2d(%225, %model.backbone.layer3.9.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%227 = nn.batch_norm(%226, %model.backbone.layer3.9.bn2.weight, %model.backbone.layer3.9.bn2.bias, %model.backbone.layer3.9.bn2.running_mean, %model.backbone.layer3.9.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%228 = %227.0;
%229 = nn.relu(%228) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%230 = nn.conv2d(%229, %model.backbone.layer3.9.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%231 = nn.batch_norm(%230, %model.backbone.layer3.9.bn3.weight, %model.backbone.layer3.9.bn3.bias, %model.backbone.layer3.9.bn3.running_mean, %model.backbone.layer3.9.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%232 = %231.0;
%233 = add(%232, %221) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%234 = nn.relu(%233) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%235 = nn.conv2d(%234, %model.backbone.layer3.10.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%236 = nn.batch_norm(%235, %model.backbone.layer3.10.bn1.weight, %model.backbone.layer3.10.bn1.bias, %model.backbone.layer3.10.bn1.running_mean, %model.backbone.layer3.10.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%237 = %236.0;
%238 = nn.relu(%237) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%239 = nn.conv2d(%238, %model.backbone.layer3.10.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%240 = nn.batch_norm(%239, %model.backbone.layer3.10.bn2.weight, %model.backbone.layer3.10.bn2.bias, %model.backbone.layer3.10.bn2.running_mean, %model.backbone.layer3.10.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%241 = %240.0;
%242 = nn.relu(%241) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%243 = nn.conv2d(%242, %model.backbone.layer3.10.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%244 = nn.batch_norm(%243, %model.backbone.layer3.10.bn3.weight, %model.backbone.layer3.10.bn3.bias, %model.backbone.layer3.10.bn3.running_mean, %model.backbone.layer3.10.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%245 = %244.0;
%246 = add(%245, %234) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%247 = nn.relu(%246) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%248 = nn.conv2d(%247, %model.backbone.layer3.11.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%249 = nn.batch_norm(%248, %model.backbone.layer3.11.bn1.weight, %model.backbone.layer3.11.bn1.bias, %model.backbone.layer3.11.bn1.running_mean, %model.backbone.layer3.11.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%250 = %249.0;
%251 = nn.relu(%250) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%252 = nn.conv2d(%251, %model.backbone.layer3.11.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%253 = nn.batch_norm(%252, %model.backbone.layer3.11.bn2.weight, %model.backbone.layer3.11.bn2.bias, %model.backbone.layer3.11.bn2.running_mean, %model.backbone.layer3.11.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%254 = %253.0;
%255 = nn.relu(%254) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%256 = nn.conv2d(%255, %model.backbone.layer3.11.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%257 = nn.batch_norm(%256, %model.backbone.layer3.11.bn3.weight, %model.backbone.layer3.11.bn3.bias, %model.backbone.layer3.11.bn3.running_mean, %model.backbone.layer3.11.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%258 = %257.0;
%259 = add(%258, %247) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%260 = nn.relu(%259) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%261 = nn.conv2d(%260, %model.backbone.layer3.12.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%262 = nn.batch_norm(%261, %model.backbone.layer3.12.bn1.weight, %model.backbone.layer3.12.bn1.bias, %model.backbone.layer3.12.bn1.running_mean, %model.backbone.layer3.12.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%263 = %262.0;
%264 = nn.relu(%263) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%265 = nn.conv2d(%264, %model.backbone.layer3.12.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%266 = nn.batch_norm(%265, %model.backbone.layer3.12.bn2.weight, %model.backbone.layer3.12.bn2.bias, %model.backbone.layer3.12.bn2.running_mean, %model.backbone.layer3.12.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%267 = %266.0;
%268 = nn.relu(%267) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%269 = nn.conv2d(%268, %model.backbone.layer3.12.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%270 = nn.batch_norm(%269, %model.backbone.layer3.12.bn3.weight, %model.backbone.layer3.12.bn3.bias, %model.backbone.layer3.12.bn3.running_mean, %model.backbone.layer3.12.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%271 = %270.0;
%272 = add(%271, %260) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%273 = nn.relu(%272) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%274 = nn.conv2d(%273, %model.backbone.layer3.13.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%275 = nn.batch_norm(%274, %model.backbone.layer3.13.bn1.weight, %model.backbone.layer3.13.bn1.bias, %model.backbone.layer3.13.bn1.running_mean, %model.backbone.layer3.13.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%276 = %275.0;
%277 = nn.relu(%276) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%278 = nn.conv2d(%277, %model.backbone.layer3.13.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%279 = nn.batch_norm(%278, %model.backbone.layer3.13.bn2.weight, %model.backbone.layer3.13.bn2.bias, %model.backbone.layer3.13.bn2.running_mean, %model.backbone.layer3.13.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%280 = %279.0;
%281 = nn.relu(%280) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%282 = nn.conv2d(%281, %model.backbone.layer3.13.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%283 = nn.batch_norm(%282, %model.backbone.layer3.13.bn3.weight, %model.backbone.layer3.13.bn3.bias, %model.backbone.layer3.13.bn3.running_mean, %model.backbone.layer3.13.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%284 = %283.0;
%285 = add(%284, %273) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%286 = nn.relu(%285) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%287 = nn.conv2d(%286, %model.backbone.layer3.14.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%288 = nn.batch_norm(%287, %model.backbone.layer3.14.bn1.weight, %model.backbone.layer3.14.bn1.bias, %model.backbone.layer3.14.bn1.running_mean, %model.backbone.layer3.14.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%289 = %288.0;
%290 = nn.relu(%289) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%291 = nn.conv2d(%290, %model.backbone.layer3.14.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%292 = nn.batch_norm(%291, %model.backbone.layer3.14.bn2.weight, %model.backbone.layer3.14.bn2.bias, %model.backbone.layer3.14.bn2.running_mean, %model.backbone.layer3.14.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%293 = %292.0;
%294 = nn.relu(%293) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%295 = nn.conv2d(%294, %model.backbone.layer3.14.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%296 = nn.batch_norm(%295, %model.backbone.layer3.14.bn3.weight, %model.backbone.layer3.14.bn3.bias, %model.backbone.layer3.14.bn3.running_mean, %model.backbone.layer3.14.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%297 = %296.0;
%298 = add(%297, %286) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%299 = nn.relu(%298) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%300 = nn.conv2d(%299, %model.backbone.layer3.15.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%301 = nn.batch_norm(%300, %model.backbone.layer3.15.bn1.weight, %model.backbone.layer3.15.bn1.bias, %model.backbone.layer3.15.bn1.running_mean, %model.backbone.layer3.15.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%302 = %301.0;
%303 = nn.relu(%302) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%304 = nn.conv2d(%303, %model.backbone.layer3.15.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%305 = nn.batch_norm(%304, %model.backbone.layer3.15.bn2.weight, %model.backbone.layer3.15.bn2.bias, %model.backbone.layer3.15.bn2.running_mean, %model.backbone.layer3.15.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%306 = %305.0;
%307 = nn.relu(%306) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%308 = nn.conv2d(%307, %model.backbone.layer3.15.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%309 = nn.batch_norm(%308, %model.backbone.layer3.15.bn3.weight, %model.backbone.layer3.15.bn3.bias, %model.backbone.layer3.15.bn3.running_mean, %model.backbone.layer3.15.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%310 = %309.0;
%311 = add(%310, %299) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%312 = nn.relu(%311) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%313 = nn.conv2d(%312, %model.backbone.layer3.16.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%314 = nn.batch_norm(%313, %model.backbone.layer3.16.bn1.weight, %model.backbone.layer3.16.bn1.bias, %model.backbone.layer3.16.bn1.running_mean, %model.backbone.layer3.16.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%315 = %314.0;
%316 = nn.relu(%315) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%317 = nn.conv2d(%316, %model.backbone.layer3.16.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%318 = nn.batch_norm(%317, %model.backbone.layer3.16.bn2.weight, %model.backbone.layer3.16.bn2.bias, %model.backbone.layer3.16.bn2.running_mean, %model.backbone.layer3.16.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%319 = %318.0;
%320 = nn.relu(%319) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%321 = nn.conv2d(%320, %model.backbone.layer3.16.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%322 = nn.batch_norm(%321, %model.backbone.layer3.16.bn3.weight, %model.backbone.layer3.16.bn3.bias, %model.backbone.layer3.16.bn3.running_mean, %model.backbone.layer3.16.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%323 = %322.0;
%324 = add(%323, %312) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%325 = nn.relu(%324) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%326 = nn.conv2d(%325, %model.backbone.layer3.17.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%327 = nn.batch_norm(%326, %model.backbone.layer3.17.bn1.weight, %model.backbone.layer3.17.bn1.bias, %model.backbone.layer3.17.bn1.running_mean, %model.backbone.layer3.17.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%328 = %327.0;
%329 = nn.relu(%328) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%330 = nn.conv2d(%329, %model.backbone.layer3.17.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%331 = nn.batch_norm(%330, %model.backbone.layer3.17.bn2.weight, %model.backbone.layer3.17.bn2.bias, %model.backbone.layer3.17.bn2.running_mean, %model.backbone.layer3.17.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%332 = %331.0;
%333 = nn.relu(%332) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%334 = nn.conv2d(%333, %model.backbone.layer3.17.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%335 = nn.batch_norm(%334, %model.backbone.layer3.17.bn3.weight, %model.backbone.layer3.17.bn3.bias, %model.backbone.layer3.17.bn3.running_mean, %model.backbone.layer3.17.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%336 = %335.0;
%337 = add(%336, %325) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%338 = nn.relu(%337) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%339 = nn.conv2d(%338, %model.backbone.layer3.18.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%340 = nn.batch_norm(%339, %model.backbone.layer3.18.bn1.weight, %model.backbone.layer3.18.bn1.bias, %model.backbone.layer3.18.bn1.running_mean, %model.backbone.layer3.18.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%341 = %340.0;
%342 = nn.relu(%341) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%343 = nn.conv2d(%342, %model.backbone.layer3.18.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%344 = nn.batch_norm(%343, %model.backbone.layer3.18.bn2.weight, %model.backbone.layer3.18.bn2.bias, %model.backbone.layer3.18.bn2.running_mean, %model.backbone.layer3.18.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%345 = %344.0;
%346 = nn.relu(%345) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%347 = nn.conv2d(%346, %model.backbone.layer3.18.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%348 = nn.batch_norm(%347, %model.backbone.layer3.18.bn3.weight, %model.backbone.layer3.18.bn3.bias, %model.backbone.layer3.18.bn3.running_mean, %model.backbone.layer3.18.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%349 = %348.0;
%350 = add(%349, %338) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%351 = nn.relu(%350) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%352 = nn.conv2d(%351, %model.backbone.layer3.19.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%353 = nn.batch_norm(%352, %model.backbone.layer3.19.bn1.weight, %model.backbone.layer3.19.bn1.bias, %model.backbone.layer3.19.bn1.running_mean, %model.backbone.layer3.19.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%354 = %353.0;
%355 = nn.relu(%354) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%356 = nn.conv2d(%355, %model.backbone.layer3.19.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%357 = nn.batch_norm(%356, %model.backbone.layer3.19.bn2.weight, %model.backbone.layer3.19.bn2.bias, %model.backbone.layer3.19.bn2.running_mean, %model.backbone.layer3.19.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%358 = %357.0;
%359 = nn.relu(%358) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%360 = nn.conv2d(%359, %model.backbone.layer3.19.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%361 = nn.batch_norm(%360, %model.backbone.layer3.19.bn3.weight, %model.backbone.layer3.19.bn3.bias, %model.backbone.layer3.19.bn3.running_mean, %model.backbone.layer3.19.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%362 = %361.0;
%363 = add(%362, %351) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%364 = nn.relu(%363) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%365 = nn.conv2d(%364, %model.backbone.layer3.20.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%366 = nn.batch_norm(%365, %model.backbone.layer3.20.bn1.weight, %model.backbone.layer3.20.bn1.bias, %model.backbone.layer3.20.bn1.running_mean, %model.backbone.layer3.20.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%367 = %366.0;
%368 = nn.relu(%367) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%369 = nn.conv2d(%368, %model.backbone.layer3.20.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%370 = nn.batch_norm(%369, %model.backbone.layer3.20.bn2.weight, %model.backbone.layer3.20.bn2.bias, %model.backbone.layer3.20.bn2.running_mean, %model.backbone.layer3.20.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%371 = %370.0;
%372 = nn.relu(%371) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%373 = nn.conv2d(%372, %model.backbone.layer3.20.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%374 = nn.batch_norm(%373, %model.backbone.layer3.20.bn3.weight, %model.backbone.layer3.20.bn3.bias, %model.backbone.layer3.20.bn3.running_mean, %model.backbone.layer3.20.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%375 = %374.0;
%376 = add(%375, %364) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%377 = nn.relu(%376) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%378 = nn.conv2d(%377, %model.backbone.layer3.21.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%379 = nn.batch_norm(%378, %model.backbone.layer3.21.bn1.weight, %model.backbone.layer3.21.bn1.bias, %model.backbone.layer3.21.bn1.running_mean, %model.backbone.layer3.21.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%380 = %379.0;
%381 = nn.relu(%380) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%382 = nn.conv2d(%381, %model.backbone.layer3.21.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%383 = nn.batch_norm(%382, %model.backbone.layer3.21.bn2.weight, %model.backbone.layer3.21.bn2.bias, %model.backbone.layer3.21.bn2.running_mean, %model.backbone.layer3.21.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%384 = %383.0;
%385 = nn.relu(%384) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%386 = nn.conv2d(%385, %model.backbone.layer3.21.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%387 = nn.batch_norm(%386, %model.backbone.layer3.21.bn3.weight, %model.backbone.layer3.21.bn3.bias, %model.backbone.layer3.21.bn3.running_mean, %model.backbone.layer3.21.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%388 = %387.0;
%389 = add(%388, %377) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%390 = nn.relu(%389) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%391 = nn.conv2d(%390, %model.backbone.layer3.22.conv1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%392 = nn.batch_norm(%391, %model.backbone.layer3.22.bn1.weight, %model.backbone.layer3.22.bn1.bias, %model.backbone.layer3.22.bn1.running_mean, %model.backbone.layer3.22.bn1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%393 = %392.0;
%394 = nn.relu(%393) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%395 = nn.conv2d(%394, %model.backbone.layer3.22.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%396 = nn.batch_norm(%395, %model.backbone.layer3.22.bn2.weight, %model.backbone.layer3.22.bn2.bias, %model.backbone.layer3.22.bn2.running_mean, %model.backbone.layer3.22.bn2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%397 = %396.0;
%398 = nn.relu(%397) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%399 = nn.conv2d(%398, %model.backbone.layer3.22.conv3.weight, padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%400 = nn.batch_norm(%399, %model.backbone.layer3.22.bn3.weight, %model.backbone.layer3.22.bn3.bias, %model.backbone.layer3.22.bn3.running_mean, %model.backbone.layer3.22.bn3.running_var) /
ty=(Tensor[(1, 1024, 28, 28), float32], Tensor[(1024), float32], Tensor[(1024), float32]) /;
%401 = %400.0;
%402 = add(%401, %390) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%403 = nn.relu(%402) /
ty=Tensor[(1, 1024, 28, 28), float32] /;
%404 = nn.conv2d(%403, %model.backbone.layer4.0.conv1.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%405 = nn.batch_norm(%404, %model.backbone.layer4.0.bn1.weight, %model.backbone.layer4.0.bn1.bias, %model.backbone.layer4.0.bn1.running_mean, %model.backbone.layer4.0.bn1.running_var) /
ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) /;
%406 = %405.0;
%407 = nn.relu(%406) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%408 = nn.conv2d(%407, %model.backbone.layer4.0.conv2.weight, padding=[2, 2, 2, 2], dilation=[2, 2], channels=512, kernel_size=[3, 3]) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%409 = nn.batch_norm(%408, %model.backbone.layer4.0.bn2.weight, %model.backbone.layer4.0.bn2.bias, %model.backbone.layer4.0.bn2.running_mean, %model.backbone.layer4.0.bn2.running_var) /
ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) /;
%410 = %409.0;
%411 = nn.relu(%410) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%412 = nn.conv2d(%411, %model.backbone.layer4.0.conv3.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%413 = nn.batch_norm(%412, %model.backbone.layer4.0.bn3.weight, %model.backbone.layer4.0.bn3.bias, %model.backbone.layer4.0.bn3.running_mean, %model.backbone.layer4.0.bn3.running_var) /
ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) /;
%414 = %413.0;
%415 = nn.conv2d(%403, %model.backbone.layer4.0.downsample.0.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%416 = nn.batch_norm(%415, %model.backbone.layer4.0.downsample.1.weight, %model.backbone.layer4.0.downsample.1.bias, %model.backbone.layer4.0.downsample.1.running_mean, %model.backbone.layer4.0.downsample.1.running_var) /
ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) /;
%417 = %416.0;
%418 = add(%414, %417) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%419 = nn.relu(%418) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%420 = nn.conv2d(%419, %model.backbone.layer4.1.conv1.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%421 = nn.batch_norm(%420, %model.backbone.layer4.1.bn1.weight, %model.backbone.layer4.1.bn1.bias, %model.backbone.layer4.1.bn1.running_mean, %model.backbone.layer4.1.bn1.running_var) /
ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) /;
%422 = %421.0;
%423 = nn.relu(%422) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%424 = nn.conv2d(%423, %model.backbone.layer4.1.conv2.weight, padding=[4, 4, 4, 4], dilation=[4, 4], channels=512, kernel_size=[3, 3]) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%425 = nn.batch_norm(%424, %model.backbone.layer4.1.bn2.weight, %model.backbone.layer4.1.bn2.bias, %model.backbone.layer4.1.bn2.running_mean, %model.backbone.layer4.1.bn2.running_var) /
ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) /;
%426 = %425.0;
%427 = nn.relu(%426) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%428 = nn.conv2d(%427, %model.backbone.layer4.1.conv3.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%429 = nn.batch_norm(%428, %model.backbone.layer4.1.bn3.weight, %model.backbone.layer4.1.bn3.bias, %model.backbone.layer4.1.bn3.running_mean, %model.backbone.layer4.1.bn3.running_var) /
ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) /;
%430 = %429.0;
%431 = add(%430, %419) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%432 = nn.relu(%431) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%433 = nn.conv2d(%432, %model.backbone.layer4.2.conv1.weight, padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%434 = nn.batch_norm(%433, %model.backbone.layer4.2.bn1.weight, %model.backbone.layer4.2.bn1.bias, %model.backbone.layer4.2.bn1.running_mean, %model.backbone.layer4.2.bn1.running_var) /
ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) /;
%435 = %434.0;
%436 = nn.relu(%435) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%437 = nn.conv2d(%436, %model.backbone.layer4.2.conv2.weight, padding=[4, 4, 4, 4], dilation=[4, 4], channels=512, kernel_size=[3, 3]) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%438 = nn.batch_norm(%437, %model.backbone.layer4.2.bn2.weight, %model.backbone.layer4.2.bn2.bias, %model.backbone.layer4.2.bn2.running_mean, %model.backbone.layer4.2.bn2.running_var) /
ty=(Tensor[(1, 512, 28, 28), float32], Tensor[(512), float32], Tensor[(512), float32]) /;
%439 = %438.0;
%440 = nn.relu(%439) /
ty=Tensor[(1, 512, 28, 28), float32] /;
%441 = nn.conv2d(%440, %model.backbone.layer4.2.conv3.weight, padding=[0, 0, 0, 0], channels=2048, kernel_size=[1, 1]) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%442 = nn.batch_norm(%441, %model.backbone.layer4.2.bn3.weight, %model.backbone.layer4.2.bn3.bias, %model.backbone.layer4.2.bn3.running_mean, %model.backbone.layer4.2.bn3.running_var) /
ty=(Tensor[(1, 2048, 28, 28), float32], Tensor[(2048), float32], Tensor[(2048), float32]) /;
%443 = %442.0;
%444 = add(%443, %432) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%445 = nn.relu(%444) /
ty=Tensor[(1, 2048, 28, 28), float32] /;
%446 = (%445, %403);
%447 = %446.1;
%448 = nn.conv2d(%447, %model.aux_classifier.0.weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%449 = nn.batch_norm(%448, %model.aux_classifier.1.weight, %model.aux_classifier.1.bias, %model.aux_classifier.1.running_mean, %model.aux_classifier.1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%450 = %449.0;
%451 = nn.relu(%450) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%452 = nn.dropout(%451, rate=0.1f) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(1, 256, 28, 28), float32]) /;
%453 = %452.0;
%454 = nn.conv2d(%453, %model.aux_classifier.4.weight, padding=[0, 0, 0, 0], channels=21, kernel_size=[1, 1]) /
ty=Tensor[(1, 21, 28, 28), float32] /;
%455 = nn.bias_add(%454, %model.aux_classifier.4.bias) /
ty=Tensor[(1, 21, 28, 28), float32] /;
%456 = image.resize(%455, size=[224, 224]) /
ty=Tensor[(1, 21, 224, 224), float32] /;
%457 = %446.0;
%458 = nn.conv2d(%457, %model.classifier.0.convs.0.0.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%459 = nn.batch_norm(%458, %model.classifier.0.convs.0.1.weight, %model.classifier.0.convs.0.1.bias, %model.classifier.0.convs.0.1.running_mean, %model.classifier.0.convs.0.1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%460 = %459.0;
%461 = nn.relu(%460) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%462 = nn.conv2d(%457, %model.classifier.0.convs.1.0.weight, padding=[12, 12, 12, 12], dilation=[12, 12], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%463 = nn.batch_norm(%462, %model.classifier.0.convs.1.1.weight, %model.classifier.0.convs.1.1.bias, %model.classifier.0.convs.1.1.running_mean, %model.classifier.0.convs.1.1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%464 = %463.0;
%465 = nn.relu(%464) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%466 = nn.conv2d(%457, %model.classifier.0.convs.2.0.weight, padding=[24, 24, 24, 24], dilation=[24, 24], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%467 = nn.batch_norm(%466, %model.classifier.0.convs.2.1.weight, %model.classifier.0.convs.2.1.bias, %model.classifier.0.convs.2.1.running_mean, %model.classifier.0.convs.2.1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%468 = %467.0;
%469 = nn.relu(%468) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%470 = nn.conv2d(%457, %model.classifier.0.convs.3.0.weight, padding=[36, 36, 36, 36], dilation=[36, 36], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%471 = nn.batch_norm(%470, %model.classifier.0.convs.3.1.weight, %model.classifier.0.convs.3.1.bias, %model.classifier.0.convs.3.1.running_mean, %model.classifier.0.convs.3.1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%472 = %471.0;
%473 = nn.relu(%472) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%474 = nn.adaptive_avg_pool2d(%457, output_size=[1, 1]) /
ty=Tensor[(1, 2048, 1, 1), float32] /;
%475 = nn.conv2d(%474, %model.classifier.0.convs.4.1.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 1, 1), float32] /;
%476 = nn.batch_norm(%475, %model.classifier.0.convs.4.2.weight, %model.classifier.0.convs.4.2.bias, %model.classifier.0.convs.4.2.running_mean, %model.classifier.0.convs.4.2.running_var) /
ty=(Tensor[(1, 256, 1, 1), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%477 = %476.0;
%478 = nn.relu(%477) /
ty=Tensor[(1, 256, 1, 1), float32] /;
%479 = image.resize(%478, size=[28, 28]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%480 = (%461, %465, %469, %473, %479);
%481 = concatenate(%480, axis=1) /
ty=Tensor[(1, 1280, 28, 28), float32] /;
%482 = nn.conv2d(%481, %model.classifier.0.project.0.weight, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%483 = nn.batch_norm(%482, %model.classifier.0.project.1.weight, %model.classifier.0.project.1.bias, %model.classifier.0.project.1.running_mean, %model.classifier.0.project.1.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%484 = %483.0;
%485 = nn.relu(%484) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%486 = nn.dropout(%485) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(1, 256, 28, 28), float32]) /;
%487 = %486.0;
%488 = nn.conv2d(%487, %model.classifier.1.weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%489 = nn.batch_norm(%488, %model.classifier.2.weight, %model.classifier.2.bias, %model.classifier.2.running_mean, %model.classifier.2.running_var) /
ty=(Tensor[(1, 256, 28, 28), float32], Tensor[(256), float32], Tensor[(256), float32]) /;
%490 = %489.0;
%491 = nn.relu(%490) /
ty=Tensor[(1, 256, 28, 28), float32] /;
%492 = nn.conv2d(%491, %model.classifier.4.weight, padding=[0, 0, 0, 0], channels=21, kernel_size=[1, 1]) /
ty=Tensor[(1, 21, 28, 28), float32] /;
%493 = nn.bias_add(%492, %model.classifier.4.bias) /
ty=Tensor[(1, 21, 28, 28), float32] /;
%494 = image.resize(%493, size=[224, 224]) /
ty=Tensor[(1, 21, 224, 224), float32] */;
%495 = (%456, %494);
%496 = %495.0;
%497 = %495.1;
(%496, %497)
}

Re-copied: same as the previous output, just with proper alignment/indentation.

@jtuyls

jtuyls commented Mar 29, 2021

@abdulazizm Thanks for posting the Relay expression. The issue seems to be in this concatenate layer: %481 = concatenate(%480, axis=1) / ty=Tensor[(1, 1280, 28, 28), float32] /;. The height dimension should be 28 for every input, but the values PyXIR sees are {132, 108, 156, 86, 28}, so most of the inputs end up with the wrong shape. I went through the Relay expressions but couldn't work out which layers are being translated incorrectly to PyXIR. Could you enable debug logging and provide me with the full console output?
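
For reference, here is a minimal sketch of the kind of check that raises this assertion (my own illustration, not the PyXIR source; it assumes NCHW shapes and concatenation over axis=1):

def check_concat_shapes(shapes, axis):
    # Every dimension except the concatenation axis must match across inputs;
    # this mirrors the `assert i == axis or len(check) == 1` line in l1_basic_nn.py.
    rank = len(shapes[0])
    for i in range(rank):
        if i == axis:
            continue  # sizes along the concat axis are allowed to differ
        values = {s[i] for s in shapes}
        assert len(values) == 1, f"dim {i} differs across inputs: {values}"

# Correct inputs to %481 would all be (1, C_i, 28, 28) and pass the check,
# but heights of {132, 108, 156, 86, 28} make it raise an AssertionError.
check_concat_shapes([(1, 256, 28, 28)] * 5, axis=1)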

To log debug info, add this at the top of your script:

import logging
logging.basicConfig()
logger = logging.getLogger('pyxir')
logger.setLevel(logging.INFO)

@abdulazizm
Author

abdulazizm commented Mar 29, 2021

@jtuyls Hope this gets you closer to solving the issue. I had already enabled logging earlier; I'm attaching the log (without printing mod['main']) for your reference. Let me know if you suspect a problem with my model tracing.

(vitis-ai-pytorch) Vitis-AI /workspace/python/compile > python3 compile_pytorch_deeplab.py
2021-03-29 10:29:15.964160: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/xilinx/xrt/lib:/usr/lib:/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib
2021-03-29 10:29:15.964190: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/quantization/decent_quantizer.py:50: UserWarning: Could not import decent_q module. Please check if installed.
  warnings.warn("Could not import decent_q module. Please check"
File /home/vitis-ai-user/.tvm_test_data/data/cat.png exists, skip.
input img size (224, 224)
transform_image_torchvision torch.Size([3, 224, 224])
(1, 3, 224, 224) <class 'tuple'>
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
WARNING:root:Untyped Tensor found, assume it is float32
INFO:pyxir:
**************************************************
* RELAY IR TO PYXIR
**************************************************
/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l10_temporary.py:64: UserWarning: Convert Relay Adaptive Avg pool2d layer into normal average pool2d layer
  warnings.warn("Convert Relay Adaptive Avg pool2d layer into normal"
Traceback (most recent call last):
  File "compile_pytorch_deeplab.py", line 180, in <module>
    mod = annotation(mod, params, target)
  File "/workspace/python/tvm/relay/op/contrib/vitis_ai.py", line 92, in annotation
    xgraph = pyxir.frontend.tvm.from_relay(mod, params, postprocessing=None)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay.py", line 58, in from_relay
    cvx_preprocessing=cvx_preprocessing
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xgraph_converter.py", line 96, in from_relay_to_xgraph
    cvx_prep=cvx_preprocessing)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 75, in function
    op_idx, RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 184, in tuple_expr
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xlayer_registry.py", line 122, in __base_relay_2_xlayer
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xlayer_registry.py", line 122, in __base_relay_2_xlayer
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l1_basic.py", line 78, in add
    **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l2_convolution.py", line 216, in nn_conv2d
    **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xlayer_registry.py", line 122, in __base_relay_2_xlayer
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 239, in tuple_get_item
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l1_basic.py", line 239, in nn_batch_norm
    **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l2_convolution.py", line 216, in nn_conv2d
    **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 239, in tuple_get_item
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xlayer_registry.py", line 122, in __base_relay_2_xlayer
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xlayer_registry.py", line 122, in __base_relay_2_xlayer
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 239, in tuple_get_item
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l1_basic.py", line 239, in nn_batch_norm
    **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l2_convolution.py", line 216, in nn_conv2d
    **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_2_xlayer_registry.py", line 122, in __base_relay_2_xlayer
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l0_expr_and_others.py", line 117, in call
    RELAY_2_XLAYER, **kwargs)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/frontend/tvm/relay_tools/relay_l1_basic.py", line 414, in concatenate
    X = px.ops.concat(op_name, data_layers, axis, relay_id=relay_idx)
  File "/home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/graph/ops/l1_basic_nn.py", line 167, in concat
    assert i == axis or len(check) == 1
AssertionError

@jtuyls

jtuyls commented Mar 29, 2021

@abdulazizm Could you do the same, but with the DEBUG flag? The INFO flag is not showing enough details to debug this part of the codebase. My bad for asking to use the INFO flag earlier.

logger.setLevel(logging.DEBUG)

@abdulazizm
Author

@jtuyls No issues, here is the requested log (attached as a file - it seems to have more details). Hope this helps.
assertionerror_log_pyxir_github_jtuyls.log

@jtuyls

jtuyls commented Mar 30, 2021

@abdulazizm This issue is caused by the conv2d shape inference not having been added for dilation > 1. It's still WIP but you could try this branch of PyXIR: https://github.com/Xilinx/pyxir/tree/dev-rf-test-0
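
For context, a minimal sketch of the output-size arithmetic such shape inference has to cover for dilation > 1 (this is the standard conv2d formula, not PyXIR's actual code):

def conv2d_out_dim(in_dim, kernel, stride, pad, dilation=1):
    # the dilated ('effective') kernel spans dilation * (kernel - 1) + 1 pixels
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_dim + 2 * pad - effective_kernel) // stride + 1

print(conv2d_out_dim(28, 3, 1, 2, dilation=2))    # 28, i.e. 'same' output size
print(conv2d_out_dim(28, 3, 1, 12, dilation=12))  # 28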

@abdulazizm
Author

@jtuyls Thanks for your time and the code change. I tried the pyxir branch (dev-rf-test-0); the annotation() API now works without the assertion error, but I am getting a segmentation fault during the PartitionGraph() call.

Annotation done
Merge compiler done
Fatal Python error: Segmentation fault

Current thread 0x00007f3b39594740 (most recent call first):
  File "/workspace/python/tvm/_ffi/_ctypes/packed_func.py", line 233 in __call__
  File "/workspace/python/tvm/ir/transform.py", line 127 in __call__
  File "compile_pytorch_deeplab.py", line 192 in <module>
Segmentation fault (core dumped)

Code used for reference:

mod = annotation(mod, params, target)
print("Annotation done")

mod = relay.transform.MergeCompilerRegions()(mod)
print("Merge compiler done")

mod = relay.transform.PartitionGraph()(mod)
print("Partition Graph done")

@jtuyls

jtuyls commented Mar 31, 2021

@abdulazizm This looks like a partitioning issue. Could you provide the output of mod['main'] after MergeCompilerRegions?

@abdulazizm
Author

@jtuyls Yeah, it seems to be a partitioning issue. (Maybe this is why dilation/strides were not implemented earlier?) Here is the requested output ->
segmentationfault_log_pyxir_github_jtuyls.log

Thanks

@jtuyls

jtuyls commented Apr 7, 2021

@abdulazizm I think the issue is this dropout layer: %886 = nn.dropout(%885, rate=0.1f) /* ty=(Tensor[(1, 28, 28, 256), float32], Tensor[(1, 28, 28, 256), float32]) */;. The flow should be able to handle this but the expected result will be that all convolutions before the dropout will be inside the DPU partition (we currently only handle one partition) and all convolutions after the dropout will be executed on CPU.

@abdulazizm
Author

Hi @jtuyls ,

The flow should be able to handle this

Do we have a quick fix for this segfault issue pushed to the previously mentioned branch?

all convolutions after the dropout will be executed on CPU

So we may not be using the DPU efficiently for this model? That may be acceptable for my purposes; do you want me to proceed or hold on?

Is there any workaround you recommend we try for the deeplab model? I guess trying deeplab with a framework other than PyTorch would not help, right?

@jtuyls

jtuyls commented Apr 7, 2021

@abdulazizm

Do we have a quick fix for this segfault issue pushed to the previously mentioned branch?

Yes, I am working on it and will push it to the same branch. I will ping you when it's in.

So we may not be using the DPU efficiently for this model? That may be acceptable for my purposes; do you want me to proceed or hold on?

Is there any workaround you recommend we try for the deeplab model? I guess trying deeplab with a framework other than PyTorch would not help, right?

Yes, the performance will suffer from this. You could remove the dropout layers from the model before passing the model to TVM to avoid this.
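
A minimal sketch of one way to do that, assuming a torchvision deeplabv3 variant (deeplabv3_resnet101 and the strip_dropout helper below are purely illustrative, not from this thread): swap every nn.Dropout for nn.Identity before tracing, so the dropout never reaches the Relay graph.

import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet101

def strip_dropout(module):
    # recursively replace nn.Dropout children with nn.Identity (a no-op)
    for name, child in module.named_children():
        if isinstance(child, nn.Dropout):
            setattr(module, name, nn.Identity())
        else:
            strip_dropout(child)

model = deeplabv3_resnet101(pretrained=True).eval()
strip_dropout(model)
# trace/script the model as before and pass it to relay.frontend.from_pytorch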

@abdulazizm
Author

@jornt-xilinx That sounds great. Thanks.

@jtuyls

jtuyls commented Apr 9, 2021

@abdulazizm I think it was the dilated conv2d with large padding, which is unsupported on the DPU, that was causing the incorrect partitioning (DPUCZDX8G supports padding sizes in the range [0, kernel_size - 1], but the kernel size was 3 and the padding size was 4). I was able to recreate a small test case and fix the partitioning issue, but this means that this specific conv2d and the following convolutions will be executed on the CPU. I pushed the fix to this branch again: https://github.com/Xilinx/pyxir/tree/dev-rf-test-0. Could you try it out to check whether it works for you?

@abdulazizm
Author

@jtuyls That works great!! I was able to build the deeplabv3 model successfully. Thanks. I will try out inference and post benchmark results here shortly. When can we expect these changes in the release branch?

Shall we close this issue?

@jtuyls

jtuyls commented Apr 12, 2021

@abdulazizm Great that it's working. I will move the changes into dev shortly and will probably get them released this week or next. Feel free to close the issue.

@abdulazizm
Author

@jtuyls I couldn't load the generated library module on the EDGE device. Not sure why. Am I missing something?

root@pynq:/home/xilinx# python3 run_pytorch_deeplab.py
/usr/local/lib/python3.6/dist-packages/pyxir-0.1.6-py3.6-linux-aarch64.egg/pyxir/runtime/__init__.py:34: UserWarning: Could not load `cpu-tf` runtime because of error: No module named 'tensorflow'
  .format(e))
Traceback (most recent call last):
  File "run_pytorch_deeplab.py", line 63, in <module>
    lib = tvm.runtime.module.load_module(lib_path)
  File "/home/xilinx/tvm/python/tvm/runtime/module.py", line 472, in load_module
    return _ffi_api.ModuleLoadFromFile(path, fmt)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 322, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  6: TVMFuncCall
  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::runtime::Module (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>::AssignTypedLambda<tvm::runtime::Module (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>(tvm::runtime::Module (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  4: tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
  3: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  2: tvm::runtime::CreateModuleFromLibrary(tvm::runtime::ObjectPtr<tvm::runtime::Library>)
  1: tvm::runtime::ProcessModuleBlob(char const*, tvm::runtime::ObjectPtr<tvm::runtime::Library>)
  0: tvm::runtime::LoadModuleFromBinary(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, dmlc::Stream*)
  File "/home/xilinx/tvm/src/runtime/library_module.cc", line 116
TVMError: Binary was created using GraphRuntimeFactory but a loader of that name is not registered. Available loaders are VitisAIRuntime, GraphExecutorFactory, metadata, VMExecutable. Perhaps you need to recompile with this runtime enabled.

@jtuyls

jtuyls commented Apr 12, 2021

@abdulazizm I think this might be caused by the TVM GraphRuntime having been changed to GraphExecutor. I think you have different TVM versions installed on the host machine and board and you will have to pull in the latest TVM version on the host to resolve this.
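
A quick sanity check (hedged; just plain Python, not a documented TVM workflow) is to print the TVM version and install path on both the host and the board, since an exported module only loads if the runtime on the board registers the executor name the host baked in:

import tvm

print(tvm.__version__, tvm.__file__)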

@abdulazizm
Author

@jtuyls Tried with the latest TVM and PyXIR (dev-rf-test-0 git branch), but it doesn't compile now. Has any other part of the workflow changed? (It's warning not to use the annotation API, but TVM's vitis-ai example and the PyXIR example still use the annotation API.) Has the way build configs are loaded changed as well? (I couldn't find the recent way to load build configs.)

/workspace/python/tvm/contrib/target/vitis_ai.py:138: UserWarning: You are using a deprecated way of passing build configs (e.g. `relay.ext.vitis_ai.options.target`). Check out the Vitis AI  documentation here: https://tvm.apache.org/docs/deploy/vitis_ai.html to switch to recommended way for passing build configs.
  "You are using a deprecated way of passing build configs (e.g."
Traceback (most recent call last):
  File "compile_pytorch_deeplab.py", line 211, in <module>
    lib = relay.build(mod, tvm_target, params=params)
  File "/workspace/python/tvm/relay/build_module.py", line 290, in build
    graph_json, runtime_mod, params = bld_mod.build(mod=ir_mod, target=target, params=params)
  File "/workspace/python/tvm/relay/build_module.py", line 136, in build
    self._build(mod, target, target_host)
  File "/workspace/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
KeyError: 'Traceback (most recent call last):\n  6: TVMFuncCall\n  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n  4: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)\n  3: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::backend::GraphExecutorCodegenModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n  2: tvm::relay::backend::GraphExecutorCodegen::Codegen(tvm::relay::Function)\n  1: tvm::relay::CompileEngineImpl::LowerExternalFunctions()\n  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n  File "/workspace/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun\n    rv = local_pyfunc(*pyargs)\n  File "/workspace/python/tvm/contrib/target/vitis_ai.py", line 215, in vitis_ai_compiler\n    name, xgraph_str, dpu_target, vai_build_dir, vai_work_dir, export_runtime_module\n  File "/workspace/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__\n    raise get_last_ffi_error()\nKeyError: "KeyError: (\'cpu-tf\',)\\nAt:\\n  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/runtime/runtime_factory.py(60): build_runtime\\n  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/base.py(347): build\\n  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/base.py(495): build_online_quant_rt_opaque_func\\n  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/opaque_func.py(113): opaque_func_wrapper\\n  /workspace/python/tvm/_ffi/_ctypes/packed_func.py(233): __call__\\n  /workspace/python/tvm/contrib/target/vitis_ai.py(215): vitis_ai_compiler\\n  /workspace/python/tvm/_ffi/_ctypes/packed_func.py(81): cfun\\n  /workspace/python/tvm/_ffi/_ctypes/packed_func.py(233): __call__\\n  /workspace/python/tvm/relay/build_module.py(136): build\\n  /workspace/python/tvm/relay/build_module.py(290): build\\n  compile_pytorch_deeplab.py(211): <module>"'
(vitis-ai-pytorch) Vitis-AI /workspace/deeplab_code > 

@jtuyls

jtuyls commented Apr 13, 2021

@abdulazizm Were you also in the vitis-ai-pytorch environment earlier? You should use the vitis-ai-tensorflow environment.

@abdulazizm
Author

abdulazizm commented Apr 13, 2021

@jtuyls Yeah, I was using the pytorch environment earlier too, since the deeplab model comes from the PyTorch framework and someone suggested using this conda environment (reference attached below). I will try the tensorflow environment too and keep you posted.

@abdulazizm Hi, I noticed that the conda you used is vitis-ai-tensorflow. If it's a pytorch model, you could have a try to use vitis-ai-pytorch.
For more details, you could refer to https://www.xilinx.com/html_docs/vitis_ai/1_3/deploying_running.html#zgy1576168058789

@jtuyls

jtuyls commented Apr 13, 2021

@abdulazizm I see, yeah, for the TVM work it is always the tensorflow environment. Not sure why the pytorch environment was working for you earlier.

@abdulazizm
Author

abdulazizm commented Apr 13, 2021

@jtuyls That worked. With the tensorflow conda environment I managed to build the library file, but on the EDGE device it throws the error below at runtime while loading the lib. Any idea?

root@pynq:/home/xilinx# python3 run_pytorch_deeplab.py
/usr/local/lib/python3.6/dist-packages/pyxir-0.1.6-py3.6-linux-aarch64.egg/pyxir/runtime/__init__.py:34: UserWarning: Could not load `cpu-tf` runtime because of error: No module named 'tensorflow'
  .format(e))
Traceback (most recent call last):
  File "run_pytorch_deeplab.py", line 63, in <module>
    lib = tvm.runtime.module.load_module(lib_path)
  File "/home/xilinx/tvm/python/tvm/runtime/module.py", line 472, in load_module
    return _ffi_api.ModuleLoadFromFile(path, fmt)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 322, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
KeyError: "KeyError: ('cpu-tf',)\nAt:\n  /usr/local/lib/python3.6/dist-packages/pyxir-0.1.6-py3.6-linux-aarch64.egg/pyxir/runtime/runtime_factory.py(60): build_runtime\n  /usr/local/lib/python3.6/dist-packages/pyxir-0.1.6-py3.6-linux-aarch64.egg/pyxir/base.py(347): build\n  /usr/local/lib/python3.6/dist-packages/pyxir-0.1.6-py3.6-linux-aarch64.egg/pyxir/base.py(495): build_online_quant_rt_opaque_func\n  /usr/local/lib/python3.6/dist-packages/pyxir-0.1.6-py3.6-linux-aarch64.egg/pyxir/opaque_func.py(113): opaque_func_wrapper\n  /home/xilinx/tvm/python/tvm/runtime/module.py(472): load_module\n  run_pytorch_deeplab.py(63): <module>"
root@pynq:/home/xilinx# 

Tried tvm.runtime.load_module(lib_path) instead of tvm.runtime.module.load_module(lib_path), but it didn't help - same error.

@jtuyls

jtuyls commented Apr 13, 2021

@abdulazizm I think your model didn't get quantized and compiled properly on the host machine. Make sure that you are providing enough calibration images and, to verify, check that the quantizer and compiler got called in the console output. Also, build and export for aarch64 afterwards; here is a full example: https://github.com/Xilinx/pyxir/blob/master/examples/tvm/edge_resnet_18_host.py.
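
For reference, a minimal sketch of that on-the-fly quantization step on the host (calibration_images and its preprocessing are assumptions here; lib is the output of relay.build): the quantizer and DPU compiler only get invoked once enough inferences have been run.

import tvm
from tvm.contrib import graph_executor

module = graph_executor.GraphModule(lib["default"](tvm.cpu()))
for img in calibration_images:  # e.g. 128 preprocessed (1, 3, 224, 224) float32 arrays
    module.set_input("data", img)
    module.run()
# afterwards, rebuild/export the module for aarch64 as in the linked example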

@abdulazizm
Author

@jtuyls With the default quant_size of 128 I was able to compile the model with quantization. Inference took around 3.85 seconds for an image from the COCO dataset (earlier it was 8.5 seconds - I guess that run used the CPU completely). Hopefully, if I can remove the dropout layer, I will see drastic improvements.

Thanks for the time, support and patience. Closing this issue.

@jtuyls

jtuyls commented Apr 15, 2021

@abdulazizm That isn't good performance; it means that a lot of the convolutions are still executed on the CPU. Removing the dropout might help, but there are also dilated convolutions with large padding values that are breaking up the DPU partition and causing those and subsequent convolutions to be offloaded to the CPU. Like these ones:

%970 = nn.conv2d(%969, meta[relay.Constant][517] /* ty=Tensor[(512, 512, 3, 3), float32] */, padding=[4, 4, 4, 4], dilation=[4, 4], channels=512, kernel_size=[3, 3], data_layout="NHWC") /* ty=Tensor[(1, 28, 28, 512), float32] */;
...
%1001 = nn.conv2d(%1000, meta[relay.Constant][532] /* ty=Tensor[(256, 2048, 3, 3), float32] */, padding=[12, 12, 12, 12], dilation=[12, 12], channels=256, kernel_size=[3, 3], data_layout="NHWC") /* ty=Tensor[(1, 28, 28, 256), float32] */;

It looks like those dilated convolutions with large padding values are doing more damage to the performance than the dropout, so to achieve good performance these operations will also have to be adjusted to DPU-supported dilated convolutions.

For your reference, on page 23 of the Zynq DPU product guide you can find a table of DPU-supported operations. As an example, the above convolutions are not supported because the padding values are larger than kernel_w/h - 1.
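
One hedged way to make that adjustment in PyTorch before export (illustrative only; clamp_dilations is not an existing helper, and since the weights were trained with the original atrous rates the accuracy should be re-validated) is to shrink the dilation and matching padding of the offending 3x3 convolutions so that padding stays within kernel_size - 1:

import torch.nn as nn

def clamp_dilations(model, new_rate=2):
    # For 3x3 convs with stride 1, padding == dilation keeps the output size,
    # and padding = 2 satisfies the DPU limit padding <= kernel_size - 1.
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.dilation[0] > 1:
            m.dilation = (new_rate, new_rate)
            m.padding = (new_rate, new_rate)
    return model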

@abdulazizm
Author

@jtuyls Tried with padding and dilation = (2,2) instead of (12,12), (24,24), (36,36), but it didn't help much. It took around the same ~4 seconds for single-image inference - not sure why.

@jtuyls

jtuyls commented Apr 15, 2021

@abdulazizm I guess there are still unsupported convolutions around then. Did you also replace the one with (4, 4)?

%970 = nn.conv2d(%969, meta[relay.Constant][517] /* ty=Tensor[(512, 512, 3, 3), float32] */, padding=[4, 4, 4, 4], dilation=[4, 4], channels=512, kernel_size=[3, 3], data_layout="NHWC") /* ty=Tensor[(1, 28, 28, 512), float32] */;

You can look at the annotation.compiler_end expressions in the Relay module after 'MergeCompilerRegions'. The operations after this have been annotated as 'unsupported' and therefore are not in the DPU partition. This should give some clues about why subsequent convolutions are still offloaded to the CPU.
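
A small sketch of that inspection (assuming mod is the annotated module from the snippet earlier in this thread):

from tvm import relay

merged = relay.transform.MergeCompilerRegions()(mod)
for line in str(merged["main"]).splitlines():
    # everything after an annotation.compiler_end boundary runs on the CPU
    if "compiler_end" in line:
        print(line)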

@abdulazizm
Author

@jtuyls You are right, I missed (4,4). Now I tried removing those too, but I couldn't get the library exported even though quantization completed.

INFO: Calibration Done.
2021-04-15 13:08:43.356638: W tensorflow/contrib/decent_q/utils/quantize_utils.cc:883] [DECENT_WARNING] Cannot find quantize info file: /tmp/tmp14y7tcjq//temp/Pad_37_aquant. Use default quantize info.
2021-04-15 13:08:43.356672: W tensorflow/contrib/decent_q/utils/quantize_utils.cc:883] [DECENT_WARNING] Cannot find quantize info file: /tmp/tmp14y7tcjq//temp/Pad_36_aquant. Use default quantize info.
2021-04-15 13:08:43.356779: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:1401] [DECENT_WARNING] Node nn.relu-94143046764624/y's output values are all zeros. This may cause error for DPU compiler, please check your float model.
INFO: Generating Deploy Model...
2021-04-15 13:08:45.003207: W tensorflow/contrib/decent_q/utils/deploy_quantized_graph.cc:1152] [DECENT_WARNING] Batchnorm Node (nn.relu-94143046764624 + Mul) is not folded. It will be converted to a Scale node (nn.relu-94143046764624) to deploy on DPU. This may cause accuracy decrease and error for DPU compiler.
INFO: Deploy Model Generated.
********************* Quantization Summary *********************
INFO: Output:
  quantize_eval_model: /tmp/tmp14y7tcjq/quantize_eval_model.pb
  deploy_model: /tmp/tmp14y7tcjq/deploy_model.pb
[VAI_C-BACKEND][Check Failed: kernel_param * input_channel_group <= 1024][/home/xbuild/conda-bld/dnnc_1606505494059/work/submodules/asicv2com/src/InstrGenerator/InstrGeneratorDilatedConv.cpp:161][DATA_OUTRANGE][Data value is out of range!]
*** Check failure stack trace: ***
Traceback (most recent call last):
  File "compile_pytorch_deeplab.py", line 296, in <module>
    InferenceSession.run()
  File "/workspace/python/tvm/contrib/graph_executor.py", line 206, in run
    self._run()
  File "tvm/_ffi/_cython/./packed_func.pxi", line 322, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: AssertionError: Can't retrieve right out tensor names from DNNC compiler output
At:
  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/contrib/target/components/DPUCZDX8G/vai_c.py(164): compile
  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/contrib/target/components/DPUCZDX8G/zcu104.py(81): xgraph_dpu_zcu104_compiler
  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/base.py(156): compile
  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/base.py(208): compile_opaque_func
  /home/vitis-ai-user/.local/lib/python3.6/site-packages/pyxir-0.1.6-py3.6-linux-x86_64.egg/pyxir/opaque_func.py(113): opaque_func_wrapper
  /workspace/python/tvm/contrib/graph_executor.py(206): run
  compile_pytorch_deeplab.py(296): <module>

@abdulazizm
Author

@jornt-xilinx Any suggestions for the above issue? Has dev-rf-test-0 been merged into the master branch?

@jtuyls

jtuyls commented Apr 21, 2021

@abdulazizm I merged dev-rf-test-0 into master yesterday (v0.1.7). As for the error, it looks like the compiler is now failing on a dilated conv2d operation that shouldn't have gotten through. Could you share the output of mod['main'] again after MergeCompilerRegions?

@abdulazizm
Author

Debug_log_pyxir_deeplab_less_dilated_conv.txt
mod_main_after_mergecompiler.txt
@jornt-xilinx Thanks for merging the changes into the master branch. I've attached the required log files for reference. Please let me know how we can proceed further. Do you want me to create a separate issue for this, or can we continue it here?

@abdulazizm abdulazizm reopened this Apr 27, 2021
@jtuyls

jtuyls commented Apr 30, 2021

@abdulazizm I think we are running into this DPU conv2d constraint: kernel_w * kernel_h * ceil(input_channel / channel_parallel) <= bank_depth / 2 for at least one of the conv2d's:

%986 = nn.relu(%985) /* ty=Tensor[(1, 28, 28, 2048), float32] */;
...
%999 = nn.conv2d(%986, %998, padding=[2, 2, 2, 2], dilation=[2, 2], channels=256, kernel_size=[3, 3], data_layout="NHWC") /* ty=Tensor[(1, 28, 28, 256), float32] */;

kernel_w, kernel_h = 3, 3
input_channel = 2048
channel_parallel = 16
bank_depth = 2048

So, kernel_w * kernel_h * ceil(input_channel / channel_parallel) = 3 * 3 * ceil(2048 / 16) = 1152, which is greater than bank_depth / 2 = 1024. I will make an adjustment to catch this earlier on and offload it to the CPU, but for performance the conv2d will have to be adjusted to be compatible with the DPU constraints.
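
A small sketch of that check (the helper name is illustrative), plugging in the numbers above:

import math

def fits_weight_bank(kernel_h, kernel_w, input_channel, channel_parallel=16, bank_depth=2048):
    # DPU constraint: kernel_h * kernel_w * ceil(input_channel / channel_parallel) <= bank_depth / 2
    return kernel_h * kernel_w * math.ceil(input_channel / channel_parallel) <= bank_depth // 2

print(fits_weight_bank(3, 3, 2048))  # False: 3 * 3 * 128 = 1152 > 1024
print(fits_weight_bank(3, 3, 512))   # True:  3 * 3 * 32  = 288 <= 1024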
