FIXME: Cannot handle Mean values beyond computePrecision limits for now #211

Closed

LeiWang1999 opened this issue Jan 31, 2021 · 1 comment
root@a02d337a99a5:/usr/local/nvdla/sw/umd# /usr/local/nvdla/sw/prebuilt/x86-ubuntu/nvdla_compiler --prototxt /home/Downloads/cifar-10-resnet20/resnet-18.prototxt --caffemodel /home/Downloads/cifar-10-resnet20/resnet-18.caffemodel --cprecision int8
creating new wisdom context...
opening wisdom context...
parsing caffe network...
libnvdla<3> mark softmax
Marking total 1 outputs
initialize all tensors with const scaling factors of 127...
attaching parsed network to the wisdom...
compiling profile "fast-math"... config "nv_full"...
libnvdla<2> Prototxt #chnls (C = 3) != Profile #chnls for input (NVDLA_IMG_A8B8G8R8: C = 4). Preferring #chnls from Profile for compiling.
libnvdla<2> Unable to do IMG channel post-extension for weights of node 'dc-conv-0', proceed without channel post-extension
(DLA) Error 0x0000000b: FIXME: Cannot handle Mean values beyond computePrecision limits for now (in engine-ast/BatchNormOp.cpp, function preProcessAuxData(), line 515)
(DLA) Error 0x0000000b: (propagating from engine-ast/EngineGraph.cpp, function preProcessAuxData(), line 1067)
(DLA) Error 0x0000000b: (propagating from Compiler.cpp, function preProcessAuxData(), line 1600)
(DLA) Error 0x00000008: failed compulation phase: preProcessAuxData (propagating from Compiler.cpp, function compileInternal(), line 512)
(DLA) Error 0x00000008: (propagating from Compiler.cpp, function compileInternal(), line 423)
(DLA) Error 0x00000008: (propagating from Compiler.cpp, function compile(), line 372)
(DLA) Error 0x00000008: (propagating from CompileTest.cpp, function compileProfile(), line 66)
(DLA) Error 0x00000008: (propagating from ParseTest.cpp, function parseAndCompile(), line 252)
(DLA) Error 0x00000008: (propagating from main.cpp, function launchTest(), line 111)

The content of resnet-18.prototxt:

name: "resnet18-cifar10"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }
}
layer {
  name: "first_conv"
  type: "Convolution"
  bottom: "data"
  top: "first_conv"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "first_conv_bn"
  type: "BatchNorm"
  bottom: "first_conv"
  top: "first_conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "first_conv_scale"
  type: "Scale"
  bottom: "first_conv"
  top: "first_conv"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "first_conv_relu"
  type: "ReLU"
  bottom: "first_conv"
  top: "first_conv"
}
layer {
  name: "group0_block0_conv0"
  type: "Convolution"
  bottom: "first_conv"
  top: "group0_block0_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group0_block0_conv0_bn"
  type: "BatchNorm"
  bottom: "group0_block0_conv0"
  top: "group0_block0_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group0_block0_conv0_scale"
  type: "Scale"
  bottom: "group0_block0_conv0"
  top: "group0_block0_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group0_block0_conv0_relu"
  type: "ReLU"
  bottom: "group0_block0_conv0"
  top: "group0_block0_conv0"
}
layer {
  name: "group0_block0_conv1"
  type: "Convolution"
  bottom: "group0_block0_conv0"
  top: "group0_block0_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group0_block0_conv1_bn"
  type: "BatchNorm"
  bottom: "group0_block0_conv1"
  top: "group0_block0_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group0_block0_conv1_scale"
  type: "Scale"
  bottom: "group0_block0_conv1"
  top: "group0_block0_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group0_block0_sum"
  type: "Eltwise"
  bottom: "group0_block0_conv1"
  bottom: "first_conv"
  top: "group0_block0_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group0_block0_relu"
  type: "ReLU"
  bottom: "group0_block0_sum"
  top: "group0_block0_sum"
}
layer {
  name: "group0_block1_conv0"
  type: "Convolution"
  bottom: "group0_block0_sum"
  top: "group0_block1_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group0_block1_conv0_bn"
  type: "BatchNorm"
  bottom: "group0_block1_conv0"
  top: "group0_block1_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group0_block1_conv0_scale"
  type: "Scale"
  bottom: "group0_block1_conv0"
  top: "group0_block1_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group0_block1_conv0_relu"
  type: "ReLU"
  bottom: "group0_block1_conv0"
  top: "group0_block1_conv0"
}
layer {
  name: "group0_block1_conv1"
  type: "Convolution"
  bottom: "group0_block1_conv0"
  top: "group0_block1_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group0_block1_conv1_bn"
  type: "BatchNorm"
  bottom: "group0_block1_conv1"
  top: "group0_block1_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group0_block1_conv1_scale"
  type: "Scale"
  bottom: "group0_block1_conv1"
  top: "group0_block1_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group0_block1_sum"
  type: "Eltwise"
  bottom: "group0_block1_conv1"
  bottom: "group0_block0_sum"
  top: "group0_block1_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group0_block1_relu"
  type: "ReLU"
  bottom: "group0_block1_sum"
  top: "group0_block1_sum"
}
layer {
  name: "group0_block2_conv0"
  type: "Convolution"
  bottom: "group0_block1_sum"
  top: "group0_block2_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group0_block2_conv0_bn"
  type: "BatchNorm"
  bottom: "group0_block2_conv0"
  top: "group0_block2_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group0_block2_conv0_scale"
  type: "Scale"
  bottom: "group0_block2_conv0"
  top: "group0_block2_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group0_block2_conv0_relu"
  type: "ReLU"
  bottom: "group0_block2_conv0"
  top: "group0_block2_conv0"
}
layer {
  name: "group0_block2_conv1"
  type: "Convolution"
  bottom: "group0_block2_conv0"
  top: "group0_block2_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group0_block2_conv1_bn"
  type: "BatchNorm"
  bottom: "group0_block2_conv1"
  top: "group0_block2_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group0_block2_conv1_scale"
  type: "Scale"
  bottom: "group0_block2_conv1"
  top: "group0_block2_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group0_block2_sum"
  type: "Eltwise"
  bottom: "group0_block2_conv1"
  bottom: "group0_block1_sum"
  top: "group0_block2_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group0_block2_relu"
  type: "ReLU"
  bottom: "group0_block2_sum"
  top: "group0_block2_sum"
}
layer {
  name: "group1_block0_conv0"
  type: "Convolution"
  bottom: "group0_block2_sum"
  top: "group1_block0_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 2
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block0_conv0_bn"
  type: "BatchNorm"
  bottom: "group1_block0_conv0"
  top: "group1_block0_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block0_conv0_scale"
  type: "Scale"
  bottom: "group1_block0_conv0"
  top: "group1_block0_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block0_conv0_relu"
  type: "ReLU"
  bottom: "group1_block0_conv0"
  top: "group1_block0_conv0"
}
layer {
  name: "group1_block0_conv1"
  type: "Convolution"
  bottom: "group1_block0_conv0"
  top: "group1_block0_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block0_conv1_bn"
  type: "BatchNorm"
  bottom: "group1_block0_conv1"
  top: "group1_block0_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block0_conv1_scale"
  type: "Scale"
  bottom: "group1_block0_conv1"
  top: "group1_block0_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block0_proj"
  type: "Convolution"
  bottom: "group0_block2_sum"
  top: "group1_block0_proj"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 2
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block0_proj_bn"
  type: "BatchNorm"
  bottom: "group1_block0_proj"
  top: "group1_block0_proj"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block0_proj_scale"
  type: "Scale"
  bottom: "group1_block0_proj"
  top: "group1_block0_proj"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block0_sum"
  type: "Eltwise"
  bottom: "group1_block0_proj"
  bottom: "group1_block0_conv1"
  top: "group1_block0_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group1_block0_relu"
  type: "ReLU"
  bottom: "group1_block0_sum"
  top: "group1_block0_sum"
}
layer {
  name: "group1_block1_conv0"
  type: "Convolution"
  bottom: "group1_block0_sum"
  top: "group1_block1_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block1_conv0_bn"
  type: "BatchNorm"
  bottom: "group1_block1_conv0"
  top: "group1_block1_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block1_conv0_scale"
  type: "Scale"
  bottom: "group1_block1_conv0"
  top: "group1_block1_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block1_conv0_relu"
  type: "ReLU"
  bottom: "group1_block1_conv0"
  top: "group1_block1_conv0"
}
layer {
  name: "group1_block1_conv1"
  type: "Convolution"
  bottom: "group1_block1_conv0"
  top: "group1_block1_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block1_conv1_bn"
  type: "BatchNorm"
  bottom: "group1_block1_conv1"
  top: "group1_block1_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block1_conv1_scale"
  type: "Scale"
  bottom: "group1_block1_conv1"
  top: "group1_block1_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block1_sum"
  type: "Eltwise"
  bottom: "group1_block1_conv1"
  bottom: "group1_block0_sum"
  top: "group1_block1_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group1_block1_relu"
  type: "ReLU"
  bottom: "group1_block1_sum"
  top: "group1_block1_sum"
}
layer {
  name: "group1_block2_conv0"
  type: "Convolution"
  bottom: "group1_block1_sum"
  top: "group1_block2_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block2_conv0_bn"
  type: "BatchNorm"
  bottom: "group1_block2_conv0"
  top: "group1_block2_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block2_conv0_scale"
  type: "Scale"
  bottom: "group1_block2_conv0"
  top: "group1_block2_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block2_conv0_relu"
  type: "ReLU"
  bottom: "group1_block2_conv0"
  top: "group1_block2_conv0"
}
layer {
  name: "group1_block2_conv1"
  type: "Convolution"
  bottom: "group1_block2_conv0"
  top: "group1_block2_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group1_block2_conv1_bn"
  type: "BatchNorm"
  bottom: "group1_block2_conv1"
  top: "group1_block2_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group1_block2_conv1_scale"
  type: "Scale"
  bottom: "group1_block2_conv1"
  top: "group1_block2_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group1_block2_sum"
  type: "Eltwise"
  bottom: "group1_block2_conv1"
  bottom: "group1_block1_sum"
  top: "group1_block2_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group1_block2_relu"
  type: "ReLU"
  bottom: "group1_block2_sum"
  top: "group1_block2_sum"
}
layer {
  name: "group2_block0_conv0"
  type: "Convolution"
  bottom: "group1_block2_sum"
  top: "group2_block0_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 2
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group2_block0_conv0_bn"
  type: "BatchNorm"
  bottom: "group2_block0_conv0"
  top: "group2_block0_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group2_block0_conv0_scale"
  type: "Scale"
  bottom: "group2_block0_conv0"
  top: "group2_block0_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group2_block0_conv0_relu"
  type: "ReLU"
  bottom: "group2_block0_conv0"
  top: "group2_block0_conv0"
}
layer {
  name: "group2_block0_conv1"
  type: "Convolution"
  bottom: "group2_block0_conv0"
  top: "group2_block0_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group2_block0_conv1_bn"
  type: "BatchNorm"
  bottom: "group2_block0_conv1"
  top: "group2_block0_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group2_block0_conv1_scale"
  type: "Scale"
  bottom: "group2_block0_conv1"
  top: "group2_block0_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group2_block0_proj"
  type: "Convolution"
  bottom: "group1_block2_sum"
  top: "group2_block0_proj"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 1
    stride: 2
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group2_block0_proj_bn"
  type: "BatchNorm"
  bottom: "group2_block0_proj"
  top: "group2_block0_proj"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group2_block0_proj_scale"
  type: "Scale"
  bottom: "group2_block0_proj"
  top: "group2_block0_proj"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group2_block0_sum"
  type: "Eltwise"
  bottom: "group2_block0_proj"
  bottom: "group2_block0_conv1"
  top: "group2_block0_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group2_block0_relu"
  type: "ReLU"
  bottom: "group2_block0_sum"
  top: "group2_block0_sum"
}
layer {
  name: "group2_block1_conv0"
  type: "Convolution"
  bottom: "group2_block0_sum"
  top: "group2_block1_conv0"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group2_block1_conv0_bn"
  type: "BatchNorm"
  bottom: "group2_block1_conv0"
  top: "group2_block1_conv0"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group2_block1_conv0_scale"
  type: "Scale"
  bottom: "group2_block1_conv0"
  top: "group2_block1_conv0"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group2_block1_conv0_relu"
  type: "ReLU"
  bottom: "group2_block1_conv0"
  top: "group2_block1_conv0"
}
layer {
  name: "group2_block1_conv1"
  type: "Convolution"
  bottom: "group2_block1_conv0"
  top: "group2_block1_conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "group2_block1_conv1_bn"
  type: "BatchNorm"
  bottom: "group2_block1_conv1"
  top: "group2_block1_conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  name: "group2_block1_conv1_scale"
  type: "Scale"
  bottom: "group2_block1_conv1"
  top: "group2_block1_conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "group2_block1_sum"
  type: "Eltwise"
  bottom: "group2_block1_conv1"
  bottom: "group2_block0_sum"
  top: "group2_block1_sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "group2_block1_relu"
  type: "ReLU"
  bottom: "group2_block1_sum"
  top: "group2_block1_sum"
}
layer {
  name: "global_avg_pool"
  type: "Pooling"
  bottom: "group2_block1_sum"
  top: "global_avg_pool"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}
layer {
  name: "fc"
  type: "InnerProduct"
  bottom: "global_avg_pool"
  top: "fc"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "softmax"
  type: "Softmax"
  bottom: "fc"
  top: "softmax"
}

I debugged nvdla_compiler. What does numMeanBlobs mean? When I ran the same debug on a ResNet-18 for ImageNet, numMeanBlobs was 1 and the model compiled successfully, and this value is not affected when calibration is not set.

[screenshot: debugger session showing the numMeanBlobs value]
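
For anyone hitting the same check, here is a minimal sketch (assuming pycaffe and numpy are installed, and using the layer names from the prototxt above) that prints the effective running mean stored in each BatchNorm layer of the caffemodel. The error message suggests these means are what the int8 computePrecision check rejects when they grow too large:

import caffe
import numpy as np

net = caffe.Net('resnet-18.prototxt', 'resnet-18.caffemodel', caffe.TEST)

for name, blobs in net.params.items():
    if not name.endswith('_bn'):
        continue  # only BatchNorm layers are of interest here
    # Caffe's BatchNorm layer stores three blobs: the accumulated mean,
    # the accumulated variance, and a moving-average scale factor;
    # dividing the first two by the factor yields the true running stats.
    mean, var, factor = (np.asarray(b.data) for b in blobs)
    f = float(factor.ravel()[0])
    eff_mean = mean / f if f != 0 else mean
    print('%-28s max |running mean| = %.3f' % (name, np.abs(eff_mean).max()))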

LeiWang1999 (Author) commented Jan 31, 2021

Resolved. I deleted all of the following param blocks from the BatchNorm layers:

param {
  lr_mult: 0
  decay_mult: 0
}
param {
  lr_mult: 0
  decay_mult: 0
}
param {
  lr_mult: 0
  decay_mult: 0
}

and trained again. It seems that the input image values were too large, which caused some very large weight values.
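
For reference, one common way to keep input magnitudes small is to scale raw pixels into [0, 1] in the training net's data layer. A minimal sketch (the transform_param scale field is standard Caffe; the data layer and LMDB source shown here are hypothetical, since the deploy prototxt above only has an Input layer):

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00390625  # 1/255: maps 0-255 pixel values into [0, 1]
  }
  data_param {
    source: "cifar10_train_lmdb"  # hypothetical LMDB path
    batch_size: 128
    backend: LMDB
  }
}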
