fatal error: elements.count must be greater than or equal to shape.volume: elements.count = 1024, shape.volume = 3136 #3
Comments
I guess you have some mismatch of shapes between Python and Swift. Probably 3136 means 7 * 7 * 64; it is the input size of fc1 in "Deep MNIST for Experts". But your tensor seems to have only 1024 elements. |
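The 3136 can be reproduced with a quick sketch of the Deep MNIST shape arithmetic (assuming the tutorial's standard layout: two 5x5 SAME-padded conv layers with 32 and 64 filters, each followed by 2x2 max pooling):

```python
import math

def fc1_input_volume(side):
    # Each 2x2 max pool with SAME padding halves the side, rounding up.
    after_pool1 = math.ceil(side / 2)         # after conv1 (32 filters) + pool
    after_pool2 = math.ceil(after_pool1 / 2)  # after conv2 (64 filters) + pool
    return after_pool2 * after_pool2 * 64

print(fc1_input_volume(28))  # 28 -> 14 -> 7, so 7 * 7 * 64 = 3136
```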
Do all the models convert in this way: only changing the name 'W_conv1' to another? |
I guess there is something wrong with the shapes of the tensors. Could you show me both of your codes, in Python and in Swift? |
The Deep MNIST code I use is here:

```swift
private func classify(plate plate: UIImage, withNSValueArray array: NSMutableArray) -> String {
    var result = ""
    for rectValue in array.reverse() {
        if rectValue is NSValue {
            // Cut out one character from the plate and convert it to grayscale.
            let image = OpenCVWrapper.cutOutCharFrom(plate, withRectValue: rectValue as! NSValue)
            let grayImage = OpenCVWrapper.convertBGR2GRAY(image)
            let cgImage = grayImage.CGImage
            // Render the character into an inputSize x inputSize 8-bit gray buffer.
            var pixels = [UInt8](count: inputSize * inputSize, repeatedValue: 0)
            let context = CGBitmapContextCreate(&pixels, inputSize, inputSize, 8, inputSize,
                                                CGColorSpaceCreateDeviceGray()!,
                                                CGBitmapInfo.ByteOrderDefault.rawValue)!
            let rect = CGRect(x: 0.0, y: 0.0, width: CGFloat(inputSize), height: CGFloat(inputSize))
            CGContextClearRect(context, rect)
            CGContextDrawImage(context, rect, cgImage)
            // Normalize pixels to [0, 1] and feed them to the classifier.
            let input = Tensor(shape: [Dimension(inputSize), Dimension(inputSize), 1],
                               elements: pixels.map { Float($0) / 255.0 })
            let resultInt = classifier.classify(input)
            result += String(resultInt)
        }
    }
    print("result is \(result)")
    return result
}
```

Before I replaced the models, it ran successfully, but the accuracy was not good enough. So I tried to use my own models, but it doesn't work. |
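For context, the precondition that fires here can be paraphrased in Python. This is a sketch of the check's logic only, not the Swift library's actual implementation:

```python
from functools import reduce

def init_tensor(shape, elements):
    # A tensor needs at least shape.volume (= product of dimensions) elements.
    volume = reduce(lambda a, b: a * b, shape, 1)
    if len(elements) < volume:
        raise ValueError(
            "elements.count must be greater than or equal to shape.volume: "
            "elements.count = %d, shape.volume = %d" % (len(elements), volume)
        )
    return (shape, elements)

# Pooled activations of a mismatched input give 1024 elements,
# while the pretrained fc1 weights expect 7 * 7 * 64 = 3136:
try:
    init_tensor([7, 7, 64], [0.0] * 1024)
except ValueError as e:
    print(e)
```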
Is the `inputSize` 16? I guess the shapes are:
16x16x1 -> 8x8x32 -> 4x4x64 = 1024 |
I got it. How careless I am! I changed the `inputSize` and it works now. Thank you very much! |
👍 |
Because of 14x14x1 -> 7x7x32 -> 4x4x64 = 1024. |
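The pooled-shape arithmetic above can be checked with a short sketch (assuming 2x2 max pools with SAME padding, which round odd sides up):

```python
import math

def volume_after_two_pools(side):
    # 2x2 max pooling with SAME padding rounds odd sides up.
    pooled = math.ceil(math.ceil(side / 2) / 2)
    return pooled * pooled * 64

print(volume_after_two_pools(14))  # 14 -> 7 -> 4, so 4 * 4 * 64 = 1024
print(volume_after_two_pools(28))  # 28 -> 14 -> 7, so 7 * 7 * 64 = 3136
```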
I used my own images to train with TensorFlow's Deep MNIST example code.
And I used the way you gave in the other issue to output the models.
But when I replace your models with mine and run, I get this error: fatal error: elements.count must be greater than or equal to shape.volume: elements.count = 1024, shape.volume = 3136.
I think maybe there is something wrong with my models.
But the Deep MNIST training was successful, and the output of the models was successful, too.
What can I do to solve this problem? In order to replace the models successfully, should I do something with the `Classifier` struct?