image pixel value #22
Comments
Whereas most activation functions (ReLU, sigmoid, tanh) have their phase change at zero, and random weight initializers also tend to be symmetric about that point, it's best to keep your raw input data nominally centered around zero.
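The zero-centering described here amounts to a one-line preprocessing step. A minimal numpy sketch of the idea (the function name is mine, not the repo's actual `_preprocess_image`):

```python
import numpy as np

def zero_center(image):
    """Rescale uint8 pixels [0, 255] to zero-centered floats [-0.5, 0.5].

    Dividing by 255 maps to [0, 1]; subtracting 0.5 centers the range
    at zero, matching the symmetric regime of common activations and
    weight initializers.
    """
    return image.astype(np.float32) / 255.0 - 0.5

# Example: a synthetic 2x2 grayscale patch
patch = np.array([[0, 128], [200, 255]], dtype=np.uint8)
centered = zero_center(patch)
print(centered.min(), centered.max())  # -0.5 0.5
```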
So, like Batch Normalization?
And another question: in the last layer, the output should be a probability over each character, but you use ReLU as the activation function. Why not softmax?
Yes, similar to batch norm. The CTC loss layer takes the raw scores (logits) as its input and applies the softmax internally.
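In other words, the network emits unnormalized scores, and the normalization into per-character probabilities happens inside the loss. A minimal numpy sketch of what "taking the softmax internally" means (an illustration, not the actual CTC implementation):

```python
import numpy as np

def softmax(logits, axis=-1):
    # Subtract the max for numerical stability; the result is unchanged.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Raw scores for one time step over 4 character classes (here ReLU
# outputs, so non-negative, but any real values work).
scores = np.array([2.0, 0.0, 1.0, 0.5])
probs = softmax(scores)
print(probs.sum())  # 1.0 -- a proper distribution over characters
```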
Hello, I used your project to train my plate data. The number of character classes is 66, so I changed the count from 63 to 67 in "logits = model.rnn_layers(features, sequence_length, 66)" (train.py, line 178), and the failure below occurs:

    Caused by op 'save/Assign_109', defined at:
    InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [67] rhs shape= [63]

This means 67 elements are being loaded into a variable with 63 elements, so my change is invalid. Can you guide me on how to fix this? Thank you very much.
@oftenliu I don't have any clear ideas why this would happen. One long-shot possibility is that you trained and saved a 63-dim model checkpoint (saving is automatic) before making the change. To verify you're starting from scratch, either delete the old checkpoint files or train into a fresh output directory.
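One way to guarantee a fresh start is to clear any saved checkpoints before retraining. A sketch, assuming a hypothetical `./checkpoints` output directory (adjust the path to wherever your train.py saves):

```python
import glob
import os

# Hypothetical checkpoint directory -- match it to your train.py output path.
ckpt_dir = "./checkpoints"

# Remove stale checkpoint files so the saver cannot try to restore the
# old 63-dim output layer into the new, larger one.
stale = glob.glob(os.path.join(ckpt_dir, "model.ckpt*"))
stale += glob.glob(os.path.join(ckpt_dir, "checkpoint"))
for path in stale:
    os.remove(path)
```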
Perfect! As you said, I had saved a 63-dim model checkpoint before making the change. You are really something! Thank you.
Excuse me, in the function _preprocess_image(image) (Mjsynth.py), why rescale the pixel values to float([-0.5, 0.5]) rather than float([0, 1])? Can you tell me why? Thanks.