Thanks for this work; I read your article on the Nanonets site. But I have one doubt about this function:
```python
def onehot(idx, num_classes):
    """
    1-hot encoding.
    """
    encoding = np.zeros(num_classes)
    encoding[1] = 1
    return encoding
```
Here the 1 is always set at position 1, and only because of that do we get 1.0 accuracy. If we instead set the 1 based on `idx` (which is the correct encoding), accuracy drops to around 0.60. Is there any way to improve the accuracy of the model?
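For reference, a minimal sketch of the corrected helper (my own version, assuming `idx` is the integer class label) that places the 1 at the position given by `idx`:

```python
import numpy as np

def onehot(idx, num_classes):
    """
    1-hot encoding: a vector of zeros with a single 1 at position idx.
    """
    encoding = np.zeros(num_classes)
    encoding[idx] = 1  # use the class index, not a fixed position
    return encoding
```

With this fix, `np.argmax` on a label recovers the original class index instead of always returning 1.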
Thank you for your ticket, Anand. I will address the first problem (onehot) in my next pull request.
The second problem (improving model accuracy) I will try to address in a future PR.
The validation accuracy (val_acc) with the fixed onehot is 0.86, so it is still reasonable. Please see last lines of the training log, pasted below:
Epoch: 0061 train_loss= 0.03845 train_acc= 0.86364 val_loss= 0.03895 val_acc= 0.85620 time= 0.61268
Validation cost is not improving. Early stopping...
Optimization Finished!
Test set results: cost= 0.03870 accuracy= 0.85879 time= 0.37214
Accuracy is calculated from np.argmax(labels), which returns 1 for every row, i.e. [1, 1, 1, 1, ...], because of how the labels are encoded. So the model is learning to assign a higher probability to that one position. When I test the model, np.argmax(pred[test_mask], 1) gives 1 for 1000+ values, and the rest are zero, out of 1200+.
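To illustrate the argmax behavior (my own sketch, not code from the repo): with correctly one-hot-encoded rows, `np.argmax` needs `axis=1` to recover one class index per row; without an axis it flattens the matrix and returns a single index.

```python
import numpy as np

# Correctly one-hot-encoded labels: one 1 per row, at the class index.
labels = np.array([[0, 1, 0, 0],   # class 1
                   [0, 0, 0, 1],   # class 3
                   [1, 0, 0, 0]])  # class 0

# axis=1 takes the argmax across each row, yielding the class indices.
print(np.argmax(labels, axis=1))  # -> [1 3 0]
```

With the buggy encoding (a 1 always at position 1), every row's argmax is 1, which is why the predictions collapse onto a single class.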