I'm trying to run the code as-is, but I don't know why I'm getting this warning on trainer.step(stepsize):

"UserWarning: Gradient of Parameter ssd1_batchnorm0_beta on context gpu(0) has not been updated by backward since last step. This could mean a bug in your model that made it only use a subset of the Parameters (Blocks) for this iteration. If you are intentionally only using a subset, call step with ignore_stale_grad=True to suppress this warning and skip updating of Parameters with stale gradient"
Thanks for your response, sir. I have solved the issue: I reduced the batch size from 32 to 20. I don't know why that fixed it (I'm new to MXNet), but that was the cause.
Also, the IAM dataset contains 1500+ images, but why were only 967 samples used for training and 232 for testing? And how can I increase the number of training samples?
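Those counts suggest a fixed train/test split of the subset the notebook loads (967 + 232 = 1199, roughly 80/20). As a hypothetical illustration, assuming nothing about the repo's actual IAMDataset API, here is how a split fraction determines the two counts, so raising the fraction is what would grow the training set:

```python
def split_counts(total, train_fraction):
    """Return (train, test) sizes for a given train/test split fraction."""
    n_train = int(total * train_fraction)
    return n_train, total - n_train

total_forms = 1199  # 967 train + 232 test, the counts reported above
print(split_counts(total_forms, 0.8))  # the ~80/20 split seen in the notebook
print(split_counts(total_forms, 0.9))  # a larger training share, smaller test set
```

Whether the split fraction is configurable (or the subset itself can be enlarged) depends on the repository's dataset code, which this sketch does not reproduce.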
Sir, I am a Bangladeshi student currently working on my undergraduate thesis, on the topic "Bangla handwritten OCR system". For page-to-word-level segmentation I am following (implementing) your system, which I have done. I ran the "line_word_segmentation" notebook for word segmentation with 400 epochs, but it is not detecting word segments properly. How many epochs should I set for word segmentation? Also, my email is " mahinqureship1@gmail.com "; it would be a great help if I could contact you by mail.