So, I'm doing something odd in the character n-gram model right now: I create a loss op once when the classifier is created, and then again every time new training starts. If I remember correctly, creating that first op had something to do with getting saving or summaries to work properly, but I'm not entirely sure anymore; I can check whether it's actually necessary.
It's possible that it makes more sense to just pass in training parameters and such when creating the classifier, so the training and loss ops only need to be created once.
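For reference, here's a minimal sketch of that approach, assuming a hypothetical `CharNgramClassifier` and TF 1.x-style graph construction (the actual class, model, and hyperparameters will differ):

```python
import tensorflow as tf  # TF 1.x graph-style API

class CharNgramClassifier:
    """Sketch: build the loss and train ops exactly once, at construction."""

    def __init__(self, n_features, n_classes, learning_rate=1e-3):
        self.inputs = tf.placeholder(tf.float32, [None, n_features])
        self.labels = tf.placeholder(tf.int64, [None])
        logits = tf.layers.dense(self.inputs, n_classes)

        # Single loss op, created once instead of per training run.
        self.loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=self.labels, logits=logits))

        # Single train op; Adam's slot variables are created here,
        # so they exist before any Saver is constructed.
        self.train_op = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)

        # Saver built after all variables (including Adam's) exist.
        self.saver = tf.train.Saver()
```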
Also, I'm fairly sure that the way things work now, saving, reloading into a new classifier, and then trying to train more will end up resetting to the default loss op. And since there's a new training op for each loss op, I don't think Adam's variables will get restored properly (though that probably only affects training time, not correctness).
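A rough illustration of why the Adam state gets lost (a hypothetical minimal repro, not our actual code): in TF 1.x, Adam's slot variables only come into existence when the train op is built, so a `Saver` constructed before that point won't save or restore them.

```python
import tensorflow as tf  # TF 1.x graph-style API

x = tf.get_variable("x", initializer=1.0)
loss = tf.square(x)

saver_before = tf.train.Saver()  # only sees `x`

# Building the train op adds Adam's slot variables
# (moment estimates and power accumulators).
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)

saver_after = tf.train.Saver()  # sees `x` plus Adam's state

print([v.name for v in tf.global_variables()])
# e.g. ['x:0', 'beta1_power:0', 'beta2_power:0', 'x/Adam:0', 'x/Adam_1:0']
```

A checkpoint written with `saver_before` drops Adam's state, so restoring from it effectively restarts the optimizer's moment estimates from scratch. The same thing happens if a fresh train op (with fresh slot variables) is created after restoring.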
We'll probably follow approximately the same interface as the character n-gram model.