How to show the loss curves of the training set and validation set at the same time using a custom Estimator? #18858
Comments
Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant to your case, or leave them as N/A? Thanks.
I suspect this is a possible answer: ps - I think
Is this just a documentation gap?
It is really helpful to see loss and accuracy right next to each other; I think it would be a great default setting. And it really is important to see the training and validation loss together -- to check whether they begin to diverge. A rough proposal (not styled for TensorBoard, but still):
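The "begin to diverge" check described above can even be sketched without TensorBoard: given the two logged loss histories, find the first step where validation loss starts climbing while training loss keeps falling. A minimal pure-Python sketch -- the function name, `patience` parameter, and the example histories are all illustrative, not part of any TensorFlow API:

```python
def first_divergence_step(train_loss, val_loss, patience=2):
    """Return the first index at which validation loss has risen for
    `patience` consecutive steps while training loss kept falling,
    or None if the two curves never diverge that way."""
    rising = 0
    for i in range(1, min(len(train_loss), len(val_loss))):
        if val_loss[i] > val_loss[i - 1] and train_loss[i] < train_loss[i - 1]:
            rising += 1
            if rising >= patience:
                return i - patience + 1  # step where the climb began
        else:
            rising = 0
    return None

# Hypothetical loss values logged once per evaluation round.
train = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.25]
val   = [1.1, 0.9, 0.7, 0.65, 0.7, 0.78, 0.85]
print(first_divergence_step(train, val))  # → 4
```

Seeing both curves on one chart is exactly what makes this kind of overfitting check possible by eye.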
This seems like a feature request; @dsmilkov is this your territory?
I didn't work on the charts in TensorBoard, but @jart would be able to help/delegate here.
Seems like a good feature to have. I'm unsure whether the @tensorflowbutler message above means the issue will get auto-closed or that it will now get more attention. Either way -- saying 'seems like a good feature to have' 😉
@jart, gentle ping: could you please advise or delegate?
I am also looking for this feature; it would be great to have it.
+1 to have this feature out-of-the-box
Evaluation runs on checkpoints. Maybe the reason you see only one evaluation point is that there is only one checkpoint. Could you please play with the checkpoint-saving frequency settings?
I think the issue goes deeper, into the Estimator architecture: the EstimatorSpec returned in the training stage does not contain eval_metric_ops (look at any Estimator _Head). If we look at the custom Estimator guide, accuracy will be shown in TensorBoard only if we use a custom model_fn and log it ourselves with tf.summary.scalar('accuracy', accuracy[1])
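A minimal sketch of that workaround, assuming the TF 1.x API this thread uses (layer sizes, feature key, and optimizer are illustrative): inside a custom model_fn, compute the metric once, expose it via eval_metric_ops for evaluation, and also emit it as a scalar summary so it shows up on the training chart.

```python
import tensorflow as tf  # TF 1.x API, as used in this thread

def model_fn(features, labels, mode, params):
    # Illustrative toy model; replace with the real network.
    logits = tf.layers.dense(features['x'], params['n_classes'])
    predictions = tf.argmax(logits, axis=1)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    # eval_metric_ops only surfaces during EVAL; logging the running
    # accuracy as a scalar summary makes it appear during TRAIN too.
    accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
    tf.summary.scalar('accuracy', accuracy[1])

    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
        train_op = optimizer.minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    return tf.estimator.EstimatorSpec(
        mode, loss=loss, eval_metric_ops={'accuracy': accuracy})
```

Because training and evaluation then write a summary under the same tag, TensorBoard can overlay the two curves once both runs are in the log directory.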
Hi @zjy8006 I'm closing this issue since I think checkpointing is the main reason you couldn't see more evaluation points. |
Hi @ispirmustafa, in my experience, configuring how often checkpoints are saved did the trick.
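For reference, the knobs being discussed live on RunConfig in TF 1.x; a hedged configuration sketch (the numeric values and the commented-out `my_model_fn` / model_dir path are illustrative, not from the original post):

```python
import tensorflow as tf  # TF 1.x API, as in this thread

# save_checkpoints_steps (or save_checkpoints_secs) controls how often
# checkpoints are written; since evaluation runs on checkpoints, each
# new checkpoint becomes one more point on the evaluation curve.
run_config = tf.estimator.RunConfig(
    save_checkpoints_steps=100,  # illustrative value
    keep_checkpoint_max=5,
)

# estimator = tf.estimator.Estimator(
#     model_fn=my_model_fn,        # the custom model_fn from the issue
#     model_dir='/tmp/model',      # illustrative path
#     config=run_config)
```

Saving checkpoints more often trades disk and evaluation time for a denser evaluation curve.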
So what is the final solution to this? Has TensorFlow added this feature or fixed this "bug"? |
No, there is no easy solution. The first way you can go is to compute and write the metrics manually from a custom model_fn. The second way, which I built for myself, is an Estimator wrapper.
I think the correct way is to use hooks or listeners. But this is non-trivial. |
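One shape the listener approach can take, assuming the TF 1.x API (the class name and wiring are illustrative): a CheckpointSaverListener that evaluates right after every checkpoint save, so each checkpoint contributes one evaluation point.

```python
import tensorflow as tf  # TF 1.x API

class EvalAfterSaveListener(tf.train.CheckpointSaverListener):
    """Illustrative listener: run evaluation each time a checkpoint
    is saved, so the eval curve gets a point per checkpoint."""

    def __init__(self, estimator, eval_input_fn):
        self._estimator = estimator
        self._eval_input_fn = eval_input_fn

    def after_save(self, session, global_step_value):
        # Writes eval summaries under model_dir/eval.
        self._estimator.evaluate(self._eval_input_fn)

# Hypothetical wiring; `estimator`, `train_input_fn`, and `eval_input_fn`
# are assumed to exist:
# estimator.train(
#     train_input_fn,
#     saving_listeners=[EvalAfterSaveListener(estimator, eval_input_fn)])
```

As the comment above says, this is non-trivial: the listener runs evaluation synchronously inside the training loop, which slows training down.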
Can someone please reopen this issue since it was never really resolved? |
Hi, recently I used custom_estimator.py to build a regression model. In order to see clearly how the loss value changes on the training set and the validation set, I need to know how to show the loss curves of both at the same time. I tried the train_and_evaluate API of Estimator and got the following picture.
As it shows, the evaluation result is a single point, but I want a line like the training-set loss curve, just like the picture shown below.
Here is my system information:
Have I written custom code: N/A
OS: tested on Windows 10 1709
TensorFlow installed from: Anaconda 5.1.0 with Python 3.6.4
TensorFlow version: tested on tensorflow-gpu 1.7.0
CUDA/cuDNN version: 9.0 for TF 1.7
GPU model: NVIDIA Quadro K2100M, 2 GB of memory
Bazel version: N/A
Exact command to reproduce: N/A
Here is the custom estimator:
And here is how I used the train_and_evaluate API:
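The original snippet is not visible in this copy of the thread. For reference, the usual shape of such a call in TF 1.x looks like the sketch below; the input functions, `my_model_fn`, the model_dir path, and all numeric values are illustrative, not the poster's actual code:

```python
import tensorflow as tf  # TF 1.x API

def train_input_fn():
    # Illustrative stand-in for the real training input pipeline.
    dataset = tf.data.Dataset.from_tensor_slices(
        ({'x': [[1.0], [2.0]]}, [0.0, 1.0]))
    return dataset.repeat().batch(2)

def eval_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices(({'x': [[3.0]]}, [2.0]))
    return dataset.batch(1)

# `my_model_fn` is assumed to be the custom model_fn from the issue.
estimator = tf.estimator.Estimator(
    model_fn=my_model_fn, model_dir='/tmp/model')  # illustrative path

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10000)
# throttle_secs=0 lets evaluation run as soon as a new checkpoint appears.
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, throttle_secs=0)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```

With this pattern, training summaries land in model_dir and evaluation summaries in model_dir/eval; pointing TensorBoard at model_dir shows them as two runs, so scalars sharing a tag (such as the loss) are overlaid on one chart.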
Did I set the parameters properly? Or is there another solution for this?