
Can't train for longer epochs with tf.data.Dataset #70

Open
abhi8893 opened this issue Jun 21, 2021 · 0 comments
@abhi8893 (Owner) commented:

Epoch 10/50
562/562 [==============================] - 5s 8ms/step - loss: 0.7888 - accuracy: 0.6960 - f1: 0.6745 - val_loss: 0.7480 - val_accuracy: 0.7151 - val_f1: 0.6897
Epoch 11/50
  7/562 [..............................] - ETA: 4s - loss: 0.8328 - accuracy: 0.6700 - f1: 0.6491WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 28100 batches). You may need to use the repeat() function when building your dataset.
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 28100 batches). You may need to use the repeat() function when building your dataset.
562/562 [==============================] - 1s 891us/step - loss: 0.8328 - accuracy: 0.6700 - f1: 0.6491 - val_loss: 0.7647 - val_accuracy: 0.6975 - val_f1: 0.6798

Training is interrupted partway through epoch 11 of 50 because the input pipeline runs out of batches, so the model cannot be trained for the full number of epochs. This is in Notebook 09-SkimLit-NLP-Mileston-Project-2.
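The warning asks for at least `steps_per_epoch * epochs` batches (562 * 50 = 28100 here), which suggests the `tf.data.Dataset` is finite and gets exhausted. A minimal sketch of a possible fix, assuming the dataset is built with `tf.data.Dataset.from_tensor_slices` and a fixed `steps_per_epoch` (the data, sizes, and model below are synthetic placeholders, not the notebook's actual code):

```python
import tensorflow as tf

# Synthetic stand-in data; the notebook's real features/labels are assumed here.
num_examples, batch_size = 18_000, 32
features = tf.random.uniform((num_examples, 10))
labels = tf.random.uniform((num_examples,), maxval=5, dtype=tf.int32)

train_dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .batch(batch_size)
    .repeat()                      # yield batches indefinitely so fit() never runs out
    .prefetch(tf.data.AUTOTUNE)
)

# With an infinitely repeating dataset, steps_per_epoch must be set explicitly,
# otherwise Keras cannot tell where one epoch ends.
steps_per_epoch = num_examples // batch_size  # 562 steps, matching the log above

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

model.fit(train_dataset, epochs=50, steps_per_epoch=steps_per_epoch)
```

Alternatively, dropping `steps_per_epoch` entirely and passing the finite (non-repeated) dataset lets Keras infer the epoch length itself, which also avoids the "ran out of data" interruption.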

@abhi8893 abhi8893 added the bug Something isn't working label Jun 21, 2021
@abhi8893 abhi8893 self-assigned this Jun 21, 2021