OOM errors for large datasets #218

Open
piotrlaczkowski opened this issue Apr 23, 2024 · 0 comments
piotrlaczkowski commented Apr 23, 2024

If we load a sufficiently large dataset (using tf.data.Dataset, i.e. TFDS in a not-all-in-memory mode), the instance crashes with an OOM error. Since we iterate over the dataset in batches, this should not happen, right?

It therefore looks as if the model tries to load the entire dataset into memory. Is this behavior expected? How can we scale this to big-data usage?
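
For context, here is a minimal sketch of the kind of batched, streaming TFDS pipeline described above (the dataset name, batch size, and preprocessing are placeholders, not our actual setup):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Placeholder dataset; tfds.load returns a tf.data.Dataset that streams
# examples from disk, so iterating in batches should not require the
# whole dataset to fit in RAM.
ds = tfds.load("mnist", split="train", as_supervised=True)

ds = (
    ds.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
      .batch(64)
      .prefetch(tf.data.AUTOTUNE)
)

# Consuming the dataset batch by batch keeps memory usage bounded,
# yet feeding such a dataset to the model still triggers the OOM.
for images, labels in ds.take(10):
    print(images.shape, labels.shape)
```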

Thanks!
