
Hardware requirements for Deepcell for training model on full HeLa_S3.npz dataset #159

Closed
manugarciaquismondo opened this issue May 3, 2019 · 2 comments


@manugarciaquismondo

Greetings,

I have deployed the Deepcell notebook example Interior-Edge Segmentation 2D Fully Convolutional.ipynb on Google Colab. When I try to load the dataset HeLa_S3.npz from Deepcell's AWS example bucket, the computing environment runs out of memory and crashes. Could you advise on the hardware requirements under which Deepcell is known to work for this notebook, so that I can set up a computing environment that meets them?

Thank you very much,

@manugarciaquismondo and @cornhundred

@willgraf (Contributor) commented May 4, 2019

We ran these notebooks on our own NVIDIA DGX station, on one GPU with 16GB of memory.

The HeLa_S3 dataset is almost 6 GB. If you are unable to load the data into memory, we also host the following two datasets, both under 2 GB:

3T3_NIH.npz

HEK293.npz


These are the smallest datasets we currently host.
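As a rough sketch of checking whether a dataset will fit before committing to a full training run: you can open the downloaded `.npz` with NumPy, inspect its keys, and estimate the in-memory footprint of each array. The key names `X`/`y` and the stand-in file below are assumptions; inspect `data.files` on the real archive (e.g. HEK293.npz) to see what it actually contains.

```python
import numpy as np

# Stand-in for a downloaded dataset such as HEK293.npz; in practice,
# point np.load at the file fetched from Deepcell's example bucket.
X = np.random.rand(16, 128, 128, 1).astype(np.float32)
y = np.random.randint(0, 2, size=(16, 128, 128, 1)).astype(np.int32)
np.savez("demo.npz", X=X, y=y)

with np.load("demo.npz") as data:
    print(data.files)  # keys stored in the archive, e.g. ['X', 'y']
    for key in data.files:
        arr = data[key]
        # nbytes is the decompressed in-memory size of the array
        print(f"{key}: shape={arr.shape} dtype={arr.dtype} "
              f"{arr.nbytes / 1e9:.3f} GB")
```

If the summed `nbytes` already approaches the RAM of the environment (Colab's free tier typically offers around 12 GB), the full array will not load comfortably.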

@manugarciaquismondo (Author)

Greetings,

I have downsampled the HeLa dataset to 1500 images, and it now works fine in my notebook on Google Colab. I will close the issue now.
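The downsampling step described above can be sketched as follows: slice the first N images out of each array and save a smaller archive. This is a hypothetical reconstruction, not the author's actual code; the key names `X`/`y`, the output filename, and the stand-in arrays are assumptions (in practice `X` and `y` would come from `np.load("HeLa_S3.npz")`).

```python
import numpy as np

N = 1500  # number of images to keep so the arrays fit in Colab's RAM

# Stand-in arrays; in practice these come from np.load("HeLa_S3.npz").
X = np.random.rand(2000, 64, 64, 1).astype(np.float32)
y = np.random.randint(0, 2, size=(2000, 64, 64, 1)).astype(np.int32)

# Keep only the first N images and write a compressed, smaller archive.
np.savez_compressed("HeLa_S3_small.npz", X=X[:N], y=y[:N])

with np.load("HeLa_S3_small.npz") as d:
    print(d["X"].shape)  # (1500, 64, 64, 1)
```

Slicing the first N images is the simplest approach; a random subset (`np.random.choice` over the first axis, applied to both `X` and `y` with the same indices) may give a more representative training sample.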

Thank you very much,
