
running out of memory #2

Closed · ahundt opened this issue Dec 21, 2016 · 3 comments

ahundt commented Dec 21, 2016

I have an 8 GB GTX 1080 and I'm running cifar10.py on TensorFlow, but it seems to run out of memory very easily. Is this to be expected? Once I shut down the GUI to free up every last bit of GPU memory and reduced the batch size to 32, it did start running, and now I'm up to epoch 7 at about 4 minutes per epoch, which seems a bit faster than what you reported.

However, this is just CIFAR-10, so will it even be possible to load and train the ImageNet version of DenseNet-40-12 without choosing a smaller network?
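In case it's useful, here is a rough sketch of the setup I'm describing, assuming the Keras 1.x TensorFlow backend of this era; the `allow_growth` session config is just a guess at avoiding TF's default grab of all GPU memory, while the batch size of 32 is the change that actually got it running:

```python
# Sketch only (Keras 1.x + TF backend). allow_growth stops TensorFlow from
# pre-allocating the whole 8 GB up front; whether that alone avoids the OOM
# here is an assumption, not something I've verified.
import tensorflow as tf
from keras.backend import tensorflow_backend as KTF

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
KTF.set_session(tf.Session(config=config))

# ... build DenseNet-40-12 as in cifar10.py, then train with the smaller batch:
# model.fit(X_train, Y_train, batch_size=32, ...)
```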

titu1994 (Owner) commented Dec 21, 2016

The DenseNet models require a lot of memory. However, I was able to train the model on a 980M with 4 GB of GPU memory on Theano, with garbage collection enabled to allow such a large model to run.

I don't see a reason why TF should run out of memory with 8 GB of GPU RAM, but then I also hit an OOM when I used the TF backend with 4 GB. Perhaps 4 GB is simply not enough to load such a model without GC. When I dropped the batch size to 16 instead of 64, it seemed to run at about the same speed as Theano for me.
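For reference, the Theano garbage-collection behaviour is controlled through `THEANO_FLAGS`; a minimal sketch (note that `allow_gc=True` is already Theano's default, so the main thing is making sure it hasn't been disabled for speed):

```python
# Sketch: set the flags before Theano/Keras are imported so the backend picks them up.
# allow_gc=True frees intermediate results between ops, trading some speed for the
# memory headroom a DenseNet needs on a 4 GB card. device=gpu assumes the old
# CUDA backend that was current at the time.
import os
os.environ['THEANO_FLAGS'] = 'device=gpu,floatX=float32,allow_gc=True'

import keras  # imported after the flags are set
```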

As for the ImageNet version, I think the model trained on ImageNet was the bottleneck-compressed (DenseNet-BC) variant of the networks, which have a similar number of parameters (especially DenseNet-BC-190-40). The authors do mention that they use a batch size of 128 for ImageNet due to GPU memory constraints (see page 7 of the paper, just above the Discussion section), but I do not understand how they could accomplish this on a single GPU; I doubt even a Titan X with 12 GB of GPU memory can handle this model with such a large batch size for ImageNet images (224×224).
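Just as a rough back-of-envelope check (my own numbers, not from the paper): the input tensor alone for a batch of 128 crops at 224×224×3 in float32 is about 74 MB, and DenseNet's concatenated feature maps have to be kept alive for the backward pass across all ~190 layers, so the real activation footprint is many times that:

```python
# Back-of-envelope only, all assumptions mine: float32 activations for one
# ImageNet batch. Actual usage is far higher because every dense block keeps
# all earlier feature maps around for backprop.
batch, h, w, c, bytes_per_float = 128, 224, 224, 3, 4
input_mb = batch * h * w * c * bytes_per_float / 1024.0 ** 2
print('input batch alone: %.1f MB' % input_mb)  # ~73.5 MB
```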

ahundt (Author) commented Dec 21, 2016

Hmm, okay, good to at least know you've seen something similar. Perhaps if they had 8x Titan X machines it might be achievable, assuming a single batch can be distributed across GPUs. The last author is at Facebook, so it is conceivable, though I'd expect an ImageNet network to be much larger.
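For what it's worth, the usual way a single batch gets spread over GPUs is plain data parallelism: split the 128 examples into per-GPU shards, run one forward/backward pass per device, and average the gradients. A toy sketch in TF 1.x graph style, where the 8-GPU layout and the stand-in `model_fn` are assumptions, and weight sharing across towers plus the gradient averaging are omitted:

```python
# Toy data-parallel sketch (TF 1.x graph mode). model_fn is a stand-in for the
# real DenseNet forward pass; a real setup also shares weights across towers
# and averages per-tower gradients before applying them.
import tensorflow as tf

NUM_GPUS = 8
images = tf.placeholder(tf.float32, [128, 224, 224, 3])
labels = tf.placeholder(tf.int64, [128])

def model_fn(x):
    # stand-in network, not DenseNet
    x = tf.layers.conv2d(x, 16, 3, padding='same', activation=tf.nn.relu)
    x = tf.reduce_mean(x, axis=[1, 2])  # global average pooling
    return tf.layers.dense(x, 1000)

image_shards = tf.split(images, NUM_GPUS, axis=0)  # 16 images per GPU
label_shards = tf.split(labels, NUM_GPUS, axis=0)

tower_losses = []
for i in range(NUM_GPUS):
    with tf.device('/gpu:%d' % i):
        logits = model_fn(image_shards[i])
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                logits=logits, labels=label_shards[i]))
        tower_losses.append(loss)

total_loss = tf.reduce_mean(tower_losses)  # gradients of this average the towers
```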

titu1994 (Owner) commented

Almost all ImageNet results are produced on multi-GPU systems, due to the 1.2 million images taking several weeks to train on even the most powerful single GPUs.

DenseNets are definitely an efficient architecture compared to ResNets, but the authors show that even the largest DenseNet-161 with k=48 reaches a top-1 accuracy of approximately 78%, whereas Inception-ResNet-v2 surpasses that at 80.4%, albeit at the cost of a very large number of parameters.
