[train_task_classifyapp.py] GPU memory grows without bound #12
Comments
As far as I know with TF, manually running […] I'm not sure whether the problem is with Keras or TF not deallocating memory, with your dataset/loader, or whether the GPU you are using simply runs out of memory. When we run the training process on our dataset we do not encounter this issue. What is your sequence length limit? Maybe that, or the minibatch size, should be reduced.
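As an illustration of that suggestion, a minimal sketch of capping the sequence length and shrinking the minibatch could look like the following; the toy data, model, and the MAX_SEQ_LEN/BATCH_SIZE values are assumptions, not the repo's actual code.

```python
# Minimal sketch, assuming a standalone-Keras / TF 1.x setup similar to this
# repo's era; MAX_SEQ_LEN, BATCH_SIZE, and the toy data are placeholders.
import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

MAX_SEQ_LEN = 1024      # assumed cap on the sequence length
BATCH_SIZE = 8          # assumed; reduce further if OOM persists
VOCAB, EMB_DIM, NUM_CLASSES = 5000, 200, 104

# Toy variable-length integer sequences standing in for the real dataset.
raw_seqs = [np.random.randint(1, VOCAB, size=np.random.randint(10, 3000))
            for _ in range(64)]
labels = np.random.randint(0, NUM_CLASSES, size=64)

# Truncate/pad everything to MAX_SEQ_LEN so a single long file cannot blow
# up the padded batch tensor on the GPU.
x = pad_sequences(raw_seqs, maxlen=MAX_SEQ_LEN, padding="post", truncating="post")

model = Sequential([
    Embedding(VOCAB, EMB_DIM, mask_zero=True),
    LSTM(64),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, labels, batch_size=BATCH_SIZE, epochs=1)
```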
On my dataset, the longest sequence is of length 8618 and the mean sequence length is 425. I've tried reducing the minibatch size to 1, but it didn't work. The reason why I manually run finalize on the graph is to test whether new nodes are added to the graph, and it does happen; I'm now trying other methods to solve this problem. Another question: could you please briefly explain which parts those parameters affect? I'm quite frustrated to find that the accuracy on my dataset is only about 0.3 and remains almost unchanged until the OOM error occurs. I have tried many combinations of parameters, but none of them performs well. Is there possibly something wrong?
Do you experience the same OOM problem when you run the code with the provided dataset (POJ-104)?
How many classes does your dataset have?
If you use the published embeddings as a starting point, you should be able to write your own LSTM with details from the paper fairly easily.
That way you would be independent of Keras.
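For illustration, a rough Keras-free sketch in plain TF 1.x could look like the following; the random embedding matrix, shapes, and hyperparameters are hypothetical stand-ins, not the published code.

```python
# Rough sketch only: a plain TF 1.x (graph-mode) LSTM classifier on top of a
# pre-trained embedding matrix, with no Keras involved. The random embedding
# matrix, shapes, and hyperparameters below are hypothetical stand-ins.
import numpy as np
import tensorflow as tf

vocab, dim = 10000, 200            # assumed sizes; use the real embedding shape
emb = np.random.randn(vocab, dim).astype(np.float32)  # stand-in: load the published embeddings here
num_classes, hidden = 104, 200

seqs = tf.placeholder(tf.int32, [None, None], name="token_ids")   # [batch, time]
lengths = tf.placeholder(tf.int32, [None], name="lengths")
labels = tf.placeholder(tf.int32, [None], name="labels")

embeddings = tf.constant(emb)                       # frozen pre-trained embeddings
inputs = tf.nn.embedding_lookup(embeddings, seqs)   # [batch, time, dim]

cell = tf.nn.rnn_cell.LSTMCell(hidden)
_, state = tf.nn.dynamic_rnn(cell, inputs, sequence_length=lengths, dtype=tf.float32)
logits = tf.layers.dense(state.h, num_classes)      # classify from the last hidden state

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed padded batches of token ids, their true lengths, and class labels:
    # sess.run(train_op, {seqs: ids, lengths: lens, labels: y})
```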
Zacharias
In fact, the POJ-104 dataset used for the classifyapp task consists of much shorter sequences. The histogram shows the number of statements per file for a subset of the dataset; as you can see, 8000 lines is an order of magnitude larger than any file included in the subset considered. In order to train on significantly longer sequences than that, you would probably need a few tricks that go beyond the code provided here, but you can try training on the shorter sequences in your dataset. Hope this helps,
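Along those lines, here is a small sketch of dropping the over-long samples before training; the cap value and the helper name are made up.

```python
# Small sketch with made-up names: drop samples whose sequence length exceeds
# a cap, so a handful of very long files does not dominate padding and memory.
MAX_LEN = 2000  # assumed cap; pick it from a length histogram of your own data

def filter_by_length(sequences, labels, max_len=MAX_LEN):
    kept = [(s, y) for s, y in zip(sequences, labels) if len(s) <= max_len]
    if not kept:
        return [], []
    seqs, ys = zip(*kept)
    return list(seqs), list(ys)

# Usage (hypothetical variables):
# train_seqs, train_labels = filter_by_length(train_seqs, train_labels)
```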
Thanks for your patient reply. Finally I found that it is probably the function […].
If this is an issue with the current code base, would you mind creating a pull request with your fix? Thanks!
Strangely, I didn't encounter this OOM issue with the current code base. How can the problem be reproduced, and how did you eventually fix it?
I did some other tests and now think it is due to some unknown error in my GPU server or in Keras/TensorFlow. I have finally given up on finding the true reason why this bug appears, so let's just close this issue.
Thanks for reporting anyway. Good luck |
When I try to train the model on another dataset, I find that the program takes up more and more GPU memory and eventually triggers an out-of-memory error. Then I add a call that finalizes the TensorFlow graph before model.train_gen is called (line 402) to test whether new TensorFlow ops are added to the graph while training. The result is an error raised from self.model.fit_generator (line 237), which indicates that new ops are indeed being added there. I don't know whether it is a bug or not. Since all the interfaces are written in Keras, it is difficult for me to find out the exact problem in the TensorFlow backend.
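For reference, a standalone sketch of that finalize-the-graph diagnostic might look like the following; the toy model and data are assumptions (TF 1.x graph mode with standalone Keras), not the repo's actual training code.

```python
# Standalone sketch (toy model and data, assumed TF 1.x graph mode with
# standalone Keras): finalize the default graph after the first training call
# so any later attempt to add ops raises instead of silently growing the graph.
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(1000, 32),
    LSTM(64),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.randint(0, 1000, size=(8, 50))
y = np.random.randint(0, 10, size=(8,))

# The first call lets Keras build its training ops (it creates them lazily).
model.fit(x, y, epochs=1, verbose=0)

# Freeze the graph: from now on, creating any new op raises
# "RuntimeError: Graph is finalized and cannot be modified."
tf.get_default_graph().finalize()

# If this second call fails, something on the training path is still adding
# ops on every call -- the kind of leak that makes memory grow without bound.
model.fit(x, y, epochs=1, verbose=0)
```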