
java.lang.OutOfMemoryError: GC overhead limit exceeded #60

Closed
haokeliu opened this issue Dec 17, 2018 · 5 comments


haokeliu commented Dec 17, 2018

Hello, I am a beginner with Meka.
When I use the class meka.classifiers.multilabel.incremental.RTUpdateable, an error occurs.
These are the error details:
[screenshot: OutOfMemoryError stack trace]
This is my command-line argument:
[screenshot: command-line arguments]
This is my data (a total of 100 million instances, generated by MOA):
[screenshot: dataset]
This is my computer's configuration; it has 8 GB of memory:
[screenshot: system configuration]
The same error also occurs with BRU and PSU.
I would be grateful for any advice.

fracpete (Member) commented:

Try training with less data (if you're using the GUI, it will load all of the data into memory). Start with 100,000 rows. If that works, try 1,000,000, and so on.
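If starting MEKA from the command line rather than the bundled scripts, the JVM's default maximum heap can also be raised with -Xmx. A minimal sketch, assuming the MEKA jars live under ./lib and that meka.gui.explorer.Explorer is the Explorer's entry point (check the run.sh/run.bat scripts shipped with your MEKA distribution for the exact classpath and main class):

```shell
# Launch the MEKA Explorer with a 6 GB maximum heap instead of the JVM default.
# The classpath and main class below are assumptions; adapt them to where your
# MEKA distribution keeps its jar files.
java -Xmx6g -cp "./lib/*" meka.gui.explorer.Explorer
```

On an 8 GB machine, leave some headroom for the operating system; -Xmx6g is about as high as is sensible here.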

haokeliu (Author) commented:

> Try training with less data (if you are using the GUI, it will load all of the data into memory). Start with 100,000 rows. If that works, try 1,000,000, and so on.

Thank you for your prompt reply, I will try it out.

haokeliu (Author) commented:

I reduced the data to 100K rows using the method you provided, but a similar error still occurred. Can you give me some help?
[screenshot: error details]
[screenshot: error details]

haokeliu reopened this Dec 17, 2018
fracpete (Member) commented:

Are you using batch incremental or prequential as the evaluation method in the Explorer? If not, then no incremental training is occurring.
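For comparison, outside the GUI the incremental API can be driven directly so that only one instance is held in memory at a time. A minimal sketch, assuming the MEKA and Weka jars are on the classpath; the file name train.arff is a placeholder, and the ARFF relation name is assumed to carry MEKA's -C label-count option so that MLUtils.prepareData can set the class index:

```java
import java.io.File;

import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ArffLoader;

import meka.classifiers.multilabel.incremental.RTUpdateable;
import meka.core.MLUtils;

public class StreamTrain {
    public static void main(String[] args) throws Exception {
        // Read the ARFF file one instance at a time instead of loading
        // all rows into memory at once (which is what the Explorer does).
        ArffLoader loader = new ArffLoader();
        loader.setFile(new File("train.arff"));  // placeholder path
        Instances structure = loader.getStructure();
        MLUtils.prepareData(structure);          // set class index from the -C option

        RTUpdateable classifier = new RTUpdateable();
        classifier.buildClassifier(structure);   // initialize on the header only

        // Stream the data: each row updates the model and is then discarded.
        Instance current;
        while ((current = loader.getNextInstance(structure)) != null) {
            classifier.updateClassifier(current);
        }
    }
}
```

MEKA's own prequential evaluation additionally warms the classifier up on an initial window of instances before updating; this sketch skips evaluation entirely and only shows the streaming update loop, which keeps the memory footprint independent of the dataset size.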

haokeliu (Author) commented:

> Are you using batch incremental or prequential as evaluation method in the Explorer? If not, then no incremental training is occurring.

I used Maven's local repository before. Following your suggestion, I used the GUI: 10k of data runs normally, but when the data reaches 100 million an error occurs. I will increase the amount of data according to the suggestions you gave. Thank you for your suggestion.
These are the error details:
[screenshot: error details]

2 participants