Recovery.db.mv.db size crashes Mist #339
Comments
Does it always crash after restart?
Yes. Once this starts, it never goes back to normal. I have to delete the Recovery.db.mv.db file.
As a workaround you could increase the JVM heap size.
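Given the `java heap space` error described in this issue, the workaround presumably means raising the JVM heap ceiling. A minimal sketch, assuming the Mist launcher forwards standard JVM flags through an environment variable (the variable name and value here are assumptions, not documented Mist configuration — check your version's start script):

```shell
# Assumption: the start script passes JAVA_OPTS through to the JVM.
# -Xmx raises the maximum heap; pick a value that fits your 4 GB VM.
export JAVA_OPTS="-Xmx2g"
```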
I have a suggestion for this: a. Bucket the recovery DB based on the number of jobs and their total output size.
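The bucketing/limiting idea above can be sketched as a store that caps both the number of retained job results and their total byte size, evicting the oldest entries first. This is an illustrative sketch only; the class and method names are hypothetical, not Mist's actual recovery-store API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical bounded recovery store: keeps at most maxJobs results
// and at most maxBytes of total payload, dropping the oldest first.
class BoundedRecoveryStore {
    private final int maxJobs;
    private final long maxBytes;
    private final Deque<byte[]> entries = new ArrayDeque<>();
    private long totalBytes = 0;

    BoundedRecoveryStore(int maxJobs, long maxBytes) {
        this.maxJobs = maxJobs;
        this.maxBytes = maxBytes;
    }

    void add(byte[] result) {
        entries.addLast(result);
        totalBytes += result.length;
        // Evict oldest results until both limits are satisfied again.
        while (entries.size() > maxJobs || totalBytes > maxBytes) {
            totalBytes -= entries.removeFirst().length;
        }
    }

    int size()  { return entries.size(); }
    long bytes() { return totalBytes; }
}
```

With limits like these, the recovery file's growth is bounded regardless of how many jobs have run, which would avoid the unbounded growth seen in this issue.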
Yes, bucketing or limiting could help. There is also a more complicated question: should we continue storing job results inside the database at all? They can be very large, and a database may be an inefficient place for them. @spushkarev @mkf-simpson - this may be interesting for you: if we find another way to store job results, it could become possible to build pipelines over datasets with that feature.
I don't understand how job history relates to pipelines.
To invoke pipeline stages on different Spark contexts, we need to store job results somewhere.
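One alternative to keeping results in the recovery database is to persist each job's result as a file keyed by job id, so a later pipeline stage on another Spark context can read it back. A hypothetical sketch (the class, file layout, and `.bin` suffix are assumptions for illustration, not Mist's actual design):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical file-backed result store: one file per job id.
class FileResultStore {
    private final Path dir;

    FileResultStore(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    // Persist a finished job's serialized result.
    void put(String jobId, byte[] result) throws IOException {
        Files.write(dir.resolve(jobId + ".bin"), result);
    }

    // Read it back from a later pipeline stage.
    byte[] get(String jobId) throws IOException {
        return Files.readAllBytes(dir.resolve(jobId + ".bin"));
    }
}
```

Plain files (or an object store) keep large payloads out of the embedded database, so its file stays small and the heap pressure from loading it goes away.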
Ok, MistWarehouse? :) But this discussion belongs in another ticket, I guess.
@mkf-simpson @dos65 @spushkarev Whichever way you choose, one thing should be considered: a job should not be redeployed every time we hit its endpoint because of this separation. As of now, the first run of an endpoint takes 25 seconds and subsequent runs take less than 2 seconds, which is better for production use.
I have set up a VM with the following configuration: Red Hat 7.4, 4 GB RAM. I have observed that the size of `Recovery.db.mv.db` grows as I run more jobs, which is expected. Mist crashes with a `java heap space` error when the file reaches 37 MB. I want to understand why: is it the browser loading this whole file, or Mist itself? And what configuration changes/factors should I keep in mind while deploying it?