Decrease cache size automatically when there are too many databases #2222
Comments
There is only one remedy for insanity... How would you protect against running 1000 servers per VM, or specifying -Xmx1m? I do agree that the cache size should depend not on the total heap size but on total_heap / number_of_servers. Blocking startup is too drastic, IMHO.
Actually, we can have multiple instances of the H2 engine itself in different classloaders; for example, on an application server where each application has its own copy (bad practice, but sometimes it is necessary). This use case is quite special. In other use cases we have a list of all databases in one place (
@katzyn perhaps provide a setting and a value to view global cache usage? I agree that blocking startup is drastic, considering that many of the databases may not be utilizing their cache size fully. Perhaps having a global setting (GLOBAL_CACHE_SIZE) and a global value (GLOBAL_CACHE_USED) could prevent the OOM error by using the global usage value in addition to the per-db setting to determine whether more data can be added to the cache? On new database creation, if the requested CACHE_SIZE exceeds the available budget, H2 could warn, or fail when a URL parameter is provided -- IFCACHEAVAIL=1 <-- only start the db if the cache can be allocated.
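To make the idea concrete, here is a minimal sketch of such a JVM-wide cache budget. The names (GlobalCacheBudget, tryReserve) and the semantics are illustrative assumptions, not existing H2 API; the hypothetical IFCACHEAVAIL=1 behavior would correspond to refusing to open when tryReserve fails.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not H2 code: one budget shared by all open databases.
final class GlobalCacheBudget {
    private final long limitBytes;                         // GLOBAL_CACHE_SIZE
    private final AtomicLong usedBytes = new AtomicLong(); // GLOBAL_CACHE_USED

    GlobalCacheBudget(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    /** Try to reserve cache for a new database; fail instead of risking OOM. */
    boolean tryReserve(long requestedBytes) {
        while (true) {
            long used = usedBytes.get();
            if (used + requestedBytes > limitBytes) {
                return false; // caller may warn, or refuse to open the database
            }
            if (usedBytes.compareAndSet(used, used + requestedBytes)) {
                return true;
            }
        }
    }

    /** Return cache to the budget when a database is closed. */
    void release(long bytes) {
        usedBytes.addAndGet(-bytes);
    }

    long used() {
        return usedBytes.get();
    }
}
```

The compare-and-set loop keeps reservation atomic without a lock, so concurrent database opens cannot jointly overshoot the limit.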
I think we don't want performance regressions caused by interaction between databases. |
Actually, the main overhead will be very low, but we need to ensure that the counter is always decremented. The situation is complicated by the possible additional storages for large results; they have their own small caches, and their lifetime isn't tracked well enough.
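One common way to guarantee the decrement even on error paths is to tie each reservation to an AutoCloseable handle and release it via try-with-resources. This is only an illustrative sketch of that pattern (the class and field names are invented, not H2 internals):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch, not H2 code: each small per-result cache holds a
// reservation whose close() is guaranteed to run, so the global counter
// is always decremented even when the storage's lifetime ends abnormally.
final class CacheReservation implements AutoCloseable {
    private static final AtomicLong GLOBAL_USED = new AtomicLong();

    private final long bytes;
    private boolean closed;

    CacheReservation(long bytes) {
        this.bytes = bytes;
        GLOBAL_USED.addAndGet(bytes);
    }

    @Override
    public void close() {
        if (!closed) { // idempotent: closing twice must not double-decrement
            closed = true;
            GLOBAL_USED.addAndGet(-bytes);
        }
    }

    static long globalUsed() {
        return GLOBAL_USED.get();
    }
}

// Usage: try (CacheReservation r = new CacheReservation(1 << 20)) { ... }
// The decrement happens even if the body throws.
```

The idempotent close() matters because result storages may be released both explicitly and by a cleanup pass.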
https://groups.google.com/forum/#!topic/h2-database/wL-Qo_OYCZY
The default cache size is 64 MiB per 1 GiB of memory. I think the Engine should check the count of open databases, and if we have too many of them, the default cache size for new databases should be decreased, and existing databases should be processed to decrease their cache size too (maybe only if it wasn't changed by the user). If the number of databases is already beyond any sane limit for the current environment (memory size should be considered), we should block further attempts to open another one.