Merge the latest LIRS cache optimization code from H2
codefollower committed Jan 31, 2015
1 parent 8387bfc commit b3cbea3
Showing 2 changed files with 87 additions and 128 deletions.
@@ -53,15 +53,14 @@
 - test and possibly improve compact operation (for large dbs)
 - is data kept in the stream store if the transaction is not committed?
 - automated 'kill process' and 'power failure' test
-- compact: avoid processing pages using a counting bloom filter
 - defragment (re-creating maps, specially those with small pages)
 - store number of write operations per page (maybe defragment
     if much different than count)
 - r-tree: nearest neighbor search
 - use a small object value cache (StringCache), test on Android
     for default serialization
-- MVStoreTool.dump: dump values (using a callback)
-- close the file on out of memory or disk write error (out of disk space or so)
+- MVStoreTool.dump should dump the data if possible;
+    possibly using a callback for serialization
 - implement a sharded map (in one store, multiple stores)
     to support concurrent updates and writes, and very large maps
 - to save space when persisting very small transactions,
@@ -72,8 +71,6 @@
 - remove features that are not really needed; simplify the code
     possibly using a separate layer or tools
     (retainVersion?)
-- MVStoreTool.dump should dump the data if possible;
-    possibly using a callback for serialization
 - optional pluggable checksum mechanism (per page), which
     requires that everything is a page (including headers)
 - rename "store" to "save", as "store" is used in "storeVersion"
@@ -98,7 +95,6 @@ requires that everything is a page (including headers)
 - write a LSM-tree (log structured merge tree) utility on top of the MVStore
     with blind writes and/or a bloom filter that
     internally uses regular maps and merge sort
-- LIRS cache: maybe remove 'mask' field, and dynamically grow the arrays
 - chunk metadata: maybe split into static and variable,
     or use a small page size for metadata
 - data type "string": maybe use prefix compression for keys
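As an aside on the removed LIRS item above: the 'mask' field is the usual power-of-two trick that turns a hash into an array slot without a modulo, at the price of a fixed table size. Below is a minimal sketch of the idea, with hypothetical names (the real field lives in H2's org.h2.mvstore.cache.CacheLongKeyLIRS):

```java
// Minimal sketch of the power-of-two 'mask' indexing the removed LIRS TODO
// refers to. Names are hypothetical, not H2's actual implementation.
public class MaskSketch {
    // The table length must stay a power of two for masking to work.
    static final Object[] entries = new Object[16];
    // Cached mask: index = hash & mask replaces a modulo, but fixes the size;
    // growing the array dynamically means recomputing the mask and rehashing.
    static final int mask = entries.length - 1;

    static int indexFor(long key) {
        int hash = (int) (key ^ (key >>> 32)); // fold the 64-bit key to 32 bits
        return hash & mask;                    // non-negative slot in [0, 15]
    }

    public static void main(String[] args) {
        System.out.println(indexFor(42L));     // 10
        System.out.println(indexFor(-7L));     // masked, still in range
    }
}
```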
@@ -122,7 +118,6 @@ to a map (possibly the metadata map) -
 - rollback of removeMap should restore the data -
     which has big consequences, as the metadata map
     would probably need references to the root nodes of all maps
-- combine MVMap and MVMapConcurrent
 */


@@ -317,11 +312,10 @@ public class MVStore {
         int mb = o == null ? 16 : (Integer) o;
         if (mb > 0) {
             int maxMemoryBytes = mb * 1024 * 1024;
-            int averageMemory = Math.max(10, pageSplitSize / 2);
             int segmentCount = 16;
-            int stackMoveDistance = maxMemoryBytes / averageMemory * 2 / 100;
-            cache = new CacheLongKeyLIRS<Page>(maxMemoryBytes, averageMemory, segmentCount, stackMoveDistance);
-            cacheChunkRef = new CacheLongKeyLIRS<PageChildren>(maxMemoryBytes / 4, 20, segmentCount, stackMoveDistance);
+            int stackMoveDistance = 8;
+            cache = new CacheLongKeyLIRS<Page>(maxMemoryBytes, segmentCount, stackMoveDistance);
+            cacheChunkRef = new CacheLongKeyLIRS<PageChildren>(maxMemoryBytes / 4, segmentCount, stackMoveDistance);
         }
         o = config.get("autoCommitBufferSize");
         int kb = o == null ? 1024 : (Integer) o;
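This last hunk is the core of the merged optimization: CacheLongKeyLIRS no longer takes an averageMemory estimate (presumably because entry sizes are now supplied per entry rather than averaged), and stackMoveDistance becomes a small constant instead of being derived from the estimated entry count. A back-of-the-envelope comparison under assumed defaults (16 MB cache, 16 KB page split size; helper names are hypothetical):

```java
// Compares the old derived stackMoveDistance with the new constant,
// assuming MVStore defaults. The helper name is hypothetical.
public class StackMoveDistanceSketch {

    // Old formula from the removed lines: roughly 2% of the estimated
    // number of cache entries (maxMemoryBytes / averageMemory).
    static int oldStackMoveDistance(int maxMemoryBytes, int pageSplitSize) {
        int averageMemory = Math.max(10, pageSplitSize / 2);
        return maxMemoryBytes / averageMemory * 2 / 100;
    }

    public static void main(String[] args) {
        int maxMemoryBytes = 16 * 1024 * 1024; // default cacheSize of 16 MB
        int pageSplitSize = 16 * 1024;         // assumed default page split size

        // 16 MiB / 8 KiB = 2048 estimated entries; 2% of that is 40.
        System.out.println("old = " + oldStackMoveDistance(maxMemoryBytes, pageSplitSize));
        // The merged code uses a fixed small distance, independent of cache size.
        System.out.println("new = 8");
    }
}
```

Under these assumptions the distance drops from about 40 to 8, and configuring the cache no longer depends on guessing an average page size.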
