Hi,
Here is a test case that illustrates the problem.
import java.io.File;

import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

import static org.junit.Assert.assertEquals;

File temporaryFile = File.createTempFile("chronicle-test", ".map");
ChronicleMapBuilder<Long, Long> builder = ChronicleMapBuilder
        .of(Long.class, Long.class)
        .maxBloatFactor(10)
        .entries(1_000_000);
ChronicleMap<Long, Long> map = builder.createPersistedTo(temporaryFile);

// Write three times the configured number of entries, forcing the map to grow.
int entries = 3_000_000;
for (long i = 0; i < entries; i++)
{
    map.put(i, i);
}
map.close();

// Re-opening with createPersistedTo instead behaves correctly:
//ChronicleMap<Long, Long> reopenedMap = builder.createPersistedTo(temporaryFile);
ChronicleMap<Long, Long> reopenedMap = builder.recoverPersistedTo(temporaryFile, false);
assertEquals(entries, reopenedMap.size());
for (long i = 0; i < entries; i++)
{
    assertEquals((Long) i, reopenedMap.get(i));
}
The persisted map is created with entries configured to 1 million and a maxBloatFactor of 10.
After writing 3 million entries and closing the map, re-opening it using recoverPersistedTo produces an incorrect map: the size is wrong (1_048_434 instead of 3_000_000), and entries after 1_018_447 are gone.
But if you re-open the map using createPersistedTo, it behaves as expected: size correctly returns 3 million, and get returns every entry.
According to the documentation, recoverPersistedTo() is harmless if the previous process accessing the Chronicle Map terminated normally. In this case close() was called and everything appeared normal.
The same behaviour occurs across processes: if you write the map in one process and then re-open it for reading in a separate process, recoverPersistedTo doesn't return the correct map, but createPersistedTo does.
From a little experimentation, the problem seems to occur only when the persisted map has grown beyond its configured size before being closed and re-opened for reading. If there is no growth, i.e. entries is initialised to more than the number of keys you put, then recoverPersistedTo behaves as expected.
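For contrast, the no-growth control case can be sketched as follows (a minimal standalone sketch; the class name, file prefix, and printed message are illustrative, not from the original report): entries is sized up front for everything that will be put, the map never grows, and recoverPersistedTo then returns the full map on re-open.

```java
import java.io.File;

import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class NoGrowthRecoverExample {
    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("chronicle-no-growth", ".map");

        // Size the map for everything we intend to put, so it never grows
        // (maxBloatFactor is left at its default).
        ChronicleMapBuilder<Long, Long> builder = ChronicleMapBuilder
                .of(Long.class, Long.class)
                .entries(3_000_000);

        try (ChronicleMap<Long, Long> map = builder.createPersistedTo(file)) {
            for (long i = 0; i < 3_000_000; i++) {
                map.put(i, i);
            }
        }

        // With no growth, recoverPersistedTo behaves as expected on re-open.
        try (ChronicleMap<Long, Long> reopened = builder.recoverPersistedTo(file, false)) {
            System.out.println("recovered size: " + reopened.size());
        }
    }
}
```

The only difference from the failing test above is that entries covers the full data set, so the map is never forced to grow before close().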
I've performed a quick test and the problem no longer appears to exist in 3.22ea5.
In the example above, recoverPersistedTo now correctly returns a size of 3_000_000 on the re-opened map. Checking the contents of the map also shows all the values are as expected.