Very high memory commit amount #241
Comments
I managed to figure out a few additional things. The Redis server is configured to persist via snapshots as well as append-only file (AOF), but since the last crash (issue #167) it did neither, even when explicitly forced with BGSAVE etc. While panicking about possible data loss, I configured and started a slave, which replicated the whole dataset and saved it correctly. Afterwards, possibly because replication triggered saving through another code path, persisting on the master worked again and the memory commit amount went back down to a normal level.
Was the AOF file being rewritten while you observed the high memory commit amount? You should be able to verify the size of the buffer with the INFO command, in the aof_rewrite_buffer_length field.
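If it helps anyone checking this later, here is a minimal sketch of pulling that field out of raw INFO output. This assumes the standard `key:value` text format of INFO; the `sample` excerpt below is illustrative, not taken from the reporter's server.

```python
def parse_info(raw):
    """Parse Redis INFO output (key:value lines) into a dict, skipping section headers."""
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        info[key] = value
    return info

# Hypothetical INFO excerpt for illustration only:
sample = """# Persistence
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_buffer_length:0
"""

info = parse_info(sample)
print(info["aof_rewrite_buffer_length"])  # a growing value here would point at rewrite buffering
```

A persistently large `aof_rewrite_buffer_length` during a rewrite would mean writes are piling up in the rewrite buffer, which would show up as extra committed memory.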
I don't think so, but I am not quite sure. The high memory commit amount was in effect for at least 12 hours, and no persisting attempt seems to even have been started (no log entries whatsoever about persisting), but INFO showed the following: aof_enabled:1, which looks like nothing was in progress.
@nmehlei how is memory configured in the configuration file? Do you set maxheap, maxmemory? How much physical RAM does the machine have? |
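For reference, those two settings live in the config file of the MSOpenTech Windows port (typically redis.windows.conf) and might look like this; the values below are purely illustrative, not taken from the reporter's setup:

```conf
# redis.windows.conf excerpt (illustrative values)
maxmemory 1gb    # cap on dataset memory before the eviction policy kicks in
maxheap 2gb      # Windows-port-specific: size of the heap reserved for Redis
```

On the Windows port, `maxheap` governs how much memory the process reserves up front, so it directly affects the commit charge seen in Performance Monitor, independently of how much data is actually stored.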
Closing this for inactivity and since it refers to an old release. |
With the newest version (2.8.19.1), we are experiencing very high amounts of committed memory for Redis, as shown in the following screenshot: https://dl.dropboxusercontent.com/u/19676954/RedisPerfMonScreenshot.png
Based on INFO output (used_memory_human), memory usage is around 692 MB, but as can be seen in the screenshot the Commit amount for the redis process is more than 3 GB.
This doesn't look like expected behavior and currently forces us to have massive memory reserves even though our dataset would fit in a fraction of that.
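To put the two figures above side by side, a quick back-of-the-envelope calculation using the numbers from this report:

```python
# Figures reported in this issue (INFO used_memory_human vs. Performance Monitor commit).
used_memory_mb = 692        # from INFO: used_memory_human
committed_mb = 3 * 1024     # process commit charge observed, roughly 3 GB

overhead_ratio = committed_mb / used_memory_mb
print(f"committed memory is {overhead_ratio:.1f}x the reported dataset size")
# prints: committed memory is 4.4x the reported dataset size
```

A commit charge of more than four times the dataset size is well beyond the transient overhead one would expect from copy-on-write during a background save, which is what makes this look like a bug rather than normal persistence behavior.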