The docs mention that performance can be improved in some cases by setting network.inode-lru-limit to a value of 50,000 or even 200,000.
After looking through the GlusterFS code I noticed that this is impossible as the value is clamped between 0 and 32,768. Here's the code excerpt from libglusterfs/src/inode.c:
```c
// mem_pool_size is the same as network.inode-lru-limit.
// DEFAULT_INODE_MEMPOOL_ENTRIES is defined as a constant with value 32*1024.
// Another piece of code ensures that mem_pool_size cannot be less than 0.
if (!mem_pool_size || (mem_pool_size > DEFAULT_INODE_MEMPOOL_ENTRIES))
    mem_pool_size = DEFAULT_INODE_MEMPOOL_ENTRIES;
```
I think the documentation should state clearly that the maximum number is 32,768 and setting higher numbers will not yield any additional benefits.
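For illustration, here is a small, self-contained program that applies the same check as the excerpt above to the values suggested in the docs (the effective_mem_pool_size helper is mine, not GlusterFS code):

```c
#include <stdio.h>

/* Same constant as in libglusterfs/src/inode.c: 32*1024 = 32,768. */
#define DEFAULT_INODE_MEMPOOL_ENTRIES (32 * 1024)

/* Hypothetical helper (not part of GlusterFS) mirroring the clamp shown above. */
static unsigned
effective_mem_pool_size(unsigned mem_pool_size)
{
    if (!mem_pool_size || mem_pool_size > DEFAULT_INODE_MEMPOOL_ENTRIES)
        return DEFAULT_INODE_MEMPOOL_ENTRIES;
    return mem_pool_size;
}

int
main(void)
{
    /* Both values recommended in the docs collapse to 32,768. */
    printf("%u\n", effective_mem_pool_size(50000));  /* prints 32768 */
    printf("%u\n", effective_mem_pool_size(200000)); /* prints 32768 */
    return 0;
}
```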
Additionally I think it's worth mentioning that the inode LRU cache cannot be deactivated in any way. This is probably a sane decision, but there is no further documentation available for this setting and other caches can often be disabled. I naively assumed that this might be the case here as well and had to check the GlusterFS code to see for myself.
Actually, after spending a bit more time with the GlusterFS code, I'm not entirely sure what happens.
The LRU limit is definitely clamped when the inode table is created, but after that it seems the limit can be increased arbitrarily.
I found this piece of code in xlators/protocol/server/src/server.c, in the function server_reconfigure:
```c
/* traverse through the xlator graph. For each xlator in the graph
   check whether it is a bound_xl or not (bound_xl means the xlator
   will have its itable pointer set). If so, then set the lru limit
   for the itable. */
xlator_foreach(this, xlator_set_inode_lru_limit, &inode_lru_limit);
```
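For what it's worth, here is a rough sketch of what such a per-xlator callback could look like, assuming each bound xlator carries an itable whose lru_limit can simply be overwritten. This is my reading of the mechanism, not the actual xlator_set_inode_lru_limit source:

```c
#include <stdint.h>
#include <glusterfs/xlator.h> /* xlator_t, inode_table_t; header path may vary by version */

/* Sketch only: apply the new limit to every xlator that has an inode table. */
static void
set_inode_lru_limit_sketch(xlator_t *each, void *data)
{
    uint32_t *lru_limit = data;

    /* Only bound xlators have their itable pointer set. */
    if (each->itable)
        each->itable->lru_limit = *lru_limit;
}
```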
I don't know enough about GlusterFS to tell whether that function is only called when settings change or whether it is also called when the volume is initialized. I'll try to clarify this. Depending on the answer, my observation might be wrong and the docs might actually be correct.
Sorry for the confusion. The following explains what happens.
The mem_pool is allocated by default with space for 32*1024 inodes (or, if lru_limit is lower than that, with space for exactly lru_limit inodes).
When this limit is hit, the mem_pool grows dynamically.
There is an inode_table_prune function that checks whether the mem_pool has grown beyond the lru_limit and purges the oldest entries. This ensures that the mem_pool respects the configured limit.
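To illustrate the idea, here is a minimal, self-contained sketch of an LRU prune loop with made-up structures; it is not the actual inode_table_prune implementation from libglusterfs:

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-ins for illustration only; GlusterFS uses its own
 * inode_t / inode_table_t structures and list macros. */
struct fake_inode {
    struct fake_inode *lru_prev;
    struct fake_inode *lru_next;
};

struct fake_inode_table {
    struct fake_inode *lru_head; /* oldest entry */
    struct fake_inode *lru_tail; /* newest entry */
    uint32_t lru_size;
    uint32_t lru_limit;
};

/* Sketch of the prune idea: while the LRU list is larger than the
 * configured limit, drop entries from the oldest end. */
static void
table_prune_sketch(struct fake_inode_table *table)
{
    while (table->lru_limit && table->lru_size > table->lru_limit) {
        struct fake_inode *oldest = table->lru_head;

        table->lru_head = oldest->lru_next;
        if (table->lru_head)
            table->lru_head->lru_prev = NULL;
        else
            table->lru_tail = NULL;

        free(oldest);
        table->lru_size--;
    }
}
```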
So, my initial observation is wrong and the docs are actually correct. Everything works as expected. 🙂