Misleading values for network.inode-lru-limit in different parts of the documentation #753

Closed
rluetzner opened this issue Jun 20, 2022 · 2 comments

Comments

@rluetzner

The following docs mention settings that are no longer configurable (if they ever were at all):

  • docs/Administrator-Guide/Performance-Tuning.md
  • docs/Administrator-Guide/Accessing-Gluster-from-Windows.md

The docs mention that performance can be improved in some cases by setting network.inode-lru-limit to a value of 50,000 or even 200,000.

After looking through the GlusterFS code, I noticed that this is impossible, as the value is clamped between 0 and 32,768. Here's the relevant excerpt from libglusterfs/src/inode.c:

// mem_pool_size is the same as network.inode-lru-limit.
// DEFAULT_INODE_MEMPOOL_ENTRIES is defined as a constant with value 32*1024.
// Another piece of code ensures that mem_pool_size cannot be less than 0.
if (!mem_pool_size || (mem_pool_size > DEFAULT_INODE_MEMPOOL_ENTRIES))
        mem_pool_size = DEFAULT_INODE_MEMPOOL_ENTRIES;

I think the documentation should state clearly that the maximum value is 32,768 and that setting higher values will not yield any additional benefit.

Additionally, I think it's worth mentioning that the inode LRU cache cannot be deactivated in any way. This is probably a sane decision, but there is no further documentation available for this setting, and other caches can often be disabled. I naively assumed that might be the case here as well and had to check the GlusterFS code to see for myself.
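
To make the clamping concrete, here is a small, self-contained sketch that reproduces the quoted branch for a few candidate values of network.inode-lru-limit. Only the constant name and the if-statement mirror the excerpt above; everything else is illustrative and is not GlusterFS code.

#include <stdint.h>
#include <stdio.h>

/* The constant and the if-branch mirror the excerpt from
   libglusterfs/src/inode.c quoted above; the rest is illustrative only. */
#define DEFAULT_INODE_MEMPOOL_ENTRIES (32 * 1024)

static uint32_t
effective_mem_pool_size(uint32_t mem_pool_size)
{
    /* 0 and anything above 32*1024 both fall back to the default. */
    if (!mem_pool_size || (mem_pool_size > DEFAULT_INODE_MEMPOOL_ENTRIES))
        mem_pool_size = DEFAULT_INODE_MEMPOOL_ENTRIES;
    return mem_pool_size;
}

int
main(void)
{
    uint32_t requested[] = {0, 20000, 50000, 200000};
    for (size_t i = 0; i < sizeof(requested) / sizeof(requested[0]); i++)
        printf("requested %6u -> mem_pool_size %u\n", (unsigned)requested[i],
               (unsigned)effective_mem_pool_size(requested[i]));
    /* prints 32768, 20000, 32768, 32768 */
    return 0;
}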

@rluetzner
Author

Actually, after spending a bit more time with the GlusterFS code, I'm not entirely sure what happens.

The LRU limit is definitely clamped when the inode table is created, but after that it seems the limit can be increased arbitrarily.

I found this piece of code in xlators/protocol/server/src/server.c, in the function server_reconfigure:

        /* traverse through the xlator graph. For each xlator in the
           graph check whether it is a bound_xl or not (bound_xl means
           the xlator will have its itable pointer set). If so, then
           set the lru limit for the itable.
        */
        xlator_foreach(this, xlator_set_inode_lru_limit, &inode_lru_limit);

I don't know enough about GlusterFS to tell whether server_reconfigure is only called when settings change or whether it is also called when the volume is initialized. I'll try to clarify this. Depending on the answer, my observation might be wrong and the docs might actually be correct.
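
For context, a toy model of the traversal that comment describes (walk the xlator graph and update the inode table only on xlators that actually have one) might look roughly like the sketch below. The struct and function names here are stand-ins, not the real GlusterFS types or APIs.

#include <stdint.h>
#include <stdio.h>

/* Stand-in structs, NOT the real GlusterFS types, just enough to model
   "only bound xlators carry an inode table". */
struct itable_stub {
    uint32_t lru_limit;
};

struct xlator_stub {
    const char *name;
    struct itable_stub *itable;   /* NULL unless this is a bound_xl */
    struct xlator_stub *next;
};

/* Models what xlator_foreach(this, xlator_set_inode_lru_limit, &limit)
   appears to do per the quoted comment: walk the graph and update each
   itable's lru_limit. */
static void
set_inode_lru_limit(struct xlator_stub *graph, uint32_t new_limit)
{
    for (struct xlator_stub *xl = graph; xl != NULL; xl = xl->next) {
        if (xl->itable == NULL)
            continue;             /* not a bound_xl, nothing to update */
        xl->itable->lru_limit = new_limit;
        printf("%s: lru_limit set to %u\n", xl->name, (unsigned)new_limit);
    }
}

int
main(void)
{
    struct itable_stub it = {32 * 1024};
    struct xlator_stub bound = {"brick", &it, NULL};
    struct xlator_stub top = {"protocol/server", NULL, &bound};

    /* Raising the limit after the tables were created, as a reconfigure
       path could do; note the value is not clamped to 32*1024 here. */
    set_inode_lru_limit(&top, 200000);
    return 0;
}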

@rluetzner
Author

Sorry for the confusion. The following explains what happens.

  1. The mem_pool is allocated by default with space for 32*1024 inodes (or, if lru_limit is lower, with space for exactly lru_limit inodes).
  2. When this limit is hit, the mem_pool grows dynamically.
  3. There is an inode_table_prune function that checks whether the mem_pool has grown beyond the lru_limit and purges the oldest entries, so the mem_pool ends up respecting the configured limit (see the sketch after this list).
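
Here is a minimal sketch of the purge behaviour from step 3, assuming the usual LRU pattern of evicting from the oldest end until the size is back under the limit. It is a toy model, not the real inode_table_prune, and the struct and field names are made up.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model of the behaviour described above, not GlusterFS code. */
struct lru_entry {
    uint64_t ino;
    struct lru_entry *next;       /* singly linked, oldest at the head */
};

struct itable_model {
    uint32_t lru_limit;           /* the configured network.inode-lru-limit */
    uint32_t lru_size;            /* current number of cached inodes */
    struct lru_entry *oldest;
    struct lru_entry *newest;
};

/* Step 3: once lru_size exceeds lru_limit, purge from the oldest end
   until the table respects the configured limit again. */
static void
prune(struct itable_model *t)
{
    while (t->lru_size > t->lru_limit && t->oldest != NULL) {
        struct lru_entry *victim = t->oldest;
        t->oldest = victim->next;
        if (t->oldest == NULL)
            t->newest = NULL;
        t->lru_size--;
        printf("purged inode %llu\n", (unsigned long long)victim->ino);
        free(victim);
    }
}

/* Step 2: the cache simply grows when a new inode is added ... */
static void
add_inode(struct itable_model *t, uint64_t ino)
{
    struct lru_entry *e = calloc(1, sizeof(*e));
    if (e == NULL)
        return;
    e->ino = ino;
    if (t->newest != NULL)
        t->newest->next = e;
    else
        t->oldest = e;
    t->newest = e;
    t->lru_size++;
    prune(t);                     /* ... and pruning pulls it back under the limit */
}

int
main(void)
{
    struct itable_model t = {.lru_limit = 3};
    for (uint64_t ino = 1; ino <= 5; ino++)
        add_inode(&t, ino);       /* inodes 1 and 2 get purged */
    printf("cached: %u (limit %u)\n", (unsigned)t.lru_size, (unsigned)t.lru_limit);
    return 0;
}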

So, my initial observation is wrong and the docs are actually correct. Everything works as expected. 🙂
