feat(nginx-conf) add a 'kong_locks' shm for resty-lock users #3550

Merged: 1 commit into master from feat/new-kong-locks-shm on Jun 15, 2018

Conversation

thibaultcha
Member

This commit introduces a new shm, `kong_locks`, to be used by instances
of lua-resty-lock across Kong and its libraries wherever possible.

Since #3543 bumped the mlcache dependency, the DB cache is the first
consumer of this shm to store its locks, which helps limit the
effects of LRU churning under certain workloads. More components should use
this shm to store their resty-lock instances, especially instead of
relying on the `kong` shm, which is to be considered "immutable".

The chosen size for this shm is 8 MB:

1. In the not-too-distant future, users should be able to customize the size
   of _any_ shm via configuration properties. Some additional work on top
   of #3530 ("feat(conf) add support to configure nginx directives via kong.conf")
   is needed for this.
2. Such a size allows storing (based on estimations with a test script)
   about 65,000 lua-resty-lock items. While this number is very much
   workload-specific, it should be appropriate even in situations where
   requests trigger cache misses in both the runloop and the core, which
   would account for up to 10, or maybe 15, locks (with very bad luck);
   this means we could still comfortably process several thousand such
   cold requests concurrently.
3. Ideally, abstraction layers should be built on top of eviction-sensitive
   shms, in order to monitor LRU eviction of locks, or forbid it entirely
   on immutable shms.

This commit is considered backwards-compatible and thus does not require
the `kong_locks` shm to be defined.
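As a sketch of what this looks like for a consumer, a component can point lua-resty-lock at the dedicated dict instead of the `kong` shm. This runs inside OpenResty only; the helper and key names below are illustrative, not part of this commit:

```lua
-- Sketch only: acquiring a short-lived lock in the dedicated
-- "kong_locks" shm instead of the "kong" shm. Must run inside
-- OpenResty; with_lock and the key passed to it are hypothetical.
local resty_lock = require "resty.lock"

local function with_lock(key, fn)
  -- "kong_locks" must be declared as a lua_shared_dict in nginx.conf
  local lock, err = resty_lock:new("kong_locks")
  if not lock then
    return nil, "failed to create lock: " .. err
  end

  local elapsed, lerr = lock:lock(key)
  if not elapsed then
    return nil, "failed to acquire lock: " .. lerr
  end

  -- run the protected section, always releasing the lock afterwards
  local ok, res = pcall(fn)

  local _, uerr = lock:unlock()
  if uerr then
    ngx.log(ngx.ERR, "failed to unlock: ", uerr)
  end

  if not ok then
    return nil, res
  end
  return res
end
```

Because these locks live in their own dict, heavy traffic on other shms (such as the DB cache's data shm) no longer competes with lock entries for LRU space.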
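For users templating their own nginx configuration, the new dict is declared like any other shared dict. A minimal fragment, assuming the 8 MB size chosen in this commit:

```nginx
# Fragment of the http {} block in the Kong-generated nginx configuration.
# Declares the dedicated shm backing lua-resty-lock instances; 8m matches
# the size discussed above.
lua_shared_dict kong_locks 8m;
```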

@thibaultcha thibaultcha merged commit e889936 into master Jun 15, 2018
@thibaultcha thibaultcha deleted the feat/new-kong-locks-shm branch June 15, 2018 23:12
thibaultcha added a commit that referenced this pull request Jun 18, 2018
Render newly added shared dicts required. As 0.14 ships with breaking
changes and other nginx configuration changes, now is a good time to
render recent shared dicts mandatory.

See #3550
See #3311
thibaultcha added a commit that referenced this pull request Jun 19, 2018
Render newly added shared dicts required. As 0.14 ships with breaking
changes and other nginx configuration changes, now is a good time to
render recent shared dicts mandatory.

See #3550
See #3311
From #3557