chore(deps) bump mlcache to 2.1.0 #3543

Merged 1 commit from chore/bump-mlcache into master on Jun 15, 2018
Conversation

thibaultcha (Member) commented:
Changelog 2.0.2...2.1.0:

thibaultcha/lua-resty-mlcache@2.0.2...2.1.0

Mainly useful to take advantage of the new `shm_locks` option.
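For context, here is a minimal sketch of how a caller might opt into that option. The shared dict names, sizes, and TTLs below are illustrative, not Kong's actual configuration; both dicts would need to be declared with `lua_shared_dict` in the nginx configuration.

```lua
-- Illustrative sketch only: dict names and option values are hypothetical.
-- Assumes the following declarations in nginx.conf:
--   lua_shared_dict my_cache 12m;
--   lua_shared_dict my_locks 8m;
local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "my_cache", {
    lru_size  = 1000,        -- worker-local LRU cache size
    ttl       = 3600,        -- positive cache TTL, in seconds
    shm_locks = "my_locks",  -- new in 2.1.0: store lua-resty-lock items in a
                             -- dedicated dict, away from the cached values
})
if not cache then
    error("failed to create mlcache instance: " .. err)
end
```

With `shm_locks` set, the short-lived lock entries created on cache misses no longer compete with cached values for space in the value shm.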
thibaultcha merged commit e595800 into master on Jun 15, 2018
thibaultcha deleted the chore/bump-mlcache branch on June 15, 2018 at 19:55
thibaultcha added a commit that referenced this pull request on Jun 15, 2018:

This commit introduces a new shm, `kong_locks`, to be used by instances
of lua-resty-lock across Kong or its libraries, if possible.

Since #3543 bumped the mlcache dependency, the DB cache is the first
consumer of this shm to store its locks, which helps limit the effects
of LRU churning in certain workloads. More components should use this
shm to store their resty-lock instances, rather than relying on the
`kong` shm, which is to be considered "immutable".

The chosen size for this shm is 8 MB:

1. In the not-too-distant future, users should be able to customize the
   size of _any_ shm via configuration properties. Some additional work
   on top of #3530 is needed for this.
2. Such a size allows storing (based on estimates from a test script)
   about 65,000 lua-resty-lock items. While this number is very much
   workload-specific, it should be appropriate even in situations where
   requests trigger cache misses in both the runloop and the core, which
   would amount to 10, or maybe 15, locks per request (with very bad
   luck); this means we could still comfortably process several thousand
   such cold requests concurrently.
3. Ideally, abstraction layers should be built on top of
   eviction-sensitive shms, in order to monitor LRU evictions of locks,
   or to forbid evictions entirely on immutable shms.

This commit is considered backwards-compatible and thus does not require
the `kong_locks` shm to be defined.
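For reference, a minimal sketch of how the new shm ties together with the mlcache option bumped in this PR. The `kong_locks` name and 8m size come from this commit; the cache name, value shm name, and option values below are hypothetical and differ from Kong's actual wiring.

```lua
-- Illustrative sketch only; Kong's actual nginx template and cache module
-- wiring differ. The new shm must be declared in the nginx configuration:
--   lua_shared_dict kong_locks 8m;
local mlcache = require "resty.mlcache"

-- The DB cache is the first consumer of `kong_locks`: its lua-resty-lock
-- items are stored there instead of in the value shm, limiting LRU churn.
local db_cache, err = mlcache.new("db_cache", "kong_db_cache", {
    shm_locks = "kong_locks", -- dedicated shm for resty-lock items
    lru_size  = 100000,       -- hypothetical worker-local LRU size
    ttl       = 3600,         -- hypothetical default TTL, in seconds
})
if not db_cache then
    error("failed to create the DB cache: " .. err)
end
```

As a rough check on point 2 above: 8 MB spread over ~65,000 lock items is about 130 bytes per item, and at 10 to 15 locks per cold request that still leaves headroom for several thousand concurrent cold requests.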
thibaultcha added a second commit referencing this pull request on Jun 15, 2018 (from #3550), with the same commit message as above.