
perf(plugins-iterator) execute iterator only on configured plugins #3794

Closed (wants to merge 2 commits)
p0pr0ck5 (Contributor) commented Sep 20, 2018

Summary

Reduce the number of invocations of the plugins iterator by attempting
to load configurations only for plugins we know are configured on the
cluster. A map of the distinct names of configured plugins is maintained
by iterating over each row in the plugins table. Plugins CRUD events are
handled and delete a node cache entry representing the current version
of the plugins map; in the request path this value is used to determine
if a rebuild of the map is necessary. This logic mimics that of the
router rebuild mechanism. Additionally, plugin map rebuilds are wrapped
in a worker-level mutex (implemented via ngx.semaphore); this prevents
stampeding database connections during rebuild events in high-concurrency
situations.

Full Changelog

  • Execute iterator only on configured plugins
  • Add integration test to validate plugin map cache key invalidation
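The version-check-and-rebuild flow described in the summary can be sketched as plain Lua. This is a self-contained simulation: the node cache entry and the ngx.semaphore mutex from the real code are replaced with locals here, and the names mirror the PR's but are illustrative only.

```lua
-- Simulated node cache entry holding the current plugins map version;
-- in Kong this would come from cache:get("plugins_map:version", ...).
local cached_version = "v2"

-- Worker-local state, as held in kong/init.lua.
local plugins_map_version = "v1"
local plugins_map

-- Build a map of distinct plugin names from the plugins table rows.
local function build_plugins_map(rows)
  local map = {}
  for _, row in ipairs(rows) do
    map[row.name] = true
  end
  return map
end

-- Request-path check: rebuild only when the cached version has moved on.
-- In the real code this rebuild is guarded by an ngx.semaphore so that
-- concurrent requests on one worker do not stampede the database.
local function ensure_plugins_map(rows)
  if plugins_map_version ~= cached_version then
    plugins_map = build_plugins_map(rows)
    plugins_map_version = cached_version
  end
  return plugins_map
end

local map = ensure_plugins_map({ { name = "key-auth" }, { name = "rate-limiting" } })
print(map["key-auth"])  -- true
print(map["oauth2"])    -- nil: not configured, so the iterator can skip it
```

With the map in hand, the iterator only needs to attempt loading configurations for plugin names present as keys, instead of for every loaded plugin on every request.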

@p0pr0ck5 p0pr0ck5 force-pushed the perf/plugins-iterator branch from a6420c9 to a70def5 Sep 20, 2018

@p0pr0ck5 p0pr0ck5 requested a review from thibaultcha Sep 24, 2018

bungle (Member) commented Sep 27, 2018

@p0pr0ck5 It seems this is going to land on 0.15.0, so can you please rebase on next (which is not that straightforward, as a lot has changed)? @thibaultcha, am I correct (or does it matter)?

bungle (Member) left a comment

Some miscellaneous comments.

local plugins_map_semaphore

local PLUGINS_MAP_CACHE_OPTS = { ttl = 0 }
local PLUGINS_MAP_PAGE_SIZE = 1000

bungle (Member), Sep 27, 2018:

This is used only once and not publicly exposed, so I guess it is OK to just have that 1000 inline where it is used.

p0pr0ck5 (Author), Sep 27, 2018:

Trying to avoid magic numbers :) A meaningful name is useful, IMO.

local loaded_plugins

local function build_plugins_map(dao, version)

bungle (Member), Sep 27, 2018:

On the next branch the plugins are moved to the new DAO, so a lot of this will need to change when rebased to next.

local plugins_map_version
local plugins_map_semaphore

local PLUGINS_MAP_CACHE_OPTS = { ttl = 0 }

bungle (Member), Sep 27, 2018:

The router also seems to use these same options; maybe make it generic and use it there as well, e.g. just CACHE_OPTS, or find a better name that describes what it does.

p0pr0ck5 (Author), Sep 27, 2018:

the router rebuild that uses this logic is in a different module, so making a generic value doesn't seem helpful IMO

bungle (Member), Sep 27, 2018:

What do you mean by a different module? They are even inside the same function (e.g. you could use function locals instead of module locals).

https://github.com/Kong/kong/blob/perf/plugins-iterator/kong/init.lua#L354-L387

E.g.:

local ok, err = cache:get("router:version", { ttl = 0 }, function()
    return "init"
  end)

Could be written as follows, since this now repeats in 3 places (ok, api_router is going away):

local ok, err = cache:get("router:version", CACHE_OPTS, CACHE_INIT)

And that:

plugins_map_version, err = cache:get("plugins_map:version",
                                     PLUGINS_MAP_CACHE_OPTS,
                                     function() return "init" end)

changed to

plugins_map_version, err = cache:get("plugins_map:version", CACHE_OPTS, CACHE_INIT)

But not a big deal. Just repeating it here so that we're on the same page.
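The deduplication being suggested would look something like this. CACHE_OPTS and CACHE_INIT are the hypothetical shared names proposed above, not actual Kong identifiers:

```lua
-- Shared module-locals replacing the three repeated literals.
local CACHE_OPTS = { ttl = 0 }

local function CACHE_INIT()
  return "init"
end

-- Call sites would then become one-liners, e.g.:
--   local ok, err = cache:get("router:version", CACHE_OPTS, CACHE_INIT)
--   plugins_map_version, err = cache:get("plugins_map:version",
--                                        CACHE_OPTS, CACHE_INIT)
print(CACHE_INIT())  -- init
```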

@@ -323,6 +382,14 @@ function Kong.init_worker()
kong.db:set_events_handler(worker_events)
kong.dao:set_events_handler(worker_events)

plugins_map_version, err = cache:get("plugins_map:version",
PLUGINS_MAP_CACHE_OPTS,
function() return "init" end)

bungle (Member), Sep 27, 2018:

same here: function() return "init" end is also used in the router, so maybe just declare a local function and use that in all of these places.

p0pr0ck5 (Author), Sep 27, 2018:

same thing; that means declaring and sharing a function that returns the string 'init', which doesn't seem useful

elseif plugins_map_version ~= version then
-- try to acquire the mutex (semaphore)

local ok, err = plugins_map_semaphore:wait(10)

bungle (Member), Sep 27, 2018:

Should the timeout be configurable?

p0pr0ck5 (Author), Sep 27, 2018:

Indeed there's some relevant discussion about the same kind of behavior at #3782 (comment); will update this PR to match that behavior once that's settled and approved.

map[row.name] = true
end

if offset == nil then

bungle (Member), Sep 27, 2018:

Can this ever be false?

p0pr0ck5 (Author), Sep 27, 2018:

yes

bungle (Member), Sep 27, 2018:

Can offset be false, nil, or something else? What does it mean for offset to be false, and why is it handled differently from it being nil?
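For context on this exchange: Lua distinguishes nil from false even though both are falsy, which is why `offset == nil` and `not offset` are not interchangeable in a pagination check. A minimal sketch, independent of Kong's actual DAO semantics:

```lua
local offset = false

-- Both nil and false fail a truthiness test...
assert(not offset)

-- ...but only nil means "no value at all"; false is a real value a
-- paging API could return to signal something other than "no more pages".
assert(offset ~= nil)

offset = nil
assert(offset == nil)  -- only now has the iteration truly finished
```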

(resolved review thread on kong/runloop/plugins_iterator.lua)

@p0pr0ck5 p0pr0ck5 force-pushed the perf/plugins-iterator branch from a70def5 to cc1b783 Sep 27, 2018

@p0pr0ck5 p0pr0ck5 changed the base branch from master to next Sep 27, 2018

@p0pr0ck5 p0pr0ck5 force-pushed the perf/plugins-iterator branch from cc1b783 to 4209cbd Sep 27, 2018

p0pr0ck5 (Author) commented Sep 27, 2018

@bungle just FYI, this is rebased off next. Seeing how CI plays with it, and then I will adjust the remaining hard-coded semaphore timeout in a bit.

p0pr0ck5 (Author) commented Sep 28, 2018

I'll need to update the new tests as well. Will take a look tomorrow morning.

thibaultcha added a commit that referenced this pull request Sep 28, 2018

perf(router) wrap router rebuilds in a worker mutex
Guard calls to build_router in a worker-level mutex (via ngx.semaphore)
in order to prevent database dogpiling during rebuild events in
high-concurrency traffic patterns. Requests on a worker process
rebuilding the router will be queued and resumed once the rebuild is
complete.

Fix #3634
From #3794

Signed-off-by: Thibault Charbonnier <thibaultcha@me.com>

@p0pr0ck5 p0pr0ck5 force-pushed the perf/plugins-iterator branch from 4209cbd to 917a604 Oct 1, 2018

thibaultcha added a commit that referenced this pull request Oct 1, 2018

perf(router) wrap router rebuilds in a worker mutex

thibaultcha added a commit that referenced this pull request Oct 1, 2018

perf(router) wrap router rebuilds in a worker mutex

thibaultcha added a commit that referenced this pull request Oct 1, 2018

feat(core) use 'pg_timeout' as router rebuild semaphore timeout
Complementary commit for:

* 11d69be feat(conf) add 'pg_timeout' property
* 85946ba perf(router) wrap router rebuilds in a worker mutex

We now use the configured timeout for PostgreSQL as the timeout for our
router rebuild semaphore. Note that the default value of `60` is
preserved to be friendly to any fork out there that may support other
configuration backends.

See #3794
See #3808

thibaultcha added a commit that referenced this pull request Oct 1, 2018

feat(core) use 'pg_timeout' as router rebuild semaphore timeout
elseif plugins_map_version ~= version then
-- try to acquire the mutex (semaphore)

local ok, err = plugins_map_semaphore:wait(10)

p0pr0ck5 (Author), Oct 1, 2018:

I need to update this to use the same logic presented in #3820

thibaultcha added a commit that referenced this pull request Oct 1, 2018

feat(core) use 'pg_timeout' as router rebuild semaphore timeout
perf(plugins-iterator) execute iterator only on configured plugins

@p0pr0ck5 p0pr0ck5 force-pushed the perf/plugins-iterator branch from 917a604 to 668899e Oct 2, 2018

p0pr0ck5 (Author) commented Oct 2, 2018

@bungle this is ready for another round of review following rebasing to next :)

bungle approved these changes Oct 4, 2018

bungle (Member) left a comment

LGTM, but maybe @thibaultcha would like to have his final say?

thibaultcha (Member) commented Oct 4, 2018

Yes, I would very much like to have a glance at it as well! I will try my best to squeeze it in today :)

thibaultcha (Member) left a comment

Ugh, sorry to be such a bummer and leave more concerns, I was really hoping to come back with a green tick...
This is a very interesting change and an exciting one to merge for sure, but it also means that it needs to be taken very seriously, so please pardon me for the extra cautiousness here. Rest assured I too want to see this merged! Let's talk about some of the posted concerns :)

local function build_plugins_map(db, version)
local map = {}

for plugin, err in db.plugins:each() do

thibaultcha (Member), Oct 5, 2018:

I am concerned by this, in the sense that it can significantly increase the startup time for Kong. Routes and Services in Cassandra are partitioned together (i.e. they reside on the same node), but Plugins are not partitioned. This means that for every 100 plugins, we will make subsequent queries to the cluster to retrieve all rows from all partitions. On top of being potentially slow (we know of some users having in the (tens of) thousands of plugins), this also increases the chances of a failure to start due to a query timeout or such. We are most likely not in a position to experience any pain from this change, but I am fairly confident that some users will.

The impact of the query at runtime is less concerning I believe, as it will rarely be needed to rebuild the plugins map in actual production usage (and other operations esp. in the load balancer are making use of a complete iterator like this one as well).

I would suggest making a custom query for this operation (the new DAO is quite extensible and supports such custom operation per entity), and that would be helpful for PostgreSQL as well, however, I am really unsure there actually is any improved query we could make for Cassandra... I am referring to something like:

SELECT DISTINCT name from plugins;

Unfortunately, our partitioning of the plugins column family isn't allowing us to do so in Cassandra... Ugh. I haven't found a workaround quite yet, or thought of an alternative approach.

What thoughts do you have about the concern raised here?

bungle (Member), Oct 5, 2018:

I feel that for such a core thing, it would be nice to make the queries as efficient as we can, e.g. returning just the information the logic needs, like the above query clause in PG. The Cassandra issue is a bit harder to answer. For Cassandra and PG we already have an index on name. So reading just the name would be just reading the index, right?

p0pr0ck5 (Author), Oct 5, 2018:

@bungle Regarding a DISTINCT query for Postgres, yes, this definitely makes sense. My initial thought was to avoid a strategy-specific solution in order to reduce complexity. And @thibaultcha had a comment in a related PR (around router rebuild mutexes) about backward compatibility for in-the-wild custom DAO strategies, so we'll still need a fallback that uses a generic solution. I will need a bit of study time to learn how we can implement custom queries, as this doesn't appear to be documented at this point, from what I can tell (and I may just be missing something, not having been well steeped in this codebase in the last few months; if it's documented, please point me in the right direction :)).

Regarding Cassandra... I don't have enough experience or knowledge to offer a helpful opinion. I can definitely sympathize with not introducing a problematic data access pattern here. If there's a suggestion on a better way to approach this, I'm 100% 👂. One alternative I can think of (a bad idea, just spitballing) is maintaining a separate table for "currently configured plugins in the cluster", which would be maintained by cluster_event hooks watching for changes to the DAO. The size of this table is no larger than the number of loaded plugins; this would seem to align with Cassandra's philosophy of writing schemas to fit the intended application queries. But it would be more complexity to maintain and troubleshoot, and it would introduce two distinct sources of truth for plugin data.

bungle (Member), Oct 5, 2018:

Yes, what you suggested for C* is basically what you usually do with proper NoSQL design, e.g. you have to start denormalizing when you need read performance. But atm I do not see us doing that a lot (at all?), as it introduces insert/update complexities.

About custom strategies, here is example:
https://github.com/Kong/kong/blob/master/kong/db/strategies/cassandra/routes.lua

And here is example of doing generic dao customization:
https://github.com/Kong/kong/blob/master/kong/db/dao/consumers.lua

hbagdi (Member), Oct 5, 2018:

Just leaving a thought and not a review.

The concerns raised here are valid, but if I'm not wrong, the pattern already exists in the code.

While going through this thread, I recalled that when Kong starts up, we check if all plugins configured in DB are enabled or not, to guard against:

  • A custom plugin which was enabled before but is not anymore
  • With plugins property in Kong 0.14.0, someone could disable a bundled plugin after configuring it in Kong.

In both cases, Kong will fail to start up.

The code I'm referring to in master branch is here and the next branch is here. I see that it has undergone change with the plugins being moved to the new DAO.

Since both this PR and the check-plugins-on-startup logic need the set of plugin names that are configured in the DB, we could optimize this and load the plugin names from the DB once (rather than in both places, as happens currently), cache them, and then use them to configure this PR's map and also verify whether Kong should exit.

This is just a thought on how we can optimize the code and not an ask of any sorts.

Thoughts?

thibaultcha (Member), Oct 5, 2018:

Ugh, yes indeed, that is correct! I am not sure if it is a good thing or a bad thing, tbh, but at the very least we could take advantage of it!

p0pr0ck5 (Author), Oct 5, 2018:

The iterator invoked in the next branch for the plugins load uses the same iterator in question here. A large part of the design of this PR was to use existing facilities in preparation for a stable release; if these facilities are not sufficient, we can address that.

I would be much more concerned about optimizing the runtime query times as opposed to startup; the data access model in Kong already means that slow-start concerns are the responsibility of infrastructure architects, not software architects (and I would agree). Given that, the complexity and confusion of optimizing the load-time query pattern is not worth it.

Here's my semi-final thought on this: if we are deeply concerned about performance in this context, we should build strategy-specific solutions, both of which have been alluded to here. Otherwise, we should acknowledge the utility and value of Kong's underlying data access patterns and leave it at that (any potential optimizations to that central code can come outside the scope of this PR).

thibaultcha (Member), Oct 10, 2018:

Will be merging as-is, but a future improvement could be to leverage the first iterator already invoked from the init phase to build the first version of the plugins map.


for plugin, err in db.plugins:each() do
if err then
return nil, tostring(err)

thibaultcha (Member), Oct 5, 2018:

The new DAO guarantees that this value will be a string (no trick, ever, promise!). It is one of its design goals to get rid of patterns like tostring(err) on the consumer side (from the caller's POV). Can we remove the tostring wrap?

bungle (Member), Oct 5, 2018:

I think so. If it is not the case, then we need to fix it elsewhere. So, yes, this should be removed.

p0pr0ck5 (Author), Oct 5, 2018:

This was a copy-paste leftover from old code. Will fix.
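Dropping the wrap relies on the iterator's error already being a string. A minimal self-contained illustration (fake_each is a stand-in, not the real db.plugins:each()):

```lua
-- Stand-in iterator that yields an error on its first step, the way the
-- new DAO would: err is guaranteed to be a string already. The first
-- return value is false (not nil) so the generic-for body still runs.
local function fake_each()
  local done = false
  return function()
    if done then
      return nil  -- iteration finished
    end
    done = true
    return false, "connection refused"
  end
end

local seen_err
for plugin, err in fake_each() do
  if err then
    seen_err = err  -- no tostring(err) needed
    break
  end
end

print(type(seen_err))  -- string
```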

ngx_log(ngx_CRIT, "could not acquire plugins map update mutex: ", err)

return false
end

thibaultcha (Member), Oct 5, 2018 (resolved):

Would you mind hoisting this branch up, below the :wait() call? We always write failure branches first (they are much shorter), as per our style guide and the rest of this codebase. This also avoids the need for two branches, since instead we can do:

local ok, err = ...
if not ok then
  -- ...
  return
end

-- rest goes here, unindented
@@ -508,10 +617,16 @@ function Kong.rewrite()

runloop.rewrite.before(ctx)

local ok = plugins_map_wrapper()

thibaultcha (Member), Oct 5, 2018:

Any reason why we are not invoking this from ssl_certificate as well? My understanding is that rewrite catches non-SSL connections (that's great), but when a new connection is opened just after a new plugin was configured, then are we failing to execute it during the SSL handshake?

p0pr0ck5 (Author), Oct 5, 2018:

Gooood catch. I didn't think about this, largely because we have no plugins that use the ssl phase.

Tangentially, to my understanding the ssl phase was historically provided for plugins so we could do dynamic SSL before ssl_by_lua was a thing. Given that we have certs and snis as first-class objects, should this phase handler for plugins go away?

(resolved review thread on kong/init.lua)
log_level = "debug",
prefix = "servroot1",
database = strategy,
proxy_listen = "0.0.0.0:8000, 0.0.0.0:8443 ssl",

thibaultcha (Member), Oct 5, 2018:

side note: ouch, it is a realization for me that other existing test suites already make use of the default ports used by Kong, which breaks the test suite when a development instance is running... No action required here, just a note...

bungle (Member), Oct 5, 2018:

Yes, I have hit this many times :-(.

thibaultcha (Member), Oct 5, 2018:

Yep, that's how I realized we were already using those ports for existing test suites.

end)

teardown(function()
helpers.stop_kong("servroot1", true)

thibaultcha (Member), Oct 5, 2018:

The true arguments here are a debug-only variable; the current policy in tests is to clean up servroot prefixes after the tests run. Mind removing them?

local ok, err = build_plugins_map(kong.db, version)
if not ok then
ngx_log(ngx_CRIT, "could not rebuild plugins map: ", err)

thibaultcha (Member), Oct 5, 2018:

Is this branch lacking a return statement? It seems like it will fall through to the return true at the bottom of this function, and thus not trigger the unhappy code path of the caller?

@bungle bungle removed the pr/ready label Oct 5, 2018

p0pr0ck5 (Author) commented Oct 9, 2018

@thibaultcha I've pushed up a separate commit addressing your comments, for ease of review.

-- we have the lock but we might not have needed it. check the
-- version again and rebuild if necessary
if not ok then
return nil, err

thibaultcha (Member), Oct 9, 2018:

Readers of this error will be lacking some crucial context, and may just see a "timeout" message. My comment mentioned preserving this context:

if not ok then
  return nil, "failed to acquire mutex: " .. err
end

The final error message will read:

could not ensure plugins map is up to date: failed to acquire mutex: timeout

Sometimes we can get fancy here and play with a combination of colons and parentheses to "hierarchize" the errors, but well.
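The context-preserving pattern reads like this end to end (acquire_mutex is a stand-in for the failing plugins_map_semaphore:wait() call):

```lua
-- Stand-in for plugins_map_semaphore:wait(timeout) timing out.
local function acquire_mutex()
  return nil, "timeout"
end

local function plugins_map_wrapper()
  local ok, err = acquire_mutex()
  if not ok then
    -- prepend context instead of returning the bare "timeout"
    return nil, "failed to acquire mutex: " .. err
  end
  return true
end

local ok, err = plugins_map_wrapper()
if not ok then
  -- the caller prepends its own context the same way:
  print("could not ensure plugins map is up to date: " .. err)
end
```

Each level of the call chain adds its own clause, so the logged message carries the full story rather than a context-free "timeout".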

p0pr0ck5 (Author), Oct 10, 2018:

Ah, gotcha. I wasn't sure about the double-colon design; it felt odd to me, but I see the reasoning (now I know why it lives in other places in the codebase). And I missed the initial part of that comment, my bad.

Fixed!


- return false
+ return false, err

thibaultcha (Member), Oct 9, 2018:

We should stay consistent with the other return values for an error (nil, err). Mind updating this statement?

p0pr0ck5 (Author), Oct 10, 2018:

missed this one, thanks :)

local ok, err = plugins_map_wrapper()
if not ok then
ngx_log(ngx_CRIT, "could not ensure plugins map is up to date: ", err)
return responses.send_HTTP_INTERNAL_SERVER_ERROR()

thibaultcha (Member), Oct 9, 2018:

This will be trouble in this context (certificate). Mind following the same pattern as we do for this phase? See: https://github.com/Kong/kong/blob/0.14.1/kong/runloop/certificate.lua#L106-L107

p0pr0ck5 (Author), Oct 10, 2018:

whups! thanks!

@p0pr0ck5 p0pr0ck5 force-pushed the perf/plugins-iterator branch from 6d013b9 to 8344bfc Oct 10, 2018

thibaultcha added a commit that referenced this pull request Oct 11, 2018

perf(plugins-iterator) only execute iterator on configured plugins
Reduce the number of invocations of the plugins iterator by attempting
to load configurations only for plugins we know are configured on the
cluster.

A map of the distinct names of configured plugins is maintained by iterating
over each row in the plugins table. Plugins CRUD events are handled and
delete a node cache entry representing the current version of the
plugins map; in the request path this value is used to determine if a
rebuild of the map is necessary. This logic mimics that of the router
rebuild mechanism. Additionally, plugin maps rebuilds are wrapped in a
worker-level mutex (implemented via ngx.semaphore); this prevents
stampeding database connections during rebuild events in
high-concurrency situations.

From #3794

Signed-off-by: Thibault Charbonnier <thibaultcha@me.com>

thibaultcha (Member) commented Oct 11, 2018

Merged! 🎉

Thank you @p0pr0ck5!

@thibaultcha thibaultcha deleted the perf/plugins-iterator branch Oct 11, 2018

thibaultcha (Member) commented Oct 11, 2018

The resulting commit on next is 9ea6d0b

kikito added a commit that referenced this pull request Oct 16, 2018

perf(plugins-iterator) only execute iterator on configured plugins