
perf(dbless): load declarative schema during init() (#10932) [backport] #10945

Merged
merged 1 commit into release/3.3.x from backport-10932-to-release/3.3.x on May 29, 2023

Conversation

@flrgh (Contributor) commented May 26, 2023

This is a backport of #10932.

summary

With this change, we no longer reload the declarative schema for every POST /config request (something that has been the case since 3.0, I think).

The schema load operation adds 150-200ms of latency to each request in my local testing:

# serialize requests (otherwise we're just fighting lock contention)
$ wrk -c 1 -t 1 -s wrk.lua --latency -d 60 http://localhost:8001/config
wrk.lua
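-- wrk.lua: POST a 500-service declarative config to Kong's /config endpoint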
local cjson = require "cjson"

local conf = {
  _transform = false,
  _format_version = "3.0",
  services = {},
}

for i = 1, 500 do
  conf.services[i] = {
    name     = "my-service-" .. tostring(i),
    protocol = "http",
    host     = "127.0.0.1",
    port     = 80,
    routes   = {
      {
        name = "my-service-route-" .. tostring(i),
        hosts = { "my-service-route-" .. tostring(i) .. ".test" },
        paths = { "/" },
      }
    }
  }
end

local body = cjson.encode(conf)

local headers = {
  ["content-type"] = "application/json",
}

local wrk = assert(_G.wrk)

wrk.scheme = "http"
wrk.host = "127.0.0.1"
wrk.port = 8001
wrk.method = "POST"
wrk.path = "/config"
wrk.body = body
wrk.headers = headers
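
The payload is deliberately non-trivial (500 services, each with one route), so each request exercises a realistic config rather than an empty one, while the single connection and thread keep requests serialized as noted above.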

Results for cc3b056 (baseline, before this change):

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   361.96ms   53.54ms 439.67ms   55.15%
    Req/Sec     2.41      0.57     5.00     60.00%
  Latency Distribution
     50%  391.33ms
     75%  406.90ms
     90%  416.41ms
     99%  433.45ms
  165 requests in 1.00m, 59.94MB read
Requests/sec:      2.75
Transfer/sec:      1.00MB

Results for c339a8c (with this change):

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   160.62ms   27.79ms 229.26ms   78.06%
    Req/Sec     6.60      2.50    10.00     59.26%
  Latency Distribution
     50%  146.25ms
     75%  171.50ms
     90%  210.79ms
     99%  221.30ms
  351 requests in 1.00m, 127.51MB read
Requests/sec:      5.84
Transfer/sec:      2.12MB
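
In short: average latency drops from ~362ms to ~161ms, a roughly 200ms reduction that matches the 150-200ms schema-load cost measured above, and throughput roughly doubles (2.75 to 5.84 req/s).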

why backport?

For an API-centric product, a change that shaves triple-digit milliseconds off a heavily-used API endpoint is worth a backport, especially given that 3.3 is destined to be the next LTS.

* perf(dbless): load declarative schema during init()

This updates the logic in Kong.init() to load the declarative config
schema and store it in the kong global at `kong.db.declarative_config`.

This brings a substantial perf improvement to the /config endpoint,
which was previously reloading the schema on every request.

* docs(changelog): add entry for 10932

(cherry picked from commit c339a8c)
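
For illustration, here is a minimal sketch of the caching pattern, not the actual diff; it assumes Kong's declarative module exposes new_config() returning an object with a parse_string() method, and reduces the Admin API handler to a bare function:

local declarative = require "kong.db.declarative"

-- before: every POST /config request rebuilt the schema object,
-- paying the ~150-200ms schema-load cost each time
local function handle_config_before(body)
  local dc = assert(declarative.new_config(kong.configuration))
  return dc:parse_string(body)
end

-- after: Kong.init() builds the schema once at startup and stores it
-- on the kong global at kong.db.declarative_config...
local function init()
  kong.db.declarative_config = assert(declarative.new_config(kong.configuration))
end

-- ...so the request handler simply reuses the cached object
local function handle_config_after(body)
  return kong.db.declarative_config:parse_string(body)
end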
@bungle merged commit 59a21fc into release/3.3.x on May 29, 2023
28 checks passed
@bungle deleted the backport-10932-to-release/3.3.x branch on May 29, 2023 at 06:57