
Do not limit payload size for requests that are proxied to the legacy Kibana. #21750

Merged: 4 commits merged into elastic:master on Aug 14, 2018

Conversation

@azasypkin (Member) commented on Aug 7, 2018

In this PR we set a very large max payload size on the core route that proxies requests to the "legacy" Kibana, which will then perform the proper size checks based on the legacy routes' max size overrides, if any.

The PR includes a test that fails on master and should pass with this change. I'd like to incorporate the changes I introduced in max_payload_size.test.js into kbnTestServer itself, so that other tests that rely on kbnTestServer inject requests through the core and are as close as possible to the real setup. I haven't done that yet, hence the WIP label on this PR.

Fixes #21737
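To illustrate the approach (a minimal sketch only, not the exact code from this PR; the route path, handler name, and Hapi wiring are assumptions for illustration), the catch-all route that forwards requests to the legacy server raises its own payload limit so the effective check is whatever the legacy route configures:

// Hypothetical catch-all proxy route registered by the core http server (Hapi 17 style).
// `legacyProxyHandler` is a placeholder for whatever forwards the request to legacy Kibana.
server.route({
  method: '*',
  path: '/{p*}',
  options: {
    payload: {
      output: 'stream',              // hand the raw stream over to the legacy server
      parse: false,                  // don't parse here; the legacy route will do it
      timeout: false,
      // Effectively disable the core-level limit so the legacy route's own
      // payload.maxBytes (or the legacy server default) is the one that applies.
      maxBytes: 1024 * 1024 * 1024,  // 1 GB
    },
  },
  handler: legacyProxyHandler,
});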

@azasypkin added the Team:Core, Feature:New Platform, v6.4.0, and WIP (work in progress) labels on Aug 7, 2018
@@ -0,0 +1,3 @@
// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`fails with 400 if payload size is larger than default and route config allows 1`] = `"{\\"statusCode\\":400,\\"error\\":\\"Bad Request\\",\\"message\\":\\"Payload content length greater than maximum allowed: 200\\"}"`;
@azasypkin (Member, Author) commented on Aug 7, 2018:

note: Hapi 17 finally returns 413 🎉 hapijs/hapi@044f8b3

function makeRequest(opts) {
  // Inject request through new platform http server that we can access through `proxyListener`
  // provided by the new platform.
  return kbnServer.newPlatform.proxyListener.root.server.http.service.httpServer.server.inject(opts);
}
@azasypkin (Member, Author):

note: sooooooooooooo awful, I know 🙈

@epixa (Contributor) commented on Aug 7, 2018

Does this mean that endpoints registered through the new platform will by default have a 1GB max payload size?

@azasypkin (Member, Author) replied:

> Does this mean that endpoints registered through the new platform will by default have a 1GB max payload size?

Nope, it's a route-specific override; the rest of the routes registered in the new platform will use the default set at the server level.
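A hedged sketch of that distinction (illustrative values and route paths, not code from the PR): only the route with an explicit override gets the large limit, while any other route registered on the same server keeps the server-wide default.

const Hapi = require('hapi');

const server = Hapi.server({
  port: 5601,
  // Server-level default payload limit; applies to every route that doesn't override it.
  routes: { payload: { maxBytes: 1024 * 1024 } }, // 1 MB, illustrative
});

// A regular new-platform route: inherits the 1 MB server default.
server.route({ method: 'POST', path: '/api/regular', handler: () => 'ok' });

// Only this route overrides maxBytes with the large value.
server.route({
  method: 'POST',
  path: '/api/large-payload-example',
  options: { payload: { maxBytes: 1024 * 1024 * 1024 } },
  handler: () => 'ok',
});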

@elasticmachine (Contributor):

💚 Build Succeeded

@alexfrancoeur:

@azasypkin @epixa @rashidkpc tested this PR with Canvas master and it worked like a charm. I uploaded an 11 MB workpad without any errors or issues.

@rashidkpc (Contributor):

@azasypkin thank you so much for jumping on this, it really means a lot when something gets fixed this fast.

@@ -186,7 +181,7 @@ describe('xsrf request filter', function () {
      });

      expect(resp.statusCode).toBe(400);
-     expect(resp.result.message).toBe('Request must contain a kbn-xsrf header.');
+     expect(resp.result).toMatchSnapshot(`${method} reject response`);
@azasypkin (Member, Author):

note: I switched to a snapshot here since resp.result is a "raw" string now: the request is proxied (and "abandoned") from core to the legacy Kibana, and hence the response isn't deserialized by the Hapi server we use in core.

await this.ready();

const { server } = this;
@azasypkin (Member, Author) commented on Aug 9, 2018:

note: I'm not sure why it was done this way before (the server property may not be created before ready is called), so I'm changing it so that consumers (tests in this case) aren't required to always call ready before they call listen. Please let me know what you think.
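Roughly what that change amounts to (a sketch; only `await this.ready()` and `const { server } = this` are from the diff above, the rest of the method body is assumed): listen() now waits for ready() itself instead of relying on callers to have done it.

  // Hypothetical shape of the listen() method after the change.
  async listen() {
    // Previously callers had to await kbnServer.ready() themselves before listen(),
    // otherwise `this.server` might not exist yet.
    await this.ready();

    const { server } = this;
    await server.start();
    return server;
  }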

@@ -25,7 +25,10 @@ import KbnServer from '../../src/server/kbn_server';

 const DEFAULTS_SETTINGS = {
   server: {
-    autoListen: false,
+    autoListen: true,
@azasypkin (Member, Author):

note: I have mixed feelings about this, but I can't think of a better way :/ We have to start core's root to get access to the underlying server, and it's only started when the legacy Hapi server calls listen.

Even after #19994, we won't register routes or create the Hapi server if autoListen is false, and we don't want to expose Hapi (and hence inject) either. So we need to think of a better alternative to testKbnServer at some point.

@spalger (Contributor) commented on Aug 11, 2018:

Hmm, is there value in splitting up the logic in the new platform like it's split in the legacy platform, so the server is set up (routes registered, etc.) without listening, and then calling listen() on the legacy platform would somehow tell the new platform to start listening too?

The primary reason I ask is that, given the new platform's dependency on listen for setup, it seems there is little point in having a non-listening KbnServer anymore... If we can't or don't want to support a non-listening but "ready" version of the new platform, maybe we should just move the listen() logic into ready() and get rid of the autoListen config option...

@azasypkin (Member, Author):

> Hmm, is there value in splitting up the logic in the new platform like it's split in the legacy platform, so the server is set up (routes registered, etc.) without listening, and then calling listen() on the legacy platform would somehow tell the new platform to start listening too?

Maybe. We can discuss that when we get closer to plugin support and start thinking about where in the server lifecycle we do route registration. IIRC, one of the reasons for putting this kind of stuff in start/stop was the idea that someday we may want to stop the service/server, reload the config (not only logging, but also config that can affect routes, HTTP config, etc.) and start the service/server again.

> The primary reason I ask is that, given the new platform's dependency on listen for setup, it seems there is little point in having a non-listening KbnServer anymore...

I think it's still needed in dev mode when kbnServer is created by the optimizer worker process, or what do you mean?

@spalger (Contributor):

> I think it's still needed in dev mode when kbnServer is created by the optimizer worker process, or what do you mean?

But the optimizer starts a listening version of the Kibana server, which is proxied to by the actual server, so it can serve the optimizer output.

What I mean is that even if the KbnServer consumer doesn't need HTTP to be bound for its purposes, the KbnServer must bind to a port to function properly, so I feel like we should get rid of the listen() method and just make it a part of KbnServer initialization, part of the promise that is returned by ready(), and remove the server.autoListen config option.

@spalger (Contributor) commented on Aug 13, 2018:

Whoops, I'm totally wrong... see

'--server.autoListen=false',

Please ignore my request to remove server.autoListen :)

We pass --server.autoListen=false to the optimizer, which writes the bundle assets to the bundle directory, and the actual server just pauses requests while the optimizer is running, then serves the requests from the bundle directory... This means the server is semi-functional; it just can't handle routing via the kbnServer.inject() method, which won't be a problem as we move further toward the new platform.

@azasypkin (Member, Author):

> @azasypkin @epixa @rashidkpc tested this PR with Canvas master and it worked like a charm. I uploaded an 11 MB workpad without any errors or issues.

Awesome, thanks for verifying this!

> @azasypkin thank you so much for jumping on this, it really means a lot when something gets fixed this fast.

No problem! The plan is to eventually cover such core functionality with various tests so that we're not hit by something like this again :/

@elasticmachine (Contributor):

💔 Build Failed

@azasypkin (Member, Author) commented on Aug 9, 2018

The packages/kbn-plugin-helpers tests failed; it doesn't look like something that could be related to this change. Retrying.

@azasypkin (Member, Author):

retest

@elasticmachine (Contributor):

💚 Build Succeeded

@azasypkin requested review from spalger and epixa on August 9, 2018 at 15:37
@azasypkin added the review and v6.5.0 labels and removed the WIP (work in progress) label on Aug 9, 2018
@@ -77,6 +77,9 @@ export class HttpServer {
         output: 'stream',
         parse: false,
         timeout: false,
+        // Having such a large value (1 GB) here will allow legacy routes to override
+        // maximum allowed payload size set in the core http server if needed.
+        maxBytes: 1024 * 1024 * 1024,
@spalger (Contributor):

Is Infinity out of the question here? I'm just not sure I see the point in putting a hard 1 GB limit on something that should be customizable by routes and even users at some level.

@azasypkin (Member, Author):

Infinity won't work (including positive Infinity), as the value is validated by Joi.number().integer().positive().default(1024 * 1024). I can use Number.MAX_SAFE_INTEGER if you prefer; I just used what we use for the base path proxy.
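A quick standalone check of why Infinity fails here (a sketch against the Joi schema quoted above; not code from the PR):

const Joi = require('joi');

const maxBytesSchema = Joi.number().integer().positive().default(1024 * 1024);

// Infinity is a number but not an integer, so validation fails.
console.log(maxBytesSchema.validate(Infinity).error);                 // ValidationError: must be an integer
// Number.MAX_SAFE_INTEGER is a valid positive integer, so it passes.
console.log(maxBytesSchema.validate(Number.MAX_SAFE_INTEGER).error);  // no error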

@spalger (Contributor) commented on Aug 11, 2018:

I guess that'd be fine too. Another alternative might be to use the maximum limit across all routes (determined at runtime somehow, so plugins can be included), but I'm guessing we don't have a good way to determine that.

@azasypkin (Member, Author):

> Another alternative might be to use the maximum limit across all routes (determined at runtime somehow, so plugins can be included), but I'm guessing we don't have a good way to determine that.

Yeah, I can't think of any easy way to determine from the core route which route will be used by the "legacy" Kibana. Let's try Number.MAX_SAFE_INTEGER and see how it goes.

@@ -28,11 +27,7 @@ const version = require(src('../package.json')).version;

 describe('version_check request filter', function () {
   function makeRequest(kbnServer, opts) {
Contributor:

Mind making this an async function and awaiting the return value of makeRequest() (same for the other instances)? I find things less ambiguous when promise-returning functions are clearly indicated as such by using async/await, even when it's not technically necessary.

@azasypkin (Member, Author) commented on Aug 11, 2018:

Sure, I don't have a strong opinion on that; I'll make this async with throw err; in the catch.
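For reference, a sketch of what the awaited version could look like (the inject() chain is the one shown elsewhere in this review; the try/catch shape is an assumption based on the reply above):

async function makeRequest(kbnServer, opts) {
  try {
    // Awaiting makes the promise-returning nature of the helper explicit.
    return await kbnServer.newPlatform.proxyListener.root.server.http.service.httpServer.server.inject(
      opts
    );
  } catch (err) {
    // Rethrow so the calling test still fails loudly.
    throw err;
  }
}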

// Since all requests to Kibana hit core http server first and only after that
// are proxied to the "legacy" Kibana we should inject requests through the top
// level Hapi server used by the core.
return kbnServer.newPlatform.proxyListener.root.server.http.service.httpServer.server.inject(
Contributor:

😨 Maybe kbnServer.newPlatform.proxyListener could expose a method that is a little less... hacky?

@azasypkin (Member, Author) commented on Aug 11, 2018:

I can, but then we'll have hacky stuff in non-test code, since most of these nested properties are private in TS, as they should be. Aaaand I'll get rid of kbnServer.newPlatform entirely for 6.5 and master (still need a couple of PRs). Or do you have some specific suggestion in mind already?

Contributor:

Okay, it sounds like you have a plan; I'll avert my eyes and let you do what needs to be done.

@azasypkin (Member, Author):

Yep, I have a plan (still in flux though); I'll see what I can come up with in the scope of #19994.

 beforeEach(async () => {
-  const kbnServer = createServer();
+  kbnServer = createServer();
Contributor:

Is this the only place where we were using createServer() without calling kbnServer.close()? I assume that's only really necessary because of the change to autoListen: true by default.

@azasypkin (Member, Author) commented on Aug 13, 2018:

Yeah, that's the only place I found.
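A sketch of the test setup pattern being discussed (assumed shape, not the exact test file): keep the kbnServer reference outside beforeEach so afterEach can close the listening server.

let kbnServer;

beforeEach(async () => {
  kbnServer = createServer();
  await kbnServer.ready();
});

afterEach(async () => {
  // With autoListen: true by default, the server binds a real port,
  // so it has to be shut down after each test.
  await kbnServer.close();
});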

… async/await for test functions returning promises.
@azasypkin (Member, Author):

@spalger I've pushed the changes we discussed; it should be ready for another pass.

@elasticmachine (Contributor):

💚 Build Succeeded

@spalger (Contributor) left a review comment:

LGTM

@elasticmachine (Contributor):

💚 Build Succeeded

@azasypkin merged commit e26cfc6 into elastic:master on Aug 14, 2018
@azasypkin deleted the issue-21737-max-payload branch on August 14, 2018 at 05:37
azasypkin added a commit to azasypkin/kibana that referenced this pull request Aug 14, 2018
azasypkin added a commit that referenced this pull request Aug 14, 2018
…legacy Kibana. (#21952)

* Do not limit payload size for requests that are proxied to the legacy Kibana. (#21750)
* Adapt basepath tests to the 6.x behaviour.
azasypkin added a commit that referenced this pull request Aug 14, 2018
…legacy Kibana. (#21951)

* Do not limit payload size for requests that are proxied to the legacy Kibana. (#21750)

* Adapt basepath tests to the 6.x behaviour.
@azasypkin (Member, Author):

6.4: c50e7bd
6.x/6.5: f7462fc

cjcenizal pushed a commit to cjcenizal/kibana that referenced this pull request Aug 21, 2018
cjcenizal pushed a commit to cjcenizal/kibana that referenced this pull request Aug 23, 2018
Labels: backported, Feature:New Platform, review, Team:Core, v6.4.0, v6.5.0

Successfully merging this pull request may close these issues.

New platform broke setting maxBytes on routes