
Setting server.basePath causes an optimize (transpilation) cycle #10724

Closed
rokka-n opened this issue Mar 7, 2017 · 27 comments
Labels: bug (Fixes for quality problems that affect the customer experience), high hanging fruit, Team:Operations (Team label for Operations Team)

Comments

rokka-n commented Mar 7, 2017

Edit by @epixa: The root cause of this issue is that setting server.basePath results in an optimizer run, which it should not. See this comment.


Kibana version:
5.2.2 in official docker image

Elasticsearch version:
5.2.2

Server OS version:
linux

Description of the problem including expected versus actual behavior:
When SERVER_BASEPATH is set in the environment variables, the container takes 10x longer to start and uses much more CPU (which is restricted via cgroup). Removing SERVER_BASEPATH makes the container start almost instantly.

Steps to reproduce:
Run Kibana in Nomad with the job spec below.

job "kibana" {
  region = "us"
  datacenters = ["dc1"]
  type = "service"
  
  group "es" {
    count = 1
    task "container" {
      
      driver = "docker"
      config {

        dns_servers = ["${NOMAD_IP_http}"]
        image = "docker.elastic.co/kibana/kibana:5.2.2"

        port_map {
          http = 5601
        }
      }

      env {
        ELASTICSEARCH_URL = "http://elasticsearch.service.consul"
        SERVER_BASEPATH = "/kibana"
        XPACK_MONITORING_ENABLED = "false"
        XPACK_SECURITY_ENABLED = "false"
        XPACK_MONITORING_KIBANA_COLLECTION_ENABLED = "false"
      }
      resources {
        cpu = 2500 # has to be much larger when SERVER_BASEPATH is added
        memory = 1900
        network {
          port "http" {
          }
          mbits = 50
        }
      }

      service {
        name = "kibana"
        tags = [
          "traefik.tags=access-http",
          "traefik.protocol=http",
          "traefik.frontend.entryPoints=https",
          "traefik.frontend.rule=Host:internal-xxx.com;PathPrefixStrip:/kibana"
        ]
        port = "http"
      }
    }
  }
}
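Assuming the job spec above is saved as kibana.nomad (the filename is illustrative), it can be submitted along these lines:

# Submit the job to the Nomad cluster; "kibana.nomad" is an illustrative filename
nomad run kibana.nomad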

Provide logs and/or server output (if relevant):
This request fails with a 502 in the Chrome debugger.

Request URL:https://internal-xxx.com/kibana/es_admin/.kibana/index-pattern/_search?stored_fields=

The same 502 appears in the Kibana logs:

"referer":"https://internal-xxxx/kibana/app/kibana"},"res":{"statusCode":502,"responseTime":8,"contentLength":9},"message":"POST /es_admin/.kibana/index-pattern/_search?stored_fields= 502 8ms - 9.0B"}

rokka-n commented Mar 7, 2017

The ES URL should include the default port, I guess: ELASTICSEARCH_URL = "http://elasticsearch.service.consul:9200"

It still takes minutes to start.


rokka-n commented Mar 8, 2017

Why does it need 3000 MHz of CPU and 1.5 GB to start??
And then the load drops immediately.



epixa commented Mar 8, 2017

@rokka-n Can you attempt this setup without a docker container? We'd like to pin down whether this is an issue with the docker image or with the core Kibana build.

@jarpy Have you seen anything like this with the kibana-docker images?

epixa added the Team:Operations and feedback_needed labels on Mar 8, 2017

rokka-n commented Mar 9, 2017

I've started to forget how to run things without docker ¯\_(ツ)_/¯


jarpy commented Mar 9, 2017

Setting server.basepath is triggering this:

log   [08:33:08.755] [info][optimize] Optimizing and caching bundles for graph, monitoring, kibana, timelion, login, logout and status_page. This may take a few minutes

And yes. It is, indeed, taking quite a while.

Edit: Almost exactly two minutes on my laptop.


jarpy commented Mar 9, 2017

Presumably, this would only be seen on the first "boot" in a traditional installation, so it wouldn't be a problem long term. However, since each container starts with a pristine filesystem, the optimization step is running on each launch.
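One way to soften this in a container setup might be to persist Kibana's optimize cache across restarts. A minimal sketch, assuming the official image keeps its bundle cache under /usr/share/kibana/optimize (the path is an assumption, not verified against the 5.2.2 image):

# Sketch: keep the optimize cache in a named volume so the bundle rebuild
# triggered by server.basePath does not repeat on every container launch.
# /usr/share/kibana/optimize is an assumed install path.
docker volume create kibana-optimize
docker run \
  -e SERVER_BASEPATH=/kibana \
  -v kibana-optimize:/usr/share/kibana/optimize \
  docker.elastic.co/kibana/kibana:5.2.2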


jarpy commented Mar 9, 2017

Building a custom image that is pre-optimized seems like the way forward, but the existing techniques are a little hacky.

Refer: #6057


epixa commented Mar 16, 2017

Setting server.basepath should definitely not trigger an optimize cycle, and I agree that it's a big problem if it does. @jarpy is that only happening with the docker image, or did you verify that setting server.basepath on one of the download page releases triggered an optimize? If the former, this is a bug in the docker setup; if the latter, this is a bug in Kibana (and one we need to fix ASAP).


jarpy commented Mar 17, 2017

Easy to reproduce outside Docker.

$ ./bin/kibana --server.basePath=/proxy
  log   [03:48:32.155] [info][optimize] Optimizing and caching bundles for kibana, timelion and status_page. This may take a few minutes
  log   [03:49:48.274] [info][optimize] Optimization of bundles for kibana, timelion and status_page complete in 76.11 seconds

It's just exacerbated by Docker because of the stateless filesystem.


epixa commented Mar 18, 2017

@jarpy Thanks for checking. I was traveling at the time, but I really wanted to get an answer to this :)

That is definitely a bug in Kibana. I'm not sure how or why that started happening, but it's a bad one.

epixa added the bug label and removed the feedback_needed label on Mar 18, 2017
epixa changed the title from "Kibana behind proxy (traefik)" to "Setting server.basePath causes an optimize (transpilation) cycle" on Mar 18, 2017

jarpy commented Mar 19, 2017

No problem. I was guilty of speculating without empirical evidence. Thanks for reminding me to science! :)


epixa commented Mar 20, 2017

@spalger Any thoughts on why basePath would be kicking off an optimize run?


jbudz commented Mar 22, 2017

It looks like it's for stylesheets that end up using url() and the file-loader - here, and here


pemontto commented Apr 7, 2017

I can confirm changing the basePath has triggered an optimise cycle since at least 4.4, and from memory much earlier.


spalger commented Apr 7, 2017

Not sure why this is labeled a bug. It was intentional and required if we want webpack to know the URL it should be loading output from.

#10854 might remove this restriction, but so far it has not been proven that webpack behaves properly when it needs to load additional files from its build output at arbitrary URLs.

e.g.:

  • basePath: /logs/k5
  • page: /logs/k5/login
  • how is webpack supposed to know that its build output is at /logs/k5/bundles/?
    • option 1: it knows the basePath is /logs/k5, so it creates relative paths based on that
    • option 2: it knows that the bundle is served from the root, so it creates relative URLs, like ./bundles/chunk2.js

Kibana currently uses Option 1, because with Option 2 the relative URLs would need to differ when the bundle is loaded from /logs/k5/app/kibana (./bundles/chunk2.js needs to become ../bundles/chunk2.js), and I don't think there is any way for the webpack bundle to know that.


rokka-n commented Apr 7, 2017 via email


spalger commented Apr 7, 2017

@rokka-n that is why we are moving in this direction: #7322


VelorumS commented May 2, 2017

A parameter change that needs a rebuild...

Why is server.basePath needed at all? The rebuild makes it useless in exactly the setting where it's needed most.

Maybe someone can make an nginx config (with sub_filter or something) that fixes this problem?


mengzyou commented Jun 15, 2017

Got the same issue in v5.4.1: when server.basePath is set, Kibana cannot start up.

{"type":"ops","@timestamp":"2017-06-15T01:58:38Z","tags":[],"pid":13,"os":{"load":[0.66748046875,0.6220703125,0.4365234375],"mem":{"total":12494004224,"free":446656512},"uptime":6901},"proc":{"uptime":211.723,"mem":{"rss":1207947264,"heapTotal":1129152512,"heapUsed":1003845312,"external":17366038},"delay":0.6792040001600981},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 957.3MB uptime: 0:03:32 load: [0.67 0.62 0.44] delay: 0.679"}
{"type":"log","@timestamp":"2017-06-15T01:58:40Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2017-06-15T01:58:42Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"ops","@timestamp":"2017-06-15T01:58:43Z","tags":[],"pid":13,"os":{"load":[0.61376953125,0.611328125,0.43408203125],"mem":{"total":12494004224,"free":408821760},"uptime":6906},"proc":{"uptime":216.725,"mem":{"rss":1208487936,"heapTotal":1129152512,"heapUsed":1004380744,"external":17368834},"delay":0.6427759993821383},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 957.9MB uptime: 0:03:37 load: [0.61 0.61 0.43] delay: 0.643"}
{"type":"log","@timestamp":"2017-06-15T01:58:45Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2017-06-15T01:58:47Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"ops","@timestamp":"2017-06-15T01:58:48Z","tags":[],"pid":13,"os":{"load":[0.72509765625,0.634765625,0.44287109375],"mem":{"total":12494004224,"free":618180608},"uptime":6911},"proc":{"uptime":221.727,"mem":{"rss":979275776,"heapTotal":900734976,"heapUsed":257971328,"external":17358052},"delay":0.4118200000375509},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 246.0MB uptime: 0:03:42 load: [0.73 0.63 0.44] delay: 0.412"}
{"type":"log","@timestamp":"2017-06-15T01:58:50Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2017-06-15T01:58:52Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"ops","@timestamp":"2017-06-15T01:58:53Z","tags":[],"pid":13,"os":{"load":[0.6669921875,0.6240234375,0.4404296875],"mem":{"total":12494004224,"free":618397696},"uptime":6916},"proc":{"uptime":226.728,"mem":{"rss":980094976,"heapTotal":901783552,"heapUsed":258632624,"external":17360848},"delay":0.5480329999700189},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 246.7MB uptime: 0:03:47 load: [0.67 0.62 0.44] delay: 0.548"}
{"type":"log","@timestamp":"2017-06-15T01:58:55Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2017-06-15T01:58:57Z","tags":["plugin","debug"],"pid":13,"message":"Checking Elasticsearch version"}
{"type":"ops","@timestamp":"2017-06-15T01:58:58Z","tags":[],"pid":13,"os":{"load":[0.61328125,0.61328125,0.43798828125],"mem":{"total":12494004224,"free":651354112},"uptime":6921},"proc":{"uptime":231.729,"mem":{"rss":979582976,"heapTotal":901783552,"heapUsed":258238464,"external":17357759},"delay":0.6117400005459785},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 246.3MB uptime: 0:03:52 load: [0.61 0.61 0.44] delay: 0.612"}


rocketraman commented Jun 28, 2017

Using the 5.4.3 docker container and seeing the same issue. For some reason, running it with XPACK_SECURITY_ENABLED=false also triggers the optimize step at startup.

docker run -e XPACK_SECURITY_ENABLED=false docker.elastic.co/kibana/kibana:5.4.3
{"type":"log","@timestamp":"2017-06-28T20:34:34Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, monitoring, ml, kibana, timelion and status_page. This may take a few minutes"} 

@redchuck

This is a pretty significant breaking change. Suddenly requiring Kibana to have 5x the previous memory available to start/optimize, without printing a reason, has been a huge headache when trying to switch out 2 GB containers for the new version. Please at least print the reason for the optimization run.


rbosneag commented Aug 1, 2017

I understand the reasoning behind the optimization Kibana needs at the very beginning, but this becomes annoying during the initial setup of the docker-elk stack. I keep changing the docker-compose.yml file to fine-tune my setup, and each time I change something in the elasticsearch service config I have to run docker-compose up, which triggers this optimization and crushes my dev server so badly that I can't even ssh into it.

Last optimization result is:
Optimization of bundles for kibana, stateSessionStorageRedirect, timelion and status_page complete in 654.36 seconds

Please note the 654.36 seconds I had to wait for the server to come alive just to check if my last setting was right.

The Elasticsearch data is persisted in a volume mapped to the host disk, so it's always available to Kibana.

Is there a way to extract the bundle or the optimization result of Kibana out of the docker container into a volume on the host disk, so that this optimization runs only once?


jbudz commented Aug 1, 2017

Yes, we ship builds pre-optimized in a similar way. If you need to change settings or install plugins that cause a bundling cycle, you can do so on a different/host machine, and after the optimize step zip the result back up if necessary and send it over.
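A rough sketch of that workflow, borrowing the grep trick from the Dockerfile posted later in this thread (the optimize/ directory name is an assumption about the 5.x install layout):

# Sketch, run on a build/host machine from the Kibana install directory (paths assumed)
# 1. apply the settings that trigger a bundle rebuild and let the optimizer finish
./bin/kibana --server.basePath=/kibana 2>&1 | grep -m 1 "Optimization of .* complete in .* seconds"
# 2. archive the rebuilt bundles
tar czf kibana-optimize.tar.gz optimize/
# 3. unpack the archive over optimize/ on the target machine or bake it into the image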


rbosneag commented Aug 1, 2017

I'd gladly do that, but right now I am trying to figure out the best settings for my project, and the Kibana optimization slows this process down terribly.

I don't have anything to bundle yet, just a docker-compose file that I keep tweaking, together with the config ymls of ES and Kibana (which are mapped to the host disk, so they live outside the docker images). I can't zip the container since it keeps rebuilding the bundle inside; I'd love to take this out of the container, but it's not clear which folders should be extracted. I also don't understand why Kibana thinks it's starting from scratch when I only tweak the elasticsearch service.

Any hints on this kind of setup, i.e. extracting the folder with the optimization result out of the Kibana docker container? That way we'd be able to optimize once and then deploy the containers together with the associated optimization folders.

Thanks for the quick answer, by the way.


rocketraman commented Aug 1, 2017

@rbosneag You can build your own image using the workaround here: #6057 (comment)

For example, my Dockerfile looks like this; it does the optimization with XPACK_SECURITY_ENABLED=false, since setting that triggers an optimization run:

FROM docker.elastic.co/kibana/kibana:5.4.3

# Workaround https://github.com/elastic/kibana/issues/10724 and https://github.com/elastic/kibana/issues/6057
RUN XPACK_SECURITY_ENABLED=false /usr/local/bin/kibana-docker 2>&1 | grep -m 1 "Optimization of .* complete in .* seconds"
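Building and then running that image should skip the optimize pass at container start, as long as the runtime settings match those used at build time (the image tag below is illustrative):

docker build -t my-kibana:5.4.3-preoptimized .
docker run -e XPACK_SECURITY_ENABLED=false my-kibana:5.4.3-preoptimized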

@jstangroome

Just encountered this issue with 6.0.0 Beta 2; it would be great to have a solution before 6.0 is finalised.

docker run -e SERVER_BASEPATH=/account7 docker.elastic.co/kibana/kibana:6.0.0-beta2


gavenkoa commented Sep 28, 2017

So far I have started Kibana with:

NODE_OPTIONS="--max-old-space-size=150"

But after changing server.basePath it started "optimizing", which ended with:

C:\MyProjects\elastic\kibana-5.6.1-windows-x86\bin>kibana.bat
  log   [10:53:59.578] [info][optimize] Optimizing and caching bundles for kibana, stateSessionStorageRedirect, timelion and status_page. This may take a few minutes

<--- Last few GCs --->

  156924 ms: Mark-sweep 220.6 (123.9) -> 220.6 (123.9) MB, 243.4 / 0.0 ms (+ 0.4 ms in 1 steps since start of marking, biggest step 0.4 ms) [allocation failure]
 [GC in old space requested].
  157172 ms: Mark-sweep 220.6 (123.9) -> 227.0 (120.9) MB, 247.8 / 0.0 ms (+ 0.3 ms in 1 steps since start of marking, biggest step 0.3 ms) [last resort gc].
  157408 ms: Mark-sweep 227.0 (120.9) -> 233.5 (120.9) MB, 236.1 / 0.0 ms [last resort gc].


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 2532B18D <JS Object>
    2: eval [C:\MyProjects\elastic\kibana-5.6.1-windows-x86\node_modules\less\lib\less\tree\ruleset.js:~33] [pc=06E5A73B] (this=3B69F6B5 <a Ruleset with map 112
E68E1>,context=36372BE9 <JS Object>)
    3: /* anonymous */ [C:\MyProjects\elastic\kibana-5.6.1-windows-x86\node_modules\less\lib\less\transform-tree.js:67] [pc=06E5827D] (this=253D5CCD <JS Global
Object>,root=3B69F6B5 <a Ruleset with map 112E...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

To resolve the issue I reinstalled Kibana and put off the base path experiments for a better time ((
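For reference, it is the optimize pass that blows past the 150 MB old-space cap set above; raising the limit just for that run usually lets it finish. A sketch (the 2048 MB figure is a guess, not a measured requirement; on Windows, set NODE_OPTIONS before invoking kibana.bat):

# Sketch: give Node a larger heap for the one-off optimize run, then revert
NODE_OPTIONS="--max-old-space-size=2048" ./bin/kibana --server.basePath=/proxy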
