Elasticsearch version too old for kibana? #56

Closed
cboettig opened this issue Feb 13, 2015 · 22 comments

@cboettig

Thanks for providing a great dockerized version of these services, it's been super helpful.

Elasticsearch and logstash are working just fine, but Kibana does not seem to be working for me. When I go to http://:9292 I get the error

Upgrade required. Your version of elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or higher

This is particularly surprising given that your documentation says it provides Kibana 1.1.1. I'm not quite sure what is happening; the error seems a bit misleading and is probably just due to Kibana not finding Elasticsearch at all, since it also shows a second error message:

could not reach http://127.0.0.1:9200/_nodes. If you are using a proxy, ensure it is configured correctly

Should Kibana be using the external server IP here instead? How would I go about configuring that?

(My apologies if this is all really elementary stuff; everything else worked smoothly just by following the directions in the README.) Thanks again for sharing this excellent resource!

@cboettig
Author

Okay, it appears this was just caused by me not setting ES_HOST appropriately. My apologies for the trouble, and thanks again.

@pblittle
Owner

@cboettig, good deal. I was about to take a look at this issue. Please let me know if anything else comes up.

@cboettig
Author

@pblittle Thanks.

I was able to work around this by doing a docker exec into the container and manually setting the server in /opt/logstash/vendor/kibana/config.js, but I wasn't able to get it to set itself correctly automatically, even when setting host explicitly in the config: https://github.com/ropensci/fishbaseapi/blob/86d5f3111ee42f3d2f808546698d39c5d7265459/logstash.conf.

Leaving the conf with host => "ES_HOST" meant the Elasticsearch call worked fine, but Kibana still needed the manual tweak. Did I miss something obvious?
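
For reference, the manual workaround looks roughly like this (the hostname below is a placeholder for my actual server, and the sed pattern is just a sketch):

# Hypothetical sketch of the manual fix: point Kibana's config.js at the
# public Elasticsearch address from inside the running container.
docker exec logstash \
  sed -i 's|elasticsearch: ".*"|elasticsearch: "http://my-es-host.example.org:9200"|' \
  /opt/logstash/vendor/kibana/config.js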

@pblittle
Owner

@cboettig are we good now?

@pblittle pblittle self-assigned this Feb 17, 2015
@cboettig
Author

I'm still stuck setting the Kibana host manually with a docker exec. It would be great if that could be done with an environment variable at runtime instead...

@pblittle
Owner

@cboettig did you try setting the ES_HOST [1] env var while keeping the ES_HOST placeholder in your config file? That should write your Elasticsearch host into both logstash.conf [2] and config.js [3][4].

If that doesn't work, there is a bug. You may need to set LOGSTASH_TRACE [5] to help debug.

[1] https://github.com/pblittle/docker-logstash/blob/master/1.4/base/elasticsearch.sh#L11-L16
[2] https://github.com/pblittle/docker-logstash/blob/master/1.4/base/bin/boot#L53-L70
[3] https://github.com/pblittle/docker-logstash/blob/master/1.4/base/kibana.sh#L11-L15
[4] https://github.com/pblittle/docker-logstash/blob/master/1.4/base/kibana.sh#L29-L38
[5] https://github.com/pblittle/docker-logstash/blob/master/1.4/base/bin/boot#L7
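
Roughly speaking, the scripts linked above just substitute the ES_HOST placeholder into those two files at container start. A minimal sketch (not the exact code; see the links for the real details):

# Sketch only -- the real substitution lives in the boot scripts linked above.
# ES_HOST is expected to be a plain hostname such as es.example.org.
sed -i "s/ES_HOST/${ES_HOST}/g" /opt/logstash/conf.d/logstash.conf
sed -i "s/ES_HOST/${ES_HOST}/g" /opt/logstash/vendor/kibana/config.js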

@cboettig
Author

Ah, I hadn't tried running with -e ES_HOST before; that should have been obvious. For some reason, though, that seems to cause my container to simply crash, e.g.:

docker run --name logstash -d -v /root  -p 9292:9292  -p 9200:9200 -e ES_HOST=http://server.carlboettiger.info -e LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf pblittle/docker-logstash

and then the logs show:

converted 'https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf' (ANSI_X3.4-1968) -> 'https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf' (UTF-8)
--2015-02-17 19:09:39--  https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.79.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.79.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 270 [text/plain]
Saving to: '/opt/logstash/conf.d/logstash.conf'

     0K                                                       100% 6.85M=0s

2015-02-17 19:09:39 (6.85 MB/s) - '/opt/logstash/conf.d/logstash.conf' saved [270/270]

sed: -e expression #3, char 17: unknown option to `s'

@pblittle
Owner

@cboettig I see two small issues, one in your run command and one in your config:

In your run command, ES_HOST should be server.carlboettiger.info. You shouldn't include the protocol, http://; the full address will be built from the host, port, and schema attributes.

In your logstash.conf, protocol should be set to a real protocol rather than ES_PROTOCOL. The protocol attribute doesn't use interpolation right now. In hindsight, maybe it should, defaulting to http.
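
So something like this (untested, adapted from your command above) should get past the sed failure, which most likely comes from the slashes in http:// colliding with the s/.../.../ delimiters in the substitution step:

# Same command as before, with the protocol dropped from ES_HOST.
docker run --name logstash -d -v /root \
  -p 9292:9292 -p 9200:9200 \
  -e ES_HOST=server.carlboettiger.info \
  -e LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf \
  pblittle/docker-logstash

And in logstash.conf, set protocol => "http" instead of protocol => "ES_PROTOCOL".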

@cboettig
Author

Thanks for the help debugging. That gets me farther, but the container still crashes when I set the ES_HOST env var.

Here's the whole log in case it's helpful:

converted 'https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf' (ANSI_X3.4-1968) -> 'https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf' (UTF-8)
--2015-02-17 22:42:58--  https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.79.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.79.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 263 [text/plain]
Saving to: '/opt/logstash/conf.d/logstash.conf'

     0K                                                       100% 6.31M=0s

2015-02-17 22:42:58 (6.31 MB/s) - '/opt/logstash/conf.d/logstash.conf' saved [263/263]

converted 'https://gist.githubusercontent.com/pblittle/8994708/raw/insecure-logstash-forwarder.key' (ANSI_X3.4-1968) -> 'https://gist.githubusercontent.com/pblittle/8994708/raw/insecure-logstash-forwarder.key' (UTF-8)
--2015-02-17 22:42:58--  https://gist.githubusercontent.com/pblittle/8994708/raw/insecure-logstash-forwarder.key
Resolving gist.githubusercontent.com (gist.githubusercontent.com)... 199.27.79.133
Connecting to gist.githubusercontent.com (gist.githubusercontent.com)|199.27.79.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1674 (1.6K) [text/plain]
Saving to: '/opt/ssl/logstash-forwarder.key'

     0K .                                                     100% 75.0M=0s

2015-02-17 22:42:59 (75.0 MB/s) - '/opt/ssl/logstash-forwarder.key' saved [1674/1674]

converted 'https://gist.githubusercontent.com/pblittle/8994726/raw/insecure-logstash-forwarder.crt' (ANSI_X3.4-1968) -> 'https://gist.githubusercontent.com/pblittle/8994726/raw/insecure-logstash-forwarder.crt' (UTF-8)
--2015-02-17 22:42:59--  https://gist.githubusercontent.com/pblittle/8994726/raw/insecure-logstash-forwarder.crt
Resolving gist.githubusercontent.com (gist.githubusercontent.com)... 199.27.79.133
Connecting to gist.githubusercontent.com (gist.githubusercontent.com)|199.27.79.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1345 (1.3K) [text/plain]
Saving to: '/opt/ssl/logstash-forwarder.crt'

     0K .                                                     100% 16.3M=0s

2015-02-17 22:42:59 (16.3 MB/s) - '/opt/ssl/logstash-forwarder.crt' saved [1345/1345]

Sending logstash logs to /var/log/logstash/logstash.log.
ThreadError: current thread not owner
  mon_check_owner at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:246
         mon_exit at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:195
          require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:143
          require at /opt/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
           (root) at /opt/logstash/lib/logstash/kibana.rb:6
          require at org/jruby/RubyKernel.java:1085
          require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
          require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
              run at /opt/logstash/lib/logstash/runner.rb:123
             call at org/jruby/RubyProc.java:271
              run at /opt/logstash/lib/logstash/runner.rb:175
             main at /opt/logstash/lib/logstash/runner.rb:92
           (root) at /opt/logstash/lib/logstash/runner.rb:215

@pblittle
Owner

@cboettig the only thing that strikes me as odd is that you are mounting /root and writing logs to it. Your config works fine on my end when I don't mount /root. Maybe you should mount and write to /var/log/ instead?
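
For example (untested), keeping your other flags but swapping the volume:

# Mount /var/log as the volume instead of /root.
docker run --name logstash -d -v /var/log \
  -p 9292:9292 -p 9200:9200 \
  -e ES_HOST=server.carlboettiger.info \
  -e LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/ropensci/fishbaseapi/master/logstash.conf \
  pblittle/docker-logstash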

@pblittle
Owner

Closing due to inactivity. @cboettig please reopen if I can do anything to help.

@cboettig
Author

Thanks, this does seem to work with a more sensible mount point, though it's not obvious to me why. Thanks again for the help.

@pblittle
Owner

@cboettig that's great. Let me know if anything else comes up.

@cboettig
Author

Thanks! The only thing I'm struggling with now is probably just my ignorance, but I don't really see why Kibana has to connect to Elasticsearch over a publicly exposed network connection. E.g. I'd like to just set ES_HOST=localhost and expose only port 9292 when running the container, so that my Elasticsearch port isn't exposed for the world to attack. It seems I can work around this by adding an authentication layer in front of Elasticsearch and then teaching Kibana the user/password needed to authenticate, but that seems rather convoluted. With other services, like MySQL, I would just connect over the internal network without ever exposing the service on a public-facing port. Am I missing something here?

@pblittle
Owner

@cboettig if I'm following you correctly, I may have a branch ready to merge in that fixes that problem.

In the new branch, there are separate Elasticsearch service and proxy settings. Previously Kibana and Elasticsearch both used the same ES_HOST and ES_PORT env vars.

So, to set the Kibana ES host, you would set ES_PROXY_HOST to localhost or use the default window.location.hostname.

https://github.com/pblittle/docker-logstash/blob/hotfix/elasticsearch-port-fix/1.4/base/kibana.sh#L11
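
For example (untested; the image tag is whatever you name your local build), something along the lines of:

# Expose only Kibana; point it at Elasticsearch via the proxy host setting.
docker run -d -p 9292:9292 -e ES_PROXY_HOST=localhost <your-image-tag>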

Does this help?

@cboettig
Author

Hmm, a proxy sounds promising. I just tried cloning the hotfix/elasticsearch-port-fix branch, building the Dockerfile in 1.4/base, and running it with the default configuration:

docker run -d -p 9292:9292 logstash

The container seems to build fine and runs with no error log, but I don't see anything at the Kibana URL now (9292). I must still not be understanding something about how this is supposed to work.


@pblittle
Owner

@cboettig, good deal. Thanks for testing. You aren't seeing any logs because the default config isn't pulling in old syslog messages now. It was a pain and slowed down building the containers.

You can add a record using curl to make sure everything is wired up correctly. Something like:

curl -XPUT '<your_elasticsearch_ip>:9200/twitter/user/emmet' -d '{ "name" : "Emmet" }'

The entry should show up in Kibana if everything worked correctly.

@cboettig
Author

Sorry to be dense, but what is <your_elasticsearch_ip>? I thought the whole idea here was that the Elasticsearch service was not exposed outside of the container. In my run command I'm only exposing the Kibana port, -p 9292:9292.

@pblittle
Owner

@cboettig I was just giving an example of how to easily insert data into Elasticsearch. You can run the curl command from inside the container and get the same result. Just change <your_elasticsearch_ip> to 127.0.0.1.

curl -XPUT '127.0.0.1:9200/twitter/user/emmet' -d '{ "name" : "Emmet" }'
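
You can then fetch the document back to confirm it was indexed, e.g.:

curl '127.0.0.1:9200/twitter/user/emmet?pretty'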

@cboettig
Author

@pblittle Thanks for clarifying; yes, I figured that is what you meant.

Having built the new image, I first tried running docker run -d -p 9292:9292 logstash, then did a docker exec into the logstash container. I'm able to use your example to add a record and query it (e.g. curl -L localhost:9200/_search), and from both inside and outside the container I can get Kibana to respond on the appropriate port as well.

However, when visiting the Kibana page, I'm still getting the same error as before at the top of this issue: Kibana says that it cannot connect to ElasticSearch:

Error Could not reach http://server.carlboettiger.info:9200/_nodes. If you are using a proxy, ensure it is configured correctly

I see that Kibana is using window.location.hostname to get the FQDN of the server, and of course it cannot reach it, since I did not expose port 9200 when running the docker command.

So, following your other comment, I attempt to run the container while setting the ES_PROXY_HOST env var to localhost: docker run -d -p 9292:9292 -e ES_PROXY_HOST=localhost logstash

I docker exec in as before, and I can curl -L localhost:9200 successfully, add an entry and view it as before. However, it seems that now Kibana is not running. Even from inside the container I try:

 curl -L localhost:9292
curl: (7) Failed to connect to localhost port 9292: Connection refused

Likewise outside of the container; the page doesn't load. Where have I gone wrong?

Thanks again for the help and sorry to be such a nuisance.

@pblittle
Owner

@cboettig do you mind showing me your /opt/logstash/conf.d/logstash.conf and /opt/logstash/vendor/kibana/config.js config files?

@cboettig
Author

Sure, though they are just the defaults from running the container as I described above:

logstash.conf

root@2f8dd465901b:/# cat /opt/logstash/conf.d/logstash.conf 

input {
  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog"
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }

  file {
    type => "logstash"
    path => [ "/var/log/logstash/logstash.log" ]
    start_position => "beginning"
  }
}

filter {
  if [type] == "docker" {
    json {
      source => "message"
    }
    mutate {
      rename => [ "log", "message" ]
    }
    date {
      match => [ "time", "ISO8601" ]
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }

  elasticsearch {
    embedded => true
    host => "127.0.0.1"
    port => "9200"
    protocol => "http"
  }
}

And kibana config.js:

root@2f8dd465901b:/# cat /opt/logstash/vendor/kibana/config.js 
/** @scratch /configuration/config.js/1
 *
 * == Configuration
 * config.js is where you will find the core Kibana configuration. This file contains parameter that
 * must be set before kibana is run for the first time.
 */
define(['settings'],
function (Settings) {


  /** @scratch /configuration/config.js/2
   *
   * === Parameters
   */
  return new Settings({

    /** @scratch /configuration/config.js/5
     *
     * ==== elasticsearch
     *
     * The URL to your elasticsearch server. You almost certainly don't
     * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
     * the same host. By default this will attempt to reach ES at the same host you have
     * kibana installed on. You probably want to set it to the FQDN of your
     * elasticsearch host
     *
     * Note: this can also be an object if you want to pass options to the http client. For example:
     *
     *  +elasticsearch: {server: "http://localhost:9200", withCredentials: true}+
     *
     */
    elasticsearch: "http://localhost:9200",

    /** @scratch /configuration/config.js/5
     *
     * ==== default_route
     *
     * This is the default landing page when you don't specify a dashboard to load. You can specify
     * files, scripts or saved dashboards here. For example, if you had saved a dashboard called
     * `WebLogs' to elasticsearch you might use:
     *
     * default_route: '/dashboard/elasticsearch/WebLogs',
     */
    default_route     : '/dashboard/file/default.json',

    /** @scratch /configuration/config.js/5
     *
     * ==== kibana-int
     *
     * The default ES index to use for storing Kibana specific object
     * such as stored dashboards
     */
    kibana_index: "kibana-int",

    /** @scratch /configuration/config.js/5
     *
     * ==== panel_name
     *
     * An array of panel modules available. Panels will only be loaded when they are defined in the
     * dashboard, but this list is used in the "add panel" interface.
     */
    panel_names: [
      'histogram',
      'map',
      'goal',
      'table',
      'filtering',
      'timepicker',
      'text',
      'hits',
      'column',
      'trends',
      'bettermap',
      'query',
      'terms',
      'stats',
      'sparklines'
    ]
  });
});
