
support underscores in hostnames #400

Open
cooniur opened this issue Mar 25, 2016 · 49 comments

@cooniur

cooniur commented Mar 25, 2016

Elasticsearch version: 1.5.0
Logstash version: 2.2.0
OS: Ubuntu_14.04.3_LTS_HVM (uname -a = Linux my-test-logstash-1-us-west-2 3.13.0-63-generic #103-Ubuntu SMP Fri Aug 14 21:42:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux)
Cloud platform: AWS

Details

For the first two configs below, Logstash gives a different error for each. The third config works, but relying on IP addresses is not practical in a cloud environment like AWS.

Config 1 (failed)

output {
  elasticsearch {
      hosts => ["http://es_myapp.us-west-2.test.mydomain.net:9200"]
      index => "logstash-%{+YYYYMMdd}"
      document_type => "%{[@metadata][type]}"
  }
}

Error message:

Attempted to send a bulk request to Elasticsearch configured at '["http://http://es_myapp.us-west-2.test.mydomain.net:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["http://http://es_myapp.us-west-2.test.mydomain.net:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, ...

Note the extra "http://" in the :hosts=> field of the error message. I believe this is definitely a bug.
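The doubled scheme suggests the host string is being prefixed with "http://" without checking whether it already carries one. A minimal Ruby sketch of that failure mode (the helper names here are hypothetical, not the plugin's actual code):

```ruby
# Hypothetical sketch: prepending the scheme unconditionally doubles it
# when the configured host already carries one.
def naive_normalize(host)
  "http://#{host}/"
end

# A simple guard avoids the duplication.
def guarded_normalize(host)
  host = "http://#{host}" unless host.start_with?("http://", "https://")
  "#{host.chomp('/')}/"
end

naive_normalize("http://es_myapp.us-west-2.test.mydomain.net:9200")
# => "http://http://es_myapp.us-west-2.test.mydomain.net:9200/"
guarded_normalize("http://es_myapp.us-west-2.test.mydomain.net:9200")
# => "http://es_myapp.us-west-2.test.mydomain.net:9200/"
```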

Config 2 (failed)

output {
  elasticsearch {
      hosts => ["es_myapp.us-west-2.test.mydomain.net:9200"]
      index => "logstash-%{+YYYYMMdd}"
      document_type => "%{[@metadata][type]}"
  }
}

Error message:

The error reported is:
  the scheme http does not accept registry part: es_myapp.us-west-2.test.mydomain.net:9200 (or bad hostname?)
/apps/logstash-2.2.0/vendor/jruby/lib/ruby/1.9/uri/generic.rb:214:in `initialize'
/apps/logstash-2.2.0/vendor/jruby/lib/ruby/1.9/uri/http.rb:84:in `initialize'
/apps/logstash-2.2.0/vendor/jruby/lib/ruby/1.9/uri/common.rb:214:in `parse'
/apps/logstash-2.2.0/vendor/jruby/lib/ruby/1.9/uri/common.rb:747:in `parse'
/apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:155:in `__extract_hosts'
org/jruby/RubyArray.java:2414:in `map'
/apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:151:in `__extract_hosts'
/apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:115:in `initialize'
/apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport.rb:26:in `new'
/apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:136:in `build_client'

Config 3 (passed)

output {
  elasticsearch {
      hosts => ["100.10.10.10:9200"]               # 100.10.10.10 is the IP address of es_myapp.us-west-2.test.mydomain.net
      index => "logstash-%{+YYYYMMdd}"
      document_type => "%{[@metadata][type]}"
  }
}

This works; however, I'm in an AWS environment and cannot rely on IP addresses.

At a minimum, the parsing for Config 1 should work.

Please take a look. Thanks in advance!

=================
Updated:

With Logstash 2.2.2 and logstash-output-elasticsearch 2.1, the issue still exists (see the 5th reply).

I tried to update the plugin to 2.5.5; however, the upgrade may have corrupted Logstash, and it now fails to start with this error:

The error reported is:

        you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command

no such file to load -- org/apache/httpcomponents/httpcore/4.4.1/httpcore-4.4.1 (LoadError)

================
Updated 11/1/16: I put the above update under the original post to avoid confusion.

@cooniur cooniur changed the title from "elasticsearch output plugin cannot parse hosts correctly in these conditions" to "elasticsearch output plugin fails to parse hosts correctly in these conditions" Mar 25, 2016
@untergeek
Contributor

Thank you for reporting this, but this issue was already patched here and released in v2.5.0 of the plugin.

Logstash 2.2.0 shipped with v2.4.1 of the plugin. If you're not running the latest version of the plugin (which is presently 2.5.5), you should update it by running:

bin/plugin update logstash-output-elasticsearch

...from the directory where Logstash was installed.

@cooniur
Author

cooniur commented Mar 25, 2016

@untergeek I'm glad to hear that the bug has been fixed! Will do the update. Thanks!

@cooniur
Author

cooniur commented Mar 25, 2016

Hey @untergeek , here is some quick feedback: after upgrading, I got this error while starting Logstash:

The error reported is:


    you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command

no such file to load -- org/apache/httpcomponents/httpcore/4.4.1/httpcore-4.4.1 (LoadError)

And I noticed there is a Gemfile.jruby-1.9.lock.origin file generated. I tried to remove it, but got another error:

Bundler::GemNotFound: Could not find gem 'ci_reporter_rspec (= 1.0.0) java' in any of the gem sources listed in your Gemfile or installed on this machine.
  verify_gemfile_dependencies_are_found! at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/resolver.rb:328
                                    each at org/jruby/RubyArray.java:1613
  verify_gemfile_dependencies_are_found! at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/resolver.rb:307
                                   start at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/resolver.rb:199
                                 resolve at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/resolver.rb:182
                                 resolve at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/definition.rb:192
                                   specs at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/definition.rb:132
                               specs_for at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/definition.rb:177
                         requested_specs at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/definition.rb:166
                         requested_specs at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/environment.rb:18
                                   setup at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler/runtime.rb:13
                                   setup at /apps/logstash-2.2.0/vendor/bundle/jruby/1.9/gems/bundler-1.9.10/lib/bundler.rb:122
                                  setup! at /apps/logstash-2.2.0/lib/bootstrap/bundler.rb:64
                                  (root) at /apps/logstash-2.2.0/lib/bootstrap/environment.rb:65

@cooniur
Author

cooniur commented Mar 25, 2016

More quick feedback: even after I upgraded to Logstash 2.2.2, I still get errors with these configs.

Elasticsearch: 1.5.0
Logstash: 2.2.2

Config 1

Running bin/logstash -f $PWD/conf.d --verbose in /apps/logstash-2.2.2:

output {
  elasticsearch {
      hosts => ["http://es_myapp.us-west-2.test.mydomain.net:9200"]
      index => "logstash-%{+YYYYMMdd}"
      document_type => "%{[@metadata][type]}"
  }
}

Error is:

The error reported is:
  the scheme http does not accept registry part: es_myapp.us-west-2.test.mydomain.net:9200 (or bad hostname?)

Config 2

Running bin/logstash -f $PWD/conf.d --verbose in /apps/logstash-2.2.2:

output {
  elasticsearch {
      hosts => ["es_myapp.us-west-2.test.mydomain.net:9200"]
      index => "logstash-%{+YYYYMMdd}"
      document_type => "%{[@metadata][type]}"
  }
}

Error is:

Error: Host 'es_myapp.us-west-2.test.mydomain.net:9200' was specified, but is not valid! Use either a full URL or a hostname:port string!
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.

Ping the DNS name:

$: ping es_myapp.us-west-2.test.mydomain.net
PING es_myapp.us-west-2.test.mydomain.net (100.10.10.10) 56(84) bytes of data.
64 bytes from 100.10.10.10: icmp_seq=1 ttl=60 time=14.0 ms
64 bytes from 100.10.10.10: icmp_seq=2 ttl=60 time=14.0 ms

@untergeek
Contributor

Please report the output of:

bin/plugin list --verbose logstash-output-elasticsearch

@cooniur
Author

cooniur commented Mar 25, 2016

Output is:

logstash-output-elasticsearch (2.5.1)

@untergeek
Contributor

We should still try upgrading to 2.5.5, which was released today. Please run bin/plugin update logstash-output-elasticsearch now that you've updated to Logstash 2.2.2.

If it persists, we'll have to update the logic to allow long, multi-level subdomain syntax.

@cooniur
Author

cooniur commented Mar 25, 2016

I tried to update, but it gave an error:

$ bin/plugin update logstash-output-elasticsearch
Updating logstash-output-elasticsearch
Error Bundler::InstallError, retrying 1/10
An error occurred while installing manticore (0.5.5), and Bundler cannot continue.
Make sure that `gem install manticore -v '0.5.5'` succeeds before bundling.
WARNING: SSLSocket#session= is not supported
Updated logstash-output-elasticsearch 2.5.1 to 2.5.5

Although,

 $ bin/plugin list --verbose logstash-output-elasticsearch
 logstash-output-elasticsearch (2.5.5)

And when I try to start Logstash, it gives this error:

$ bin/logstash -f conf.d --verbose
The error reported is:

        you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command

no such file to load -- org/apache/httpcomponents/httpcore/4.4.1/httpcore-4.4.1 (LoadError)

I'll let you guys handle this issue. Please re-open the ticket.

@untergeek
Contributor

We'll see what's going on.

@untergeek untergeek reopened this Mar 25, 2016
@cooniur
Author

cooniur commented Mar 25, 2016

Thank you!

@borian

borian commented Apr 8, 2016

I'm having the same problem, also on the newest version, logstash-output-elasticsearch (2.5.5).

@borian

borian commented Apr 11, 2016

I can't specify a valid hostname in the config:
output { elasticsearch {hosts => ["172.17.0.3:9200"]} } works
output { elasticsearch {hosts => ["elasticsearch_server:9200"]} } fails
Error message:
ConfigurationError: Host 'elasticsearch_server:9200' was specified, but is not valid! Use either a full URL or a hostname:port string!>

This happens with the latest version of Logstash 2.3.0 with logstash-output-elasticsearch (2.5.5)

I am testing this with Docker:
docker run -it --rm --link elasticsearch_server logstash:latest logstash -e 'input { stdin { } } output { elasticsearch {hosts => ["elasticsearch_server:9200"]} }'

Even older versions of Logstash fail:
2.2: Error: Host 'elasticsearch_server:9200' was specified, but is not valid! Use either a full URL or a hostname:port string! {:level=>:error}
2.1: The error reported is: the scheme http does not accept registry part: elasticsearch_server:9200 (or bad hostname?)
1.5: actually works, but causes problems with the latest elasticsearch version

The hostname and the connection to it are working:
running ping elasticsearch_server works
running wget -qO- elasticsearch_server:9200 gives me a response from the elasticsearch instance

@Xylakant

The problem seems to be that the expression in https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch/http_client.rb#L132 does not accept underscores, which, by the way, is correct according to the RFC.

@Musashisan

Hi there.

I have the same problem.
By the hostname RFC convention this is correct; as a DNS name, per the resolution RFCs, it is not.

So do we have to limit our DNS names to the hostname RFC? Is there any possibility of adding a 'url' field instead of the hosts list?

I'm currently working with Consul, and the service name has "_" in its name.

@drather19

For a quick workaround until the issue is officially resolved, you can permit underscores by patching the following two files:

/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-x.y.z-java/lib/logstash/outputs/elasticsearch/http_client.rb (Line 132 in v2.5.5):

Modify this regular expression to allow '_' as well:

-- HOSTNAME_PORT_REGEX=/\A(?<hostname>([A-Za-z0-9\.\-]+)|\[[0-9A-Fa-f\:]+\])(:(?<port>\d+))?\Z/
++ HOSTNAME_PORT_REGEX=/\A(?<hostname>([A-Za-z0-9\.\-_]+)|\[[0-9A-Fa-f\:]+\])(:(?<port>\d+))?\Z/


/vendor/jruby/lib/ruby/1.9/uri/common.rb (Line 368):

Modify this regexp as well to allow '_':

-- ret[:HOSTNAME] = hostname = "(?:[a-zA-Z0-9\\-.]|%\\h\\h)+"
++ ret[:HOSTNAME] = hostname = "(?:[a-zA-Z0-9\\-._]|%\\h\\h)+"
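The effect of the first regex change can be checked in isolation with a short Ruby snippet (the two patterns below are copied from the patch above; the constant names are just labels for this demo, not the plugin's):

```ruby
# ORIGINAL rejects "_" in the hostname character class; PATCHED allows it.
ORIGINAL = /\A(?<hostname>([A-Za-z0-9\.\-]+)|\[[0-9A-Fa-f\:]+\])(:(?<port>\d+))?\Z/
PATCHED  = /\A(?<hostname>([A-Za-z0-9\.\-_]+)|\[[0-9A-Fa-f\:]+\])(:(?<port>\d+))?\Z/

host = "es_myapp.us-west-2.test.mydomain.net:9200"
ORIGINAL.match(host)   # => nil (the underscore is rejected)
m = PATCHED.match(host)
m[:hostname]           # => "es_myapp.us-west-2.test.mydomain.net"
m[:port]               # => "9200"
```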

@drather19

Regarding a potential fix, could we include the addressable gem and call Addressable::URI.parse() instead of URI.parse() (in addition to the HOSTNAME_PORT_REGEX change)?

@suyograo suyograo added the P1 label Apr 26, 2016
@jsvd jsvd self-assigned this Apr 26, 2016
@jsvd jsvd changed the title from "elasticsearch output plugin fails to parse hosts correctly in these conditions" to "support underscores in hostnames" Apr 26, 2016
@jsvd jsvd added the bug label Apr 26, 2016
@suyograo suyograo added P2 and removed P1 labels May 17, 2016
@ddrozdov

A quick solution for those who use docker-compose:

logstash:
  image: logstash
  command: sh -c "sed -i '368s/\./\._/' /opt/logstash/vendor/jruby/lib/ruby/1.9/uri/common.rb && logstash ..."

@ip2k

ip2k commented Nov 1, 2016

https://bugs.ruby-lang.org/issues/8241 is related if you'd like to go down the rabbit hole of why Ruby doesn't like the underscore. My solution was to set up a CNAME DNS record without the underscore, which points to the DNS name with the underscore.

@bryanspaulding

For anyone using the logstash:5.0.1 Docker image, here's the RUN command you can put into your Dockerfile to patch both files:

RUN sed -i '368s/\./\._/' /usr/share/logstash/vendor/jruby/lib/ruby/1.9/uri/common.rb \
    && sed -i '145s/\./\._/' /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb

@uschtwill

@bryanspaulding: on 5.1.1 this throws

Step 1 : FROM logstash:5.1.1
 ---> 1a77dd2de440
Step 2 : RUN sed -i '368s/\./\._/' /usr/share/logstash/vendor/jruby/lib/ruby/1.9/uri/common.rb     && sed -i '145s/\./\._/' /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb
 ---> Running in e194625602b1
sed: can't read /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb: No such file or directory

What gives?

@jsvd
Member

jsvd commented Dec 22, 2016

This seems to be a characteristic of Ruby 1.9.3's URI parsing library, which JRuby 1.7 conforms to:

% rvm use 1.9.3
% ruby -e 'require "uri"; p URI.parse("http://es_myapp.us-west-2.test.mydomain.net:9200")'
/Users/joaoduarte/.rvm/rubies/ruby-1.9.3-p551/lib/ruby/1.9.1/uri/generic.rb:213:in `initialize': the scheme http does not accept registry part: es_myapp.us-west-2.test.mydomain.net:9200 (or bad hostname?) (URI::InvalidURIError)
% rvm use jruby-1.7.25
% ruby -e 'require "uri"; p URI.parse("http://es_myapp.us-west-2.test.mydomain.net:9200")'
URI::InvalidURIError: the scheme http does not accept registry part: es_myapp.us-west-2.test.mydomain.net:9200 (or bad hostname?)
% rvm use 2.3.1
% ruby -e 'require "uri"; p URI.parse("http://es_myapp.us-west-2.test.mydomain.net:9200")'
#<URI::HTTP http://es_myapp.us-west-2.test.mydomain.net:9200>

@jsvd
Member

jsvd commented Dec 22, 2016

One alternative is to use the addressable gem's URI parser. This gem is already included in the Logstash release and seems to conform better to the RFCs:

/tmp/logstash-5.1.1 % bin/logstash -i irb
jruby-1.7.25 :001 > URI.parse("http://es_myapp.us-west-2.test.mydomain.net:9200")
URI::InvalidURIError: the scheme http does not accept registry part: es_myapp.us-west-2.test.mydomain.net:9200 (or bad hostname?)
	from /private/tmp/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/uri/generic.rb:214:in `initialize'
	from /private/tmp/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/uri/http.rb:84:in `initialize'
	from /private/tmp/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/uri/common.rb:214:in `parse'
	from /private/tmp/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/uri/common.rb:747:in `parse'
	from (irb):7:in `evaluate'
	from org/jruby/RubyKernel.java:1079:in `eval'
	from org/jruby/RubyKernel.java:1479:in `loop'
jruby-1.7.25 :002 > require "addressable/uri"
=> true
jruby-1.7.25 :003 > Addressable::URI.parse("http://es_myapp.us-west-2.test.mydomain.net:9200")
 => #<Addressable::URI:0x3005e URI:http://es_myapp.us-west-2.test.mydomain.net:9200> 

@andrewvc thoughts?

@zsimic

zsimic commented Dec 27, 2016

I'm running into the same issue, using Logstash 5.0.2.
Any chance the fix for this gets submitted soon?

@dortizesquivel

Hi, I'm running into the same issue too.
Logstash version is 5.3.0 and logstash-output-elasticsearch 6.2.6

I was wondering if there is any plan to fix this.

@cooniur
Author

cooniur commented Aug 30, 2017

Hi @jsvd , it's been quite a while. Any update to this issue?
Thanks!

@jsvd
Member

jsvd commented Nov 6, 2017

This is still an issue. We have since moved the code that deals with URIs to the native Java classes, but they also have trouble dealing with hostnames that contain underscores (which goes against RFC 952):

2.3.0 :028 > Java::JavaNet::URI.new("http://a:b@esmy_app.us-west-2.test.mydomain.net:9200").host
 => nil 
2.3.0 :029 > Java::JavaNet::URI.new("http://a:b@esmy_app.us-west-2.test.mydomain.net:9200").authority
 => "a:b@esmy_app.us-west-2.test.mydomain.net:9200" 
2.3.0 :030 > Java::JavaNet::URI.new("http://a:b@esmy_app.us-west-2.test.mydomain.net:9200").parse_server_authority
Java::JavaNet::URISyntaxException: Illegal character in hostname at index 15: http://a:b@esmy_app.us-west-2.test.mydomain.net:9200
	from java.net.URI$Parser.fail(java/net/URI.java:2848)
	from java.net.URI$Parser.parseHostname(java/net/URI.java:3387)
	from java.net.URI$Parser.parseServer(java/net/URI.java:3236)
	from java.net.URI$Parser.parseAuthority(java/net/URI.java:3155)
	from java.net.URI$Parser.parseHierarchical(java/net/URI.java:3097)
	from java.net.URI$Parser.parse(java/net/URI.java:3053)

@jordansissel
Contributor

https://bugs.openjdk.java.net/browse/JDK-8170265 was closed "Not an issue", with the implication that we must parse this ourselves and call new URL(protocol, host, port, ...) ourselves. It seems the problem report was completely misunderstood.

We're probably on our own, here. We'll need to parse the URL external to java.net.URI and create the URI/URL instance ourselves :\

@jordansissel
Contributor

@jsvd let's see if we can reopen the JDK issue. I don't see how to create an account on that jira instance, though. Can you drive this?

@jsvd
Member

jsvd commented Nov 7, 2017

Yes. One option could be using java.net.URL instead of URI; for some reason, the URL class accepts hosts with underscores correctly:

2.3.0 :001 > Java::JavaNet::URL.new("http://a:b@esmy_app.us-west-2.test.mydomain.net:9200").host
 => "esmy_app.us-west-2.test.mydomain.net" 
2.3.0 :002 > Java::JavaNet::URI.new("http://a:b@esmy_app.us-west-2.test.mydomain.net:9200").host
 => nil 

OTOH there's a hack:

    URI uriObj = new URI("https://pmi_artifacts_prod.s3.amazonaws.com");
    if (uriObj.getHost() == null) {
        final Field hostField = URI.class.getDeclaredField("host");
        hostField.setAccessible(true);
        hostField.set(uriObj, "pmi_artifacts_prod.s3.amazonaws.com");
    }

These two "solutions" aside, from what I understand there's a pretty high bar to enter the JDK Jira. For mortals, the entry point I see is http://bugreport.java.com/; I'll start there.

@ntim

ntim commented Jan 31, 2018

Hi, it seems this issue is still not fixed in Logstash 6.1.2; are there any updates? The DNS specification specifically allows underscores, and in fact they are very common.

@fanjieqi

fanjieqi commented Jul 2, 2018

Hi, this issue is still not fixed in logstash 6.3.0.

@spavezv

spavezv commented Sep 1, 2018

Issue still present on logstash 6.4.0

@kevin-bennett-ags

What the heck - I just encountered this issue on Docker Swarm, where an underscore in the hostname is automatic when using stack deploy - and found this issue from 2016! What to do? : (

@EDV-Eberhardt

It's Logstash 6.6 and this is not yet fixed... guys, this is a bad joke... we are living in the container world... no one wants to type fixed IPs!!!

@andrewvc
Contributor

This link might be useful: docker/compose#229

It seems Docker's not moving on this, despite _ not being valid in hostnames.

@zsimic

zsimic commented Feb 13, 2019

I think the confusion comes from the fact that _ is legal in DNS, but theoretically not in a hostname (per their respective RFCs).

In practice, I see _ used all the time (and not just by Docker); the distinction between a hostname and a general URI is blurry (is anyone really trying to use strict hostnames nowadays to configure things like where Logstash/ES is?).
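To make that distinction concrete, here is a small Ruby sketch with a hypothetical RFC 952/1123-style label pattern (written for illustration, not taken from any parser):

```ruby
# A strict hostname label: letters, digits, and interior hyphens only.
# DNS itself (RFC 2181) does not restrict label contents this way, which is
# why names containing "_" resolve fine yet strict URI parsers reject them.
HOSTNAME_LABEL = /\A[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?\z/

"elasticsearch".match?(HOSTNAME_LABEL)         # => true
"elasticsearch_server".match?(HOSTNAME_LABEL)  # => false
```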

@alphaDev23

alphaDev23 commented May 2, 2019

It's pure nonsense that this is still an issue after 2 years. I'm using Docker Swarm, where underscores are appended after the stack name.

@kevin-bennett-ags, one way to resolve this is to install dnsutils, add a known placeholder string for the Elasticsearch hostname (e.g. ELASTICSEARCH_HOST) in the output.conf file, and create a bootstrap file with the following command:

sed -i "s|ELASTICSEARCH_HOST|$(dig +short )|g" /etc/logstash/conf.d/30-output.conf

@NashMiao

NashMiao commented Jul 25, 2019

Issue still present on logstash 7.2.0.

However, Elasticsearch 7.2.0 and Kibana 7.2.0 support underscores in hostnames.

I'm confused by this issue; it isn't even mentioned in the documentation.

@athlan

athlan commented Oct 14, 2019

The issue still persists in Logstash 7.4 with plugin logstash-output-elasticsearch (10.1.0).

output {
  elasticsearch {
    hosts => ["http://elasticsearch.infra_network:9200"]
    index => "myindex"
  }
}

This results in:

logstash_1  | [2019-10-14T10:03:59,447][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ArgumentError) URI is not valid - host is not specified", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:100)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1156)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1143)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:26)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36)", 
"usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:129)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"]}
logstash_1  | warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
logstash_1  | LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
logstash_1  |           create at org/logstash/execution/ConvergeResultExt.java:109
logstash_1  |              add at org/logstash/execution/ConvergeResultExt.java:37
logstash_1  |   converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
logstash_1  | [2019-10-14T10:03:59,457][ERROR][logstash.agent           ] An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
logstash_1  | [2019-10-14T10:03:59,495][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
logstash_1  | [2019-10-14T10:03:59,566][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

@diegorosano

@athlan Try removing "http://"; just use:

hosts => ["elasticsearch.infra_network:9200"]

@athlan

athlan commented Dec 7, 2019

@diegorosano It seems the scheme must be defined; now the error says:

Illegal character in scheme name at index 19: elasticsearch.infra_network:9200

logstash_1  | [2019-12-07T13:26:16,301][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: Illegal character in scheme name at index 19: elasticsearch.infra_network:9200", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:100)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1156)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1143)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:26)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36)", 
"usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:129)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"]}
logstash_1  | warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
logstash_1  | LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
logstash_1  |           create at org/logstash/execution/ConvergeResultExt.java:109
logstash_1  |              add at org/logstash/execution/ConvergeResultExt.java:37
logstash_1  |   converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
logstash_1  | [2019-12-07T13:26:16,322][ERROR][logstash.agent           ] An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
logstash_1  | [2019-12-07T13:26:16,365][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
logstash_1  | [2019-12-07T13:26:16,461][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Also, the issue still occurs on 7.4

@mbudge

mbudge commented Feb 19, 2020

Still in Logstash 7.6

Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: Illegal character in scheme name at index 7: ansible_es01:9200"

Config:

output {
  elasticsearch {
    hosts => ["ansible_es01:9200"]
  }
}

@brad-dre

brad-dre commented Mar 3, 2020

When deploying a stack into Docker Swarm with docker-compose.yml, the service name becomes stackname_servicename. But using Docker 18.09.9, I tried using just the service name in my logstash 7.6 config and it worked fine. So, on the default ingress overlay network, you can refer to another service container from within a service container either by its full service name (mystack_myservice) or by just the service name (myservice), which avoids the underscore problem.

@feketegy

TL/DR: Logstash 7.6.2 doesn't like hostnames with underscores, change your hostnames.

I'm running logstash (7.6.2), elastic and filebeat in docker using docker compose, with the usual setup: filebeat --> logstash --> elasticsearch

I had the same problem with the hostname in logstash. My container names were defined with underscores in docker-compose.yml, and I was using a common network so each container has access to the others by using the container name as the hostname.

So filebeat connects to logstash just fine using underscore hostnames, elasticsearch starts up fine with underscores. It turns out this logstash plugin has a problem with underscored hostnames.

Once I changed everything from my_container_x to my-container-x and defined it in the output as output { hosts => ["http://my-container-x:9200"] }, everything works now.
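For reference, a minimal docker-compose sketch of that rename (service names and version tags here are illustrative, not taken from the actual setup above):

```yaml
# Hypothetical docker-compose.yml: hyphenated service names, so the
# elasticsearch output can use "http://my-container-x:9200" directly.
version: '3.8'
services:
  my-container-x:            # was my_container_x
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    networks: [elk]
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.2
    networks: [elk]
networks:
  elk:
```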

@broadaxe

TL/DR: Logstash is implementing the correct host naming conventions expressed in the RFCs, which do not allow underscores. In later years these rules have often been ignored, and people have implemented things using underscores where they were not supposed to. I have come up with a workaround (I'm not sure if this has been discussed here before): I define a custom pattern, local to the matching pattern that needs to handle these hostnames. This could be a potential workaround, YMMV. Standard disclaimers apply:
"BKHOSTN" => "\b(?:[_0-9A-Za-z][0-9A-Za-z_-]{0,62})(?:\.(?:[_0-9A-Za-z][0-9A-Za-z_-]{0,62}))*(\.?|\b)"

@jnovack

jnovack commented Apr 7, 2021

For those of you looking to do this in Docker Swarm, docker's internal DNS has a tasks.$servicename provider which returns A records.

From any docker service within a single stack, you can find the IP addresses of the other docker services in that same stack with tasks.$servicename. Just reference the service by its local service name, NOT the stack_servicename convention.

> ping tasks.mysql
PING localhost (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.035 ms
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.040 ms

> ping tasks.redis
PING localhost (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.041 ms
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.039 ms

docker-stack.yml

version: '3'
services:
  spatula:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
... truncated ...
  logstash:
    image: docker.elastic.co/logstash/logstash:${VERSION}
...

pipelines/sample.conf

...
output {
  elasticsearch {
    hosts => ["http://tasks.spatula:9200"]
  }
}

The service name for all my elasticsearch nodes is spatula, so I reference it with tasks.spatula to get all the A records.

Reference:

@lukaszmoskwa

From any docker service within a single stack,

@jnovack While your answer about the internal DNS with tasks.$service_name is correct, I would like to point out that tasks.$service_name returns the tasks of every service with that name sharing the same network, not just those in the same stack.

I'm experiencing this issue now: with curl http://tasks.elasticsearch:9200, my logstash service is finding the elasticsearch service of another stack.

@vector-mj

vector-mj commented Feb 20, 2023

Hi
You can use this script to map underscored service names to valid hostnames without underscores, via /etc/hosts entries.

#!/bin/bash

# Underscored Swarm service names to map into /etc/hosts.
SERVICES=(
    "management-nodes-elastic_shahriar-es-node-1"
    "management-nodes-elastic_shahriar-es-node-2"
    "management-nodes-elastic_shahriar-es-node-3"
)

for i in "${SERVICES[@]}"; do
        RESPONSE=$(ping "$i" -c 1)
        # Hostname from the ping header line, with underscores mapped to hyphens.
        HOSTNAME=$(echo "$RESPONSE" | head -n 1 | grep -oP "(?<=PING ).[^\(].*(?= \()" | sed "s/_/-/g")
        # The resolved IP is the first parenthesized token on the same line.
        IP=$(echo "$RESPONSE" | head -n 1 | grep -oP "(?<=\().*?(?=\) )" | head -n 1)

        # Append the entry unless an identical one already exists.
        grep -qxF "$IP $HOSTNAME" /etc/hosts || echo "$IP $HOSTNAME" >> /etc/hosts
done

then

output {
  elasticsearch {
    hosts => ["management-nodes-elastic-shahriar-es-node-1:9200"]
    ...
    ...
  }
}
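The hostname/IP extraction in the resolver script above can be sketched offline against a fabricated iputils-style ping header line (so it runs without network access or Swarm DNS; assumes GNU grep with -P):

```shell
# Fabricated ping header line, for illustration only.
LINE='PING management-nodes-elastic_shahriar-es-node-1 (10.0.3.7) 56(84) bytes of data.'
# Hostname after "PING ", with underscores mapped to hyphens.
HOSTNAME=$(echo "$LINE" | grep -oP '(?<=PING )\S+' | sed 's/_/-/g')
# First parenthesized token is the resolved IP.
IP=$(echo "$LINE" | grep -oP '(?<=\().*?(?=\))' | head -n 1)
echo "$IP $HOSTNAME"
# → 10.0.3.7 management-nodes-elastic-shahriar-es-node-1
```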

Also, if you use a docker-compose file, you must change the Logstash entrypoint to this:

version: '3.8'
services:
  logstash:
    image: logstash:8.2.2
    configs:
      - source: logstash-configs
        target: /usr/share/logstash/config/logstash.yml
      - source: service-resolver
        target: /usr/share/logstash/service-resolver.sh
    entrypoint: bash -c "apt update -y && \
                         apt install iputils-ping -y && \
                         /bin/bash /usr/share/logstash/service-resolver.sh &&
                         /bin/bash /usr/share/logstash/service-resolver.sh &&
                         /bin/bash /usr/share/logstash/service-resolver.sh &&
                         /usr/local/bin/docker-entrypoint -r"

@clairmont32

> For those of you looking to do this in Docker Swarm, docker's internal DNS has a tasks.$servicename provider which returns A records. […] The service name for all my elasticsearch nodes is named spatula, and thus I reference it with tasks.spatula to get all the A records.

This solved my problem today. Thank you!!!
