# Illegal character in authority at index 8 #129
I also came across this issue and found that it occurs when multiple workers are used. Because every worker shares a reference to the same array object, I think the following line results in the endlessly nested hosts each time a worker initializes an Elasticsearch client (`logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch/client.rb:23`):

```ruby
hosts.map! {|h| { host: h, scheme: 'https' } } if ssl
```

Since `map!` modifies the array in place, this directly mutates the shared `@hosts` object that every worker receives:

```ruby
# logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch.rb:175
def new_client
  LogStash::Filters::ElasticsearchClient.new(@logger, @hosts, client_options)
end
```

```ruby
# logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch/client.rb:13
def initialize(logger, hosts, options = {})
  @hosts.object_id # => 70313334057280
  hosts.object_id  # => 70313334057280
  ...
end
```

First worker:

```ruby
# @hosts = ['server.example.com']
# hosts  = ['server.example.com']
hosts.map! {|h| { host: h, scheme: 'https' } } if ssl
# hosts  = [{ host: 'server.example.com', scheme: 'https' }]
# @hosts = [{ host: 'server.example.com', scheme: 'https' }]
```

Second worker:

```ruby
# @hosts = [{ host: 'server.example.com', scheme: 'https' }]
# hosts  = [{ host: 'server.example.com', scheme: 'https' }]
hosts.map! {|h| { host: h, scheme: 'https' } } if ssl
# hosts  = [{ host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }]
# @hosts = [{ host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }]
```

Third worker:

```ruby
# @hosts = [{ host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }]
# hosts  = [{ host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }]
hosts.map! {|h| { host: h, scheme: 'https' } } if ssl
# hosts  = [{ host: { host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }, scheme: 'https' }]
# @hosts = [{ host: { host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }, scheme: 'https' }]
```

(I'm not a Ruby programmer by nature, so I know my syntax is funky.) It may be necessary to deep copy the `hosts` array before mapping it, e.g. in `logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch/client.rb` (lines marked `=>` are the proposed changes):

```ruby
def initialize(logger, hosts, options = {})
  ssl = options.fetch(:ssl, false)
  user = options.fetch(:user, nil)
  password = options.fetch(:password, nil)
  api_key = options.fetch(:api_key, nil)

=> serialized_hosts = Marshal.dump(hosts)
=> hosts_copy = Marshal.load(serialized_hosts)

  @hosts.object_id     # => 70313334057280
  hosts.object_id      # => 70313334057280
  hosts_copy.object_id # => 70313334057613

  transport_options = {:headers => {}}
  transport_options[:headers].merge!(setup_basic_auth(user, password))
  transport_options[:headers].merge!(setup_api_key(api_key))

=> hosts_copy.map! {|h| { host: h, scheme: 'https' } } if ssl
  ...
```

I think this would allow pipeline workers to execute in parallel without producing the obfuscated nested host address. This may not be the correct approach to fix the issue, but hopefully it heads in the right direction.
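The shared-reference bug described above can be reproduced in plain Ruby, with no Logstash required. The sketch below uses a hypothetical `init_client` stand-in for the client's host-mapping step, plus a `Marshal`-based deep copy as proposed:

```ruby
# Stand-in for the client's host-mapping step: `map!` mutates the
# caller's array in place, so repeated calls nest the hosts deeper.
def init_client(hosts, ssl: true)
  hosts.map! { |h| { host: h, scheme: 'https' } } if ssl
  hosts
end

shared_hosts = ['server.example.com']  # plays the role of @hosts

init_client(shared_hosts)
# => [{ host: 'server.example.com', scheme: 'https' }]

init_client(shared_hosts)
# second "worker": the host is now wrapped twice
# => [{ host: { host: 'server.example.com', scheme: 'https' }, scheme: 'https' }]

# Deep-copying first (as proposed above with Marshal) leaves the
# caller's array untouched across any number of workers.
def init_client_copy(hosts, ssl: true)
  copy = Marshal.load(Marshal.dump(hosts))
  copy.map! { |h| { host: h, scheme: 'https' } } if ssl
  copy
end

fresh_hosts = ['server.example.com']
init_client_copy(fresh_hosts)
init_client_copy(fresh_hosts)
fresh_hosts # => ['server.example.com'] (unchanged)
```

Note that `Marshal.dump`/`Marshal.load` performs a deep copy, whereas `Array#dup` would only be shallow; a shallow copy still shares the element objects, though in this case the elements are replaced rather than mutated, so even `dup` would suffice.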
Thanks @unrinfosec for spotting it!
Probably we can avoid the multiple nesting of the Hash with:

```ruby
if ssl
  hosts.map! do |h|
    h.is_a?(Hash) ? h : { host: h, scheme: 'https' }
  end
end
```
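The guard makes the mapping idempotent: entries that are already a `Hash` pass through untouched, so running the step once per worker no longer nests them. A minimal sketch, wrapping the snippet above in a hypothetical helper:

```ruby
# Idempotent version of the host-mapping step: already-mapped Hash
# entries are passed through unchanged.
def map_hosts!(hosts, ssl: true)
  if ssl
    hosts.map! do |h|
      h.is_a?(Hash) ? h : { host: h, scheme: 'https' }
    end
  end
  hosts
end

hosts = ['server.example.com']
map_hosts!(hosts)  # => [{ host: 'server.example.com', scheme: 'https' }]
map_hosts!(hosts)  # second worker: same result, no extra nesting
```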
That's much simpler and more memory-efficient. I'm testing that approach now on my local instance.
PR #133 submitted.
I have the same issue with Logstash 7.7.1 and filter version 3.9.0.
Hi, I've got the same issue. I'm using Logstash 7.3.2 and ES 7.4.2, and I can't upgrade the version. Is there a patch or something I can do? My filter starts okay, but when I send data via stdin or HTTP, I get that same warning. Thanks!
I faced the same issue.
Thanks @rahulsinghai for the workaround; I found the same information in https://discuss.elastic.co/t/illegal-character-authority-error-elasticsearch-filter/190825. The documentation for this plugin should be updated to reflect the correct way to configure it.
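For anyone landing here, the workaround discussed in this thread amounts to putting the scheme in the host string instead of setting `ssl => true`. A rough sketch only (host name is a placeholder, and this is not a verified configuration):

```text
filter {
  elasticsearch {
    # Workaround: omit `ssl => true` and make the scheme part of the host URL
    hosts => ["https://elastic01.example.org:9200"]
  }
}
```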
Is there a fix or workaround for this when using api_keys? I have an ingestion pipeline that inserts into a datastream. The source data contains duplicates. I would like to use the filter to not spam the log with already exists messages. |
I'm trying to use this filter plugin. When I run it with Logstash 7.6.2 and the newest filter version 3.7.1, I receive the following error:
```
[WARN ] 2020-04-16 09:05:18.073 [[main]>worker1] elasticsearch - Failed to query elasticsearch for previous event {:index=>"proxy-blacklist", :error=>"Illegal character in authority at index 8: https://{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>{:host=>\"elastic01.example.org\", :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\"}, :scheme=>\"https\"}, :scheme=>\"https\"}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\"}, :scheme=>\"https\"}, :scheme=>\"https\"}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\"}, :scheme=>\"https\"}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\", :protocol=>\"https\", :port=>9200}, :scheme=>\"https\"}, :scheme=>\"https\"}:9200/proxy-blacklist/_search?q=domain.keyword%3A%2F%28www.%29%3Fchat.example.org%2F&size=1&sort=%40timestamp%3Adesc"}
```
So it comes down to:

```
Illegal character in authority at index 8
```
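As a side note on what "index 8" means: `https://` occupies indices 0 through 7, so index 8 is the first character of the URI's authority, which here is the `{` of the stringified Ruby hash. A plain-Ruby sketch (not the plugin's actual code path, which goes through the Java URI parser) shows that such a string is rejected as a URI:

```ruby
require 'uri'

# A shortened version of the host string from the error above.
bad = 'https://{:host=>"elastic01.example.org", :scheme=>"https"}:9200/'

bad[8] # => "{" -- the illegal character at index 8

begin
  URI.parse(bad)
rescue URI::InvalidURIError => e
  # Ruby's parser rejects it too; the wording differs from Java's
  # "Illegal character in authority" but the cause is the same.
  puts e.class
end
```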
I've found many users who seem to have this problem but no solution at all. Some solve it by not using `ssl => true` and prefixing the hosts with `https://` instead. That does not resolve the error for me. My filter config looks like this: