Unable to connect to GCS (certificate verify failed) #20

Closed
neuromantik33 opened this issue Mar 22, 2018 · 16 comments · Fixed by #21


neuromantik33 commented Mar 22, 2018

I'm unable to use this plugin within a Docker container, using the 6.1.0 and 5.6.8 tags. I'm pretty sure it isn't the key, since I've generated 3 different p12 keys for the same service account and they all fail; the stack trace shows the failure happens before the key is ever read.

Here is my configuration (I've changed all bucket names and project IDs for obvious reasons):

  • Version: v3.0.4
  • Java version (in container):
bash-4.2$ java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
  • Ruby version (in container):
jruby 9.1.13.0 (2.3.3) 2017-09-06 8e1c115 OpenJDK 64-Bit Server VM 25.151-b12 on 1.8.0_151-b12 +indy +jit [linux-x86_64]
  • Operating System: docker.elastic.co/logstash/logstash:6.1.0
  • logstash.yml:
http.host: 0.0.0.0
log.format: json
path.config: /usr/share/logstash/pipeline
config:
  debug: true
  reload:
    automatic: true
    interval: 5s
queue:
  type: persisted
  drain: true
  • logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  google_cloud_storage {
    bucket => "my-bucket"
    flush_interval_secs => 5
    gzip => true
    key_path => "/shh/key.p12"
    max_file_size_kbytes => 102400
    output_format => "plain"
    service_account => "logstash-elk@my-project.iam.gserviceaccount.com"
    temp_directory => "/usr/share/logstash/data/tmp"
  }
}
  • Error message:
...
2018/03/22 11:01:25 Setting 'log.format' from environment.
2018/03/22 11:01:25 Setting 'xpack.monitoring.elasticsearch.url' from environment.
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-03-22T11:02:02,331][WARN ][logstash.runner          ] --config.debug was specified, but log.level was not set to 'debug'! No config info will be logged.
[2018-03-22T11:02:02,373][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-22T11:02:02,397][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-22T11:02:03,460][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.1.0-java/modules/arcsight/configuration"}
[2018-03-22T11:02:04,125][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-03-22T11:02:05,238][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.1.0"}
[2018-03-22T11:02:05,845][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-22T11:02:08,562][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>["http://elasticsearch:9200"], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>"false", document_type=>"%{[@metadata][document_type]}", sniffing=>"false", id=>"482e2490d75257e21cd2b7d49268d74674b3b7c32f0cdb0eef4694242d57f5fb">}
[2018-03-22T11:02:09,380][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2018-03-22T11:02:09,411][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2018-03-22T11:02:09,731][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2018-03-22T11:02:09,810][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-03-22T11:02:09,874][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2018-03-22T11:02:09,914][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0xca43c07 run>"}
[2018-03-22T11:02:10,053][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2018-03-22T11:02:10,054][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2018-03-22T11:02:10,074][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2018-03-22T11:02:10,112][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>nil}
[2018-03-22T11:02:10,292][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2018-03-22T11:02:16,086][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x270a138 @namespaced_metric=#<LogStash::Instrument::NamespacedMetric:0xd76dfbf @metric=#<LogStash::Instrument::Metric:0x7be2e6f @collector=#<LogStash::Instrument::Collector:0x7827e615 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x7f59f148 @store=#<Concurrent::Map:0x00000000000fb4 entries=3 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x75828e9e>, @fast_lookup=#<Concurrent::Map:0x00000000000fb8 entries=63 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs, :\"09d2e504e8de6887836a4879cca23b984f165252a32f136bcf5d24ff1cc04bb1\"]>, @metric=#<LogStash::Instrument::NamespacedMetric:0x4b9708da @metric=#<LogStash::Instrument::Metric:0x7be2e6f @collector=#<LogStash::Instrument::Collector:0x7827e615 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x7f59f148 @store=#<Concurrent::Map:0x00000000000fb4 entries=3 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x75828e9e>, @fast_lookup=#<Concurrent::Map:0x00000000000fb8 entries=63 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs]>, @logger=#<LogStash::Logging::Logger:0xe7a6303 @logger=#<Java::OrgApacheLoggingLog4jCore::Logger:0x44474f51>>, @out_counter=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 -  name: out value:0, @strategy=#<LogStash::OutputDelegatorStrategies::Single:0x373893af @mutex=#<Mutex:0x6a078c30>, @output=<LogStash::Outputs::GoogleCloudStorage bucket=>\"my-bucket\", flush_interval_secs=>5, gzip=>true, key_path=>\"/shh/key.p12\", log_file_prefix=>\"wt2\", max_file_size_kbytes=>102400, output_format=>\"plain\", service_account=>\"logstash-elk@my-project.iam.gserviceaccount.com\", temp_directory=>\"/usr/share/logstash/data/tmp\", id=>\"09d2e504e8de6887836a4879cca23b984f165252a32f136bcf5d24ff1cc04bb1\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_8dd97555-2f0c-4eb1-ab7d-e3d8650557d1\", enable_metric=>true, charset=>\"UTF-8\">, workers=>1, key_password=>\"notasecret\", date_pattern=>\"%Y-%m-%dT%H:00\", uploader_interval_secs=>60>>, @in_counter=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 -  name: in value:0, @id=\"09d2e504e8de6887836a4879cca23b984f165252a32f136bcf5d24ff1cc04bb1\", @time_metric=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 -  name: duration_in_millis value:0, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x14c47be6 @metric=#<LogStash::Instrument::Metric:0x7be2e6f @collector=#<LogStash::Instrument::Collector:0x7827e615 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x7f59f148 @store=#<Concurrent::Map:0x00000000000fb4 entries=3 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x75828e9e>, @fast_lookup=#<Concurrent::Map:0x00000000000fb8 entries=63 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs, :\"09d2e504e8de6887836a4879cca23b984f165252a32f136bcf5d24ff1cc04bb1\", :events]>, @output_class=LogStash::Outputs::GoogleCloudStorage>", :error=>"certificate verify failed", :thread=>"#<Thread:0x7a1c234e run>"}
[2018-03-22T11:02:16,116][ERROR][logstash.pipeline        ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Faraday::SSLError>, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:228:in `connect_nonblock'", "/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:938:in `connect'", "/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:868:in `do_start'", "/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:857:in `start'", "/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:1409:in `request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:82:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:40:in `block in call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:87:in `with_net_http_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:32:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/request/url_encoded.rb:15:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in `build_response'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in `run_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:177:in `post'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/signet-0.8.1/lib/signet/oauth_2/client.rb:967:in `fetch_access_token'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/signet-0.8.1/lib/signet/oauth_2/client.rb:1005:in `fetch_access_token!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/google-api-client-0.8.7/lib/google/api_client/auth/jwt_asserter.rb:105:in `authorize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-google_cloud_storage-3.0.4/lib/logstash/outputs/google_cloud_storage.rb:374:in `initialize_google_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-google_cloud_storage-3.0.4/lib/logstash/outputs/google_cloud_storage.rb:132:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/single.rb:10:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:343:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:354:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:354:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:743:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:364:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:248:in `block in start'"], :thread=>"#<Thread:0x7a1c234e run>"}
[2018-03-22T11:02:16,146][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
[2018-03-22T11:02:16,241][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-03-22T11:02:20,620][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"undefined method `system?' for nil:NilClass", :error=>"NoMethodError"}

As I said, it doesn't seem to be a p12 key issue (that line is never reached) but some other odd behaviour. Is this plugin still supported with Google Cloud's current storage API, or even with Logstash 5.x or 6.x for that matter? Any help would be much appreciated (including alternatives for shipping Filebeat logs to GCS).

Thanks in advance

@colinsurprenant
Contributor

Thanks for the report @neuromantik33.

Sorry if this is a stupid question: are you providing a proper key_password option?

@neuromantik33
Author

I didn't originally, since I'm using the default password supplied by Google (notasecret), but I did add key_password => "notasecret" to the pipeline.conf and the problem is the same. I noticed that the Google Cloud API lib is horribly out of date, but I'm not a Ruby developer and wouldn't know how to upgrade and test with the new lib. Has anyone tested this plugin with the official Logstash image?

@colinsurprenant
Contributor

This plugin was community contributed and is not officially supported by Elastic.

I looked at the lib and yeah, it has seen lots of changes and unfortunately it is not a trivial update because the lib APIs have changed too.

It is hard to say if the problem is related to the lib version, but in any case upgrading would probably not hurt and should be done at some point.

If you are ready to help, I could try to update it and give you instructions on how to test it. Let me know if you are willing to help with that!

@josephlewis42
Contributor

I created a PR to use a newer version of the (Java) GCP library to see if that helps. I built and tested the plugin on 6.2.3.

Travis is failing for Logstash 5.6 reporting a ruby-maven issue. @colinsurprenant would you be willing to take a look? I'm not that familiar with JRuby and I hope you might have seen this before.

@josephlewis42
Contributor

As a temporary workaround, dblommesteijn's solution might work: googleapis/google-api-ruby-client#235

@neuromantik33
Author

Hi. Sorry for the late reply (French vacations :P). Anyhow, @colinsurprenant @josephlewis42, I would be happy to test out anything you give me. I have an environment ready to test and a build pipeline for the Logstash Docker image with the GCS output plugin.

@josephlewis42
Contributor

@neuromantik33 enjoy vacation if you've got it!

I've published a gem of the proposed changes I've made in a personal repo: https://github.com/josephlewis42/personal_codebase/releases/tag/logstash-release

The caveat is that rather than supplying key_path, key_password, and service_account, you'll use json_key_file with the path to the service account's JSON key file. It's a breaking change, but hopefully a good one.

You should be able to install the plugin using /path/to/logstash-plugin install logstash-output-google_cloud_storage-4.0.0-java.gem manually.
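
For illustration, the output block from the original report would look roughly like this with the new option (an untested sketch; /shh/key.json stands in for wherever the JSON key actually lives, and the remaining settings are assumed to carry over unchanged):

output {
  google_cloud_storage {
    bucket => "my-bucket"
    json_key_file => "/shh/key.json"
    flush_interval_secs => 5
    gzip => true
    max_file_size_kbytes => 102400
    output_format => "plain"
    temp_directory => "/usr/share/logstash/data/tmp"
  }
}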

@neuromantik33
Author

@josephlewis42 So I've been testing out your gem and it seems to work great aside from a few annoyances.

  • It is impossible to specify a path prefix within a bucket, i.e. uploading to <bucket>/path/to/log/<file_name>. I looked at the code, and it seems the plugin would need an additional optional setting for this rather than creating directories in the temp directory.
  • Unfortunately the plugin, as it stands, isn't interrupt resilient. I'm running Logstash using the official Docker container, and when performing docker stop -t 90 logstash (trying to simulate the grace period for k8s pods), I was hoping it would upload the current file before quitting. My current workaround would be to never use emptyDir and to occasionally run (in a shutdown hook, for instance) a gcloud rsync on any straggling files that were not uploaded, in case the Logstash server restarts after a period long enough for the file to rotate.

Besides that thank you! Hope it gets merged soon! 👍

@josephlewis42
Contributor

@neuromantik33 sweet, I'm glad things are working well! Let's get this merged in, and you can file those two points in the backlog (unless either of them is a regression) so one of us can do a smaller PR against them. Does that work?

@colinsurprenant do you have any hesitations about merging this (#21) and doing a release now that it's been verified? I know there's the outstanding Travis issue but I don't think it's a blocker because it's a known issue.

@colinsurprenant
Contributor

Sorry for the delay. @josephlewis42, I did not look into the specifics of the shutdown handling @neuromantik33 is reporting above, but I would prefer we make sure that shutdown situations are correctly handled before merging. Let me know if you need help with that; it can be a bit tricky.

@josephlewis42
Contributor

josephlewis42 commented Apr 17, 2018

I can do that. I'd like to run the approach by you before coding it up if I can (@neuromantik33 I'd love your feedback too):

  • The thread calling stop sets a stopping flag on the plugin, then calls close.
  • The uploader thread now exits if it kicks off while the stopping flag is set.
  • If @upload_queue exists (it won't if uploads are already synchronous), upload each item whose file isn't empty (rough sketch below).
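
In rough Ruby, the idea looks something like this (an illustrative sketch only; names such as @upload_queue, uploader_cycle and upload_object follow the wording above, not necessarily the plugin's exact internals):

require "concurrent"  # ships with Logstash; provides AtomicBoolean

# Illustrative sketch of the proposed shutdown flow, not the plugin's actual code.
class ShutdownSketch
  def initialize
    @stopping = Concurrent::AtomicBoolean.new(false)
    @upload_queue = Queue.new  # may be nil when uploads are synchronous
  end

  # Called by the thread requesting shutdown: raise the flag, then flush.
  def stop
    @stopping.make_true
    close
  end

  # The periodic uploader thread bails out if shutdown has started.
  def uploader_cycle
    return if @stopping.true?
    # ... normal periodic upload work ...
  end

  # Drain any queued files, skipping empty ones.
  def close
    return if @upload_queue.nil?
    until @upload_queue.empty?
      path = @upload_queue.pop
      upload_object(path) unless File.zero?(path)
    end
  end

  def upload_object(path)
    # placeholder for the actual GCS upload of the temp file at `path`
  end
end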

@colinsurprenant this plugin suffers from some of the same crazy upload queue/sleep logic the BigQuery one had. I can add an item to my backlog to move this over to a worker pool, which should fix #2, #5, and #19.

@neuromantik33
Author

@josephlewis42 Sounds good to me. Having browsed the code, I agree that an n-thread ExecutorService of some sort would be preferable (and easier to read) to interrupting/joining threads everywhere, which is tricky and error prone. I'm not too familiar with the lifecycle states of a Logstash pipeline, so I can't offer any other suggestions beyond the fact that best-effort is what I'm looking for, and I'll be happy to test it out if needed (I can now build the pipeline, so I can just check out your javabackend branch if necessary).
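
For illustration, something along these lines is roughly what I mean (made-up names, not the plugin's actual code; concurrent-ruby's pool is backed by a Java thread pool under JRuby):

require "concurrent"  # concurrent-ruby, bundled with Logstash

# Illustrative worker-pool sketch; upload_object stands in for the real GCS call.
UPLOAD_POOL = Concurrent::FixedThreadPool.new(4)

def enqueue_upload(path)
  UPLOAD_POOL.post do
    upload_object(path)  # upload the rotated temp file
    File.delete(path)    # then remove it locally
  end
end

# Best-effort shutdown: stop accepting new work and wait for in-flight uploads.
def shutdown_uploads
  UPLOAD_POOL.shutdown
  UPLOAD_POOL.wait_for_termination(60)  # seconds
end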

@josephlewis42
Contributor

Whoops, I'm confusing my input and my output plugins. The plugin already has a close which is called on shutdown. I'll modify that instead.

@josephlewis42
Contributor

@neuromantik33 I got that patch in for testing if you'd like to try it out. I built another gem you can grab from here: https://github.com/josephlewis42/personal_codebase/releases/download/logstash-release/logstash-output-google_cloud_storage-4.0.0-java.gem

@neuromantik33
Author

@josephlewis42 After extended testing, the plugin works fine under moderate load, which is great. A graceful shutdown indeed triggers a last-effort upload and deletion; I've tried shutting down Filebeat, resuming it, shutting down Logstash, and killing Logstash, and all work as expected.

The only thing that should be noted (although for my use case it isn't a problem) is that under docker-compose (my initial tests before Kubernetes), when stopping the Logstash container with docker stop logstash -t 60, the current file is indeed uploaded and deleted. However, when restarting the container with docker start logstash, the file is recreated with the same name (as the IP doesn't change) and thus eventually overwrites the file previously uploaded.

Again, this is an extremely small issue, as hostnames are not reused when pods are stopped. Anyhow, thanks again, and I hope this time it gets merged quickly.

@josephlewis42
Contributor

@neuromantik33 awesome!

I just opened up #23 as a potential fix for the clobber, I'd love your input if you have time.

josephlewis42 added a commit that referenced this issue Mar 18, 2019
This change removes support for the legacy PKCS GCP authentication key format in favor of ADC or JSON keys. In the process of upgrading the plugin got an overhaul to use the Java GCP libraries which will improve stability and platform compatibility.

Fixes #20