This repository has been archived by the owner on Aug 13, 2023. It is now read-only.

the volume for host logstash configuration folder does not match the container folder. #29

Closed
jcastillo725 opened this issue Mar 24, 2021 · 17 comments
Assignees: a3ilson
Labels: enhancement (New feature or request), troubleshoot (Troubleshooting)

Comments

@jcastillo725

Describe the bug
I followed the steps for setting up pfelk on Docker, but I don't see any logs coming in. It looks like the container folder Logstash expects for .conf files is "/usr/share/logstash/pipeline", so I updated the volume in the docker-compose.yml file and was able to get logs, but Logstash shuts down after a few seconds.

Original volume container folder: /etc/pfelk/conf.d:ro
What I changed it to: /usr/share/logstash/pipeline:ro
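
For context, the logstash volume lines in my docker-compose.yml look roughly like this (paraphrased from memory, so treat the host paths as approximate):

      logstash:
        volumes:
          - ./etc/logstash/config/:/usr/share/logstash/config:ro      # logstash.yml / pipelines.yml
          - ./etc/logstash/conf.d/:/etc/pfelk/conf.d:ro               # original mapping for the .conf files
          # - ./etc/logstash/conf.d/:/usr/share/logstash/pipeline:ro  # the change I tried above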

To Reproduce
Steps to reproduce the behavior:

  1. Install a fresh ELK stack on Docker using the latest version


Operating System (please complete the following information):

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env) 7.11

Service logs

  • docker-compose logs pfelk01

  • docker-compose logs pfelk02

  • docker-compose logs pfelk03

  • docker-compose logs logstash
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:15:02.768 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [INFO ] 2021-03-24 12:15:02.844 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    logstash | [INFO ] 2021-03-24 12:15:02.878 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
    logstash | [INFO ] 2021-03-24 12:15:04.238 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"e49647e6-91e5-4042-bace-5479b6fe76c0", :path=>"/usr/share/logstash/data/uuid"}
    logstash | [WARN ] 2021-03-24 12:15:05.083 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:15:05.860 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:15:07.623 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:15:08.269 [LogStash::Runner] licensereader - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://es01:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash | [WARN ] 2021-03-24 12:15:08.436 [LogStash::Runner] licensereader - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://es01:9200/, :error_message=>"Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
    logstash | [ERROR] 2021-03-24 12:15:08.465 [LogStash::Runner] licensereader - Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash | [ERROR] 2021-03-24 12:15:08.512 [LogStash::Runner] internalpipelinesource - Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
    logstash | [INFO ] 2021-03-24 12:15:08.641 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [ERROR] 2021-03-24 12:15:08.643 [Agent thread] sourceloader - No configuration found in the configured sources.
    logstash | [INFO ] 2021-03-24 12:15:09.042 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:15:13.812 [LogStash::Runner] runner - Logstash shut down.
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:16:03.525 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [WARN ] 2021-03-24 12:16:04.090 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:16:04.260 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:04.738 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:04.975 [LogStash::Runner] licensereader - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:05.278 [LogStash::Runner] licensereader - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:05.279 [LogStash::Runner] licensereader - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:05.358 [LogStash::Runner] internalpipelinesource - Monitoring License OK
    logstash | [INFO ] 2021-03-24 12:16:05.359 [LogStash::Runner] internalpipelinesource - Validated license for monitoring. Enabling monitoring pipeline.
    logstash | [INFO ] 2021-03-24 12:16:05.403 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [INFO ] 2021-03-24 12:16:06.457 [Converge PipelineAction::Create<.monitoring-logstash>] Reflections - Reflections took 53 ms to scan 1 urls, producing 23 keys and 47 values
    logstash | [WARN ] 2021-03-24 12:16:06.734 [Converge PipelineAction::Create<.monitoring-logstash>] elasticsearchmonitoring - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:06.769 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:06.775 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:06.785 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:06.785 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:06.857 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://es01:9200"]}
    logstash | [WARN ] 2021-03-24 12:16:06.858 [[.monitoring-logstash]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
    logstash | [INFO ] 2021-03-24 12:16:06.910 [[.monitoring-logstash]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x60e35d64@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
    logstash | [INFO ] 2021-03-24 12:16:08.002 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.08}
    logstash | [INFO ] 2021-03-24 12:16:08.010 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:08.046 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
    logstash | [INFO ] 2021-03-24 12:16:08.199 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:16:09.862 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:10.135 [LogStash::Runner] runner - Logstash shut down.
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:16:37.265 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [WARN ] 2021-03-24 12:16:38.002 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:16:38.162 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:38.477 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:38.642 [LogStash::Runner] licensereader - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:38.935 [LogStash::Runner] licensereader - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:38.937 [LogStash::Runner] licensereader - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:39.035 [LogStash::Runner] internalpipelinesource - Monitoring License OK
    logstash | [INFO ] 2021-03-24 12:16:39.039 [LogStash::Runner] internalpipelinesource - Validated license for monitoring. Enabling monitoring pipeline.
    logstash | [INFO ] 2021-03-24 12:16:39.072 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [INFO ] 2021-03-24 12:16:40.005 [Converge PipelineAction::Create<.monitoring-logstash>] Reflections - Reflections took 60 ms to scan 1 urls, producing 23 keys and 47 values
    logstash | [WARN ] 2021-03-24 12:16:40.177 [Converge PipelineAction::Create<.monitoring-logstash>] elasticsearchmonitoring - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:40.201 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:40.209 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:40.223 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:40.226 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:40.275 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://es01:9200"]}
    logstash | [WARN ] 2021-03-24 12:16:40.278 [[.monitoring-logstash]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
    logstash | [INFO ] 2021-03-24 12:16:40.359 [[.monitoring-logstash]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x4f08277@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
    logstash | [INFO ] 2021-03-24 12:16:41.464 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.1}
    logstash | [INFO ] 2021-03-24 12:16:41.485 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:41.564 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
    logstash | [INFO ] 2021-03-24 12:16:41.689 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:16:43.273 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:43.643 [LogStash::Runner] runner - Logstash shut down.
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:17:05.330 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [WARN ] 2021-03-24 12:17:06.011 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:17:06.178 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:17:06.485 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:17:06.646 [LogStash::Runner] licensereader - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:17:06.918 [LogStash::Runner] licensereader - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:17:06.919 [LogStash::Runner] licensereader - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:17:07.003 [LogStash::Runner] internalpipelinesource - Monitoring License OK
    logstash | [INFO ] 2021-03-24 12:17:07.005 [LogStash::Runner] internalpipelinesource - Validated license for monitoring. Enabling monitoring pipeline.
    logstash | [INFO ] 2021-03-24 12:17:07.041 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [INFO ] 2021-03-24 12:17:07.940 [Converge PipelineAction::Create<.monitoring-logstash>] Reflections - Reflections took 77 ms to scan 1 urls, producing 23 keys and 47 values
    logstash | [WARN ] 2021-03-24 12:17:08.095 [Converge PipelineAction::Create<.monitoring-logstash>] elasticsearchmonitoring - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:17:08.131 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:17:08.141 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:17:08.150 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:17:08.150 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:17:08.213 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://es01:9200"]}
    logstash | [WARN ] 2021-03-24 12:17:08.215 [[.monitoring-logstash]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
    logstash | [INFO ] 2021-03-24 12:17:08.340 [[.monitoring-logstash]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x5395f7d0@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
    logstash | [INFO ] 2021-03-24 12:17:09.470 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.13}
    logstash | [INFO ] 2021-03-24 12:17:09.479 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:17:09.522 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
    logstash | [INFO ] 2021-03-24 12:17:09.605 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:17:11.211 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:17:11.600 [LogStash::Runner] runner - Logstash shut down.

  • docker-compose logs kibana


a3ilson self-assigned this Mar 24, 2021
a3ilson added the enhancement (New feature or request) and troubleshoot (Troubleshooting) labels Mar 24, 2021
@a3ilson

a3ilson commented Mar 24, 2021

Take a look at issue #28 - I plan to reorganize but am also fiddling with merging the docker and host installations into one repo.

@jcastillo725
Author

jcastillo725 commented Mar 25, 2021

Logstash doesn't see any configuration files at the path "/etc/pfelk/conf.d/*.conf":

logstash | [INFO ] 2021-03-25 03:03:17.111 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}

The host folder it's referencing is "./etc/logstash/conf.d/":
- ./etc/logstash/conf.d/:/etc/pfelk/conf.d:ro

But it does not contain the .conf files.

[screenshot]

The .conf files are located in "/etc/pfelk/conf.d" on the host (this is where they were extracted when I unzipped pfelkdocker.zip).
[screenshot]

I copied the files to "./etc/logstash/conf.d/" and ran into some filter errors. I then copied the databases and patterns folders into the logstash/conf.d folder and got rid of the filter errors, but now I'm getting different messages:

elastic@ubuntu:~/pfelk/etc/pfelk/conf.d$ sudo docker attach logstash
[WARN ] 2021-03-25 03:55:35.576 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.614 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.642 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.675 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.710 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.736 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.772 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.789 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-03-25 03:55:35.800 [Converge PipelineAction::Create] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-03-25 03:55:35.836 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:35.844 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:35.860 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:35.860 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:35.917 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:35.938 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:35.957 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:35.984 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.002 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.057 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.064 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.078 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.089 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.105 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.154 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.159 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.174 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.201 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.208 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.275 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.283 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.317 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.323 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.324 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.369 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.379 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.402 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.433 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.442 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.471 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.477 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.488 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.508 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.509 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.551 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.556 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.580 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.606 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.606 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.651 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.656 [[pfelk]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
[WARN ] 2021-03-25 03:55:36.677 [[pfelk]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://es01:9200/"}
[INFO ] 2021-03-25 03:55:36.683 [[pfelk]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-03-25 03:55:36.683 [[pfelk]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-03-25 03:55:36.737 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[INFO ] 2021-03-25 03:55:36.779 [Ruby-0-Thread-31: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.1-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:130] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[INFO ] 2021-03-25 03:55:36.832 [Ruby-0-Thread-31: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.1-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:130] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2021-03-25 03:55:36.974 [[pfelk]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"}
[INFO ] 2021-03-25 03:55:37.049 [[pfelk]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"}
[INFO ] 2021-03-25 03:55:37.125 [[pfelk]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-ASN.mmdb"}
[INFO ] 2021-03-25 03:55:37.286 [[pfelk]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-ASN.mmdb"}
[ERROR] 2021-03-25 03:55:37.321 [[pfelk]-pipeline-manager] javapipeline - Pipeline error {:pipeline_id=>"pfelk", :exception=>#<LogStash::Filters::Dictionary::DictionaryFileError: Translate: Missing or stray quote in line 1 when loading dictionary file at /etc/pfelk/databases/service-names-port-numbers.csv>, :backtrace=>["uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1899:in block in shift'", "org/jruby/RubyArray.java:1809:in each'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1867:in block in shift'", "org/jruby/RubyKernel.java:1442:in loop'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1821:in shift'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:20:in block in read_file_into_dictionary'", "org/jruby/RubyIO.java:3511:in foreach'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:18:in read_file_into_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:101:in merge_dictionary'", "org/jruby/RubyMethod.java:115:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:66:in load_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:53:in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:19:in create'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/translate.rb:166:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in block in register_plugins'", "org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:586:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:240:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in block in start'"], "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.conf", "/etc/pfelk/conf.d/02-types.conf", "/etc/pfelk/conf.d/03-filter.conf", "/etc/pfelk/conf.d/05-apps.conf", "/etc/pfelk/conf.d/20-interfaces.conf", "/etc/pfelk/conf.d/30-geoip.conf", "/etc/pfelk/conf.d/35-rules-desc.conf", "/etc/pfelk/conf.d/36-ports-desc.conf", "/etc/pfelk/conf.d/37-enhanced_user_agent.conf", "/etc/pfelk/conf.d/38-enhanced_url.conf", "/etc/pfelk/conf.d/45-cleanup.conf", "/etc/pfelk/conf.d/49-enhanced_private.conf", "/etc/pfelk/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x7d02add9@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[INFO ] 2021-03-25 03:55:37.323 [[pfelk]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"pfelk"}
[ERROR] 2021-03-25 03:55:37.331 [Converge PipelineAction::Create] agent - Failed to execute action {:id=>:pfelk, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[INFO ] 2021-03-25 03:55:37.445 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-03-25 03:55:38.644 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[INFO ] 2021-03-25 03:55:39.417 [LogStash::Runner] runner - Logstash shut down.

@a3ilson

a3ilson commented Mar 25, 2021

Where is the docker-compose.yml file located relative to the pfelk files?

The docker-compose.yml references those paths relative to its own location (i.e. the preceding dot). Based on your previous response your path is /home/elastic/pfelk/, so for docker-compose to resolve those paths the docker-compose.yml will need to be placed within /home/elastic/pfelk/:

/home/elastic/pfelk/
├── docker-compose.yml
├── conf.d
│   ├── 01-inputs.conf
│   ├── 02-types.conf
│   ├── 03-filter.conf
│   ├── 05-apps.conf
│   ├── 20-interfaces.conf
│   ├── 30-geoip.conf
│   ├── 35-rules-desc.conf
│   ├── 36-ports-desc.conf
│   ├── 45-cleanup.conf
│   └── 50-outputs.conf
├── config
│   ├── logstash.yml
│   └── pipelines.yml
├── databases
│   ├── private-hostnames.csv
│   ├── rule-names.csv
│   └── service-names-port-numbers.csv
└── patterns
    ├── openvpn.grok
    └── pfelk.grok

Alternatively, you may amend the docker-compose.yml to specify absolute paths:

      - /home/elastic/pfelk/etc/logstash/config/:/usr/share/logstash/config:ro       
      - /home/elastic/pfelk/etc/logstash/conf.d/:/etc/pfelk/conf.d:ro
      - /home/elastic/pfelk/etc/logstash/conf.d/patterns/:/etc/pfelk/patterns:ro
      - /home/elastic/pfelk/etc/logstash/conf.d/databases/:/etc/pfelk/databases:ro

Note: the preceding dot is omitted above, specifying an absolute path rather than a relative one.

Linux paths:
/   absolute path
.   relative path - current directory
..  relative path - parent directory

@jcastillo725
Author

jcastillo725 commented Mar 25, 2021

But the extracted folder /etc/logstash does not contain the subfolder conf.d.

[screenshot]

The directory /home/elastic/pfelk/ is what I created to unzip the file into, and the resulting directory tree looks more like:

/home/elastic/pfelk/
├── docker-compose.yml
├── elasticsearch
│   ├── Dockerfile
├── kibana
│   ├── Dockerfile
├── logstash
│   ├── Dockerfile
├── etc
│   ├── logstash
│   │   ├── config
│   │   │   ├── logstash.yml
│   │   │   └── pipelines.yml
│   ├── pfelk
│   │   ├── conf.d
│   │   │   ├── 01-inputs.conf
│   │   │   ├── 02-types.conf
│   │   │   ├── 03-filter.conf
│   │   │   ├── 05-apps.conf
│   │   │   ├── 20-interfaces.conf
│   │   │   ├── 30-geoip.conf
│   │   │   ├── 35-rules-desc.conf
│   │   │   ├── 36-ports-desc.conf
│   │   │   ├── 45-cleanup.conf
│   │   │   └── 50-outputs.conf
│   │   ├── databases
│   │   │   ├── private-hostnames.csv
│   │   │   ├── rule-names.csv
│   │   │   └── service-names-port-numbers.csv
│   │   ├── patterns
│   │   │   ├── openvpn.grok (Missing)
│   │   │   └── pfelk.grok

So the docker-compose.yml is indeed in /home/elastic/pfelk together with the other extracted folders described above, but the conf.d folder is not in ./etc/logstash but in ./etc/pfelk, while the pipelines.yml is pointing it to ./etc/logstash.
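
If I'm understanding the layout right, another option (instead of copying files around) might be to point the host side of that mount at where the unzipped .conf files actually are - just my guess, as shown below:

      - ./etc/pfelk/conf.d/:/etc/pfelk/conf.d:ro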

[screenshots]

btw please forgive my stubbornness and thanks for your patience... I know there's just something I still don't understand in what you're explaining, but I can't see it yet =(

a3ilson pushed a commit that referenced this issue Mar 25, 2021
#29 - added conf.d folder with required conf files to pfelkdocker.zip file.
a3ilson pushed a commit that referenced this issue Mar 25, 2021
@a3ilson

a3ilson commented Mar 25, 2021

Got it... the docker-compose.yml had an incorrect reference, which has now been corrected.

I would download or update to the latest docker-compose.yml and try again - sorry for the inconvenience.

The pipelines

@jcastillo725
Author

Not an inconvenience at all, bro. The work you're putting into this is awesome!

The last time I copied the files to what I thought were the correct references, Logstash shut down again with this error:

[ERROR] 2021-03-25 03:55:37.331 [Converge PipelineAction::Create] agent - Failed to execute action {:id=>:pfelk, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}

The whole log is in my 2nd comment. I will try again with the updated zip file later or tomorrow.

@jcastillo725
Author

Just a few more minor adjustments are needed for the following volume mappings:

      - ./etc/pfelk/conf.d/patterns/:/etc/pfelk/patterns:ro
      - ./etc/pfelk/conf.d/databases/:/etc/pfelk/databases:ro

[ERROR] 2021-03-26 04:52:51.453 [Converge PipelineAction::Create] translate - Invalid setting for translate filter plugin:

filter {
translate {
# This setting must be a path
# File does not exist or cannot be opened /etc/pfelk/databases/rule-names.csv
dictionary_path => "/etc/pfelk/databases/rule-names.csv"
...
}
}

The databases and patterns folders within conf.d do not have the files. The files are in the databases and patterns folders within the pfelk folder, alongside the conf.d folder.
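
(For what it's worth, I suspect pointing the host side of those two mounts at the folders the zip actually creates would also work instead of copying files, but that's only my guess from the layout above:)

      - ./etc/pfelk/patterns/:/etc/pfelk/patterns:ro
      - ./etc/pfelk/databases/:/etc/pfelk/databases:ro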

So I copied the files to where the mappings are referencing, and that seems to work, but I got this new error now:

[ERROR] 2021-03-26 05:07:13.699 [[pfelk]-pipeline-manager] javapipeline - Pipeline error {:pipeline_id=>"pfelk", :exception=>#<LogStash::Filters::Dictionary::DictionaryFileError: Translate: Missing or stray quote in line 1 when loading dictionary file at /etc/pfelk/databases/service-names-port-numbers.csv>, :backtrace=>["uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1899:in block in shift'", "org/jruby/RubyArray.java:1809:in each'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1867:in block in shift'", "org/jruby/RubyKernel.java:1442:in loop'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1821:in shift'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:20:in block in read_file_into_dictionary'", "org/jruby/RubyIO.java:3511:in foreach'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:18:in read_file_into_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:101:in merge_dictionary'", "org/jruby/RubyMethod.java:115:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:66:in load_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:53:in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:19:in create'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/translate.rb:166:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in block in register_plugins'", "org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:586:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:240:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in block in start'"], "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.conf", "/etc/pfelk/conf.d/02-types.conf", "/etc/pfelk/conf.d/03-filter.conf", "/etc/pfelk/conf.d/05-apps.conf", "/etc/pfelk/conf.d/20-interfaces.conf", "/etc/pfelk/conf.d/30-geoip.conf", "/etc/pfelk/conf.d/35-rules-desc.conf", "/etc/pfelk/conf.d/36-ports-desc.conf", "/etc/pfelk/conf.d/37-enhanced_user_agent.conf", "/etc/pfelk/conf.d/38-enhanced_url.conf", "/etc/pfelk/conf.d/45-cleanup.conf", "/etc/pfelk/conf.d/49-enhanced_private.conf", "/etc/pfelk/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x652a68db@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}

PS: Can you check the new pfelkdocker.zip? It looks like it still has the old docker-compose.yml volumes.

a3ilson pushed a commit that referenced this issue Mar 26, 2021
@a3ilson

a3ilson commented Mar 26, 2021

The zip file was updated and I just tested on my system - working w/o issues.

@jcastillo725
Author

Hmm, that's weird. I downloaded the new one, unzipped it, and still had to copy the databases and patterns from /pfelk to /pfelk/conf.d. Anyway, I was still having the translate errors on service-names-port-numbers.csv and rule-names.csv. I ended up removing filters 35 and 36 and it worked.

I think the errors were weird, though, because I checked both CSVs and there were no extra quotation marks (") whatsoever. It's working now, but unfortunately I can't use the 2 enrichments. I guess it's really ok.

See logs and screenshots below for reference:

Errors:

Error: Translate: Missing or stray quote in line 1 when loading dictionary file at /etc/pfelk/databases/service-names-port-numbers.csv>, :backtrace=>["uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1899:in block in shift'", "org/jruby/RubyArray.java:1809:in each'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1867:in block in shift'", "org/jruby/RubyKernel.java:1442:in loop'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1821:in shift'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:20:in block in read_file_into_dictionary'", "org/jruby/RubyIO.java:3511:in foreach'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:18:in read_file_into_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:101:in merge_dictionary'", "org/jruby/RubyMethod.java:115:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:66:in load_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:53:in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:19:in create'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/translate.rb:166:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in block in register_plugins'", "org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:586:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:240:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in block in start'"], "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.conf", "/etc/pfelk/conf.d/02-types.conf", "/etc/pfelk/conf.d/03-filter.conf", "/etc/pfelk/conf.d/05-apps.conf", "/etc/pfelk/conf.d/20-interfaces.conf", "/etc/pfelk/conf.d/30-geoip.conf", "/etc/pfelk/conf.d/35-rules-desc.conf", "/etc/pfelk/conf.d/36-ports-desc.conf", "/etc/pfelk/conf.d/37-enhanced_user_agent.conf", "/etc/pfelk/conf.d/38-enhanced_url.conf", "/etc/pfelk/conf.d/45-cleanup.conf", "/etc/pfelk/conf.d/49-enhanced_private.conf", "/etc/pfelk/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x75350788@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}

[ERROR] 2021-03-26 15:24:19.829 [[pfelk]-pipeline-manager] javapipeline - Pipeline error {:pipeline_id=>"pfelk", :exception=>#<LogStash::Filters::Dictionary::DictionaryFileError: Translate: Missing or stray quote in line 1 when loading dictionary file at /etc/pfelk/databases/rule-names.csv>, :backtrace=>["uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1899:in block in shift'", "org/jruby/RubyArray.java:1809:in each'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1867:in block in shift'", "org/jruby/RubyKernel.java:1442:in loop'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/csv.rb:1821:in shift'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:20:in block in read_file_into_dictionary'", "org/jruby/RubyIO.java:3511:in foreach'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/csv_file.rb:18:in read_file_into_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:101:in merge_dictionary'", "org/jruby/RubyMethod.java:115:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:66:in load_dictionary'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:53:in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/dictionary/file.rb:19:in create'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-translate-3.2.3/lib/logstash/filters/translate.rb:166:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in block in register_plugins'", "org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:586:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:240:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in block in start'"], "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.conf", "/etc/pfelk/conf.d/02-types.conf", "/etc/pfelk/conf.d/03-filter.conf", "/etc/pfelk/conf.d/05-apps.conf", "/etc/pfelk/conf.d/20-interfaces.conf", "/etc/pfelk/conf.d/30-geoip.conf", "/etc/pfelk/conf.d/35-rules-desc.conf", "/etc/pfelk/conf.d/37-enhanced_user_agent.conf", "/etc/pfelk/conf.d/38-enhanced_url.conf", "/etc/pfelk/conf.d/45-cleanup.conf", "/etc/pfelk/conf.d/49-enhanced_private.conf", "/etc/pfelk/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x68a190ce@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}

Renamed the extensions of filters 35 and 36 from .conf to .copy
[screenshot]

Logstash working after the 2 filters were disabled.
[screenshot]

Thanks for the support!

a3ilson pushed a commit that referenced this issue Mar 26, 2021
@a3ilson

a3ilson commented Mar 26, 2021

Let me test it again on a fresh instance (purging all Docker containers/volumes)... I had issues before with the database look-ups, but my current setup is running fine with them.

@jcastillo725
Author

Sorry, this is off topic: for Suricata, do I need to install syslog-ng, or will the logs be sent to the firewall's system logs? Currently, I don't see Suricata logs coming in.

@a3ilson

a3ilson commented Mar 27, 2021

@jcastillo725 - that depends: are you running OPNsense or pfSense?

OPNsense is the simplest, as it utilizes syslog-ng natively.
pfSense is a bit wonky, but this guide should help in getting it configured and set up.

@jcastillo725
Author

tcp("logstash.local"
do i change this to the ip since its on another host? I think I did but still not getting logs

@a3ilson

a3ilson commented Mar 27, 2021

This screenshot is from issue #276 running pfSense 2.5.0. I currently use OPNsense but I know multiple people have been able to get it working with the provided wiki instructions.

I would apply the following:

   # syslog-ng destination block; the name "d_pfelk" is illustrative, see the pfelk wiki for the full config
   destination d_pfelk {
      tcp("logstash.local"
      port(5040)
      );
   };

and replace logstash.local with the hostname or IP of the host where pfelk is installed.

[screenshot]

@jcastillo725
Author

I ended up using OPNsense instead and got all the logs. Have you had a chance to test the lookups on a fresh instance?

@a3ilson

a3ilson commented Mar 28, 2021

I am currently running with a fresh instance of:

  • docker-compose.yml from here
  • pfelk files from here

I plan to merge the two repos and squash the docker repo but that'll be a future endeavor (i.e. once I have additional free time). Let me know if you need or want assistance with setting up this method.

@jcastillo725
Author

And you're not getting any errors for service-names-port-numbers.csv and rule-names.csv? I will try that on another VM but I think we can close this issue now.
