Logstash modules #6851

Closed
suyograo opened this Issue Mar 29, 2017 · 40 comments

suyograo commented Mar 29, 2017

Introduction

The idea is to explore modules for Logstash, similar to the Filebeat modules feature released in 5.3.0. Modules contain packaged Logstash configuration, Kibana dashboards, and other meta files to ease setting up the Elastic Stack for specific use cases or data sources. The goal of these modules is to provide an end-to-end, 5-minute getting-started experience for a user exploring a data source, without having to learn the different parts of the stack up front.

Data sources

The initial goal is to focus on data sources that connect over the network, to complement the modules in Beats.

Behavior

Users interact with modules in the following ways:

Command Line

bin/logstash --modules netflow

This instructs Logstash to use a module and be ready to accept data or start pulling from a data source. Internally, the modules subcommand should:

  1. Load the Kibana dashboard to Elasticsearch .kibana index. If there exists a dashboard with the same name, it should be overwritten.
  2. Load the ES template for this module if not existing already.
  3. Load other configuration files for the Stack components to their respective ES indexes or directly using the API.
  4. Start Logstash with the module's config. In this mode the configuration file is not persisted to disk and is loaded into memory (as with the -e option).
  5. When modules are used, -f or loading an arbitrary configuration from a file source is disabled. This avoids accidental inclusion of other configurations that could cause the modules to not work correctly.
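The dashboard and template steps above could be sketched as follows. This is a hypothetical illustration, not the actual implementation: the helper names, the `.kibana` document path, and the `_template` path convention are assumptions based on how Elasticsearch 5.x stores Kibana dashboards.

```ruby
require "json"
require "net/http"
require "uri"

# Hypothetical sketch of module bootstrap: in ES 5.x, Kibana dashboards are
# documents in the .kibana index (overwritten by id, per step 1) and index
# templates live under _template (only created if missing, per step 2).
def kibana_dashboard_path(module_name)
  "/.kibana/dashboard/#{module_name}"
end

def template_path(module_name)
  "/_template/#{module_name}"
end

# PUT a dashboard document; an existing dashboard with the same id is replaced.
def import_dashboard(es_url, module_name, dashboard_json)
  uri = URI.join(es_url, kibana_dashboard_path(module_name))
  req = Net::HTTP::Put.new(uri, "Content-Type" => "application/json")
  req.body = JSON.generate(dashboard_json)
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
end
```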

You can load multiple modules as such:

bin/logstash --modules netflow,foo

File based configuration

If you prefer to use file based configuration for enabling modules (in lieu of the CLI), you can use logstash.yml.

modules:
- name: netflow
- name: foo

Overriding options

The out-of-the-box module configuration assumes Elasticsearch is installed on the same host as Logstash, and that the data sources are local. You can customize this stock configuration for your environment.

For example, you can point to a remote ES host when running the module:

bin/logstash --modules netflow -M "netflow.var.elasticsearch.host=es.mycloud.com"
bin/logstash --modules netflow -M "netflow.var.tcp.port=5606"

Each module will define its own variables that users can override. These are lightweight overrides; that is, we wouldn't expose the entire LS pipeline to be overridden.
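To illustrate, such a `-M` override string could be split into module name, variable path, and value roughly like this. The parsing rule (first dot-separated segment is the module name) is an assumption for the sketch, not the shipped implementation:

```ruby
# Split an override like "netflow.var.tcp.port=5606" into its parts.
# The segment before the first "." is the module name; the rest of the
# key is the variable path. This rule is illustrative.
def parse_module_override(arg)
  key, value = arg.split("=", 2)
  module_name, var_path = key.split(".", 2)
  { module: module_name, key: var_path, value: value }
end

parse_module_override("netflow.var.tcp.port=5606")
# => {:module=>"netflow", :key=>"var.tcp.port", :value=>"5606"}
```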

In the logstash.yml

modules:
- name: netflow
   var.output.elasticsearch.host: "es.mycloud.com"
   var.output.elasticsearch.user: "foo"
   var.output.elasticsearch.password: "password"
   var.input.tcp.port: 5606

The variables will then be injected into the Logstash pipeline.
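As an illustration of that injection, a module's pipeline template could be rendered with the merged variables. The template string and variable names below are invented for the sketch; the real module templates are the `.conf.erb` files in the module gem:

```ruby
require "erb"

# Render a pipeline template with module variables injected.
# Template and variable names are illustrative only.
template = <<~ERB
  input { tcp { port => <%= vars["var.input.tcp.port"] %> } }
  output { elasticsearch { hosts => ["<%= vars["var.output.elasticsearch.host"] %>"] } }
ERB

vars = {
  "var.input.tcp.port"            => 5606,
  "var.output.elasticsearch.host" => "es.mycloud.com",
}

pipeline_config = ERB.new(template).result(binding)
```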

Persisting configuration

For v2, we can expose an additional option which can persist the Logstash pipeline to a file. Users can then use this as a template and extend it to their needs.

bin/logstash --modules netflow -M "netflow.var.tcp.port=5606" --save-configs

Internal implementation details

Modules will be implemented as universal plugins which provide access to core functionality directly. A new base Module class will be created which will contain logic to upload the Kibana dashboards, templates, and other configuration files. New modules will extend from this base class and get most of the bootstrapping for free. Module plugins will be installed OOTB with LS artifacts.

The file structure of modules:

logstash-module-netflow
├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   └── netflow.json
│   │   ├── searches
│   │   └── vizualization
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   ├── logstash
│   │   └── modules
│   │       └── netflow.rb
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

This structure also allows new modules to be created and packaged outside of Logstash core. They can then be installed like any other plugin:

bin/logstash-plugin install logstash-module-amazing

@untergeek @ph @acchen97 collaborated on this design.

Progress

Module

  • Add CLI flags to support modules
  • Add modules definition in logstash.yml
  • Add variables and support for overriding variables.
  • Importer for shipping Kibana dashboards and mapping to ES
  • ES mapping template - review template and also remove index settings (i.e. shard allocation) at the top
  • Integrate demo dashboards and mapping.
  • Documentation changes to support modules (post feature freeze). Mark as beta.

Code Delivery

  • Merge feature/module branch to master
  • backport to 5.5
  • Add CEF module to master and 5.5
  • Change default_index_content_id = @settings.fetch("index_pattern.kibana_version", "5.4.0") to use 5.5.0
  • Resolve issue whether to use module name prefix for fields.

Dashboards

  • Network and firewall - overview dashboard (Nic)
  • Network and firewall - suspicious activity dashboard (Nic)
  • Endpoint - overview dashboard (Samir)
  • Endpoint - Windows specific dashboard (Samir)
  • DNS - overview dashboard (Nic)
  • Ensure the navigation pane and top row of overview metrics are consistent across all dashboards
  • Per dashboard use case summary documentation (post feature freeze)
Member

ph commented Mar 29, 2017

@suyograo

There is a small typo in the yaml configuration: the var.* keys need to be at the same level as the name. It should be something like this:

modules:
- name: netflow
   var.elasticsearch.host: "es.mycloud.com"
   var.tcp.port: 5606

Also, should we be more explicit in the variables, in case we use multiple kinds of plugins with the same name? This might also help with the generation of the pipeline template:

 var.output.elasticsearch.host: "es.mycloud.com"
 var.input.tcp.port: 5606

Member

ph commented Mar 29, 2017

I did another pass at the plugin structure. To be consistent with the existing plugins and rubygems, I think we should go with this structure:

logstash-module-netflow
├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   └── netflow.json
│   │   ├── searches
│   │   └── vizualization
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   ├── logstash
│   │   └── modules
│   │       └── netflow.rb
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

Note: the gemspec needs to be changed to make sure we include the files in configuration/*

Edited: since we want to leverage the universal plugin, we could have multiple different modules (apache, netflow) in a single gem.
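A sketch of the gemspec change the note above refers to, using `Dir.glob` to ship the `configuration/` tree with the gem. All metadata values here are placeholders:

```ruby
# logstash-module-netflow.gemspec (sketch; metadata values are placeholders)
spec = Gem::Specification.new do |s|
  s.name    = "logstash-module-netflow"
  s.version = "0.1.0"
  s.summary = "Netflow module for Logstash (illustrative)"
  s.authors = ["Elastic"]
  # Ship the config tree (dashboards, templates, pipeline .erb) with the gem,
  # not just the Ruby sources under lib/.
  s.files = Dir.glob(["lib/**/*.rb", "configuration/**/*"])
end
```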

Member

ph commented Mar 29, 2017

One thing I would like to clarify: when I see the following in the description, does that mean the plugin gets installed in $LOGSTASH_HOME and not in the vendor/bundle directory like any other plugin?

logstash/
  modules/
Member

ph commented Mar 29, 2017

Really high level todo of required tasks:

  • Add a new plugin type: module
  • Add internal hooks to make them available in LS, so we can validate them from the CLI or the configuration.
  • Create a subclass of LogStash::UniversalPlugin (see this file) that will take care of the boilerplate for the module: register the right hooks, read the ERB, create the pipeline. (This is just for easy development.)
  • Maybe some changes are needed in the Settings to make the user experience and the validation better.
  • Update the plugin generator with this new type?
Contributor

breml commented Mar 29, 2017

I kind of like the idea, because it helps new users to get started quickly and to have a good end-2-end experience (including Kibana) right from the start.

But if one has been working with Logstash for some time, other needs come to mind. In the last few months I integrated proper log processing for several daemons we use in our setup (e.g. consul, bosh director, mongodb, etc.). One day a colleague who helped me implement the LS config asked whether there is something like a "market place" where LS config snippets are exchanged, which would help LS users ramp up their LS config much quicker. (Does everyone need to reinvent the wheel?)

In our setup we use Filebeat to ship the logs to Logstash (via a MQ), which does the heavy lifting (in terms of parsing); finally the logs get stored in Elasticsearch and viewed with Kibana. For this use case it would be very beneficial if we could use some kind of "modules" or "config snippets" in combination with standard Kibana dashboards. But we would still need the possibility to modify the LS config to our needs (like forwarding some logs to other systems).

Additional notes to the proposal:

  • Modules should somehow be compatible with Beats, so that a chain like Filebeat -> Redis -> Logstash -> Elasticsearch is possible and the automatic deployment of the Kibana dashboards still works.
  • Where are the tests for the module? How would it be tested?
  • How to act on the overlap between Beats and Logstash (e.g. Apache log file is possible with both), is it possible to share the Kibana dashboards?
Member

suyograo commented Mar 30, 2017

Also should we be more explicit in the variables, If we use multiple kind of plugin with the same name, also this might help with the generation of the pipeline template?

@ph good idea! will update the example.

One thing I would like to clarify, when I see the following in the description does that mean the plugin get installed in the $LOGSTASH_HOME and not in the vendor/bundle directory like any other plugin?

This should get installed like any other plugin.

The other thing I forgot to add here: make sure users can create modules without knowing too much about Ruby or gem structure. Any ideas, @ph? They should just deal with configurations, and everything else should be magic. One idea is to add this to the plugin generator, but they would still have to hack some Ruby to put together a module. Note: this is not required for v1, but something we should keep in mind.

Member

ph commented Mar 30, 2017

@breml +1 for testing. I will give it some thought; we need to make it reusable in other places. :)

The other thing I forgot to add here is to make sure users can create modules without knowing too much about ruby or gem structure. Any ideas @ph. They should just deal with configurations and everything else should be magic. One idea is to add this to plugin-generator, but they still have to hack some ruby to put together a module. Note, this is not required for v1, something we should keep in mind.

That's a good question; I think it depends how flexible we want to be. We could discuss again whether we really need to make them actual gems rather than simple directories that we drop into a Logstash instance.

Member

suyograo commented Mar 30, 2017

one day one of my colleagues, which helped me in implementing the LS config, approached me and asked if there is no such thing like a "market place", where LS config snippets are exchanged, which then would help LS users to ramp up with the LS config much quicker.

@breml yes, I think of modules as a foundation that eventually gets us to a "market place". Indeed, this is a common request, and I agree with the concept of config sharing. This is more than configs, though, with Kibana dashboards and other stack-related configuration.

Where are the tests for the module? How would it be tested?

Each module should have a test to take the LS config, use an input text and assert against an expected json. Something like filter-verifier would work really well here.

How to act on the overlap between Beats and Logstash (e.g. Apache log file is possible with both), is it possible to share the Kibana dashboards?

Right, LS would optimize for data sources that Beats don't address. This is where services that can push to TCP/syslog etc. come into the picture. For sharing Kibana dashboards, we should be able to migrate from ingest to Logstash config; that is something we are brainstorming currently.
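The fixture-test idea mentioned above (feed a sample line through the module's pipeline and compare against an expected JSON document) might look like this in spirit. The `parse` lambda is a stand-in for the real Logstash pipeline, and the field names are invented for the sketch:

```ruby
require "json"

# Sketch of a module fixture test: run a sample input line through a parser
# (stand-in for the module's Logstash pipeline) and compare the result with
# an expected JSON document. The CSV-style parsing here is illustrative.
parse = lambda do |line|
  src, dst, bytes = line.split(",")
  { "source" => src, "destination" => dst, "bytes" => Integer(bytes) }
end

input    = "10.0.0.1,10.0.0.2,512"
expected = JSON.parse('{"source":"10.0.0.1","destination":"10.0.0.2","bytes":512}')

actual = parse.call(input)
```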

Member

ph commented Apr 11, 2017

@untergeek and I discussed this on Zoom:

  • Find a way to define configuration of the module in the template file. (ERB or something else.)
  • Make sure people don't have to write ruby code. (MAIN GOAL)
  • The module class will make some assumptions about the structure of the file on disk to do the right call.
  • Think about integration tests (rats)
  • We will use the Agent's register_pipeline as the way to create live configuration.
  • Need to find a way to make the logstash.yml more flexible to support settings from modules.
  • Add an ID to the module config block in the logstash.yml.
Member

ph commented Apr 11, 2017

Other discussions included allowing the generator to take arguments for the template/dashboard and the pipeline config.

Member

ph commented May 2, 2017

Notes from our discussion and action items.

Open questions to clarify:

  • Discuss limitations or obstacles for creating a 5.x version of this feature.
  • If we support 5.x, maybe only allow 1 module to be enabled?

Actions:

@untergeek

  • Experiment with the universal plugin
  • Create the actual module source that works with the plugins and settings
  • Modify the settings to allow the module settings

@ph:

Member

ph commented May 2, 2017

We have discussed a crazy idea so people don't have to deal with an external settings file: use the ERB file itself as the source of configuration.

# cef.conf.erb
input {
  tcp {
    port => <%= setting("tcp.port", 45) %>
    host => <%= setting("tcp.host", "localhost") %>
    type => <%= setting("tcp.type", "server", ["server", "firewall"]) %>
  }
}
#...

With that configuration we could use something like this. (untested code)

class SettingsExtractor
  def initialize(template)
    @template = File.read(template)
    @configs = []
  end

  # Called from within the ERB template: convert the key/value (and optional
  # allowed choices) into a LogStash::Setting,
  # e.g. LogStash::Setting::Boolean.new("ssl", value)
  def setting(key, value, choices = nil)
    @configs << [key, value, choices]
    value
  end

  def add_setting(setting)
    @configs << setting
  end

  def settings
    ERB.new(@template).result(binding)
    @configs
  end
end

SettingsExtractor.new("cef.conf.erb").settings

We can also allow more advanced users to use the settings class directly.

add_setting(LogStash::Setting::String.new("log.level", "info", true, ["fatal", "error", "warn", "debug", "info", "trace"]))
Member

ph commented May 2, 2017

I've proposed to create a general module class that we could use to reduce the boilerplate required in the gems, since all module plugins will basically have the same structure.

# lib/logstash_registry.rb
LogStash::PLUGIN_REGISTRY.add(:modules, "newmod", LogStash::Modules::General.new("newmod", File.join(File.dirname(__FILE__), "..")))

A nice side effect of this strategy: we could allow people to create modules in a specific Logstash directory, loop through the module dir at boot time, and add them to the registry.

Dir.glob("modules/*") do |directory|
  module_name = File.basename(directory)
  LogStash::PLUGIN_REGISTRY.add(:modules, module_name, LogStash::Modules::General.new(module_name, directory))
end
Member

suyograo commented May 2, 2017

If we support 5, maybe only able to have 1 module enabled?

Based on our discussion, we are only supporting a single module running at a time on Logstash. In other words, 1 module is equivalent to 1 pipeline, which is equivalent to running bin/logstash -f cef.conf.

ph added a commit to ph/logstash that referenced this issue May 2, 2017

Allow to ask the registry to get a list of plugin klass of a specific type

This expose some of the internal of the registry to the outside world to
allow other part of the system to retrieves plugins.

This change was motivated by elastic#6851 to retrieve the installed list of
modules.

elasticsearch-bot pushed a commit that referenced this issue May 3, 2017

Allow to ask the registry to get a list of plugin klass of a specific type

This expose some of the internal of the registry to the outside world to
allow other part of the system to retrieves plugins.

This change was motivated by #6851 to retrieve the installed list of
modules.

Fixes #7011

elasticsearch-bot pushed a commit that referenced this issue May 3, 2017

Allow to ask the registry to get a list of plugin klass of a specific type

This expose some of the internal of the registry to the outside world to
allow other part of the system to retrieves plugins.

This change was motivated by #6851 to retrieve the installed list of
modules.

Fixes #7011
Member

ph commented May 10, 2017

After talking with @untergeek: to keep the first version simple, we will drop the settings extractor class and just pass a config string to the pipeline; the pipeline will validate the settings like a normal pipeline.

Contributor

guyboertje commented May 19, 2017

@untergeek @suyograo @ph
What do we think about adding a dashboards section to logstash.yml, like Beats? Maybe some of this info. I would like to not hard-code the kibana index string.

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# The directory from where to read the dashboards. It is used instead of the URL
# when it has a value.
#setup.dashboards.directory:

# The file archive (zip file) from where to read the dashboards. It is used instead
# of the URL when it has a value.
#setup.dashboards.file:

# If this option is enabled, the snapshot URL is used instead of the default URL.
#setup.dashboards.snapshot: false

# The URL from where to download the snapshot version of the dashboards. By default
# this has a value which is computed based on the Beat name and version.
#setup.dashboards.snapshot_url

# In case the archive contains the dashboards from multiple Beats, this lets you
# select which one to load. You can load all the dashboards in the archive by
# setting this to the empty string.
#setup.dashboards.beat: beatname

# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
#setup.dashboards.kibana_index: .kibana

# The Elasticsearch index name. This overwrites the index name defined in the
# dashboards and index pattern. Example: testbeat-*
#setup.dashboards.index:
ph (Member) commented May 19, 2017

@guyboertje I think it makes perfect sense to not hard-code anything, or at least to provide a way to override it. The only thing I am not sure about: these look like global settings, right? Should they be under modules?

untergeek (Member) commented May 19, 2017

They should be nested in the arrays under modules:

modules:
  - name: example
    var.plugintype.pluginname.key: value
  - name: foo
    var.plugintype.pluginname.key: value

Somewhere with the "vars".
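As a rough illustration of how such an entry could be consumed, here is a hypothetical sketch (not the actual Logstash implementation) that flattens one `modules` array entry, splitting the `var.plugintype.pluginname.key` names into nested per-plugin settings:

```ruby
require "yaml"

yaml = <<~YML
  modules:
    - name: example
      var.input.tcp.port: 5000
      var.output.elasticsearch.hosts: "localhost:9200"
YML

settings = YAML.safe_load(yaml)["modules"].first

# Strip the "var." prefix and group the rest by plugin type and plugin name.
vars = settings
  .reject { |k, _| k == "name" }
  .each_with_object({}) do |(key, value), acc|
    _, plugin_type, plugin_name, setting = key.split(".", 4)
    ((acc[plugin_type] ||= {})[plugin_name] ||= {})[setting] = value
  end

puts vars.inspect
# => {"input"=>{"tcp"=>{"port"=>5000}}, "output"=>{"elasticsearch"=>{"hosts"=>"localhost:9200"}}}
```

The nesting (plugin type, then plugin name, then setting) mirrors the dotted key scheme in the YAML above; the exact grouping used in Logstash may differ.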

untergeek (Member) commented May 19, 2017

My code gets all this and puts the entire array in a @settings key.

guyboertje (Contributor) commented May 19, 2017

And my code modifies yours :-)

untergeek (Member) commented May 19, 2017

Please check my most recent PR for some changes.

dedemorton referenced this issue May 23, 2017: Logstash 5.5 doc changes #7188 (closed)
guyboertje (Contributor) commented May 24, 2017

Chaps, while checking the Security Analytics examples, I noticed that there can be more than one dashboard.
So I have coded for a file called dashboard/<module_name>.json - it will contain this:

["Dashboard-File-1", "Dashboard-File-2"]

Then there should be those two files in the dashboards folder.
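The manifest scheme described above can be sketched as follows. This is illustrative only (the file names and loader are hypothetical, not the shipped code): the dashboard/<module_name>.json manifest lists the dashboard files, and a loader resolves it into the dashboard documents themselves.

```ruby
require "json"
require "tmpdir"
require "fileutils"

titles = nil
Dir.mktmpdir do |root|
  base = File.join(root, "configuration", "kibana", "dashboard")
  FileUtils.mkdir_p(base)

  # The manifest names each dashboard file (minus the .json extension).
  File.write(File.join(base, "netflow.json"),
             JSON.generate(["Dashboard-File-1", "Dashboard-File-2"]))
  File.write(File.join(base, "Dashboard-File-1.json"), JSON.generate("title" => "Dash 1"))
  File.write(File.join(base, "Dashboard-File-2.json"), JSON.generate("title" => "Dash 2"))

  # Loader: read the manifest, then read each dashboard document it names.
  manifest   = JSON.parse(File.read(File.join(base, "netflow.json")))
  dashboards = manifest.map { |name| JSON.parse(File.read(File.join(base, "#{name}.json"))) }
  titles     = dashboards.map { |d| d["title"] }
end

puts titles.inspect
# => ["Dash 1", "Dash 2"]
```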

guyboertje (Contributor) commented May 24, 2017

Need to discuss version pinning WRT LS Kibana 5 vs 6, where dashboards may change (API).

acchen97 (Contributor) commented May 24, 2017

@guyboertje yes, multiple dashboards should be expected. If there are Kibana API changes, we might need a different set of dashboards per Kibana major.

guyboertje (Contributor) commented May 25, 2017

@acchen97 - I get the possible need for multiple via versioning, but I'm talking about multiple active dashboards, e.g. Apache Access vs Apache Errors.

guyboertje (Contributor) commented May 25, 2017

@acchen97 - Please confirm this module feature will not need to work with Kibana < 5.5.

guyboertje (Contributor) commented May 25, 2017

@suyograo, @untergeek - Filebeat modules had to do a Kibana index hack to overcome elastic/beats-dashboards#94. I coded for it, but it failed to be applied in Kibana 5.4.
With this in mind, do you think we should plan for a folder in the LS config that contains patches for the .kibana index, by version?

guyboertje (Contributor) commented May 25, 2017

Also, this bug elastic/kibana#9571 makes modules unusable ATM.
With Samir I also got it when we exported all saved objects from their demo Kibana (5.4.0) and then imported them into a cleaned-up Elasticsearch.

guyboertje (Contributor) commented May 25, 2017

I talked with PH; we discussed the Modules namespace and folder for the classes of this feature.
The current file modules.rb needs to be renamed. I am going with Scaffold for now.

acchen97 (Contributor) commented May 26, 2017

@guyboertje Beats module dashboards are compatible across Kibana 5.x, so I think we should strive for similar compatibility. Is it a significant amount of work to make it work across Kibana versions?

Also, this bug elastic/kibana#9571 makes modules unusable ATM.

Is this a blocker?

suyograo (Member) commented May 26, 2017

@guyboertje I'm curious how Beats modules handle the importing of dashboards with this Kibana issue. Is it only that we run into this because we used a dashboard JSON from an older version?

I would say let's focus on >= v5.5 of the stack for now. We can deal with the < 5.4 Kibana issue later on.

guyboertje (Contributor) commented May 26, 2017

@suyograo @acchen97
It's a blocker for usable modules in both Filebeat and us. We can still ship it, but users will complain.
I could not get any of the workarounds suggested in the Kibana issue to work for me, but Samir or Dale might. The fault is seen when a working Kibana state is exported from one 5.4 instance and imported into a shiny new Kibana 5.4 instance.

My tests used the exported CEF demo Kibana state from Kibana 5.1 (so Samir tells me).

@acchen97
To support multiple Kibana dashboards per version is not hard; we can do it with small modifications. We need a new file structure, with dashboard/[module].json holding a hash rather than an array, and a means for the system (or the user via logstash.yml) to tell us which version to import.
We don't have to cater for this now, but using plugin-api pinning we can implement it later.

FROM

GEM file structure
logstash-module-netflow
├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   ├── netflow.json (contains '["dash1", "dash2"]')
│   │   │   ├── dash1.json ("panelsJSON" contains refs to visualization panels 1, 2 and search 1)
│   │   │   └── dash2.json ("panelsJSON" contains refs to visualization panel 3 and search 2)
│   │   ├── search
│   │   │   ├── search1.json
│   │   │   └── search2.json
│   │   └── visualization
│   │       ├── panel1.json
│   │       ├── panel2.json
│   │       └── panel3.json
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec

TO

GEM file structure
logstash-module-netflow
├── configuration
│   ├── elasticsearch
│   │   └── netflow.json
│   ├── kibana
│   │   ├── dashboard
│   │   │   ├── netflow.json (contains '{"v5": ["dash1", "dash2"], "v6": ["dash1", "dash2"]}')
│   │   │   ├── v5
│   │   │   │   ├── dash1.json ("panelsJSON" contains refs to visualization panels 1, 2 and search 1)
│   │   │   │   └── dash2.json ("panelsJSON" contains refs to visualization panel 3 and search 2)
│   │   │   └── v6
│   │   │       ├── dash1.json ("panelsJSON" contains refs to visualization panels 1, 2 and search 1)
│   │   │       └── dash2.json ("panelsJSON" contains refs to visualization panel 3 and search 2)
│   │   ├── search
│   │   │   ├── v5
│   │   │   │   ├── search1.json
│   │   │   │   └── search2.json
│   │   │   └── v6
│   │   │       ├── search1.json
│   │   │       └── search2.json
│   │   └── visualization
│   │       ├── v5
│   │       │   ├── panel1.json
│   │       │   ├── panel2.json
│   │       │   └── panel3.json
│   │       └── v6
│   │           ├── panel1.json
│   │           ├── panel2.json
│   │           └── panel3.json
│   └── logstash
│       └── netflow.conf.erb
├── lib
│   └── logstash_registry.rb
└── logstash-module-netflow.gemspec
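Resolving the versioned manifest proposed above could look roughly like this. A hypothetical sketch, assuming the manifest maps a Kibana major ("v5", "v6", ...) to dashboard file names, and that the loader derives the key from the running Kibana version:

```ruby
require "json"

# The proposed manifest: Kibana major version => dashboard file names.
manifest = JSON.parse('{"v5": ["dash1", "dash2"], "v6": ["dash1", "dash2"]}')

# Derive the manifest key ("v5", "v6", ...) from a full Kibana version string.
def version_key(kibana_version)
  "v#{kibana_version.split('.').first}"
end

key   = version_key("5.5.0")
files = manifest.fetch(key, [])  # no dashboards if the version is unsupported

# Dashboard files for that version live under dashboard/<key>/ in the gem.
paths = files.map { |name| File.join("configuration", "kibana", "dashboard", key, "#{name}.json") }
puts paths.first
# => "configuration/kibana/dashboard/v5/dash1.json"
```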
untergeek (Member) commented May 26, 2017

@guyboertje So v5.5 and v6 are different enough that we need to support multiple JSON structures? Okay, I'm cool with that. We should plan for that, since we're targeting v5.5 anyway, and v6 and v7 might also have different structures, requiring multiple version support as well.

My worry is that potential minor version incompatibilities in Kibana might make it even more complex 😟

guyboertje (Contributor) commented May 26, 2017

@untergeek - I'm not saying that v5.5 and 6.0 will be different - just that we can easily accommodate it if it must be done.

After a bit of thought, maybe the module gemspec should not reference the plugin API, because the plugin API is not a contract that these gems agree to.

It's actually the Scaffold class that determines what file structures it expects the gem to support - implying that we need a modules_api meta gem.

suyograo (Member) commented May 26, 2017

The fault is seen when a working Kibana state is exported from one 5.4 instance and imported into a shiny new Kibana 5.4 instance

@guyboertje can we create a new state for 5.5? We are only creating a module for 5.5, so why bother importing from 5.4 or 5.1? This way users will get a clean experience with 5.5 (until we fix the blocker in Kibana) -- WDYT?

It would suck if we release this for 5.5 and users complain immediately.

acchen97 (Contributor) commented May 26, 2017

@guyboertje can we create a new state for 5.5? We are only creating a module for 5.5, so why bother importing from 5.4 or 5.1?

It would suck if we release this for 5.5 and users complain immediately.

+1 on just focusing on Kibana 5.5 for this initial release; we can work on 5.4 afterwards. FYI - as we're planning to leverage time series visualizations in the dashboards, the farthest back we can go regarding full compatibility is 5.4, since that's when the feature was introduced.

guyboertje (Contributor) commented May 30, 2017

@suyograo @acchen97
I don't believe there is anything specific about Kibana 5.5 regarding dashboard and visualization formats. I am checking in the Kibana Slack that 5.1 or 5.4 dashboards are expected to work with 5.5.

guyboertje (Contributor) commented May 31, 2017

We need to import an index-pattern too.
The mechanism is different from Beats, where they dynamically create the index-pattern JSON from a list of fields; we will use a static file provided by the "module maintainer".
We also need to set the defaultIndex in .kibana/config/%{index_pattern.kibana_version} - this is a pain because we need to do this per version, until the Kibana team implement the "pick a default index pattern" feature that is on the cards.
NOTE: this is done locally.
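Setting defaultIndex per Kibana version could be sketched as below. This is an assumption-laden illustration, not the shipped code: in Kibana 5.x the settings live in a config document in the .kibana index whose document id is the Kibana version, so updating it is a plain Elasticsearch 5.x update request. We only build the request here; sending it would need a live Elasticsearch.

```ruby
require "json"
require "net/http"
require "uri"

kibana_version = "5.5.0"      # illustrative: the target Kibana's version
index_pattern  = "netflow-*"  # illustrative: the module's index pattern

# ES 5.x update API: POST /<index>/<type>/<id>/_update with a partial doc.
uri = URI("http://localhost:9200/.kibana/config/#{kibana_version}/_update")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
req.body = JSON.generate("doc" => { "defaultIndex" => index_pattern })

puts uri.path
# => "/.kibana/config/5.5.0/_update"
```

To actually send it: `Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }`.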

guyboertje (Contributor) commented May 31, 2017

Implemented a dynamic search-file import, based on any references to the field "savedSearchId" in any found visualization JSON files.
Done.
Next: submit the final branch PR before the branch-to-master PR.
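The savedSearchId scan described above amounts to collecting the saved searches that a module's visualizations actually reference, so that only those search files are imported. A minimal sketch (the inline JSON documents are hypothetical stand-ins for the visualization files):

```ruby
require "json"

visualizations = [
  JSON.generate("title" => "panel1", "savedSearchId" => "search1"),
  JSON.generate("title" => "panel2"),  # no saved-search reference
  JSON.generate("title" => "panel3", "savedSearchId" => "search2"),
]

# Collect the distinct search ids referenced by any visualization;
# panels without a "savedSearchId" field are skipped via compact.
search_ids = visualizations
  .map { |raw| JSON.parse(raw)["savedSearchId"] }
  .compact
  .uniq

puts search_ids.inspect
# => ["search1", "search2"]
```

Each id in `search_ids` then maps to a search/<id>.json file to include in the import.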

guyboertje (Contributor) commented Jun 6, 2017

Done - for now.
