
Refactor of the Agent and the source loading #6632

Closed

Conversation

Contributor

@ph ph commented Feb 2, 2017

Note for reviewers:
This PR includes both the changes to the source loading, to make it a bit more flexible, and the changes to allow universal plugins to add new sources.

ref: #6514 and #6620

There is a lot of new code for the following reasons:

  1. It includes the code for the source loader.
  2. It includes a refactor of the agent_spec test.

Our current implementation of the Agent in Logstash does a bit too much:

  1. Start the webserver
  2. Start the metrics
  3. Fetch the configs
  4. Start pipeline
  5. Stop pipeline
  6. Reload pipeline
  7. Metrics and action code are tightly coupled

This PR attempts to clean up the execution model and how things are linked together (items 3 to 7).

In this code, the agent never interacts directly with the configuration files; instead it uses a SourceLoader class. For the Agent, the SourceLoader is the source of truth:
if a pipeline is not returned by the loader, the agent should not run it.
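
As a rough sketch of that contract (the class names and return shape below are illustrative assumptions, not the actual PR code), the loader aggregates pipeline configs from every registered source, and the agent only acts on what `fetch` returns:

```ruby
# Hypothetical sketch of the SourceLoader contract; not the PR's code.
class SourceLoader
  def initialize(*sources)
    @sources = sources
  end

  # Aggregate the pipeline configs from every registered source.
  # Returns [:success, configs], or [:failure, error] so the agent can
  # skip this iteration and retry later.
  def fetch
    [:success, @sources.flat_map(&:pipeline_configs)]
  rescue StandardError => e
    [:failure, e]
  end
end

# A trivial source, standing in for a file-based or plugin-provided source.
class StaticSource
  def initialize(pipeline_configs)
    @pipeline_configs = pipeline_configs
  end

  attr_reader :pipeline_configs
end

loader = SourceLoader.new(StaticSource.new([:main, :heavy_work]),
                          StaticSource.new([:foobar]))
status, pipelines = loader.fetch
# status => :success, pipelines => [:main, :heavy_work, :foobar]
```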

Testing Strategy

I tried to make minimal changes to the current agent_spec file, but I've decided to create two other specs that use less mocking and threading.

Scenario with auto-reload on

  • Logstash starts
  • The Agent asks the SourceLoader to return the pipelines: [:main, :heavy_work, :foobar]
  • If the fetch is successful it continues, otherwise it waits for the next iteration
  • The Agent checks the currently running pipelines: []
  • The Agent asks the StateResolver how to converge to the required pipelines; the following actions are scheduled:
    • Start :main
    • Start :heavy_work
    • Start :foobar
  • The Agent executes the actions
  • The Agent records the stats with the result of the actions
  • The Agent sleeps for the reload interval
  • The Agent asks the SourceLoader to return the pipelines: [:heavy_work (config_hash changed), :foobar]
  • If the fetch is successful it continues
  • The Agent checks the currently running pipelines: [:main, :heavy_work, :foobar]
  • The Agent asks the StateResolver how to converge to the required pipelines; the following actions are scheduled:
    • Stop :main
    • Reload :heavy_work (config_hash doesn't match)
  • The Agent executes the actions
  • The Agent records the stats with the result of the actions
  • The Agent sleeps for the reload interval
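
The convergence steps above boil down to a set difference between the desired and the running pipelines. A minimal sketch (the `resolve_actions` helper and the action tuples are assumptions for illustration; the real StateResolver API differs):

```ruby
# Illustrative convergence step: compare the currently running pipelines
# with the ones the loader returned and emit the actions needed to converge.
def resolve_actions(running, desired, changed = [])
  actions = []
  (desired - running).each { |id| actions << [:start, id] }
  (running - desired).each { |id| actions << [:stop, id] }
  (running & desired).each { |id| actions << [:reload, id] if changed.include?(id) }
  actions
end

# First iteration: nothing is running yet.
resolve_actions([], [:main, :heavy_work, :foobar])
# => [[:start, :main], [:start, :heavy_work], [:start, :foobar]]

# Second iteration of the auto-reload scenario: :main disappeared and
# the config_hash of :heavy_work changed.
resolve_actions([:main, :heavy_work, :foobar], [:heavy_work, :foobar], [:heavy_work])
# => [[:stop, :main], [:reload, :heavy_work]]
```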

A normal scenario without auto reload

  • Logstash starts
  • The Agent asks the SourceLoader to return the pipelines: [:main, :heavy_work, :foobar]
  • If the fetch is successful it continues, otherwise it waits for the next iteration
  • The Agent checks the currently running pipelines: []
  • The Agent asks the StateResolver how to converge to the required pipelines; the following actions are scheduled:
    • Start :main
    • Start :heavy_work
    • Start :foobar
  • The Agent executes the actions
  • The Agent records the stats with the result of the actions

Both of these scenarios use the same code path; the only difference is the refresh call when auto-reload is on.

Each Action is encapsulated in its own class for easier testing and swapping of implementations. The first implementation
targets the current Ruby pipeline, but this will allow us to experiment with a Java pipeline and have a different interface.

This change allows us to test without using any threads in the agent_spec or in the actions.
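
The "one class per action" idea could look roughly like this; the `Create`/`Stop` classes and the `execute(pipelines)` signature below are illustrative assumptions, showing why no threads are needed to exercise the actions in a spec:

```ruby
# Hypothetical action classes sharing one execute(pipelines) interface;
# the agent just iterates over whatever the resolver scheduled.
class PipelineAction
  attr_reader :pipeline_id

  def initialize(pipeline_id)
    @pipeline_id = pipeline_id
  end
end

class Create < PipelineAction
  def execute(pipelines)
    pipelines[pipeline_id] = :running
    true # the agent records this result in the stats
  end
end

class Stop < PipelineAction
  def execute(pipelines)
    pipelines.delete(pipeline_id)
    true
  end
end

# A spec can drive the actions against a plain Hash, no threads involved:
pipelines = {}
[Create.new(:main), Create.new(:foobar), Stop.new(:main)].each { |a| a.execute(pipelines) }
pipelines # => {:foobar=>:running}
```

Swapping in a Java-backed pipeline would then only mean providing another set of classes with the same interface.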

@ph ph closed this Feb 23, 2017
@ph ph reopened this Feb 23, 2017
@ph ph changed the title [WIP] Agent refactor base on the source loader Agent refactor base on the source loader Feb 23, 2017
@ph ph force-pushed the fix/refactor-config-multiple-pipeline branch from 1f813a1 to c88b4f6 Compare February 23, 2017 21:06
@ph ph requested review from jsvd and jordansissel February 23, 2017 21:06
@ph ph changed the title Agent refactor base on the source loader Refactor of the Agent and the source loading Feb 23, 2017
class ConfigLoadingError < Error; end
class InvalidSourceLoaderSettingError < Error; end
class PipelineActionError < Error; end
class NonReloadablePipelineError < StandardError; end
Contributor Author

I will clean up these errors; I do not use them anymore.

@@ -0,0 +1,402 @@
# encoding: utf-8
Contributor Author

This could maybe be split into smaller files?

@ph
Contributor Author

ph commented Feb 23, 2017

Also, for the reviewers: pay attention to converge_spec.rb, which shows how to use the new source loader to manage the tests.

@ph
Contributor Author

ph commented Feb 23, 2017

I already see one failure for the agent_spec; I am pretty sure it's a timing issue :(

@ph
Contributor Author

ph commented Feb 24, 2017

I've pushed a fix to help with the flaky test.

@ph ph force-pushed the fix/refactor-config-multiple-pipeline branch 2 times, most recently from 337f0ea to c942213 Compare March 2, 2017 23:58
begin
logger.debug("Executing action", :action => action)
action_result = action.execute(@pipelines)
converge_result.add(action, action_result)
Contributor Author

I could probably add a logger statement here when the action fails. I have mixed feelings about whether we should go with debug or error.

@jsvd
Member

jsvd commented Mar 13, 2017

I tried running a simple pipeline and got an error:

 % bin/logstash -e ""
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /Users/joaoduarte/projects/logstash/logs which is now configured via log4j2.properties
[2017-03-13T05:21:40,353][ERROR][logstash.agent           ] An exception happened when converging configuration {:exception=>LogStash::InvalidSourceLoaderSettingError, :message=>"Can't find an appropriate config loader with current settings", :backtrace=>["/Users/joaoduarte/projects/logstash/logstash-core/lib/logstash/config/source_loader.rb:52:in `fetch'", "/Users/joaoduarte/projects/logstash/logstash-core/lib/logstash/agent.rb:133:in `converge_state_and_update'", "/Users/joaoduarte/projects/logstash/logstash-core/lib/logstash/agent.rb:84:in `execute'", "/Users/joaoduarte/projects/logstash/logstash-core/lib/logstash/runner.rb:297:in `execute'", "org/jruby/RubyProc.java:281:in `call'", "/Users/joaoduarte/projects/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24:in `initialize'"]}

@jsvd
Member

jsvd commented Mar 13, 2017

I tested how auto reload reacts to a broken configuration, and the logging isn't very useful:

I started logstash with a good configuration:

% cat cfg
input {
  generator {}
}
filter {
  sleep { time => 1 }
}
output { stdout { codec => plain { format => "." } } }

and after a few events I changed it to:

% cat cfg
input {
  generator {}
}
filter {
  sleep { time => 1 }
}
output { stdout { codec => plain { format => "." } } }
broken thing

And the output is:

% bin/logstash -f cfg -w 1 -b 1 -r
[2017-03-13T05:31:09,313][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-03-13T05:31:09,331][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>1, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1}
[2017-03-13T05:31:09,343][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-13T05:31:09,350][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
................................[2017-03-13T05:31:42,382][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
...[2017-03-13T05:31:45,365][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
...[2017-03-13T05:31:48,369][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
...[2017-03-13T05:31:51,368][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
...[2017-03-13T05:31:54,368][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

@jsvd
Member

jsvd commented Mar 13, 2017

Adding a comment to the end of the config breaks the loading of the pipeline:

% cat /tmp/cfg
input { generator {} }
filter { sleep { time => 1 } }
output { stdout { codec => plain { format => "." } } }
# hello

on logstash 5.2.1:

/tmp/logstash-5.2.1 % bin/logstash -f /tmp/cfg -w 1 -b 1
Sending Logstash's logs to /tmp/logstash-5.2.1/logs which is now configured via log4j2.properties
[2017-03-13T05:42:12,877][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>1, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1}
[2017-03-13T05:42:12,884][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-13T05:42:12,983][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
..........

on this PR:

~/projects/logstash (git)-[pr/6632] % bin/logstash -f /tmp/cfg -w 1 -b 1
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /Users/joaoduarte/projects/logstash/logs which is now configured via log4j2.properties
[2017-03-13T05:42:33,286][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of \r, \n at line 4, column 8 (byte 117) after # hello"}
[2017-03-13T05:42:33,301][INFO ][logstash.agent           ] Pipelines running {:count=>0, :pipelines=>[]}

@jsvd
Member

jsvd commented Mar 13, 2017

reload metrics seem to be broken:

curl localhost:9600/_node/stats  | jq ".pipeline.reloads"
null

while on 5.2.1:

/tmp/logstash-5.2.1 % curl -s localhost:9600/_node/stats  | jq ".pipeline.reloads"
{
  "last_error": null,
  "successes": 0,
  "last_success_timestamp": null,
  "last_failure_timestamp": null,
  "failures": 0
}

@jsvd
Member

jsvd commented Mar 13, 2017

It seems most pipeline actions are assuming the main pipeline id instead of the configured one. I changed the pipeline.id: "meh" in the config file, and in debug mode I see:

[2017-03-13T06:06:31,596][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
# ....
[2017-03-13T06:06:31,669][DEBUG][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"meh"}
[2017-03-13T06:06:31,676][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"meh", "pipeline.workers"=>1, "pipeline.batch.size"=>1, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1}
[2017-03-13T06:06:31,685][INFO ][logstash.pipeline        ] Pipeline meh started
[2017-03-13T06:06:31,696][DEBUG][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"meh"}
[2017-03-13T06:06:31,702][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["meh"]}

Same during shutdown:

[2017-03-13T06:07:16,325][DEBUG][logstash.agent           ] Shutting down all pipelines {:pipelines_count=>1}
[2017-03-13T06:07:16,325][DEBUG][logstash.agent           ] Converging pipelines
[2017-03-13T06:07:16,325][DEBUG][logstash.agent           ] Needed actions to converge {:actions_count=>1}
[2017-03-13T06:07:16,326][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Stop/pipeline_id:main}
[2017-03-13T06:07:16,327][DEBUG][logstash.pipeline        ] Closing inputs
[2017-03-13T06:07:16,327][DEBUG][logstash.inputs.generator] stopping {:plugin=>"LogStash::Inputs::Generator"}
[2017-03-13T06:07:16,327][DEBUG][logstash.pipeline        ] Closed inputs
[2017-03-13T06:07:16,327][DEBUG][logstash.pipeline        ] Closing inputs
.[2017-03-13T06:07:17,127][DEBUG][logstash.inputs.generator] closing {:plugin=>"LogStash::Inputs::Generator"}
[2017-03-13T06:07:17,128][DEBUG][logstash.pipeline        ] Input plugins stopped! Will shutdown filter/output workers.
[2017-03-13T06:07:18,149][DEBUG][logstash.filters.sleep   ] closing {:plugin=>"LogStash::Filters::Sleep"}
[2017-03-13T06:07:18,149][DEBUG][logstash.outputs.stdout  ] closing {:plugin=>"LogStash::Outputs::Stdout"}
[2017-03-13T06:07:18,149][DEBUG][logstash.pipeline        ] Pipeline meh has been shutdown

@jsvd
Member

jsvd commented Mar 13, 2017

it seems to be possible to start a pipeline that is not reloadable with -r

% cat myconfs/*
input { generator { } }
input { stdin {} }
filter { sleep { time => 1 } }
output { stdout { codec => plain { format => "." } } }
output { stdout { codec => rubydebug } }

in logstash 5.2.1:

/tmp/logstash-5.2.1 % bin/logstash -f "~/projects/logstash/myconfs/*" -r
[2017-03-13T06:42:31,378][ERROR][logstash.agent           ] Logstash is not able to start since configuration auto reloading was enabled but the configuration contains plugins that don't support it. Quitting... {:pipeline_id=>"main", :plugins=>[LogStash::Inputs::Stdin]}

in this PR:

 ~/projects/logstash (git)-[pr/6632] % bin/logstash -f "myconfs/*" -r
[2017-03-13T06:43:00,570][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-03-13T06:43:00,726][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"meh", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-03-13T06:43:00,748][INFO ][logstash.pipeline        ] Pipeline meh started
The stdin plugin is now waiting for input:

@jsvd
Member

jsvd commented Mar 13, 2017

the --config.debug flag is now ignored even when using --log.level=debug, which means there is no way to see the configuration that logstash is loading.
I tried with both -t --log.level=debug --config.debug and without -t

Example run in this gist

@ph
Contributor Author

ph commented Mar 13, 2017

Thanks for catching all these issues; I will check the suite to see if we are covering them.
I will fix them and update the PR. I don't see anything major.

@ph
Contributor Author

ph commented Mar 14, 2017

@jsvd

The reload metrics seem to work on my end; what configuration were you using?

{
  "last_error": null,
  "successes": 0,
  "last_success_timestamp": null,
  "last_failure_timestamp": null,
  "failures": 0
}
{
  "last_error": null,
  "successes": 1,
  "last_success_timestamp": "2017-03-14T21:37:27.788Z",
  "last_failure_timestamp": null,
  "failures": 0
}

Failures also update the metrics.

{
  "last_error": {
    "message": "Something is wrong with your configuration.",
    "backtrace": [
      "/Users/ph/es/logstash/logstash-core/lib/logstash/config/mixin.rb:130:in `config_init'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/outputs/base.rb:63:in `initialize'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:3:in `initialize'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/output_delegator.rb:19:in `initialize'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/pipeline.rb:95:in `plugin'",
      "(eval):12:in `initialize'",
      "org/jruby/RubyKernel.java:1079:in `eval'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/pipeline.rb:64:in `initialize'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/pipeline_action/reload.rb:30:in `execute'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/agent.rb:292:in `converge_state'",
      "org/jruby/RubyArray.java:1613:in `each'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/agent.rb:277:in `converge_state'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/agent.rb:150:in `converge_state_and_update'",
      "org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/agent.rb:148:in `converge_state_and_update'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/agent.rb:101:in `execute'",
      "org/jruby/RubyProc.java:281:in `call'",
      "/Users/ph/es/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/interval.rb:18:in `interval'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'",
      "/Users/ph/es/logstash/logstash-core/lib/logstash/runner.rb:297:in `execute'",
      "org/jruby/RubyProc.java:281:in `call'",
      "/Users/ph/es/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24:in `initialize'"
    ]
  },
  "successes": 1,
  "last_success_timestamp": "2017-03-14T21:37:27.788Z",
  "last_failure_timestamp": "2017-03-14T21:38:00.693Z",
  "failures": 1
}

@ph
Contributor Author

ph commented Mar 14, 2017

It seems most pipeline actions are assuming the main pipeline id instead of the configured one. I changed the pipeline.id: "meh" in the config file, and in debug mode I see:

For that, until we have the changes to logstash.yml to configure multiple pipelines, I think we should keep it as is. WDYT?

Contributor

@guyboertje guyboertje left a comment

Yay, tests pass for me.
Minor comments aside, outstanding work PH!!
LGTM

logger.debug("Count not fetch the configuration to converge, will retry", :message => results.error, :retrying_in => @reload_interval)
return
else
raise "Count not fetch the configuration, message: #{results.error}"
Contributor

Should this be "Could not ..." — and on line 137 too?

java_version = LogStash::Util::JavaVersion.version

if LogStash::Util::JavaVersion.bad_java_version?(java_version)
raise LogStash::BootstrapCheckError, "Java version 1.8.0 or later is required. (You are running: #{java_version})"
Contributor

Seems to me that we should use a function like LogStash::Util::JavaVersion.good_java_version to not hard code the "Java version 1.8.0 or later..." here.

Contributor Author

@ph ph Apr 5, 2017

I've moved the actual code that raises the exception inside the JavaVersion module and call validate_java_version! instead; this keeps all the logic encapsulated in the same file, and the bootstrap check is just a thin wrapper around it.
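
A hedged sketch of that encapsulation (only `bad_java_version?` and `validate_java_version!` are named in this thread; the version parsing and comparison details below are assumptions):

```ruby
# Illustrative module: the version check and the raised error live
# together, so a bootstrap check can just call validate_java_version!.
module JavaVersion
  MINIMUM = Gem::Version.new('1.8.0')

  def self.bad_java_version?(version)
    # Java versions look like "1.8.0_112"; drop the update suffix.
    version.nil? || Gem::Version.new(version.split('_').first) < MINIMUM
  end

  # Raises when the version is too old, keeping callers thin.
  def self.validate_java_version!(version)
    return unless bad_java_version?(version)
    raise "Java version 1.8.0 or later is required. (You are running: #{version})"
  end
end

JavaVersion.validate_java_version!('1.8.0_112') # passes silently
# JavaVersion.validate_java_version!('1.7.0')   # raises
```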

class DefaultConfig
def self.check(settings)
if settings.get("config.string").nil? && settings.get("path.config").nil?
raise LogStash::BootstrapCheckError, I18n.t("logstash.runner.missing-configuration")
Contributor

How will Joao's comment be affected if we use pipelines.yml?

end

def <=>(other)
(ORDERING.index(self.class) <=> ORDERING.index(other.class)).nonzero? || self.pipeline_id <=> other.pipeline_id
Contributor

I would prefer to return -1, 0, 1. e.g.

(ORDERING.index(self.class) <=> ORDERING.index(other.class)).nonzero? ? self.pipeline_id <=> other.pipeline_id : 0

Contributor Author

Actually, I don't need to check the pipeline_id, since we cannot execute two different actions for the same pipeline_id.

Contributor Author

I take that back; it makes for cleaner debugging.
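
For context, the `nonzero? ||` idiom under discussion orders first by action class and only falls back to the pipeline id on ties. A standalone illustration with a made-up ordering (not the PR's actual classes):

```ruby
# Made-up ordering for illustration: stops sort before creates, etc.
ORDERING = [:stop, :create, :reload]

Action = Struct.new(:kind, :pipeline_id) do
  include Comparable

  def <=>(other)
    # nonzero? turns a 0 (a tie on kind) into nil, so || falls through
    # to the pipeline_id comparison only when the kinds compare equal.
    (ORDERING.index(kind) <=> ORDERING.index(other.kind)).nonzero? ||
      (pipeline_id <=> other.pipeline_id)
  end
end

actions = [Action.new(:reload, 'b'), Action.new(:stop, 'a'), Action.new(:reload, 'a')]
actions.sort.map { |a| [a.kind, a.pipeline_id] }
# => [[:stop, "a"], [:reload, "a"], [:reload, "b"]]
```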

@ph ph force-pushed the fix/refactor-config-multiple-pipeline branch from 5d23803 to 3b62df2 Compare April 5, 2017 20:02
@ph
Contributor Author

ph commented Apr 5, 2017

@guyboertje @jsvd I've pushed the changes from the review and rebased the PR with master.

@jsvd
Member

jsvd commented Apr 6, 2017

Are the integration CI failures related to the bug you saw with sudo?

@ph
Contributor Author

ph commented Apr 6, 2017

@jsvd Not at all; I think they are in part related to #6754. I am stress-testing them locally on a Linux VM. For now, I don't think we have a big problem in this PR. Yesterday the PR was green, then I pushed some small changes and it went back to red; those changes shouldn't have any effect on the failing tests.

@jsvd
Member

jsvd commented Apr 6, 2017

LGTM, squash and merge 🤗 🚢 🏆

@ph
Contributor Author

ph commented Apr 6, 2017

As a note, I am able to reproduce some of the integration failures locally in a Linux container.

@ph
Contributor Author

ph commented Apr 6, 2017

@jsvd I take my last comment back; one of the integration tests is failing for a good reason.

  2) Test Logstash instance should still merge when -e is specified and -f has no valid config files
     Failure/Error: expect(is_port_open?(port1)).to be true

       expected true
            got false
     # ./specs/01_logstash_bin_smoke_spec.rb:137:in `(root)'
     # ./vendor/jruby/1.9/gems/stud-0.0.22/lib/stud/try.rb:79:in `try'
     # ./vendor/jruby/1.9/gems/stud-0.0.22/lib/stud/try.rb:95:in `try'
     # ./vendor/jruby/1.9/gems/stud-0.0.22/lib/stud/try.rb:91:in `try'
     # ./vendor/jruby/1.9/gems/stud-0.0.22/lib/stud/try.rb:123:in `try'
     # ./vendor/jruby/1.9/gems/logstash-devutils-1.3.1-java/lib/logstash/devutils/rspec/logstash_helpers.rb:18:in `try'
     # ./specs/01_logstash_bin_smoke_spec.rb:136:in `(root)'
     # ./vendor/jruby/1.9/gems/rspec-wait-0.0.9/lib/rspec/wait.rb:46:in `(root)'

This behavior is a bit strange if you look at 5.x (/tmp/foobar doesn't exist):

bin/logstash -f /tmp/foobar # will fail to start, complaining it cannot find the files.
bin/logstash -e 'input { generator {}} output { null {}}' -f /tmp/hloa # will complain that it cannot find the file but will continue.

The behavior in this branch is: you provided settings that don't work, so we fail.

Since we target 6.0, WDYT? I'm more inclined to make it fail, but I wonder if there is any negative side effect.

What's strange is that this test works in some runs. :(

@jsvd
Member

jsvd commented Apr 6, 2017

I think it's pretty easy to say that it should fail in the case of bin/logstash -e "" -f /tmp/absent/file
However, in the scenario of bin/logstash -e "" -f "/tmp/files/*.cfg" it might be ok if there are no files whatsoever.

I'm ok with failing if there are no files even if -e is present 👍

@suyograo
Contributor

suyograo commented Apr 6, 2017

I'm ok with failing if there are no files even if -e is present

@jsvd @ph that's not gonna work when we install via packages. We've seen in the past that users install LS via deb/rpm and proceed to start it from the binary install location (and not using service scripts). When this happens, -f is implicit because we point to /etc/logstash/conf. So anyone using bin/logstash -e "blah" will implicitly also be doing bin/logstash -e "blah" -f /etc/logstash/conf. Hence, we can't fail if /etc/logstash/conf/*.cfg does not return any results.

bin/logstash -e 'input { generator {}} output { null {}}' -f /tmp/hloa # will complain about it cannot find file but will continue.`

This test encapsulates the behavior I described ^^

@suyograo
Contributor

suyograo commented Apr 6, 2017

Can you check what the behavior is if the /tmp/hloa directory is present but there are no files underneath? If that works, we can modify the test to create /tmp/hloa.

@ph
Contributor Author

ph commented Apr 7, 2017

I've fixed the -e/-f case in a8b4307.

@ph ph force-pushed the fix/refactor-config-multiple-pipeline branch 2 times, most recently from dce4931 to 4ca2603 Compare April 10, 2017 15:37
@ph
Contributor Author

ph commented Apr 10, 2017

I've rebased this monstrous PR. I ran the rats tests on my Linux box over the weekend and only seem to get the intermittent failures that you are already working on fixing, @jsvd.

@jordansissel
Contributor

@ph and @jsvd walked me through the design and a light step-through of the patch. Design LGTM.

This PR introduces major changes inside Logstash, to help build future
features like supporting multiple pipelines or Java execution.

The previous implementation of the agent made the class hard to refactor or add new features to, because it needed to
know too much about the internals of the pipeline and how the configuration existed.

The changes include:
- Externalize the loading of the configuration files using a `SourceLoader`
- The source loader can support multiple sources and will aggregate them.
- We keep some metadata about the original file so the LIR can give better feedback.
- The Agent now asks the `SourceLoader` which configurations need to be run
- The Agent now uses a converge-state strategy to handle start, reload, and stop
- Each action executed on a pipeline is now extracted into its own class to help with the migration to a new pipeline execution.
- The pipeline now has a start method that handles the thread
- Better out-of-the-box support for multiple pipelines (monitoring)
- Refactor of the agent specs
@ph ph force-pushed the fix/refactor-config-multiple-pipeline branch from 3e22fe4 to f139489 Compare April 13, 2017 18:15
@elasticsearch-bot

Pier-Hugues Pellerin merged this into the following branches!

Branch Commits
master 645fcec
