Using logstash-forwarder twice to proxy everything to one logstash server #381

Closed
leevs opened this issue Feb 19, 2015 · 7 comments

@leevs

leevs commented Feb 19, 2015

I am trying to connect multiple datacenters to one central one. Everything needs to be secure, which is why I want to use logstash-forwarder. See the image for the complete setup.

Image : https://www.dropbox.com/s/hekhka5lzyt6ojr/Screenshot%202015-02-19%2012.34.55.png?dl=0

Server:

logstash-forwarder-0.3.1-1

{
  "network": {
    "servers": [ "central:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/logstash/logstash-forwarder.crt"
  },
  "files": [{"paths": ["/var/log/messages", "/var/log/secure"], "fields": {"type": "syslog"}}, {"paths": ["application-loglocation"], "fields": {"type": "application"}}]
}

Central:

logstash-forwarder-0.3.1-1
logstash-1.4.1-1

input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/pki/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/pki/logstash/logstash-forwarder.key"
  }
}
output {
  stdout {}
  elasticsearch {
    host => "logstash-server"
    protocol => http
  }
}

Logstash-Server:

logstash-1.4.1-1

input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/pki/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/pki/logstash/logstash-forwarder.key"
  }
}

filter {
  if type == [...]
}

Now the problem is that the type always gets lost after one hop.
On my central server I can see that the type comes in as it should, but after I forward events to the logstash server the type defaults back to "logs".

Is this a bug in logstash-forwarder or just something I am doing wrong?

I also tried adding a filter on my central server to re-add the type with mutate, but I had no luck doing so.
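Roughly, what I tried on the central server looked something like this (the value here is only an example of the type I expected to carry through):

filter {
  mutate {
    # try to force the type back onto events arriving from the first hop;
    # "application" is just a placeholder value
    replace => [ "type", "application" ]
  }
}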

@jordansissel
Contributor

Can you include links to your full logstash configs? I don't think I have enough information above to determine what is going on.

@aidanjl

aidanjl commented Mar 12, 2015

I'm having a very similar problem trying to centralize logs.

Our existing setup is:
logstash-forwarder(client) --> logstash(master).
And it's working fine.

Because of networking restrictions I now need to "proxy" the logstash clients through a single point.
Like this:
logstash-forwarder(client) --> logstash(intermediary) --> logstash(master)

But the intermediary reformats the logs in a way that breaks the existing grok patterns on the master.
Tomorrow we are going to add more grok rules to handle them, but it seems like that shouldn't be necessary.

How can I set up the intermediary to do zero formatting and just forward the logs as-is?
This is what I have right now:

Intermediary logstash:

input {
  lumberjack {
    port => 50514
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    type => "syslog"
  }
}

output {
  lumberjack {
    hosts => ["logs-test.subdomain.com"]
    port => 5043
    ssl_certificate => "/etc/logstash/logstash-test.crt"
    type => "syslog"
  }
  stdout { codec => rubydebug }
}

Or should I be using a broker like Redis?
Everything needs to be encrypted, so I was trying to avoid Redis plus some SSL wrapper.

I'm very new to logstash, so I'm open to suggestions.

@fiunchinho

Same behaviour here. We have logstash A sending events to logstash B, and the event type gets lost along the way, eventually showing up as "logs" in Elasticsearch. If I use a stdout output in logstash B, I can see that the events arrive with no type.

Logstash A config:

input {
  lumberjack {
    port => 5001
    ssl_certificate => "/etc/logstash/ssl/cert.crt"
    ssl_key => "/etc/logstash/ssl/cert.key"
  }
}
output {
  lumberjack {
    port => 5001
    ssl_certificate => "/etc/logstash/ssl/cert.crt"
    hosts => [ "logstashB" ]
  }
}

Logstash B input config:

input {
  lumberjack {
    port => 5001
    ssl_certificate => "/etc/logstash/ssl/cert.crt"
    ssl_key => "/etc/logstash/ssl/cert.key"
  }
}

Logstash version: logstash-1.4.2-1_2c0f5a1
Forwarder version: logstash-forwarder-0.4.0

@torrancew

@fiunchinho @aidanjl Can you try setting the codec to "json" on the lumberjack output and on the lumberjack input that receives from it? Tampering with the input that LSF writes to would have a potential performance impact with no benefit, so leave that one alone.
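Using the intermediary config above as a starting point, that would look roughly like this (the master-side block is only a sketch; the port, certificate and key paths there are placeholders you'd replace with whatever the master already uses):

# intermediary logstash: add the json codec to the lumberjack output
output {
  lumberjack {
    hosts => ["logs-test.subdomain.com"]
    port => 5043
    ssl_certificate => "/etc/logstash/logstash-test.crt"
    codec => "json"
  }
  stdout { codec => rubydebug }
}

# master logstash: the lumberjack input that receives from the intermediary
# needs the matching codec (paths below are placeholders)
input {
  lumberjack {
    port => 5043
    ssl_certificate => "/path/to/master.crt"
    ssl_key => "/path/to/master.key"
    codec => "json"
  }
}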

@fiunchinho

Seems like @torrancew's solution works! Adding the json codec to both the proxy output and the receiving logstash input solves the problem.
I guess this should be in the documentation somewhere. I can send a PR if you point me to the best place to add it.

@dmoinescu

Hello all,

I have an issue with the above-mentioned setup:
logstash-forwarder -> logstash(proxy) -> logstash(master)

-- logstash(proxy)

input {
  tcp {
    host => "192.168.0.3"
    port => "55044"
    type => "syslog"
  }
  lumberjack {
    host => "192.168.0.3"
    port => "55045"
    type => "lumberjack"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  file { path => "/data/logstash/logs/%{host}/%{+YYYY-MM-dd}.remote" }
  lumberjack {
    hosts => ["logstash-master"]
    port => "12345"
    ssl_certificate => "/etc/logstash/logstash-srv.crt"
    codec => "json"
  }
}

-- logstash-forwarder

{
  # The network section covers network configuration :)
  "network": {
    # A list of downstream servers listening for our messages.
    # logstash-forwarder will pick one at random and only switch if
    # the selected one appears to be dead or unresponsive
    "servers": [ "192.168.0.3:55045" ],

    # The path to your client ssl certificate (optional)
    #"ssl certificate": "./logstash-forwarder.crt",
    # The path to your client ssl key (optional)
    #"ssl key": "./logstash-forwarder.key",

    # The path to your trusted ssl CA file. This is used
    # to authenticate your downstream server.
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",

    # Network timeout in seconds. This is most important for
    # logstash-forwarder determining whether to stop waiting for an
    # acknowledgement from the downstream server. If a timeout is reached,
    # logstash-forwarder will assume the connection or server is bad and
    # will connect to a server chosen at random from the servers list.
    "timeout": 65
  },

  # The list of files configurations
  "files": [
    # An array of hashes. Each hash tells what paths to watch and
    # what fields to annotate on events from those paths.
    {
      "paths": [
        # single paths are fine
        #"/var/log/messages",
        # globs are fine too, they will be periodically evaluated
        # to see if any new files match the wildcard.
        "/var/log/messages",
        "/var/log/secure",
        "/var/log/cron",
        "/var/log/maillog"
      ],

      # A dictionary of fields to annotate on each event.
      "fields": { "type": "syslog" }
    }, {
      "paths": [ "/var/log/nginx/access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}

The problem is that I keep receiving errors on logstash-forwarder:
2015/08/24 14:50:29.661093 Read error looking for ack: EOF
2015/08/24 14:50:29.661237 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2015/08/24 14:50:29.661708 Connecting to [192.168.0.3]:55045 (192.168.0.3)
2015/08/24 14:50:29.739638 Connected to 192.168.0.3
2015/08/24 14:50:29.746608 Read error looking for ack: EOF
2015/08/24 14:50:29.746693 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2015/08/24 14:50:29.747031 Connecting to [192.168.0.3]:55045 (192.168.0.3)
2015/08/24 14:50:59.786637 Connected to 192.168.0.3

I have observed that if I don't have the lumberjack output in logstash(proxy), logstash-forwarder shows no errors.

What can I do?

I am using:

  1. logstash-forwarder.x86_64 0.4.0-1 @logstash-forwarder - on the "client"
  2. logstash.noarch 1:1.5.4-1 @/logstash-1.5.4-1.noarch - on the logstash (proxy)

Thanks

@ruflin ruflin added the libbeat label Sep 16, 2015
@jordansissel
Contributor

It seems like the original problem in this issue was resolved by setting codec => json on both the lumberjack input and the lumberjack output.
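Applied to the Logstash A / Logstash B configs posted above, the change is just the codec line on each side (certificate paths unchanged from those configs):

# Logstash A: lumberjack output
output {
  lumberjack {
    port => 5001
    ssl_certificate => "/etc/logstash/ssl/cert.crt"
    hosts => [ "logstashB" ]
    codec => "json"
  }
}

# Logstash B: lumberjack input
input {
  lumberjack {
    port => 5001
    ssl_certificate => "/etc/logstash/ssl/cert.crt"
    ssl_key => "/etc/logstash/ssl/cert.key"
    codec => "json"
  }
}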


Thanks for helping make logstash-forwarder better!

Logstash-forwarder is going away and is being replaced by filebeat and its friend, libbeat. If this is still an issue, would you mind opening a ticket there?
