
Logstash: Preserve Host IP #241

Closed
Huppys opened this issue Feb 5, 2018 · 8 comments

@Huppys

Huppys commented Feb 5, 2018

Hey,

I'm collecting some inputs via the http plugin. I'd like to separate the data by the host's IP address.

Because the docker-elk service creates its network, named dockerelk_elk, using the bridge driver, any connection that doesn't originate inside this network is routed through the dockerelk_elk gateway.

When previewing the collected logs in Kibana, the host field always contains the gateway's IP address, because the requests came from outside dockerelk_elk.

[screenshot: Kibana log preview showing the gateway IP in the host field]

Does anyone have an idea how to preserve the IP address the request comes from?

I found this issue in the docker repo, moby/libnetwork#1994, which suggests using the host network driver. So I started Elasticsearch and Kibana via docker-compose. Afterwards I started Logstash via

docker run -it --net=host -v "$PWD"/logstash/pipeline/:/usr/share/logstash/pipeline/ -v "$PWD"/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml docker.elastic.co/logstash/logstash:6.1.3

As it turns out, a container running on the host network cannot be connected to another network like dockerelk_elk. I ran docker network connect dockerelk_elk [LOGSTASH_CONTAINER_ID] only to get an error:

Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network

So it seems to me I have to run all three containers on the same network to get this up and running.
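
For what it's worth, a minimal sketch of the host-network idea (an assumption based on this thread, not a tested config): run only Logstash with network_mode: host, and point it at Elasticsearch through a port published on the host rather than by service name:

```yaml
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.1.3
    network_mode: host   # no bridge NAT, so client source IPs are preserved
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    # On the host network, Logstash can no longer resolve the
    # "elasticsearch" service name; it has to reach Elasticsearch via a
    # host-published port instead, e.g. hosts => ["http://localhost:9200"].
```

The trade-off is that every port Logstash listens on is bound directly on the host, so there is no isolation between pipelines and the host's network stack.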

Or does anybody else have a suggestion?

Best,
huppys

@Xplouder

Same problem here. Seems to be related to this: moby/moby#15086.

@trajano

trajano commented Mar 1, 2018

One suggestion I have is to deploy the service that requires the source IP using docker-compose rather than docker stack deploy. You'd lose the replicas and load balancing, but you'd get the source IP.
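
If you do want to stay on docker stack deploy, another option is publishing the port in host mode with the compose long syntax, which bypasses Swarm's ingress routing mesh (the port number below is an assumption, not from this thread):

```yaml
services:
  logstash:
    ports:
      - target: 8080      # port inside the container
        published: 8080   # port bound directly on each node running a task
        protocol: tcp
        mode: host        # skip the routing mesh, so the source IP survives
```

As with network_mode: host, you give up the mesh's load balancing across nodes: clients must address a node that is actually running a task.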

@antoineco
Collaborator

antoineco commented Mar 1, 2018

@Huppys are the log senders running on the same machine as the Logstash container? The source IP is usually preserved for traffic coming on your external network interface, but it's not for traffic coming on the loopback.

In short, try this and compare:

# on the ELK machine
$ curl http://127.0.0.1:<LOGSTASH_HTTP_PORT> \
    -d 'test from inside'
# on some external machine
$ curl http://<LOGSTASH_HOST_IP>:<LOGSTASH_HTTP_PORT> \
    -d 'test from outside'
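
For reference, a minimal pipeline to run that comparison against (the port is an assumption); with the http input, the client address ends up in the event's host field, which you can inspect on stdout:

```
input {
  http {
    port => 8080   # matches <LOGSTASH_HTTP_PORT> above
  }
}

output {
  stdout {
    codec => rubydebug   # prints the full event, including "host"
  }
}
```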

@alexhaydock

> One suggestion I have is to deploy your service that requires the source IP using docker-compose rather than docker stack deploy. You'd lose the replica and load balancing but you'd get the source IP.

Hi @trajano - Do you have any more information on how this could be done? I am seeing the same issue running my stack with docker-compose using bridge mode networking.

@sergey-safarov

Looks like this is the root of the issue:
moby/libnetwork#2423

@aramaki87

I just used the latest build. I'm trying to get UDP (Netflow) traffic on port 9995, but the host IP is shown as 172.18.0.1. How do I fix this?
Everything else is working.

@orgads

orgads commented Feb 3, 2021

Add this to docker-compose.yml:

  conntrack:
    image: cap10morgan/conntrack
    depends_on:
      - logstash
    network_mode: host       # must run in the host's network namespace
    privileged: true         # needs elevated privileges to modify the conntrack table
    command: -D --proto udp  # delete existing UDP conntrack entries

@antoineco
Collaborator

antoineco commented Feb 3, 2021

@orgads A bit dangerous to include in the default stack, but very useful for users running into this, thanks for sharing!
