Distributed Honeypot Collector #41

Closed
wh1t3-n01s3 opened this issue Jun 9, 2016 · 8 comments

@wh1t3-n01s3

Being able to deploy multiple T-Pots and have them send logs to a central collector or aggregator for visualization would be nice. HPFeeds does not appear to retain the same level of integrity as the events visible in the ELK stack. Maybe add an optional Splunk Docker container so a Splunk Forwarder can send events to a Splunk server, or provide some other method.

@t3chn0m4g3
Member

Thanks for your feedback. Currently development is focused around ELK. As far as I know Splunk is not open source and thus not a candidate for a publicly available Docker image.
Conpot, Honeytrap and Dionaea will get native JSON support, allowing for even better integration with ELK.

@wh1t3-n01s3
Author

Would it break anything if we set the Docker containers to use Docker's native Splunk logging driver? For example, with an nginx container you can run

docker run --publish 80:80 --log-driver=splunk --log-opt splunk-token=99E16DCD-E064-4D74-BBDA-E88CE902F600 --log-opt splunk-url=https://192.168.1.123:8088 --log-opt splunk-insecureskipverify=true nginx

and it will forward logs from the nginx container to the search head specified at launch. If it works the way I think it does, I would only need to modify the startup script for each Docker container, which would then forward logs into both Splunk and ELK.
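
As a rough sketch of what that startup-script modification could look like for one of the honeypot containers (the image name, token and URL below are placeholders, not values verified in this thread):

# Hypothetical example only: start a honeypot container with the Splunk logging
# driver so its stdout/stderr is forwarded to a Splunk HTTP Event Collector.
# Replace the image name, token and URL with real values for your setup.
docker run -d \
  --log-driver=splunk \
  --log-opt splunk-token=<your-hec-token> \
  --log-opt splunk-url=https://192.168.1.123:8088 \
  --log-opt splunk-insecureskipverify=true \
  dtagdevsec/cowrie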

@t3chn0m4g3
Member

This should probably work and not break anything. Just modify the startup scripts accordingly. Is log-driver=splunk suitable for ELK, too? The ELK / Splunk receiver should listen on a different device, though, to avoid any port conflicts. Also check out the 16.10 branch, which is currently in development, since we will be switching to systemd / Ubuntu 16.04.
I will keep the issue open for the next few weeks so you can post about your experience; I guess others might be interested as well.

@wh1t3-n01s3
Author

For ELK (Logstash) you would want to use log-driver=gelf, per the Docker documentation here: https://docs.docker.com/engine/admin/logging/overview/
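
A minimal sketch of that approach, assuming Logstash has a matching gelf input (e.g. input { gelf { port => 12201 } }) and that the address and port below are placeholders:

# Hypothetical example: forward a container's stdout/stderr to Logstash via GELF.
# 192.168.1.123:12201/udp is an assumed Logstash gelf input endpoint.
docker run -d \
  --log-driver=gelf \
  --log-opt gelf-address=udp://192.168.1.123:12201 \
  nginx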

@t3chn0m4g3
Member

I do not think that you will receive the output you want, since all relevant logging information is stored within the container by supervisord. The only information you will receive is the start / stop information regarding the supervised programs within the container.

Redirecting all outputs to supervisord is mandatory, as is enabling debug mode. Here is an example for ELK:

[supervisord]
nodaemon=true
loglevel=debug

[program:elasticsearch]
; merge stderr into stdout so supervisord captures all of the program's output
redirect_stderr=true
command=/usr/share/elasticsearch/bin/elasticsearch
user=tpot
autorestart=true

[program:kibana]
redirect_stderr=true
command=/opt/kibana/bin/kibana
autorestart=true

[program:logstash]
redirect_stderr=true
command=/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/logstash.conf
autorestart=true

@wh1t3-n01s3
Author

You are correct. The best solution I've concocted so far is to use the Splunk container at https://hub.docker.com/r/outcoldman/splunk/, mount the /data folder into Splunk, and monitor the files specified in the ELK config at https://github.com/dtag-dev-sec/elk/blob/master/logstash.conf. So far I've only experimented with the JSON output files, but Splunk indexes them perfectly, so I believe this is the most streamlined method without compromising community submission or breaking the ELK stack.
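
A hedged sketch of that setup; the port mapping, mount point and monitored file path are assumptions for illustration, not values confirmed in this thread:

# Hypothetical example: run the outcoldman/splunk container with T-Pot's /data
# folder mounted read-only, so Splunk can monitor the same JSON files Logstash reads.
docker run -d --name splunk \
  -p 8000:8000 \
  -v /data:/data:ro \
  outcoldman/splunk

# Corresponding (illustrative) Splunk inputs.conf monitor stanza:
# [monitor:///data/cowrie/log/cowrie.json]
# sourcetype = _json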

@t3chn0m4g3
Member

t3chn0m4g3 commented Jun 16, 2016

For 16.10 I will implement a solution that should fit your needs as well as others'. I will split conf.d/logstash.conf into separate per-service files (cowrie.conf, dionaea.conf, ...), and on startup supervisord will check whether dedicated config files exist in /data/elk/logstash/conf and copy them into the container.
You would only need to copy your Logstash configs to that folder and could then decide, per service, what Logstash should do with them.
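
Purely as an illustration of the described behaviour (paths and globs are assumptions, not the final implementation), the container start script could do something along these lines:

# Hypothetical start-up snippet: if per-service Logstash configs exist in
# /data/elk/logstash/conf, copy them over the container's defaults before
# supervisord starts Logstash.
if [ -d /data/elk/logstash/conf ] && ls /data/elk/logstash/conf/*.conf >/dev/null 2>&1; then
  cp /data/elk/logstash/conf/*.conf /etc/logstash/conf.d/
fi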

@t3chn0m4g3 t3chn0m4g3 self-assigned this Jun 16, 2016
@t3chn0m4g3 t3chn0m4g3 added this to the T-Pot 16.10 milestone Jun 16, 2016
@t3chn0m4g3
Member

Merged to ELK 16.10
