
Containerization of PFELK #69

Closed
fktkrt opened this issue Jan 16, 2020 · 46 comments
Assignees
Labels
wip work in progress

Comments

@fktkrt
Collaborator

fktkrt commented Jan 16, 2020

Is your feature request related to a problem? Please describe.
Running pfelk in containers could be another deploy method.

Describe the solution you'd like
There would be Dockerfiles for the components (Elasticsearch, Logstash, Kibana); the configuration files and patterns would be included in them.
Management is a question to discuss: we could use a simple docker-compose file to configure the multiple containers, or we could choose an orchestration tool, e.g. Kubernetes or Docker Swarm.
Should we target a single-node or multi-node architecture?

In my opinion we should either stick to Docker Compose or choose Kubernetes, depending on the architecture.
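To make the Compose option concrete, a single-node stack could be sketched roughly like this (a minimal illustration only; image tags, ports, and service layout are assumptions, not a final design):

```yaml
# Minimal single-node sketch; versions and ports are illustrative placeholders.
version: '3.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      discovery.type: single-node   # disables production bootstrap checks
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

With a file like this, docker-compose up -d brings the stack up and docker-compose down tears it down.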

Additional context
I am in favor of running the ELK stack on VMs, but being able to deploy this in containers could be better suited to some use cases.

I am quite busy at the moment, but I would like to work on this, so any help is appreciated, from design decisions to implementation details.

What is everybody thinking?

@a3ilson
Contributor

a3ilson commented Jan 16, 2020 via email

@fktkrt
Collaborator Author

fktkrt commented Jan 16, 2020

I'm happy to hear that. Don't hesitate to tell me if I can assist you in any way.

@a3ilson
Contributor

a3ilson commented Jan 18, 2020

Leaning towards LXC after an initial attempt. Docker containers are single-process only (one process each for Elasticsearch, Logstash, and Kibana). However, incorporating cron jobs (MaxMind) and such is problematic, whereas LXC is an easier alternative for this specific endeavor.

@fktkrt
Collaborator Author

fktkrt commented Jan 18, 2020

In my opinion LXC could scare people away; for most people, containers are Docker, full stop.

We can use docker-compose to spin up individual containers for each service (Elasticsearch, Logstash, Kibana). I think that would be the official recommendation for this type of solution.
What do you think?

@fktkrt
Collaborator Author

fktkrt commented Jan 19, 2020

I'm thinking of fine-tuning something like this: https://github.com/elastic/stack-docker

@a3ilson
Contributor

a3ilson commented Jan 20, 2020 via email

@fktkrt
Collaborator Author

fktkrt commented Jan 20, 2020

Should we test and measure the different approaches, then?

@carpenike

I'd also suggest it be Docker or bust. If deploying to Kubernetes, consider leveraging the Elastic operator (ECK) to handle the ELK stack deployment. You'd just need to insert the pf-specific configurations; those can generally be defined in code as well, as part of the values that get injected.

https://operatorhub.io/operator/elastic-cloud-eck
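For reference, ECK manages clusters through a custom resource; a hypothetical minimal manifest (the name and version here are placeholders) would look something like:

```yaml
# Hypothetical ECK manifest sketch; metadata.name and version are placeholders.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: pfelk
spec:
  version: 7.5.1
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false   # convenient for single-node test setups
```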

@fktkrt
Collaborator Author

fktkrt commented Jan 20, 2020

Leaning towards LXC after an initial attempt. Docker containers are single-process only (one process each for Elasticsearch, Logstash, and Kibana). However, incorporating cron jobs (MaxMind) and such is problematic, whereas LXC is an easier alternative for this specific endeavor.

Cron job ideas:

  • with simple docker (compose) I would bake the crontab into the individual images with RUN, CMD layers
  • if we go K8s, we can even define `CronJob` objects, https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/

@a3ilson
Contributor

a3ilson commented Jan 20, 2020

Leaning towards LXC after an initial attempt. Docker containers are single-process only (one process each for Elasticsearch, Logstash, and Kibana). However, incorporating cron jobs (MaxMind) and such is problematic, whereas LXC is an easier alternative for this specific endeavor.

Cron job ideas:

* with simple docker (compose) I would bake the crontab into the individual images with RUN, CMD layers

* if we go K8s, we can even define `CronJob` objects, https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
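For illustration, the K8s variant of the MaxMind refresh could be sketched as below (the image name and schedule are assumptions; batch/v1beta1 was the CronJob API version at the time):

```yaml
# Hypothetical CronJob sketch: refresh the GeoIP database every Sunday at 03:00.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: geoip-update
spec:
  schedule: "0 3 * * 0"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: geoipupdate
              image: maxmindinc/geoipupdate   # assumed image; license config omitted
          restartPolicy: OnFailure
```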

Currently testing with LXC, and working on (tweaking) the Docker instance.

@fktkrt
Collaborator Author

fktkrt commented Jan 22, 2020

I can manage building Docker samples for testing to take some of the workload off you; just let me know.

@a3ilson
Contributor

a3ilson commented Jan 23, 2020 via email

@revere521
Collaborator

My only experience with Docker was through the modules in OpenMediaVault, but once you have something ready to test, I'd be glad to mess around with it.

@a3ilson
Contributor

a3ilson commented Jan 23, 2020

@revere521 - thanks! Pending weekend plans, I hope to have something operational this weekend.

@fktkrt
Collaborator Author

fktkrt commented Jan 23, 2020

I have a new branch with an initial attempt using Docker, you can check it here: https://github.com/fktkrt/pfelk/archive/docker-pfelk.zip

Some info for faster testing:

  • you can deploy it with docker-compose up
  • you can bring it down with docker-compose down
  • you can debug the processes with --verbose flag: docker-compose --verbose up/down
  • you can define a stack version in .env, currently it's on 7.5.1.

Currently it brings up the three services, populates the Logstash filters, and binds the services together. At the moment, 20-geoip.conf is omitted, because we could not decide on the method of installing MaxMind, and including that filter breaks the Logstash service.

Other debug options:

  • list current networks with docker network ls
  • delete a given network with docker network remove <name-of-network>
  • list docker-compose processes with docker-compose ps
  • access service logs with docker-compose logs

If everything is set up, you should see a similar output:

            Name                          Command               State                            Ports
--------------------------------------------------------------------------------------------------------------------------------
docker-pfelk_elasticsearch_1   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
docker-pfelk_kibana_1          /usr/local/bin/dumb-init - ...   Up      0.0.0.0:5601->5601/tcp
docker-pfelk_logstash_1        /usr/local/bin/docker-entr ...   Up      0.0.0.0:5000->5000/tcp, 5044/tcp, 0.0.0.0:9600->9600/tcp

Check the elasticsearch nodes:

curl -X GET "localhost:9200/_cat/nodes?v&pretty"

@revere521
Collaborator

I think I'm going to look at setting up Docker on Server Essentials 2016 so I can help test. I have a VM that I'll try this with first; then I can possibly try it on my physical server. Maybe this weekend.

@revere521
Collaborator

It looks like Essentials may not have the necessary components for Docker, based on a cursory search... so I built an Ubuntu 18.04.3 server VM and set up Docker there. Ready to test, but not tonight :)

@fktkrt
Collaborator Author

fktkrt commented Jan 25, 2020

Just finished setting up automated building & testing with Travis-CI; you can check it out at https://travis-ci.org/fktkrt/pfelk or in the README at https://github.com/fktkrt/pfelk

I think we should incorporate this into the original repo, if we go this way. What do you think?

@revere521
Collaborator

revere521 commented Jan 26, 2020

the Travis-CI looks pretty neat for sure...that tests PRs on the fly by running your stuff out in the aether somewhere and giving you a pass/fail?

To test this .. lets pretend i'm just some old guy on the internet (cough,cough)...

It looks like:

  1. i need to install docker compose as described here: https://docs.docker.com/compose/gettingstarted/ (note that it does look like the current version is 1.25.3) on my server with the docker engine.
  2. ??
  3. Profit

Seriously though - for step 2 - do I only need to clone what's in the docker-pfelk folder of the repo onto my Docker machine, then run docker-compose up from inside that folder?

@fktkrt
Collaborator Author

fktkrt commented Jan 26, 2020

the Travis-CI looks pretty neat for sure...that tests PRs on the fly by running your stuff out in the aether somewhere and giving you a pass/fail?

Yes, in a nutshell you only need a .travis.yml where you define the environment/language/pre- and post-test steps/etc.; then with every commit push/PR it "compiles" your "code". In this case it fires up a test environment, composes the services up, checks their logs, then composes them down. If every step returns 0, it passes; if any step returns something other than 0, it fails. Finally, I reference the build badge from my travis-ci branch to keep it up to date. It is a very simple workflow, which can get much more complex than this.
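As a rough sketch (not the actual file in the branch), such a .travis.yml could be as small as:

```yaml
# Hypothetical .travis.yml sketch along the lines described above.
language: minimal
services:
  - docker
script:
  - docker-compose up -d          # compose the services up
  - sleep 60                      # give the stack time to start
  - docker-compose logs           # surface service logs in the build output
  - docker-compose down           # a non-zero exit from any step fails the build
```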

To test this .. lets pretend i'm just some old guy on the internet (cough,cough)...

It looks like:

1. i need to install docker compose as described here: https://docs.docker.com/compose/gettingstarted/  (note that it does look like the current version is 1.25.3) on my server with the docker engine.

2. ??

3. Profit

Seriously though - for step 2 - do I only need to clone what's in the docker-pfelk folder of the repo onto my Docker machine, then run docker-compose up from inside that folder?

Yes, if you have docker engine and compose installed, that should get you started.

@revere521
Collaborator

I got this on my first attempt:

VMUbuntu64Server:~/docker-pfelk$ docker-compose up
WARNING: The ELK_VERSION variable is not set. Defaulting to a blank string.
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

I can see in the .yml where it calls the variable, but I'm not sure where to set it.

@fktkrt
Collaborator Author

fktkrt commented Jan 26, 2020

It should be set in the docker-pfelk/.env file with the following content: ELK_VERSION=7.5.1

@revere521
Collaborator

Yes, it was probably my error - I re-cloned the repo, then ran the docker-compose command with sudo, and it's installing now.

@fktkrt
Collaborator Author

fktkrt commented Jan 27, 2020

@revere521, was it successful?
@a3ilson, have you had any success with LXC, or with either your Docker setup or mine? Any thoughts on integrating Travis-CI?

@revere521
Collaborator

Sorry, it's been a busy couple of days - I just got back to check, and this is my output:


VMUbuntu64Server:~/pfelk/docker-pfelk$ sudo docker-compose up -d
Creating network "docker-pfelk_elk" with driver "bridge"
Creating volume "docker-pfelk_elasticsearch" with default driver
Building elasticsearch
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
7.5.1: Pulling from elasticsearch/elasticsearch
c808caf183b6: Pull complete
05ff3f896999: Pull complete
82fb7fb0a94e: Pull complete
c4d0024708f4: Pull complete
136650a16cfe: Pull complete
968db096c092: Pull complete
42547e91692f: Pull complete
Digest: sha256:b0960105e830085acbb1f9c8001f58626506ce118f33816ea5d38c772bfc7e6c
Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:7.5.1
 ---> 2bd69c322e98
Successfully built 2bd69c322e98
Successfully tagged docker-pfelk_elasticsearch:latest
WARNING: Image for service elasticsearch was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building logstash
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
7.5.1: Pulling from logstash/logstash
c808caf183b6: Already exists
7c07521065ed: Pull complete
d0d212a3b734: Pull complete
418bd04a229b: Pull complete
b22f374f97b1: Pull complete
b65908943591: Pull complete
2ee12bfc6e9c: Pull complete
309701bd1d88: Pull complete
b3555469618d: Pull complete
2834c4c48906: Pull complete
bae432e5da20: Pull complete
Digest: sha256:5bc89224f65459072931bc782943a931f13b92a1a060261741897e724996ac1a
Status: Downloaded newer image for docker.elastic.co/logstash/logstash:7.5.1
 ---> 8b94897b4254
Successfully built 8b94897b4254
Successfully tagged docker-pfelk_logstash:latest
WARNING: Image for service logstash was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building kibana
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
7.5.1: Pulling from kibana/kibana
c808caf183b6: Already exists
e12a414b7b04: Pull complete
20714d0b39d8: Pull complete
393e0a5bccf2: Pull complete
b142626e938b: Extracting [==================================================>]  272.2MB/272.2MB
b28e35a143ca: Download complete
728725922476: Download complete
96692e1a8406: Download complete
e4c3cbe1dbbe: Download complete
bb6fc46a19d1: Download complete
ERROR: Service 'kibana' failed to build: failed to register layer: Error processing tar file(exit status 1: unpigz: skipping: <stdin>: corrupted -- crc32 mismatch
):

I will try again this evening

@revere521
Collaborator

revere521 commented Jan 27, 2020 via email

@revere521
Collaborator

Ok, it installed successfully the 2nd time - I can hit the default Kibana web interface. I just figured out that I needed to edit the configs under the logstash/pipeline folder. Still figuring out how to send data to it.

@fktkrt
Collaborator Author

fktkrt commented Jan 28, 2020

You only need to configure the IP address of your Docker host, with port 5140, as your pfSense remote log server under "Enable Remote Logging".

@revere521
Collaborator

I did add it as a second remote log server in pfSense, but it doesn't seem to be ingesting data yet... I'll need to troubleshoot whether that's a network issue.

@revere521
Collaborator

revere521 commented Jan 28, 2020

For some reason port 5140 is not open, or at least not reachable. I made sure it wasn't a UFW firewall issue, and that doesn't seem to be the case. From the port test in pfSense I can hit all the ports you have open for Elasticsearch and Kibana (in the docker-compose.yml), and I can connect to port 9600 for Logstash, but not port 5000 - and I added 5140 and can't hit that either.

@a3ilson
Contributor

a3ilson commented Jan 29, 2020

For some reason port 5140 is not open, or at least not reachable. I made sure it wasn't a UFW firewall issue, and that doesn't seem to be the case. From the port test in pfSense I can hit all the ports you have open for Elasticsearch and Kibana (in the docker-compose.yml), and I can connect to port 9600 for Logstash, but not port 5000 - and I added 5140 and can't hit that either.

@revere521 - I would recommend doing a tcpdump on port 5140. This will help troubleshoot (i.e. are the logs being sent but not parsed, or not sent at all).

@a3ilson
Contributor

a3ilson commented Jan 29, 2020

@revere521, was it successful?
@a3ilson, have you had any success with LXC, or with either your Docker setup or mine? Any thoughts on integrating Travis-CI?

@revere521 - I honestly haven't had any time, but it's on my list of things to do. I'll finish tinkering with Docker before finalizing a container.

Feel free to add another install option (linking Travis CI) to the README. I haven't had a chance to evaluate your docker(s), but did you specify the pfSense/OPNsense IP address, allow for user input to configure it, or omit it?

@revere521
Collaborator

I think @fktkrt was asking - but I did have to edit the config files in the pfelk/docker/logstash/pipeline folder, then run the docker-compose commands; it looks like the config files are then copied to the correct place during the build.

@revere521
Collaborator

revere521 commented Jan 29, 2020

The server where I'm running Docker looks like it's getting the syslog data; it's just not making it into the Logstash container.

(Interestingly, you can also see here my temp/humidity/light sensor spamming the ThingSpeak API for some reason.)

sudo tcpdump -i enp0s3 -nn -s0 -v port 5140
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
19:51:07.989162 IP (tos 0x0, ttl 64, id 32142, offset 0, flags [none], proto UDP (17), length 107)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 79
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:07 unbound: [69261:1] info: resolving api.thingspeak.com. A IN
19:51:07.989213 IP (tos 0x0, ttl 64, id 16105, offset 0, flags [none], proto UDP (17), length 107)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 79
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:07 unbound: [69261:1] info: resolving api.thingspeak.com. A IN
19:51:08.064648 IP (tos 0x0, ttl 64, id 20702, offset 0, flags [none], proto UDP (17), length 310)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 282
        Facility local5 (21), Severity info (6)
        Msg: Jan 28 19:51:08 pfsense.home nginx: 192.168.1.107 - - [28/Jan/2020:19:51:08 -0500] "GET /widgets/widgets/snort_alerts.widget.php?getNewAlerts=1580259067124 HTTP/2.0" 200 322 "https://192.168.1.1/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0"
19:51:08.340807 IP (tos 0x0, ttl 64, id 17835, offset 0, flags [none], proto UDP (17), length 110)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 82
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:08 unbound: [69261:1] info: response for api.thingspeak.com. A IN
19:51:08.340887 IP (tos 0x0, ttl 64, id 34585, offset 0, flags [none], proto UDP (17), length 110)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 82
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:08 unbound: [69261:1] info: response for api.thingspeak.com. A IN
19:51:08.340894 IP (tos 0x0, ttl 64, id 28715, offset 0, flags [none], proto UDP (17), length 99)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 71
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:08 unbound: [69261:1] info: reply from <.> 1.0.0.1#853
19:51:08.340898 IP (tos 0x0, ttl 64, id 35728, offset 0, flags [none], proto UDP (17), length 99)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 71
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:08 unbound: [69261:1] info: reply from <.> 1.0.0.1#853
19:51:08.340979 IP (tos 0x0, ttl 64, id 13273, offset 0, flags [none], proto UDP (17), length 98)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 70
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:08 unbound: [69261:1] info: query response was ANSWER
19:51:08.340987 IP (tos 0x0, ttl 64, id 4604, offset 0, flags [none], proto UDP (17), length 98)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 70
        Facility daemon (3), Severity info (6)
        Msg: Jan 28 19:51:08 unbound: [69261:1] info: query response was ANSWER
19:51:08.741815 IP (tos 0x0, ttl 64, id 61584, offset 0, flags [none], proto UDP (17), length 164)
    192.168.1.1.514 > 192.168.1.56.5140: SYSLOG, length: 136

@a3ilson
Contributor

a3ilson commented Jan 29, 2020

@revere521 - Access the Docker container and check 01-inputs.conf, specifically line 9, which provides the host IP ( if [host] =~ /172.22.33.1/ { ). If this line is present, adjust it to match your pf/OPNsense instance. Alternatively, you can omit lines 9-13, which will allow any traffic received on port 5140 to be parsed, rather than only traffic received from a particular IP address.
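For context, the relevant part of 01-inputs.conf presumably looks something like the sketch below (reconstructed from the line quoted above, not copied from the actual file; the tag name is illustrative):

```conf
# Hypothetical sketch of 01-inputs.conf; only the host condition is from the thread.
input {
  udp {
    port => 5140
    type => "syslog"
  }
}
filter {
  # Line 9: only events from this host are treated as firewall logs.
  # Adjust (or remove the conditional) to match your pf/OPNsense IP.
  if [host] =~ /172.22.33.1/ {
    mutate { add_tag => ["pfelk"] }
  }
}
```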

@revere521
Collaborator

I did set the correct IP in the config file at ./pfelk/docker/logstash/pipeline/01-inputs.conf.

I tried a rebuild with that section commented out, and it doesn't change the behavior.

This looks like an issue with traffic on port 5140 getting from the physical network to the virtual Docker network these three containers are configured on (in the docker-compose.yml it looks like a bridged network called "elk") - I just don't really know how that works.

@a3ilson
Contributor

a3ilson commented Jan 29, 2020 via email

@fktkrt
Collaborator Author

fktkrt commented Jan 29, 2020

Feel free to add another install option (linking Travis CI) to the README. I haven't had a chance to evaluate your docker(s), but did you specify the pfSense/OPNsense IP address, allow for user input to configure it, or omit it?

Travis-CI: @a3ilson, I was thinking more along the lines of incorporating this as a pass/fail badge for testing. Should I include my repository's Travis branch for this? Should I PR my Docker changes for review, then tinker with them later? We could even add multiple deploy methods, for Docker and LXC.

@revere521, yes, you are right: port 5140/udp was not exposed in the logstash container; it's fixed now. The CI build is passing, and I will test properly later this evening.
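The fix amounts to publishing the UDP port in the logstash service of docker-compose.yml, roughly:

```yaml
# Sketch of the relevant docker-compose.yml fragment; "/udp" must be stated
# explicitly, since published ports default to TCP.
services:
  logstash:
    ports:
      - "5140:5140/udp"   # syslog from pfSense
```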

@fktkrt fktkrt added the wip work in progress label Jan 29, 2020
@revere521
Collaborator

For some reason, even after making that edit in the docker-compose.yml and running sudo docker-compose build, it still isn't listening on 5140.

VMUbuntu64Server:~/pfelk/docker-pfelk$ netstat -tuplen
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      0          24338      -
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      101        18693      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      0          22882      -
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      0          24956      -
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      0          24957      -
tcp6       0      0 :::9200                 :::*                    LISTEN      0          29137      -
tcp6       0      0 :::9300                 :::*                    LISTEN      0          29124      -
tcp6       0      0 :::22                   :::*                    LISTEN      0          22884      -
tcp6       0      0 :::445                  :::*                    LISTEN      0          24954      -
tcp6       0      0 :::9600                 :::*                    LISTEN      0          28293      -
tcp6       0      0 :::5601                 :::*                    LISTEN      0          29287      -
tcp6       0      0 :::5000                 :::*                    LISTEN      0          29266      -
tcp6       0      0 :::139                  :::*                    LISTEN      0          24955      -

The docker-compose documentation states that the build command builds and rebuilds services - is that the right thing to do?

@revere521
Collaborator

revere521 commented Jan 29, 2020

Ok, I was finally able to get it to pick up the port change by running:
sudo docker-compose up -d --force-recreate --no-deps --build logstash
and then rebooting the server itself.

I also added restart: unless-stopped to the docker-compose.yml to automatically restart the containers after a reboot, like this:

version: '3.2'

services:
  elasticsearch:
    restart: unless-stopped
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    restart: unless-stopped
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
      - ./logstash/patterns/:/usr/share/logstash/patterns/
    ports:
      - "5000:5000"
      - "5140:5140/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    restart: unless-stopped
    build:
      context: kibana/

I'm getting data now, and I'll let it collect data over the course of today and see what's what later.

@a3ilson
Contributor

a3ilson commented Feb 2, 2020

I got a working and tested Docker setup with a makeshift GeoIP configured, for now. All required files are posted, and I will author initial configuration/installation instructions within the week.

@fktkrt
Collaborator Author

fktkrt commented Feb 3, 2020

Looks fine to me!
I have a few ideas/questions, though:

  • Wouldn't it be a good idea to set the version of Elasticsearch with $ELK_VERSION?
  • I might include a separate folder and Dockerfile for the Elasticsearch service, to have greater control over it.
  • We should test the docker-compose setup with Travis-CI once it is final, then embed the pass/failure badge in the README. Feel free to take a look at my .travis.yml for inspiration.
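On the first bullet: pinning via the build arg is what the other services' Dockerfiles already do, i.e. essentially a two-line Dockerfile, with the value supplied from .env through docker-compose:

```dockerfile
# Elasticsearch Dockerfile sketch: the version comes from the ELK_VERSION build
# arg, which docker-compose reads from the .env file (ELK_VERSION=7.5.1).
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
```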

@a3ilson
Contributor

a3ilson commented Feb 4, 2020

Looks fine to me!
I have a few ideas/questions, though:

* Wouldn't it be a good idea to set the version of Elasticsearch with `$ELK_VERSION`?

* I might include a separate folder and Dockerfile for the Elasticsearch service, to have greater control over it.

* We should test the docker-compose setup with Travis-CI once it is final, then embed the pass/failure badge in the README. Feel free to take a look at my `.travis.yml` for inspiration.

Will do - I'll spend more time on it next weekend.

@a3ilson
Contributor

a3ilson commented Feb 11, 2020

@fktkrt, feel free to test with Travis-CI.
Running pfELK Docker without any issues... will finalize and tweak in the following weeks.

@fktkrt
Collaborator Author

fktkrt commented Feb 17, 2020

Travis-CI is now supported at the org level.
Currently the pipeline is quite simple; we can fine-tune it later. You can find it in .travis.yml.

Added the pass/failure badge to the README as well.

@a3ilson
Contributor

a3ilson commented Mar 21, 2020

This was closed and reopened within docker-pfelk under issue #2. The only outstanding item is to accomplish the cron job for GeoIP within the Docker setup.

@a3ilson a3ilson closed this as completed Mar 21, 2020