

Logging and analysis of security data in a network

Multidisciplinary Software Project, fall 2018
Haaga-Helia University of Applied Sciences
Jussi Isosomppi, Eino Kupias, Saku Kähäri

Our goal for this project is to create a solution that analyzes multiple kinds of data sent from multiple devices to a centralized server. The server processes, filters and stores the data in a form that is suitable for further analysis using graphical tools such as Kibana or Grafana. This solution should be suitable for use in small to medium companies, being sophisticated enough to offer valuable data while still being clear enough to also be usable for others than system administrators.

Some of the tools and methods we plan to use to achieve these goals:

  • Using a centralized management solution (such as Salt) to ensure all workstations and other devices have the correct software setup for reporting
  • Using graphical tools (Kibana, Grafana) to avoid having to handle data in the CLI
  • Using open source code to ensure availability and low-to-zero cost for the solution


Milestones:

  1. Setting up a logging server and passing data to it from one client
  2. Passing data from several (distinguishable) clients
  3. Automating client setup via Salt
  • Extra: Setting up a router to send log data
  • Extra: Replacing Logstash with syslog to reduce resource use


Tools:

  • ELK stack
  • Grafana: Setting up a simple dashboard for preset data
  • Salt: Automating client setup
  • Docker: For quick testing and failing
    • Portainer: For easier control over Docker containers

Project diary

Week 1

We installed the ELK stack successfully through Docker, and managed to pass data from the host computer to Elasticsearch running inside the container. This data was sent with Metricbeat and viewed through Kibana's GUI.
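A setup like this can be sketched with Docker Compose (a sketch, not our exact setup; the image versions here are assumptions — the 6.x series was current in fall 2018):

```yaml
# docker-compose.yml sketch: single-node Elasticsearch plus Kibana
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    environment:
      - discovery.type=single-node   # skip production bootstrap checks
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

After docker-compose up -d, Elasticsearch answers on port 9200 and Kibana's GUI on port 5601.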

Week 2

We formed our project plan, and created other required documents for the course. We created scripts to automate installation of services, hopefully reducing downtime in the coming weeks.

On the technical side: we tested the setup scripts and got them to work. We passed data from a few different Beats to Elasticsearch and managed to get some nice visualizations with Kibana.


Week 3


We started researching the use of purely open source components, replacing the Beats and Logstash with rsyslog. Rsyslog can be configured to run as both a client and a server, and accepts configuration files to shape the log output into a form that is readable in Elasticsearch. We made some progress with setting up rsyslog, but didn't yet manage to display results in Elasticsearch/Kibana.
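The client/server split can be sketched roughly like this (a sketch only; the port and the server hostname are placeholders):

```
# Server side (/etc/rsyslog.conf): accept messages over UDP and TCP
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# Client side (/etc/rsyslog.d/50-forward.conf): forward all messages
# to the central server; @@ means TCP, a single @ means UDP
*.* @@logserver.example.com:514
```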


We were assigned a server from Servula and started setting it up. We chose Ubuntu Server 18.04.1 LTS, as the LTS release promised long-term support. We immediately ran into issues with the OS installation:

  • The hostname changed to localhost.localdomain on first startup
  • The user account created during install could not be used to log in

We managed to gain access to the system by booting it into single-user mode (hold ESC after BIOS and add single to the kernel launch parameters). From the root shell we could identify some problems:

  • No network connectivity (despite setting it up successfully during install)
  • No user account present
  • Network interface settings missing (/etc/network/interfaces empty, ifconfig only showing loopback)
  • No SSH access for any users, even from the same system. It turned out root owned all the SSH key files.

Some solutions to our problems:

  • Creating user account again from within the single user root shell
  • Ubuntu 18.04 uses Netplan to configure interfaces, so we needed to write a .yaml file for the default interface. This should have been generated during the install, but for some reason the /etc/netplan folder was completely empty. The file we wrote looked roughly like this (the interface name is an example; check yours with ip link):

    network:
      version: 2
      renderer: networkd
      ethernets:
        eno1:          # interface name varies per machine
          dhcp4: yes
          dhcp6: yes

Writing this file and running sudo netplan apply gave us network connectivity.

We configured a client computer and sent identifiable log data to the server via Rsyslog. Next we will find out how to pass log data from Rsyslog to Elasticsearch and Kibana.
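Passing data from Rsyslog to Elasticsearch is typically done with the omelasticsearch output module (on Ubuntu, in the rsyslog-elasticsearch package). A minimal sketch — the index name and server address are assumptions, not our final configuration:

```
# /etc/rsyslog.d/60-elasticsearch.conf (sketch)
module(load="omelasticsearch")

# Render each message as a JSON document Elasticsearch can index
template(name="json-syslog" type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")  property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"")     property(name="hostname")
    constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
    constant(value="\",\"message\":\"")  property(name="msg" format="json")
    constant(value="\"}")
}

action(type="omelasticsearch"
       server="localhost" serverport="9200"
       template="json-syslog"
       searchIndex="rsyslog-index"
       bulkmode="on")
```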


We started our server setup from scratch, this time with Ubuntu Server 16.04 (a problem-free install!). We set up the basics (accounts, security, remote access) and began configuring our stack. We started with Docker containers, but ran into issues forwarding traffic from Rsyslog clients to the Elasticsearch backend, so we moved to locally installed versions of Elasticsearch and Kibana to make testing configurations easier.

We spent a couple of hours building our configuration files, but did not succeed in making the data available in Kibana. However, we managed to get log forwarding working on the clients: they are now sending selected log entries to our Rsyslog server, where the entries are grouped under a comprehensive folder structure. We also managed to feed monitoring data from the server into Elasticsearch, and were able to view graphs of it in Kibana.
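A per-client folder structure like this is usually built with a dynamic file template on the server; a sketch, where the path layout is an assumption:

```
# /etc/rsyslog.d/40-clients.conf on the server (sketch)
template(name="PerHostLog" type="string"
         string="/var/log/client_logs/%HOSTNAME%/%PROGRAMNAME%.log")

# Write messages that arrived over the network into per-host files
if $fromhost-ip != '127.0.0.1' then action(type="omfile" dynaFile="PerHostLog")
```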

At this point in the project, Kibana and Elasticsearch seem to be functioning properly, while Rsyslog has issues we have not yet identified.

Week 4


More work on the logging setup. Rsyslog is still causing trouble, not forwarding logs as it should. We spent hours reading guides and documentation, but had no luck making the information accessible.

On the upside: service monitoring works! We enabled monitoring in both Elasticsearch and Kibana, and we can now view data on the number of queries processed, response times, etc.
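For the 6.x versions we are running, enabling self-monitoring comes down to a couple of settings (a sketch; the option names assume the X-Pack basic features bundled with 6.3+):

```
# /etc/elasticsearch/elasticsearch.yml
xpack.monitoring.collection.enabled: true

# /etc/kibana/kibana.yml
xpack.monitoring.enabled: true
```

After restarting both services, the Monitoring tab in Kibana shows cluster and Kibana metrics.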


Project presentations.


We rebuilt the logging server. Due to numerous issues with our previous setup, we decided to start with a clean slate. A fresh install of Ubuntu Server 16.04.5 and our necessary components solved most of our issues, which were likely caused by tired tinkering with system settings and file/directory permissions.

Week 5


We did indeed generate more log data over the weekend; however, our Rsyslog settings were a bit off. The /var/log/client_logs folder was over 200 GB in size, and other files in /var/log (80 GB+ in total) were affected too. It seems our problem is handling input and output on the same machine, with Rsyslog forwarding each incoming message back to itself in an infinite loop. All of this data accumulated despite the rate limiting we had enabled, which seemed to work as intended (log messages confirmed tens of thousands of messages being blocked by the rate limiter).

Here are the clearly affected files in /var/log:

jussi@logmaster:/var/log$ ls -lah
total 83G
-rw-r-----  1 syslog        adm           9.2G Sep 17 12:36 auth.log
-rw-r-----  1 syslog        adm            15G Sep 16 06:38 auth.log.1
-rw-r-----  1 syslog        adm           732M Sep 17 12:11 kern.log
-rw-r-----  1 syslog        adm           932M Sep 16 06:38 kern.log.1
-rw-r-----  1 syslog        adm            12G Sep 17 12:36 syslog
-rw-r-----  1 syslog        adm            45G Sep 17 06:33 syslog.1
-rw-r-----  1 syslog        adm           506M Sep 16 06:38 syslog.2.gz
-rw-r-----  1 syslog        adm           844M Sep 15 06:25 syslog.3.gz
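One way to break a loop like this is to handle network-received messages first and stop processing them before any forwarding rule runs, so the server never re-forwards a client's message to itself. A sketch (the file path is a placeholder, not our actual layout):

```
# /etc/rsyslog.d/05-noloop.conf on the server (sketch)
# Messages from the network are written to disk and then dropped,
# so later rules (including any forwarding) only see local messages.
if $fromhost-ip != '127.0.0.1' then {
    action(type="omfile" file="/var/log/client_logs/remote.log")
    stop
}
```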

Changing back to our earlier setup (Docker containers running the ELK components) and sending data with Filebeat resulted in a working setup. To us, this confirms that the problem lies in our Rsyslog configuration.