
SELKS 5.0 RC1


SELKS 5

Short admin and config intro

You can download SELKS 5 from here - https://www.stamus-networks.com/open-source/

Our blog about the release - https://www.stamus-networks.com/2018/12/21/selks5-rc1-threat-hunting-and-more/

Virtual machines import note - the recommended basic setup for SELKS 5.0 is 2 vCPUs and 6 GB RAM.

After you are done installing the ISO and completing the First time setup (please read below), just point your browser to https://your.selks.IP.here/

Please read the ELK stack 6.6.0 nginx config change needed section below and make the needed adjustments as explained there if you upgrade to ELK 6.6.0 or above.

Credentials and log in

Usage and logon credentials (OS/ssh and web management user):

  • user: selks-user
  • password: selks-user

(password in Live mode is live)

The default root password is StamusNetworks

First time setup

NOTE: Internet access is needed to complete the first time setup.

cmd/shell:

selks-first-time-setup_stamus.sh

On Desktop versions of SELKS:

Double click the "FirstTimeSetup" icon on the desktop

NOTE: Follow the instructions and answer the setup questions. The first time setup script can take about 2-5 minutes to finish. Logs from the first time setup process and tasks are located in /opt/selks/log/

Follow the instructions, type in the desired sniffing interface(s) and choose whether or not to enable a Full Packet Capture (FPC) option.

After the script is finished you can access the web management interface and GUI via https://your.selks.IP.here/, which provides a landing page for:

  • Scirius ruleset management and Suricata administration management
  • Kibana dashboards – providing links/connections to rules and alert event drill down management, correlation and Full Packet Capture
  • EveBox alert, event and correlation management
  • Moloch viewer for pcap export and packet capture drill down
  • Scirius Hunt interface – once logged in, click in the upper right corner and choose Hunt.

Services

To see the status of the critical services:

systemctl status suricata elasticsearch logstash kibana evebox molochviewer-selks molochpcapread-selks
supervisorctl status scirius

Configs

All configs for Elasticsearch, Kibana, Logstash and Scirius are in their default locations, for example /etc/{service_name_here}/
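A quick way to confirm this layout is to list the directories directly (a simple illustration, assuming the default package locations):

ls -d /etc/elasticsearch /etc/kibana /etc/logstash /etc/scirius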

For Moloch:

/data/moloch/etc/config.ini

For Suricata:

/etc/suricata/suricata.yaml
/etc/suricata/selks5-addin.yaml
/etc/suricata/selks5-interfaces-config.yaml

selks5-addin.yaml contains the SELKS 5 and Suricata specific setup. selks5-interfaces-config.yaml contains the auto-generated (by the First Time Setup script) interface configuration for the chosen sniffing interfaces.
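As a rough illustration, an auto-generated entry for a sniffing interface named eth0 could look like the af-packet snippet below. This is only a sketch; the real file is written by the First Time Setup script and its exact content may differ for your interfaces.

af-packet:
  - interface: eth0
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes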

Scripts

The first time setup script can be run from the command line:

selks-first-time-setup_stamus.sh

or from the desktop shortcut (if using the desktop version of SELKS) by double-clicking the FirstTimeSetup icon.

Data, Logs and Full Packet Capture

Logs

Elasticsearch:

/var/log/elasticsearch/elasticsearch.log

Logstash:

/var/log/logstash/logstash-plain.log
/var/log/logstash/logstash-slowlog-plain.log

Suricata:

/var/log/suricata/suricata.log
/var/log/suricata/stats.log

Moloch:

/data/moloch/logs/

Scirius:

/var/log/scirius-error.log
/var/log/scirius.log

Full Packet Capture (FPC)

Full Packet Capture on SELKS 5 is done by Suricata. During the first time setup you will be asked to choose one of:

1) FPC - Full Packet Capture. Suricata will rotate and delete the captured pcap files.
2) FPC_Retain - Full Packet Capture with Moloch's pcap retention/rotation. Keeps the pcaps as long as there is space available.
3) None - disable packet capture

How it all comes together

The default settings for SELKS 5 are located in /etc/suricata/selks5-addin.yaml and in terms of FPC are:

- pcap-log:
    enabled: yes
    filename: log.%n.%t.pcap
    #filename: log.pcap

    # File size limit.  Can be specified in kb, mb, gb.  Just a number
    # is parsed as bytes.
    limit: 10mb

    # If set to a value will enable ring buffer mode. Will keep Maximum of "max-files" of size "limit"
    max-files: 20

    mode: multi # normal, multi or sguil.

    # Directory to place pcap files. If not provided the default log
    # directory will be used. Required for "sguil" mode.
    dir: /data/nsm/

This means that every Suricata thread (by default there are as many as the available CPUs) writes its own pcap file. When that pcap reaches 10 MB in size it is closed and a new one is started.

NOTE: When a pcap is closed for writing by Suricata, it is picked up by the Moloch reader and ingested into Elasticsearch. This also explains why in certain cases you can see an alert but the FPC is not immediately available in the Moloch viewer.

The pcaps are rotated on a per-thread basis once a thread has written 20 pcaps. So with the default config you could theoretically have a maximum of:

20 * #number_of_threads * 10MB

For a 4 CPU machine with the default settings this would be:

20 * 4 * 10MB = 800MB of data max.

After that the data gets rotated and will never go over 800MB.
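To check how much space the captured pcaps actually occupy (assuming the default capture directory from the config above), you can run:

du -sh /data/nsm/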

If you choose to adjust those default settings you need to restart the service for the changes to take effect: systemctl restart suricata

Option 1

Pcaps get stored in:

/data/nsm/

If you choose Option 1 the pcaps will be rotated by Suricata.

Keeping smaller pcap files makes sure the data gets ingested more often. However, this needs to be tried/tested out for each particular deployment depending on the size of the sniffing traffic.
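For example, to get data ingested more frequently while keeping the same overall storage budget, you could halve the per-file limit and double the file count in /etc/suricata/selks5-addin.yaml (illustrative values only; test against your own traffic):

    limit: 5mb
    max-files: 40

Restart Suricata afterwards so the change takes effect: systemctl restart suricata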

Moloch's config.ini settings are explained here - https://github.com/aol/moloch/wiki/Settings

Option 2 (Recommended)

Pcaps get stored in:

/data/nsm/
/data/moloch/raw/

This option offers the possibility of setting up a size and time based retention policy.

The pcap storage is handled by Moloch. In that case Suricata writes the FPC pcaps in /data/nsm/, but once a file is done being written (closed, for example because it reached the default 10MB limit) it is ingested and then immediately deleted from /data/nsm/, while being kept in /data/moloch/raw/. The rotation policy of Moloch is described here - https://github.com/aol/moloch/wiki/FAQ#pcap-deletion. If needed, the settings can be adjusted on SELKS in /data/moloch/etc/config.ini. You would need to restart the service for the changes to take effect: systemctl restart molochpcapread-selks

Moloch's config.ini settings are explained here - https://github.com/aol/moloch/wiki/Settings

Moloch's storage handling is explained here - https://github.com/aol/moloch/wiki/FAQ#data-never-gets-deleted
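As an illustration, a retention policy in /data/moloch/etc/config.ini could look like the snippet below. These are example values only; check the Settings page linked above for the exact meaning of each option before changing anything.

# keep at least 5% of the disk free before the oldest pcaps are deleted
freeSpaceG = 5%
# maximum size of an individual Moloch pcap file, in gigabytes
maxFileSizeG = 12

Restart the reader service afterwards: systemctl restart molochpcapread-selks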

Option 3

With this option no packet capture will be done.

FPC data deletion/rotation

The Moloch and Elasticsearch clean up procedures are located in:

/etc/crontab

With respect to pcap storage - depending on whether you have chosen Option 1 or Option 2, pcap rotation and deletion will be handled either by Suricata or Moloch, as explained in the sections above.

To clean up and delete all logs and pcaps (wipe everything out) and flush the Elasticsearch DB you can use:

selks-db-logs-cleanup_stamus.sh

The PCAP storage rotation policy of Moloch (if it is chosen) is described here - https://github.com/aol/moloch/wiki/FAQ#pcap-deletion.

To see the size of the indices in Elasticsearch:

curl -X GET "localhost:9200/_cat/indices?v&s=store.size"

To check specifically the indices created from Suricata's logs in Elasticsearch:

curl -X GET "localhost:9200/_cat/indices?v&s=store.size"  |grep logstash
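If an old index needs to be removed manually to reclaim space, it can be deleted through the Elasticsearch delete index API. The index name below is just an illustration; take the actual name from the listing above.

curl -X DELETE "localhost:9200/logstash-alert-2019.01.01"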

Scripts

Handy scripts -

Clean all logs and flush DBs:

selks-db-logs-cleanup_stamus.sh

First time setup. Can be run multiple times, as many as needed/wanted:

selks-first-time-setup_stamus.sh

Set up and configure Moloch (already included in selks-first-time-setup_stamus.sh execution sequence):

selks-molochdb-init-setup_stamus.sh

Set up the sniffing interface for Suricata (already included in selks-first-time-setup_stamus.sh execution sequence):

selks-setup-ids-interface.sh

SELKS Upgrade:

selks-upgrade_stamus.sh

Data storage considerations

For the purpose of speed, the OS and /data/nsm/ can reside on SSDs.

Depending on your FPC needs you can consider mounting /data/moloch/raw/ onto separate partitions/disks. Those can be slower but cheaper and with much bigger volume.
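A minimal sketch of such a mount in /etc/fstab, assuming a hypothetical second disk /dev/sdb1 formatted with ext4 (adjust device, filesystem and options to your setup):

# dedicated large/cheap disk for Moloch full packet capture storage
/dev/sdb1   /data/moloch/raw   ext4   defaults,noatime   0   2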

Kibana dashboards

NOTE: Make sure (especially if you have upgraded to Scirius 3.2.0+) that in /etc/scirius/local_settings.py you have the following variable:

KIBANA6_DASHBOARDS_PATH = "/opt/selks/kibana6-dashboards/"

To reload/reset the dashboards from the cmd/shell:

cd /usr/share/python/scirius/ && . bin/activate && python bin/manage.py kibana_reset && deactivate

To reload/reset the dashboards from Scirius GUI -

Go to System settings (from the Stamus logo drop down menu in the left upper corner) -> Kibana -> choose the desired action.

Source location:

/opt/selks/kibana6-dashboards/

Performance optimizations and docs

Elasticsearch

Documentation:

https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html

A quick attempt at performance optimization is to look at /etc/elasticsearch/jvm.options and increase the heap:

## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g
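For example, on a machine with 16 GB of RAM you could set both values to 4 GB (a common rule of thumb is to keep the heap at no more than half of the available RAM):

-Xms4g
-Xmx4g

Restart Elasticsearch afterwards for the change to take effect: systemctl restart elasticsearch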

A more thorough guide for Elasticsearch performance tuning

Logstash

Documentation:

https://www.elastic.co/guide/en/logstash/current/introduction.html

For a quick try at some performance optimization you could increase the heap in /etc/logstash/jvm.options:

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

Also in /etc/logstash/logstash.yml:

# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
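For instance, on a 4-core host you could uncomment and raise these values (illustrative numbers; tune against your own event rate), then restart Logstash with systemctl restart logstash:

pipeline.workers: 4
pipeline.batch.size: 250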

A more thorough guide for Performance tuning and troubleshooting

Moloch

Documentation:

https://github.com/aol/moloch/wiki

Performance tuning:

https://github.com/aol/moloch/wiki/Settings#high-performance-settings

Suricata

Documentation:

https://suricata.readthedocs.io/en/latest/

Performance tuning (for advanced users):

https://github.com/pevma/SEPTun

https://github.com/pevma/SEPTun-Mark-II

ELK stack 6.6.0 nginx config change needed

Adjust your nginx config as shown below and restart nginx: systemctl restart nginx

root@SELKS5RCv2:~# cat  /etc/nginx/sites-available/selks5.conf
server {
   listen 127.0.0.1:80;
   listen 127.0.1.1:80;
   listen 443 default_server ssl;
   ssl_certificate /etc/nginx/ssl/scirius.crt;
   ssl_certificate_key /etc/nginx/ssl/scirius.key;
   server_name SELKS;
   access_log /var/log/nginx/scirius.access.log;
   error_log /var/log/nginx/scirius.error.log;

   # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
   location /static/ { # STATIC_URL
       alias /var/lib/scirius/static/; # STATIC_ROOT
       expires 30d;
   }

   location /media/ { # MEDIA_URL
       alias /var/lib/scirius/static/; # MEDIA_ROOT
       expires 30d;
   }

   location /app/moloch/ {
       proxy_pass https://127.0.0.1:8005;
       proxy_redirect off;
   }

   location /plugins/ {
       proxy_pass http://127.0.0.1:5601/plugins/;
       proxy_redirect off;
   }

   location /dlls/ {
       proxy_pass http://127.0.0.1:5601/dlls/;
       proxy_redirect off;
   }

   location /socket.io/ {
       proxy_pass http://127.0.0.1:5601/socket.io/;
       proxy_redirect off;
   }

   location /dataset/ {
       proxy_pass http://127.0.0.1:5601/dataset/;
       proxy_redirect off;
   }

   location /translations/ {
       proxy_pass http://127.0.0.1:5601/translations/;
       proxy_redirect off;
   }

   location ^~ /built_assets/ {
       proxy_pass http://127.0.0.1:5601/built_assets/;
       proxy_redirect off;
   }

   location /ui/ {
       proxy_pass http://127.0.0.1:5601/ui/;
       proxy_redirect off;
   }

   location / {
       proxy_pass http://127.0.0.1:8000;
       proxy_read_timeout 600;
       proxy_set_header Host $http_host;
       proxy_set_header X-Forwarded-Proto https;
       proxy_redirect off;
   }

}