[root@rock00 SimpleRock]# chef-client -z -r "recipe[simplerock]"
[2016-01-22T18:08:19-05:00] WARN: No config file found or specified on command line, using command line options.
Starting Chef Client, version 12.3.0
resolving cookbooks for run list: ["simplerock"]
Synchronizing Cookbooks:
  - simplerock
  - yum
Compiling Cookbooks...
Converging 122 resources
Recipe: simplerock::default
  * log[branding] action nothing (skipped due to action :nothing)
  * directory[/data] action create (up to date)
  * yum_package[chrony] action install
    - install version 2.1.1-1.el7.centos of package chrony
  * execute[set_time_zone] action run
    - execute /usr/bin/timedatectl set-timezone UTC
  * execute[enable_ntp] action run
    - execute /usr/bin/timedatectl set-ntp yes
  * execute[enable_rhel_optional] action run (skipped due to only_if)
  * execute[enable_rhel_extras] action run (skipped due to only_if)
  * execute[enable_centos_cr] action run
    - execute sed -i "s/enabled=0/enabled=1/g" /etc/yum.repos.d/CentOS-CR.repo
  * yum_package[kernel-devel] action install
    - install version 3.10.0-327.el7 of package kernel-devel
  * yum_package[kernel-headers] action install
    - install version 3.10.0-327.el7 of package kernel-headers
  * ohai[reload_network] action nothing (skipped due to action :nothing)
  * ruby_block[determine_monitor_interface] action nothing (skipped due to action :nothing)
  * execute[ipv6_permanent_disable_1] action run
    - execute echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
  * execute[ipv6_permanent_disable_2] action run
    - execute echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
  * execute[ipv6_permanent_disable_3] action run
    - execute sysctl -p
  * execute[ipv6_ephemeral_disable_all] action run
    - execute echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
  * execute[ipv6_ephemeral_disable_default] action run
    - execute echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
  * execute[ipv6_fix_sshd] action run
    - execute sed -i "s/#AddressFamily any/AddressFamily inet/g" /etc/ssh/sshd_config
  * execute[ipv6_fix_sshd_restart] action run
    - execute systemctl restart sshd
  * execute[ipv6_remove_localhost] action run
    - execute sed -i "/^::1/d" /etc/hosts
  * execute[ipv6_fix_network_restart] action run
    - execute systemctl restart network.service
  * ohai[reload_network] action reload
    - re-run ohai and merge results into node attributes
  * template[ifcfg-monif] action nothing (skipped due to action :nothing)
  * execute[ifcfg_hacks] action nothing (skipped due to action :nothing)
  * template[/sbin/ifup-local] action nothing (skipped due to action :nothing)
  * execute[add_google_dns] action run
    - execute echo "nameserver 8.8.8.8" >> /etc/resolv.conf; echo "nameserver 8.8.4.4" >> /etc/resolv.conf
  * execute[set_hostname] action run
    - execute echo -e "127.0.0.2\tsimplerockbuild.simplerock.lan\tsimplerockbuild" >> /etc/hosts
  * execute[set_system_hostname] action run
    - execute hostnamectl set-hostname simplerockbuild.simplerock.lan
  * yum_package[epel-release] action install
    - install version 7-5 of package epel-release
  * execute[install_epel] action run (skipped due to not_if)
  * execute[import_epel_key] action run
    - execute rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
  * yum_repository[bintray_cyberdev] action create
    * template[/etc/yum.repos.d/bintray_cyberdev.repo] action create
      - create new file /etc/yum.repos.d/bintray_cyberdev.repo
      - update content in file /etc/yum.repos.d/bintray_cyberdev.repo from none to efca23
      --- /etc/yum.repos.d/bintray_cyberdev.repo  2016-01-22 23:09:37.776875486 +0000
      +++ /tmp/chef-rendered-template20160122-9436-b4w2t8  2016-01-22 23:09:37.776875486 +0000
      @@ -1 +1,10 @@
      +# This file was generated by Chef
      +# Do NOT modify this file by hand.
      +
      +[bintray_cyberdev]
      +name=Bintray CyberDev Repo
      +baseurl=https://dl.bintray.com/cyberdev/capes
      +enabled=1
      +gpgcheck=0
      +sslverify=true
      - change mode from '' to '0644'
      - restore selinux security context
  * execute[yum clean bintray_cyberdev] action run
    - execute yum clean all --disablerepo=* --enablerepo=bintray_cyberdev
  * execute[yum-makecache-bintray_cyberdev] action run
    - execute yum -q makecache --disablerepo=* --enablerepo=bintray_cyberdev
  * ruby_block[yum-cache-reload-bintray_cyberdev] action create
    - execute the ruby block yum-cache-reload-bintray_cyberdev
  * execute[yum clean bintray_cyberdev] action nothing (skipped due to action :nothing)
  * execute[yum-makecache-bintray_cyberdev] action nothing (skipped due to action :nothing)
  * ruby_block[yum-cache-reload-bintray_cyberdev] action nothing (skipped due to action :nothing)
  * yum_repository[logstash-2.1.x] action create
    * template[/etc/yum.repos.d/logstash-2.1.x.repo] action create
      - create new file /etc/yum.repos.d/logstash-2.1.x.repo
      - update content in file /etc/yum.repos.d/logstash-2.1.x.repo from none to 9c351d
      --- /etc/yum.repos.d/logstash-2.1.x.repo  2016-01-22 23:09:39.659875442 +0000
      +++ /tmp/chef-rendered-template20160122-9436-zu5bcp  2016-01-22 23:09:39.658875442 +0000
      @@ -1 +1,11 @@
      +# This file was generated by Chef
      +# Do NOT modify this file by hand.
      +
      +[logstash-2.1.x]
      +name=Logstash repository for 2.1.x packages
      +baseurl=http://packages.elastic.co/logstash/2.1/centos
      +enabled=1
      +gpgcheck=1
      +gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
      +sslverify=true
      - change mode from '' to '0644'
      - restore selinux security context
  * execute[yum clean logstash-2.1.x] action run
    - execute yum clean all --disablerepo=* --enablerepo=logstash-2.1.x
  * execute[yum-makecache-logstash-2.1.x] action run
    - execute yum -q makecache --disablerepo=* --enablerepo=logstash-2.1.x
  * ruby_block[yum-cache-reload-logstash-2.1.x] action create
    - execute the ruby block yum-cache-reload-logstash-2.1.x
  * execute[yum clean logstash-2.1.x] action nothing (skipped due to action :nothing)
  * execute[yum-makecache-logstash-2.1.x] action nothing (skipped due to action :nothing)
  * ruby_block[yum-cache-reload-logstash-2.1.x] action nothing (skipped due to action :nothing)
  * yum_repository[elasticsearch-2.x] action create
    * template[/etc/yum.repos.d/elasticsearch-2.x.repo] action create
      - create new file /etc/yum.repos.d/elasticsearch-2.x.repo
      - update content in file /etc/yum.repos.d/elasticsearch-2.x.repo from none to 62f955
      --- /etc/yum.repos.d/elasticsearch-2.x.repo  2016-01-22 23:09:40.978875411 +0000
      +++ /tmp/chef-rendered-template20160122-9436-1q86nt0  2016-01-22 23:09:40.977875411 +0000
      @@ -1 +1,11 @@
      +# This file was generated by Chef
      +# Do NOT modify this file by hand.
      +
      +[elasticsearch-2.x]
      +name=Elasticsearch repository for 2.x packages
      +baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
      +enabled=1
      +gpgcheck=1
      +gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
      +sslverify=true
      - change mode from '' to '0644'
      - restore selinux security context
  * execute[yum clean elasticsearch-2.x] action run
    - execute yum clean all --disablerepo=* --enablerepo=elasticsearch-2.x
  * execute[yum-makecache-elasticsearch-2.x] action run
    - execute yum -q makecache --disablerepo=* --enablerepo=elasticsearch-2.x
  * ruby_block[yum-cache-reload-elasticsearch-2.x] action create
    - execute the ruby block yum-cache-reload-elasticsearch-2.x
  * execute[yum clean elasticsearch-2.x] action nothing (skipped due to action :nothing)
  * execute[yum-makecache-elasticsearch-2.x] action nothing (skipped due to action :nothing)
  * ruby_block[yum-cache-reload-elasticsearch-2.x] action nothing (skipped due to action :nothing)
  * execute[import_ntop_key] action run
    - execute rpm --import http://packages.ntop.org/centos-stable/RPM-GPG-KEY-deri
  * execute[yum_makecache] action run
    - execute yum makecache fast
  * yum_package[firewalld] action remove (up to date)
  * yum_package[postfix] action remove
    - remove package postfix
  * yum_package[tcpreplay, iptables-services, dkms, bro, broctl, kafka-bro-plugin, gperftools-libs, git, java-1.8.0-oracle, kafka, logstash, elasticsearch, nginx-spnego, jq, policycoreutils-python, patch, vim, openssl-devel, zlib-devel, net-tools, lsof, htop, GeoIP-update, GeoIP-devel, GeoIP, kafkacat, stenographer, bats, nmap-ncat, snort, daq, perl-libwww-perl, perl-Crypt-SSLeay, perl-Archive-Tar, perl-Sys-Syslog, perl-LWP-Protocol-https] action install
    - install version 4.1.0-1.el7 of package tcpreplay
    - install version 1.4.21-16.el7 of package iptables-services
    - install version 2.2.0.3-30.git.7c3e7c5.el7 of package dkms
    - install version 2.4.1-1.1 of package bro
    - install version 2.4.1-1.1 of package broctl
    - install version 0.7.5-1 of package kafka-bro-plugin
    - install version 2.4-7.el7 of package gperftools-libs
    - install version 1.8.0.51-1jpp.2.el7_1 of package java-1.8.0-oracle
    - install version 0.8.2.1-1.el7.centos of package kafka
    - install version 2.1.1-1 of package logstash
    - install version 2.1.1-1 of package elasticsearch
    - install version 1.6.3-1.el7.centos.ngx of package nginx-spnego
    - install version 1.5-1.el7 of package jq
    - install version 2.2.5-20.el7 of package policycoreutils-python
    - install version 2.7.1-8.el7 of package patch
    - install version 1.0.1e-51.el7_2.2 of package openssl-devel
    - install version 1.2.7-15.el7 of package zlib-devel
    - install version 2.0-0.17.20131004git.el7 of package net-tools
    - install version 4.87-4.el7 of package lsof
    - install version 1.0.3-3.el7 of package htop
    - install version 1.5.0-9.el7 of package GeoIP-update
    - install version 1.5.0-9.el7 of package GeoIP-devel
    - install version 1.5.0-9.el7 of package GeoIP
    - install version 1.2.0-1.el7.centos of package kafkacat
    - install version 0.0-1.git89a9b66.el7.centos of package stenographer
    - install version 0.4.0-1.20141016git3b33a5a.el7 of package bats
    - install version 6.40-7.el7 of package nmap-ncat
    - install version 2.9.8.0-1 of package snort
    - install version 2.0.6-1 of package daq
    - install version 6.05-2.el7 of package perl-libwww-perl
    - install version 0.64-5.el7 of package perl-Crypt-SSLeay
    - install version 1.92-2.el7 of package perl-Archive-Tar
    - install version 0.33-3.el7 of package perl-Sys-Syslog
    - install version 6.04-4.el7 of package perl-LWP-Protocol-https
    - install version 7.4.160-1.el7 of package vim-enhanced
  * directory[/etc/pf_ring] action create
    - create new directory /etc/pf_ring
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * file[/etc/pf_ring/pf_ring.start] action create
    - create new file /etc/pf_ring/pf_ring.start
    - restore selinux security context
  * template[/etc/pf_ring/pf_ring.conf] action create
    - create new file /etc/pf_ring/pf_ring.conf
    - update content in file /etc/pf_ring/pf_ring.conf from none to 3aefa7
    --- /etc/pf_ring/pf_ring.conf  2016-01-22 23:13:01.637870681 +0000
    +++ /tmp/chef-rendered-template20160122-9436-nzsaf6  2016-01-22 23:13:01.637870681 +0000
    @@ -1 +1,6 @@
    +transparent_mode=0
    +min_num_slots=8192
    +enable_tx_capture=0
    +enable_ip_defrag=0
    +quick_mode=0
    - change mode from '' to '0644'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * service[pf_ring] action enable (up to date)
  * service[pf_ring] action start
    - start service service[pf_ring]
  * service[cluster] action disable
    - disable service service[cluster]
  * directory[/data/bro/logs] action create
    - create new directory /data/bro/logs
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * directory[/data/bro/spool] action create
    - create new directory /data/bro/spool
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * directory[/opt/bro/logs] action delete
    - delete existing directory /opt/bro/logs
  * link[/opt/bro/logs] action create
    - create symlink at /opt/bro/logs to /data/bro/logs
  * directory[/opt/bro/spool] action delete
    - delete existing directory /opt/bro/spool
  * link[/opt/bro/spool] action create
    - create symlink at /opt/bro/spool to /data/bro/spool
  * execute[start_bro] action nothing (skipped due to action :nothing)
  * directory[/opt/bro/share/bro/site/scripts] action create
    - create new directory /opt/bro/share/bro/site/scripts
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * template[/opt/bro/share/bro/site/scripts/readme.txt] action create
    - create new file /opt/bro/share/bro/site/scripts/readme.txt
    - update content in file /opt/bro/share/bro/site/scripts/readme.txt from none to 50aa67
    --- /opt/bro/share/bro/site/scripts/readme.txt  2016-01-22 23:13:04.095870623 +0000
    +++ /tmp/chef-rendered-template20160122-9436-lacvzl  2016-01-22 23:13:04.095870623 +0000
    @@ -1 +1,11 @@
    +
    +It is recommened to put bro scripts in individual directories and use __load__.bro files.
    +
    +Example:
    +directory = scripts/something
    +script = scripts/something/something.bro
    +loader = scripts/something/__load__.bro
    +
    +Then in your custom.local.bro you can @load scripts/something
    +
    - change mode from '' to '0644'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * template[/opt/bro/etc/broctl.cfg] action create
    - update content in file /opt/bro/etc/broctl.cfg from 05399c to ca275f
    --- /opt/bro/etc/broctl.cfg  2015-10-22 03:30:16.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-1m5l3hj  2016-01-22 23:13:04.131870622 +0000
    @@ -1,71 +1,14 @@
    -## Global BroControl configuration file.
    -
    -###############################################
    -# Mail Options
    -
    -# Recipient address for all emails sent out by Bro and BroControl.
     MailTo = root@localhost
    -
    -# Mail connection summary reports each log rotation interval. A value of 1
    -# means mail connection summaries, and a value of 0 means do not mail
    -# connection summaries. This option has no effect if the trace-summary
    -# script is not available.
     MailConnectionSummary = 1
    -
    -# Lower threshold (in percentage of disk space) for space available on the
    -# disk that holds SpoolDir. If less space is available, "broctl cron" starts
    -# sending out warning emails. A value of 0 disables this feature.
     MinDiskSpace = 5
    -
    -# Send mail when "broctl cron" notices the availability of a host in the
    -# cluster to have changed. A value of 1 means send mail when a host status
    -# changes, and a value of 0 means do not send mail.
     MailHostUpDown = 1
    -
    -###############################################
    -# Logging Options
    -
    -# Rotation interval in seconds for log files on manager (or standalone) node.
    -# A value of 0 disables log rotation.
     LogRotationInterval = 3600
    -
    -# Expiration interval for log files in LogDir. Files older than this many days
    -# will be deleted upon running "broctl cron". A value of 0 means that logs
    -# never expire.
     LogExpireInterval = 0
    -
    -# Enable BroControl to write statistics to the stats.log file. A value of 1
    -# means write to stats.log, and a value of 0 means do not write to stats.log.
     StatsLogEnable = 1
    -
    -# Number of days that entries in the stats.log file are kept. Entries older
    -# than this many days will be removed upon running "broctl cron". A value of 0
    -# means that entries never expire.
     StatsLogExpireInterval = 0
    -
    -###############################################
    -# Other Options
    -
    -# Show all output of the broctl status command. If set to 1, then all output
    -# is shown. If set to 0, then broctl status will not collect or show the peer
    -# information (and the command will run faster).
     StatusCmdShowAll = 1
    -
    -# Site-specific policy script to load. Bro will look for this in
    -# $PREFIX/share/bro/site. A default local.bro comes preinstalled
    -# and can be customized as desired.
     SitePolicyStandalone = local.bro
    -
    -# Location of the log directory where log files will be archived each rotation
    -# interval.
    -LogDir = /opt/bro/logs
    -
    -# Location of the spool directory where files and data that are currently being
    -# written are stored.
    -SpoolDir = /opt/bro/spool
    -
    -# Location of other configuration files that can be used to customize
    -# BroControl operation (e.g. local networks, nodes).
    +LogDir = /data/bro/logs
    +SpoolDir = /data/bro/spool
     CfgDir = /opt/bro/etc
    -
    - change mode from '0664' to '0644'
    - change group from 'bro' to 'root'
    - restore selinux security context
  * template[/opt/bro/etc/networks.cfg] action create
    - update content in file /opt/bro/etc/networks.cfg from d62112 to 2da42b
    --- /opt/bro/etc/networks.cfg  2015-09-06 19:43:34.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-ogawoa  2016-01-22 23:13:04.168870621 +0000
    @@ -1,7 +1,11 @@
    -# List of local networks in CIDR notation, optionally followed by a
    -# descriptive tag.
    -# For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes.
    +#LOCAL NETS
    +10.0.0.0/8 RFC1918
    +172.16.0.0/12 RFC1918
    +192.168.0.0/16 RFC1918
    -10.0.0.0/8 Private IP space
    -192.168.0.0/16 Private IP space
    +##########
    +## ROCK ##
    +##########
    +# Add networks for the networks you are monitoring into this file if they're not all RFC1918.
    +##########
    - change mode from '0664' to '0644'
    - change group from 'bro' to 'root'
    - restore selinux security context
  * template[/opt/bro/etc/node.cfg] action nothing (skipped due to action :nothing)
  * template[/opt/bro/share/bro/site/local.bro] action create
    - update content in file /opt/bro/share/bro/site/local.bro from ab57d0 to d1a8d7
    --- /opt/bro/share/bro/site/local.bro  2015-09-06 19:43:16.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-dcjuol  2016-01-22 23:13:04.203870621 +0000
    @@ -1,4 +1,4 @@
    -##! Local site policy. Customize as appropriate.
    +##! Local site policy. Customize as appropriate.
     ##!
     ##! This file will not be overwritten when upgrading or reinstalling!
    @@ -11,16 +11,16 @@
     # Load the scan detection script.
     @load misc/scan
    -# Log some information about web applications being used by users
    +# Log some information about web applications being used by users
     # on your network.
     @load misc/app-stats
    -# Detect traceroute being run on the network.
    +# Detect traceroute being run on the network.
     @load misc/detect-traceroute
     # Generate notices when vulnerable versions of software are discovered.
     # The default is to only monitor software found in the address space defined
    -# as "local". Refer to the software framework's documentation for more
    +# as "local". Refer to the software framework's documentation for more
     # information.
     @load frameworks/software/vulnerable
    @@ -35,12 +35,12 @@
     @load protocols/smtp/software
     @load protocols/ssh/software
     @load protocols/http/software
    -# The detect-webapps script could possibly cause performance trouble when
    +# The detect-webapps script could possibly cause performance trouble when
     # running on live traffic. Enable it cautiously.
     #@load protocols/http/detect-webapps
    -# This script detects DNS results pointing toward your Site::local_nets
    -# where the name is not part of your local DNS zone and is being hosted
    +# This script detects DNS results pointing toward your Site::local_nets
    +# where the name is not part of your local DNS zone and is being hosted
     # externally. Requires that the Site::local_zones variable is defined.
     @load protocols/dns/detect-external-names
    @@ -62,7 +62,7 @@
     # certificate notary service; see http://notary.icsi.berkeley.edu .
     # @load protocols/ssl/notary
    -# If you have libGeoIP support built in, do some geographic detections and
    +# If you have libGeoIP support built in, do some geographic detections and
     # logging for SSH traffic.
     @load protocols/ssh/geo-data
     # Detect hosts doing SSH bruteforce attacks.
    @@ -79,9 +79,31 @@
     @load frameworks/files/hash-all-files
     # Detect SHA1 sums in Team Cymru's Malware Hash Registry.
    -@load frameworks/files/detect-MHR
    +#@load frameworks/files/detect-MHR
     # Uncomment the following line to enable detection of the heartbleed attack. Enabling
     # this might impact performance a bit.
     # @load policy/protocols/ssl/heartbleed
    +
    +#@load scripts/json-logs
    +##########
    +## ROCK ##
    +##########
    +# As referenced in the "simplerock" recipe, here is the json-logs load line, commented out.
    +##########
    +
    +
    +# Bro Kafka Output
    +@load Kafka/KafkaWriter/logs-to-kafka
    +redef KafkaLogger::topic_name = "bro_raw";
    +redef KafkaLogger::sensor_name = "rock00";
    +redef KafkaLogger::logstash_style_timestamp = T;
    +
    +# File Extraction
    +@load scripts/bro-file-extraction
    +redef FileExtract::prefix = "/data/bro/logs/extract_files/";
    +redef FileExtract::default_limit = 1048576000;
    +
    +# Unified2
    +@load scripts/rock/unified2
    - change mode from '0664' to '0644'
    - change group from 'bro' to 'root'
    - restore selinux security context
  * git[/opt/bro/share/bro/site/scripts/bro-file-extraction] action sync
    - clone from https://github.com/CyberAnalyticDevTeam/bro-file-extraction.git into /opt/bro/share/bro/site/scripts/bro-file-extraction
    - checkout ref 0d50f4cfc8d700035d5a6798d0c7b7de816e768a branch master
  * template[/opt/bro/share/bro/site/scripts/json-logs.bro] action create
    - create new file /opt/bro/share/bro/site/scripts/json-logs.bro
    - update content in file /opt/bro/share/bro/site/scripts/json-logs.bro from none to c43924
    --- /opt/bro/share/bro/site/scripts/json-logs.bro  2016-01-22 23:13:05.070870600 +0000
    +++ /tmp/chef-rendered-template20160122-9436-u8aheh  2016-01-22 23:13:05.069870600 +0000
    @@ -1 +1,5 @@
    +@load tuning/json-logs
    +
    +redef LogAscii::json_timestamps = JSON::TS_ISO8601;
    +redef LogAscii::use_json = T;
    - change mode from '' to '0644'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * template[/etc/profile.d/bro.sh] action create
    - create new file /etc/profile.d/bro.sh
    - update content in file /etc/profile.d/bro.sh from none to 29b72e
    --- /etc/profile.d/bro.sh  2016-01-22 23:13:05.103870599 +0000
    +++ /tmp/chef-rendered-template20160122-9436-a5016g  2016-01-22 23:13:05.103870599 +0000
    @@ -1 +1,55 @@
    +# Load bro to PATH
    +pathmunge /opt/bro/bin
    +
    +# Helpers
    +alias bro-column="sed \"s/fields.//;s/types.//\" | column -s $'\t' -t"
    +alias bro-awk='awk -F" "'
    +bro-grep() { grep -E "(^#)|$1" $2; }
    +bro-zgrep() { zgrep -E "(^#)|$1" $2; }
    +topcount() { sort | uniq -c | sort -rn | head -n ${1:-10}; }
    +colorize() { sed 's/#fields\t\|#types\t/#/g' | awk 'BEGIN {FS="\t"};{for(i=1;i<=NF;i++) printf("\x1b[%sm %s \x1b[0m",(i%7)+31,$i);print ""}'; }
    +cm() { cat $1 | sed 's/#fields\t\|#types\t/#/g' | awk 'BEGIN {FS="\t"};{for(i=1;i<=NF;i++) printf("\x1b[%sm %s \x1b[0m",(i%7)+31,$i);print ""}'; }
    +lesscolor() { cat $1 | sed 's/#fields\t\|#types\t/#/g' | awk 'BEGIN {FS="\t"};{for(i=1;i<=NF;i++) printf("\x1b[%sm %s \x1b[0m",(i%7)+31,$i);print ""}' | less -RS; }
    +topconn() { if [ $# -lt 2 ]; then echo "Usage: topconn {resp|orig} {proto|service} {tcp|udp|icmp|http|dns|ssl|smtp|\"-\"}"; else cat conn.log | bro-cut id.$1_h $2 | grep $3 | topcount; fi; }
    +fields() { grep -m 1 -E "^#fields" $1 | awk -vRS='\t' '/^[^#]/ { print $1 }' | cat -n ; }
    +toptalk() { for i in *.log; do echo -e "$i\n================="; cat $i | bro-cut id.orig_h id.resp_h | topcount 20; done; }
    +talkers() { for j in tcp udp icmp; do echo -e "\t=============\n\t $j\n\t============="; for i in resp orig; do echo -e "====\n$i\n===="; topconn $i proto $j | column -t; done; done; }
    +
    +toptotal() { if [ $# -lt 3 ]; then echo "Usage: toptotal {resp|orig} {orig_bytes|resp_bytes|duration} conn.log"; else
    +  zcat $3 | bro-cut id.$1_h $2 \
    +  | sort \
    +  | awk '{ if (host != $1) { \
    +      if (size != 0) \
    +        print $1, size; \
    +      host=$1; \
    +      size=0 \
    +    } else \
    +      size += $2 \
    +    } \
    +    END { \
    +      if (size != 0) \
    +        print $1, size \
    +    }' \
    +  | sort -rnk 2 \
    +  | head -n 20; fi; }
    +
    +topconvo() { if [ $# -lt 1 ]; then echo "Usage: topconvo conn.log"; else
    +  zcat $1 | bro-cut id.orig_h id.resp_h orig_bytes resp_bytes \
    +  | sort \
    +  | awk '{ if (host != $1 || host2 != $2) { \
    +      if (size != 0) \
    +        print $1, $2, size; \
    +      host=$1; \
    +      host2=$2; \
    +      size=0 \
    +    } else \
    +      size += $3; \
    +      size += $4 \
    +    } \
    +    END { \
    +      if (size != 0) \
    +        print $1, $2, size \
    +    }' \
    +  | sort -rnk 3 \
    +  | head -n 20; fi; }
    - restore selinux security context
  * execute[set capabilities on bro] action run
    - execute /usr/sbin/setcap cap_net_raw,cap_net_admin=eip $(readlink -f /opt/bro/bin/bro)
  * execute[set capabilities on capstats] action run
    - execute /usr/sbin/setcap cap_net_raw,cap_net_admin=eip $(readlink -f /opt/bro/bin/capstats)
  * execute[reload_systemd] action nothing (skipped due to action :nothing)
  * service[zookeeper] action enable
    - enable service service[zookeeper]
  * service[zookeeper] action start
    - start service service[zookeeper]
  * execute[create_bro_topic] action nothing (skipped due to action :nothing)
  * execute[set_kafka_retention] action run
    - execute sed -i "s/log.retention.hours=168/log.retention.hours=1/" /opt/kafka/config/server.properties
  * execute[set_kafka_data_dir] action run
    - execute sed -i "s|log.dirs=/tmp/kafka-logs|log.dirs=/data/kafka|g" /opt/kafka/config/server.properties
  * directory[/data/kafka] action create
    - create new directory /data/kafka
    - change mode from '' to '0755'
    - change owner from '' to 'kafka'
    - change group from '' to 'kafka'
    - restore selinux security context
  * service[kafka] action enable
    - enable service service[kafka]
  * service[kafka] action start
    - start service service[kafka]
  * directory[/data/elasticsearch] action create
    - create new directory /data/elasticsearch
    - change mode from '' to '0755'
    - change owner from '' to 'elasticsearch'
    - change group from '' to 'elasticsearch'
    - restore selinux security context
  * template[/etc/sysconfig/elasticsearch] action create
    - update content in file /etc/sysconfig/elasticsearch from 13bae9 to 3179a0
    --- /etc/sysconfig/elasticsearch  2015-12-15 13:37:05.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-einly8  2016-01-22 23:13:06.948870556 +0000
    @@ -8,18 +8,23 @@
     # Elasticsearch configuration directory
     #CONF_DIR=/etc/elasticsearch
    +# Elasticsearch configuration file
    +#CONF_FILE=$CONF_DIR/elasticsearch.yml
    +
     # Elasticsearch data directory
    -#DATA_DIR=/var/lib/elasticsearch
    +DATA_DIR=/data/elasticsearch
     # Elasticsearch logs directory
     #LOG_DIR=/var/log/elasticsearch
    +# Elasticsearch work directory
    +#WORK_DIR=/tmp/elasticsearch
    +
     # Elasticsearch PID directory
     #PID_DIR=/var/run/elasticsearch
    -# Heap size defaults to 256m min, 1g max
     # Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
    -#ES_HEAP_SIZE=2g
    +ES_HEAP_SIZE=3g
     # Heap new generation
     #ES_HEAP_NEWSIZE=
    @@ -50,9 +55,6 @@
     #ES_USER=elasticsearch
     #ES_GROUP=elasticsearch
    -# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
    -ES_STARTUP_SLEEP_TIME=5
    -
     ################################
     # System properties
     ################################
    @@ -60,17 +62,17 @@
     # Specifies the maximum file descriptor number that can be opened by this process
     # When using Systemd, this setting is ignored and the LimitNOFILE defined in
     # /usr/lib/systemd/system/elasticsearch.service takes precedence
    -#MAX_OPEN_FILES=65535
    +MAX_OPEN_FILES=65535
     # The maximum number of bytes of memory that may be locked into RAM
     # Set to "unlimited" if you use the 'bootstrap.mlockall: true' option
     # in elasticsearch.yml (ES_HEAP_SIZE must also be set).
# When using Systemd, the LimitMEMLOCK property must be set # in /usr/lib/systemd/system/elasticsearch.service -#MAX_LOCKED_MEMORY=unlimited +MAX_LOCKED_MEMORY=unlimited # Maximum number of VMA (Virtual Memory Areas) a process can own # When using Systemd, this setting is ignored and the 'vm.max_map_count' # property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf -#MAX_MAP_COUNT=262144 +MAX_MAP_COUNT=262144 - restore selinux security context * template[/usr/lib/sysctl.d/elasticsearch.conf] action create (up to date) * template[/etc/elasticsearch/elasticsearch.yml] action create - update content in file /etc/elasticsearch/elasticsearch.yml from 656bae to 65d389 --- /etc/elasticsearch/elasticsearch.yml 2015-11-18 12:27:49.000000000 +0000 +++ /tmp/chef-rendered-template20160122-9436-v3kemm 2016-01-22 23:13:06.997870555 +0000 @@ -1,97 +1,444 @@ -# ======================== Elasticsearch Configuration ========================= +##################### Elasticsearch Configuration Example ##################### + +# This file contains an overview of various configuration settings, +# targeted at operations staff. Application developers should +# consult the guide at . # -# NOTE: Elasticsearch comes with reasonable defaults for most settings. -# Before you set out to tweak and tune the configuration, make sure you -# understand what are you trying to accomplish and the consequences. +# The installation procedure is covered at +# . # -# The primary way of configuring a node is via this file. This template lists -# the most important settings you may want to configure for a production cluster. +# Elasticsearch comes with reasonable defaults for most settings, +# so you can try it out without bothering with configuration. # -# Please see the documentation for further information on configuration options: -# +# Most of the time, these defaults are just fine for running a production +# cluster. 
If you're fine-tuning your cluster, or wondering about the +# effect of certain configuration option, please _do ask_ on the +# mailing list or IRC channel [http://elasticsearch.org/community]. + +# Any element in the configuration can be replaced with environment variables +# by placing them in ${...} notation. For example: # -# ---------------------------------- Cluster ----------------------------------- +#node.rack: ${RACK_ENV_VAR} + +# For information on supported formats and syntax for the config file, see +# + + +################################### Cluster ################################### + +# Cluster name identifies your cluster for auto-discovery. If you're running +# multiple clusters on the same network, make sure you're using unique names. # -# Use a descriptive name for your cluster: +#cluster.name: elasticsearch +########## +## ROCK ## +########## +# There are some compelling reasons to change this. But, if you're not multicasting and +# providing a unicast host list (later in this config) then you needn't worry as long as +# you have the same thing here and in the logstash config. +########## + +#################################### Node ##################################### + +# Node names are generated dynamically on startup, so you're relieved +# from configuring them manually. You can tie this node to a specific name: # -# cluster.name: my-application +node.name: "simplerockbuild.simplerock.lan" +########## +## ROCK ## +########## +# This needs to be different on every host. +########## + +# Every node can be configured to allow or deny being eligible as the master, +# and to allow or deny to store the data. 
# -# ------------------------------------ Node ------------------------------------ +# Allow this node to be eligible as a master node (enabled by default): # -# Use a descriptive name for the node: +node.master: true # -# node.name: node-1 +# Allow this node to store data (enabled by default): # -# Add custom attributes to the node: +node.data: true + +########## +## ROCK ## +########## +# At scale, you will need to play with master and data nodes. +# That is currently left as an exercise for the reader. +########## + +# You can exploit these settings to design advanced cluster topologies. # -# node.rack: r1 +# 1. You want this node to never become a master node, only to hold data. +# This will be the "workhorse" of your cluster. # -# ----------------------------------- Paths ------------------------------------ +#node.master: false +#node.data: true # -# Path to directory where to store the data (separate multiple locations by comma): +# 2. You want this node to only serve as a master: to not store any data and +# to have free resources. This will be the "coordinator" of your cluster. # -# path.data: /path/to/data +#node.master: true +#node.data: false # +# 3. You want this node to be neither master nor data node, but +# to act as a "search load balancer" (fetching data from nodes, +# aggregating results, etc.) +# +#node.master: false +#node.data: false + +# Use the Cluster Health API [http://localhost:9200/_cluster/health], the +# Node Info API [http://localhost:9200/_nodes] or GUI tools +# such as , +# , +# and +# to inspect the cluster state. + +# A node can have generic attributes associated with it, which can later be used +# for customized shard allocation filtering, or allocation awareness. 
An attribute +# is a simple key value pair, similar to node.key: value, here is an example: +# +#node.rack: rack314 + +# By default, multiple nodes are allowed to start from the same installation location +# to disable it, set the following: +#node.max_local_storage_nodes: 1 + + +#################################### Index #################################### + +# You can set a number of options (such as shard/replica options, mapping +# or analyzer definitions, translog settings, ...) for indices globally, +# in this file. +# +# Note, that it makes more sense to configure index settings specifically for +# a certain index, either when creating it or by using the index templates API. +# +# See and +# +# for more information. + +# Set the number of shards (splits) of an index (5 by default): +# +index.number_of_shards: 32 + +########## +## ROCK ## +########## +# This is a "sane default" on reasonable hardware. This is also a number that should be tuned at scale. +########## + +# Set the number of replicas (additional copies) of an index (1 by default): +# +index.number_of_replicas: 0 + +########## +## ROCK ## +########## +# We don't care about keeping copies of the data safe, so we're not going to waste resources on keeping copies. Affects search speed, so tune it accordingly. +########## + +# Note, that for development on a local machine, with small indices, it usually +# makes sense to "disable" the distributed features: +# +#index.number_of_shards: 1 +#index.number_of_replicas: 0 + +# These settings directly affect the performance of index and search operations +# in your cluster. Assuming you have enough machines to hold shards and +# replicas, the rule of thumb is: +# +# 1. Having more *shards* enhances the _indexing_ performance and allows to +# _distribute_ a big index across machines. +# 2. Having more *replicas* enhances the _search_ performance and improves the +# cluster _availability_. +# +# The "number_of_shards" is a one-time setting for an index. 
+# +# The "number_of_replicas" can be increased or decreased anytime, +# by using the Index Update Settings API. +# +# Elasticsearch takes care about load balancing, relocating, gathering the +# results from nodes, etc. Experiment with different settings to fine-tune +# your setup. + +# Use the Index Status API () to inspect +# the index status. + + +#################################### Paths #################################### + +# Path to directory containing configuration (this file and logging.yml): +# +#path.conf: /path/to/conf + +# Path to directory where to store index data allocated for this node. +# +path.data: /data/elasticsearch + +########## +## ROCK ## +########## +# If you move your data somewhere else, you need to change it here. +########## + +# +# Can optionally include more than one location, causing data to be striped across +# the locations (a la RAID 0) on a file level, favouring locations with most free +# space on creation. For example: +# +#path.data: /path/to/data1,/path/to/data2 + +# Path to temporary files: +# +#path.work: /path/to/work + # Path to log files: # -# path.logs: /path/to/logs +path.logs: /var/log/elasticsearch + +# Path to where plugins are installed: # -# ----------------------------------- Memory ----------------------------------- +#path.plugins: /path/to/plugins + + +#################################### Plugin ################################### + +# If a plugin listed here is not installed for current node, the node will not start. # -# Lock the memory on startup: +#plugin.mandatory: mapper-attachments,lang-groovy + + +################################### Memory #################################### + +# Elasticsearch performs poorly when JVM starts swapping: you should ensure that +# it _never_ swaps. 
# -# bootstrap.mlockall: true +# Set this property to true to lock the memory: # -# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory -# available on the system and that the owner of the process is allowed to use this limit. +bootstrap.mlockall: true + +########## +## ROCK ## +########## +# Let ES take all the memory promised to it at startup. Things can go sideways without this. +########## + +# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set +# to the same value, and that the machine has enough memory to allocate +# for Elasticsearch, leaving enough memory for the operating system itself. # -# Elasticsearch performs poorly when the system is swapping the memory. +# You should also make sure that the Elasticsearch process is allowed to lock +# the memory, eg. by using `ulimit -l unlimited`. + + +############################## Network And HTTP ############################### + +# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens +# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node +# communication. (the range means that if the port is busy, it will automatically +# try the next port). + +# Set the bind address specifically (IPv4 or IPv6): # -# ---------------------------------- Network ----------------------------------- +#network.bind_host: 192.168.0.1 + +# Set the address other nodes will use to communicate with this node. If not +# set, it is automatically derived. It must point to an actual IP address. 
# -# Set the bind address to a specific IP (IPv4 or IPv6): +#network.publish_host: 192.168.0.1 + +# Set both 'bind_host' and 'publish_host': # -# network.host: 192.168.0.1 +#network.host: 192.168.0.1 + +# Set a custom port for the node to node communication (9300 by default): # -# Set a custom port for HTTP: +#transport.tcp.port: 9300 + +# Enable compression for all communication between nodes (disabled by default): # -# http.port: 9200 +#transport.tcp.compress: true + +# Set a custom port to listen for HTTP traffic: # -# For more information, see the documentation at: -# +#http.port: 9200 + +# Set a custom allowed content length: # -# ---------------------------------- Gateway ----------------------------------- +#http.max_content_length: 100mb + +# Disable HTTP completely: # -# Block initial recovery after a full cluster restart until N nodes are started: +#http.enabled: false + + +################################### Gateway ################################### + +# The gateway allows for persisting the cluster state between full cluster +# restarts. Every change to the state (such as adding an index) will be stored +# in the gateway, and when the cluster starts up for the first time, +# it will read its state from the gateway. + +# There are several types of gateway implementations. For more information, see +# . + +# The default gateway type is the "local" gateway (recommended): # -# gateway.recover_after_nodes: 3 +#gateway.type: local + +# Settings below control how and when to start the initial recovery process on +# a full cluster restart (to reuse as much local data as possible when using shared +# gateway). 
+ +# Allow recovery process after N nodes in a cluster are up: # -# For more information, see the documentation at: -# +#gateway.recover_after_nodes: 1 + +# Set the timeout to initiate the recovery process, once the N nodes +# from previous setting are up (accepts time value): # -# --------------------------------- Discovery ---------------------------------- +#gateway.recover_after_time: 5m + +# Set how many nodes are expected in this cluster. Once these N nodes +# are up (and recover_after_nodes is met), begin recovery process immediately +# (without waiting for recover_after_time to expire): # -# Elasticsearch nodes will find each other via unicast, by default. +#gateway.expected_nodes: 2 + + +############################# Recovery Throttling ############################# + +# These settings allow to control the process of shards allocation between +# nodes during initial recovery, replica allocation, rebalancing, +# or when adding and removing nodes. + +# Set the number of concurrent recoveries happening on a node: # -# Pass an initial list of hosts to perform discovery when new node is started: -# The default list of hosts is ["127.0.0.1", "[::1]"] +# 1. During the initial recovery # -# discovery.zen.ping.unicast.hosts: ["host1", "host2"] +#cluster.routing.allocation.node_initial_primaries_recoveries: 4 # -# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1): +# 2. During adding/removing nodes, rebalancing, etc # -# discovery.zen.minimum_master_nodes: 3 +#cluster.routing.allocation.node_concurrent_recoveries: 2 + +# Set to throttle throughput when recovering (eg. 
100mb, by default 20mb): # -# For more information, see the documentation at: -# +#indices.recovery.max_bytes_per_sec: 20mb + +# Set to limit the number of open concurrent streams when +# recovering a shard from a peer: # -# ---------------------------------- Various ----------------------------------- +#indices.recovery.concurrent_streams: 5 + + +################################## Discovery ################################## + +# Discovery infrastructure ensures nodes can be found within a cluster +# and master node is elected. Multicast discovery is the default. + +# Set to ensure a node sees N other master eligible nodes to be considered +# operational within the cluster. This should be set to a quorum/majority of +# the master-eligible nodes in the cluster. # -# Disable starting multiple nodes on a single system: +#discovery.zen.minimum_master_nodes: 1 + +# Set the time to wait for ping responses from other nodes when discovering. +# Set this option to a higher value on a slow or congested network +# to minimize discovery failures: # -# node.max_local_storage_nodes: 1 +#discovery.zen.ping.timeout: 3s + +# For more information, see +# + +# Unicast discovery allows to explicitly control which nodes will be used +# to discover the cluster. It can be used when multicast is not present, +# or to restrict the cluster communication-wise. # -# Require explicit names when deleting indices: +# 1. Disable multicast discovery (enabled by default): # -# action.destructive_requires_name: true +discovery.zen.ping.multicast.enabled: false + +########## +## ROCK ## +########## +# Your ES environment is probably going to be relatively static. No need for them to yell into the void looking for each other. +########## + +# +# 2. 
Configure an initial list of master nodes in the cluster +# to perform discovery when new nodes (master or data) are started: +# +discovery.zen.ping.unicast.hosts: ["localhost"] + +########## +## ROCK ## +########## +# If you're clustering, replace this line with something like: +# discovery.zen.ping.unicast.hosts: ["192.168.168.12", "192.168.168.13", "192.168.168.14", "192.168.168.15"] +# Including all of the ES hosts you want in the cluster. +########## + +# EC2 discovery allows to use AWS EC2 API in order to perform discovery. +# +# You have to install the cloud-aws plugin for enabling the EC2 discovery. +# +# For more information, see +# +# +# See +# for a step-by-step tutorial. + +# GCE discovery allows to use Google Compute Engine API in order to perform discovery. +# +# You have to install the cloud-gce plugin for enabling the GCE discovery. +# +# For more information, see . + +# Azure discovery allows to use Azure API in order to perform discovery. +# +# You have to install the cloud-azure plugin for enabling the Azure discovery. +# +# For more information, see . + +################################## Slow Log ################################## + +# Shard level query and fetch threshold logging. 
+ +#index.search.slowlog.threshold.query.warn: 10s +#index.search.slowlog.threshold.query.info: 5s +#index.search.slowlog.threshold.query.debug: 2s +#index.search.slowlog.threshold.query.trace: 500ms + +#index.search.slowlog.threshold.fetch.warn: 1s +#index.search.slowlog.threshold.fetch.info: 800ms +#index.search.slowlog.threshold.fetch.debug: 500ms +#index.search.slowlog.threshold.fetch.trace: 200ms + +#index.indexing.slowlog.threshold.index.warn: 10s +#index.indexing.slowlog.threshold.index.info: 5s +#index.indexing.slowlog.threshold.index.debug: 2s +#index.indexing.slowlog.threshold.index.trace: 500ms + +################################## GC Logging ################################ + +#monitor.jvm.gc.young.warn: 1000ms +#monitor.jvm.gc.young.info: 700ms +#monitor.jvm.gc.young.debug: 400ms + +#monitor.jvm.gc.old.warn: 10s +#monitor.jvm.gc.old.info: 5s +#monitor.jvm.gc.old.debug: 2s + +################################## Security ################################ + +# Uncomment if you want to enable JSONP as a valid return transport on the +# http server. With this enabled, it may pose a security risk, so disabling +# it unless you need it is recommended (it is disabled by default). 
+# +#http.jsonp.enable: true - restore selinux security context * template[/etc/security/limits.d/elasticsearch.conf] action create - create new file /etc/security/limits.d/elasticsearch.conf - update content in file /etc/security/limits.d/elasticsearch.conf from none to a56597 --- /etc/security/limits.d/elasticsearch.conf 2016-01-22 23:13:07.075870553 +0000 +++ /tmp/chef-rendered-template20160122-9436-jb93ui 2016-01-22 23:13:07.074870553 +0000 @@ -1 +1,3 @@ +elasticsearch - nofile 65535 +elasticsearch - memlock unlimited - restore selinux security context * template[/usr/local/bin/es_cleanup.sh] action create - create new file /usr/local/bin/es_cleanup.sh - update content in file /usr/local/bin/es_cleanup.sh from none to ec6df8 --- /usr/local/bin/es_cleanup.sh 2016-01-22 23:13:07.110870552 +0000 +++ /tmp/chef-rendered-template20160122-9436-8190or 2016-01-22 23:13:07.110870552 +0000 @@ -1 +1,22 @@ +#!/bin/bash + +#Clean out old marvel indexes, only keeping the current index. +for i in $(curl -sSL http://localhost:9200/_stats/indexes\?pretty\=1 | grep marvel | grep -Ev {es-data|kibana} | grep -vF "$(date +%m.%d)" | awk '{print $1}' | sed 's/\"//g' 2>/dev/null); do + curl -sSL -XDELETE http://127.0.0.1:9200/$i > /dev/null 2>&1 +done + +#Cleanup TopBeats indexes from 5 days ago. +curl -sSL -XDELETE "http://127.0.0.1:9200/topbeat-$(date -d '5 days ago' +%Y.%m.%d)" 2>&1 + +#Delete Logstash indexes from 30 days ago. 
+curl -sSL -XDELETE "http://127.0.0.1:9200/logstash-$(date -d '30 days ago' +%Y.%m.%d)" 2>&1 + +#Make sure all indexes have replicas off +curl -sSL -XPUT 'localhost:9200/_all/_settings' -d ' +{ + "index" : { + "number_of_replicas" : 0 + } +}' > /dev/null 2>&1 + - change mode from '' to '0755' - restore selinux security context * execute[set_es_memlock] action run - execute sed -i "s/.*LimitMEMLOCK.*/LimitMEMLOCK=infinity/g" /usr/lib/systemd/system/elasticsearch.service * service[elasticsearch] action enable - enable service service[elasticsearch] * service[elasticsearch] action start - start service service[elasticsearch] * execute[set_logstash_ipv4_affinity] action run - execute echo "LS_JAVA_OPTS=\"-Djava.net.preferIPv4Stack=true\"" >> /etc/sysconfig/logstash * template[/etc/logstash/conf.d/kafka-bro.conf] action create - create new file /etc/logstash/conf.d/kafka-bro.conf - update content in file /etc/logstash/conf.d/kafka-bro.conf from none to 158c53 --- /etc/logstash/conf.d/kafka-bro.conf 2016-01-22 23:13:08.187870527 +0000 +++ /tmp/chef-rendered-template20160122-9436-1084zim 2016-01-22 23:13:08.187870527 +0000 @@ -1 +1,28 @@ +input { + kafka { + topic_id => "bro_raw" + add_field => { + "[@metadata][stage]" => "bro_kafka" + } + #reset_beginning => true + auto_offset_reset => "smallest" + } +} + +filter { + if "_jsonparsefailure" in [tags] { + drop { } + } +} + +output { + if [@metadata][stage] == "bro_kafka" { + #stdout { codec => rubydebug } + elasticsearch { + hosts => ["127.0.0.1"] + document_type => "%{sensor_logtype}" + #flush_size => 1000 + } + } +} - restore selinux security context * execute[update_kafka_input_plugin] action run - execute cd /opt/logstash; sudo bin/plugin install --version 2.0.3 logstash-input-kafka * service[logstash] action enable (up to date) * service[logstash] action start - start service service[logstash] * remote_file[/root/.chef/local-mode-cache/cache/kibana.tar.gz] action create - create new file 
/root/.chef/local-mode-cache/cache/kibana.tar.gz - update content in file /root/.chef/local-mode-cache/cache/kibana.tar.gz from none to c6a919 (file sizes exceed 10000000 bytes, diff output suppressed) - restore selinux security context * execute[untar_kibana] action run - execute tar xzf /root/.chef/local-mode-cache/cache/kibana.tar.gz -C /opt/ * execute[rename_kibana_dir] action run - execute mv /opt/{kibana-4.3.1-linux-x64,kibana} * user[kibana] action create - create user kibana * execute[chown_kibana] action run - execute chown -R kibana:kibana /opt/kibana * execute[chown_kibana] action run (skipped due to not_if) * template[/etc/systemd/system/kibana.service] action create - create new file /etc/systemd/system/kibana.service - update content in file /etc/systemd/system/kibana.service from none to d125f7 --- /etc/systemd/system/kibana.service 2016-01-22 23:14:16.260868922 +0000 +++ /tmp/chef-rendered-template20160122-9436-yp60zr 2016-01-22 23:14:16.259868922 +0000 @@ -1 +1,13 @@ +[Unit] +Description=Kibana 4 + +[Service] +Type=simple +User=kibana +Environment=CONFIG_PATH=/opt/kibana/config/kibana.yml +Environment=NODE_ENV=production +ExecStart=/opt/kibana/node/bin/node /opt/kibana/src/cli + +[Install] +WantedBy=multi-user.target - restore selinux security context * execute[reload_systemd] action run - execute systemctl daemon-reload * bash[set_kibana_replicas] action run - execute "bash" "/tmp/chef-script20160122-9436-1k78w0r" * service[kibana] action enable - enable service service[kibana] * service[kibana] action start - start service service[kibana] * bash[install_marvel_and_sql] action run - execute "bash" "/tmp/chef-script20160122-9436-77vetk" * cron[es_cleanup_cron] action create - add crontab entry for cron[es_cleanup_cron] * cron[bro_cron] action create - add crontab entry for cron[bro_cron] * template[/usr/local/bin/rock_start] action create - create new file /usr/local/bin/rock_start - update content in file /usr/local/bin/rock_start from none to 
653640 --- /usr/local/bin/rock_start 2016-01-22 23:16:03.710866389 +0000 +++ /tmp/chef-rendered-template20160122-9436-yu463q 2016-01-22 23:16:03.710866389 +0000 @@ -1 +1,44 @@ +#!/bin/bash + +echo "Starting Zookeeper..." +systemctl start zookeeper +sleep 5 +systemctl status zookeeper | egrep "^\s*Active" + +echo "Starting Elasticsearch..." +systemctl start elasticsearch +sleep 5 +systemctl status elasticsearch | egrep "^\s*Active" + +echo "Starting Kafka..." +systemctl start kafka +sleep 5 +systemctl status kafka | egrep "^\s*Active" + +echo "Starting Logstash..." +systemctl start logstash +sleep 5 +systemctl status logstash | egrep "^\s*Active" + +echo "Starting Kibana..." +systemctl start kibana +sleep 5 +systemctl status kibana | egrep "^\s*Active" + +echo "Starting Snort..." +systemctl start snortd +sleep 5 +systemctl status snortd | egrep "^\s*Active" + +echo "Starting Bro..." +/opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start +sleep 5 +/opt/bro/bin/broctl status + +#echo "Starting Stenographer..." +#systemctl start stenographer +#sleep 5 +#systemctl status stenographer | egrep "^\s*Active" + +exit 0 - change mode from '' to '0700' - change owner from '' to 'root' - change group from '' to 'root' - restore selinux security context * template[/usr/local/bin/rock_stop] action create - create new file /usr/local/bin/rock_stop - update content in file /usr/local/bin/rock_stop from none to 65fecc --- /usr/local/bin/rock_stop 2016-01-22 23:16:03.754866388 +0000 +++ /tmp/chef-rendered-template20160122-9436-1s80yov 2016-01-22 23:16:03.754866388 +0000 @@ -1 +1,28 @@ +#!/bin/bash + +#echo "Stopping Stenographer..." +#systemctl stop stenographer + +echo "Stopping Snort..." +systemctl stop snortd + +echo "Stopping Bro..." +/opt/bro/bin/broctl stop + +echo "Stopping Logstash..." +systemctl stop logstash + +echo "Stopping Kibana..." +systemctl stop kibana + +echo "Stopping Elasticsearch..." 
+systemctl stop elasticsearch + +echo "Stopping Kafka..." +systemctl stop kafka + +echo "Stopping Zookeeper..." +systemctl stop zookeeper + +exit 0 - change mode from '' to '0700' - change owner from '' to 'root' - change group from '' to 'root' - restore selinux security context * template[/usr/local/bin/rock_status] action create - create new file /usr/local/bin/rock_status - update content in file /usr/local/bin/rock_status from none to c284bb (new content is binary, diff output suppressed) - change mode from '' to '0700' - change owner from '' to 'root' - change group from '' to 'root' - restore selinux security context * template[/etc/stenographer/config] action nothing (skipped due to action :nothing) * directory[/data/stenographer] action create - create new directory /data/stenographer - change mode from '' to '0755' - change owner from '' to 'stenographer' - change group from '' to 'stenographer' - restore selinux security context * directory[/data/stenographer/index] action create - create new directory /data/stenographer/index - change mode from '' to '0755' - change owner from '' to 'stenographer' - change group from '' to 'stenographer' - restore selinux security context * directory[/data/stenographer/packets] action create - create new directory /data/stenographer/packets - change mode from '' to '0755' - change owner from '' to 'stenographer' - change group from '' to 'stenographer' - restore selinux security context * service[stenographer] action disable (up to date) * template[/etc/nginx/conf.d/rock.conf] action create - create new file /etc/nginx/conf.d/rock.conf - update content in file /etc/nginx/conf.d/rock.conf from none to 299ae4 --- /etc/nginx/conf.d/rock.conf 2016-01-22 23:16:04.293866376 +0000 +++ /tmp/chef-rendered-template20160122-9436-1bwrj4y 2016-01-22 23:16:04.293866376 +0000 @@ -1 +1,28 @@ +server { + listen 80; + + server_name rock00; + + #auth_basic "Restricted Access"; + #auth_basic_user_file /etc/nginx/htpasswd.users; + + 
location / { + proxy_pass http://localhost:5601; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection 'upgrade'; + proxy_set_header Host $host; + proxy_cache_bypass $http_upgrade; + } + + location /_plugin/ { + proxy_pass http://localhost:9200; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection 'upgrade'; + proxy_set_header Host $host; + proxy_cache_bypass $http_upgrade; + proxy_redirect off; + } +} - restore selinux security context * file[/etc/nginx/conf.d/default.conf] action delete - delete file /etc/nginx/conf.d/default.conf * file[/etc/nginx/conf.d/example_ssl.conf] action delete - delete file /etc/nginx/conf.d/example_ssl.conf * execute[enable_nginx_connect_selinux] action run - execute setsebool -P httpd_can_network_connect 1 * service[nginx] action enable - enable service service[nginx] * service[nginx] action start - start service service[nginx] * template[/etc/sysconfig/snort] action nothing (skipped due to action :nothing) * template[/etc/snort/snort.conf] action create - update content in file /etc/snort/snort.conf from 06c6a4 to 78f426 --- /etc/snort/snort.conf 2015-11-20 17:15:56.000000000 +0000 +++ /tmp/chef-rendered-template20160122-9436-jjx6ai 2016-01-22 23:16:06.698866319 +0000 @@ -41,6 +41,11 @@ # Step #1: Set the network variables. For more information, see README.variables ################################################### +############## +# ROCK # +############## +# You need to configure as many of these variables as possible. 
+ # Setup the network addresses you are protecting ipvar HOME_NET any @@ -110,8 +115,8 @@ # not relative to snort.conf like the above variables # This is completely inconsistent with how other vars work, BUG 89986 # Set the absolute path appropriately -var WHITE_LIST_PATH ../rules -var BLACK_LIST_PATH ../rules +var WHITE_LIST_PATH rules +var BLACK_LIST_PATH rules ################################################### # Step #2: Configure the decoder. For more information, see README.decode @@ -250,7 +255,7 @@ dynamicengine /usr/lib64/snort-2.9.8.0_dynamicengine/libsf_engine.so # path to dynamic rules libraries -dynamicdetection directory /usr/local/lib/snort_dynamicrules +# dynamicdetection directory /usr/lib/snort_dynamicrules ################################################### # Step #5: Configure preprocessors @@ -518,10 +523,10 @@ # unified2 # Recommended for most installs -# output unified2: filename merged.log, limit 128, nostamp, mpls_event_types, vlan_event_types +#output unified2: filename merged.log, limit 128, nostamp, mpls_event_types, vlan_event_types # Additional configuration for specific types of installs -# output alert_unified2: filename snort.alert, limit 128, nostamp +output alert_unified2: filename snort.alert, limit 128, nostamp # output log_unified2: filename snort.log, limit 128, nostamp # syslog @@ -544,111 +549,7 @@ # site specific rules include $RULE_PATH/local.rules - -include $RULE_PATH/app-detect.rules -include $RULE_PATH/attack-responses.rules -include $RULE_PATH/backdoor.rules -include $RULE_PATH/bad-traffic.rules -include $RULE_PATH/blacklist.rules -include $RULE_PATH/botnet-cnc.rules -include $RULE_PATH/browser-chrome.rules -include $RULE_PATH/browser-firefox.rules -include $RULE_PATH/browser-ie.rules -include $RULE_PATH/browser-other.rules -include $RULE_PATH/browser-plugins.rules -include $RULE_PATH/browser-webkit.rules -include $RULE_PATH/chat.rules -include $RULE_PATH/content-replace.rules -include $RULE_PATH/ddos.rules -include 
$RULE_PATH/dns.rules -include $RULE_PATH/dos.rules -include $RULE_PATH/experimental.rules -include $RULE_PATH/exploit-kit.rules -include $RULE_PATH/exploit.rules -include $RULE_PATH/file-executable.rules -include $RULE_PATH/file-flash.rules -include $RULE_PATH/file-identify.rules -include $RULE_PATH/file-image.rules -include $RULE_PATH/file-multimedia.rules -include $RULE_PATH/file-office.rules -include $RULE_PATH/file-other.rules -include $RULE_PATH/file-pdf.rules -include $RULE_PATH/finger.rules -include $RULE_PATH/ftp.rules -include $RULE_PATH/icmp-info.rules -include $RULE_PATH/icmp.rules -include $RULE_PATH/imap.rules -include $RULE_PATH/indicator-compromise.rules -include $RULE_PATH/indicator-obfuscation.rules -include $RULE_PATH/indicator-shellcode.rules -include $RULE_PATH/info.rules -include $RULE_PATH/malware-backdoor.rules -include $RULE_PATH/malware-cnc.rules -include $RULE_PATH/malware-other.rules -include $RULE_PATH/malware-tools.rules -include $RULE_PATH/misc.rules -include $RULE_PATH/multimedia.rules -include $RULE_PATH/mysql.rules -include $RULE_PATH/netbios.rules -include $RULE_PATH/nntp.rules -include $RULE_PATH/oracle.rules -include $RULE_PATH/os-linux.rules -include $RULE_PATH/os-other.rules -include $RULE_PATH/os-solaris.rules -include $RULE_PATH/os-windows.rules -include $RULE_PATH/other-ids.rules -include $RULE_PATH/p2p.rules -include $RULE_PATH/phishing-spam.rules -include $RULE_PATH/policy-multimedia.rules -include $RULE_PATH/policy-other.rules -include $RULE_PATH/policy.rules -include $RULE_PATH/policy-social.rules -include $RULE_PATH/policy-spam.rules -include $RULE_PATH/pop2.rules -include $RULE_PATH/pop3.rules -include $RULE_PATH/protocol-finger.rules -include $RULE_PATH/protocol-ftp.rules -include $RULE_PATH/protocol-icmp.rules -include $RULE_PATH/protocol-imap.rules -include $RULE_PATH/protocol-pop.rules -include $RULE_PATH/protocol-services.rules -include $RULE_PATH/protocol-voip.rules -include $RULE_PATH/pua-adware.rules -include 
$RULE_PATH/pua-other.rules -include $RULE_PATH/pua-p2p.rules -include $RULE_PATH/pua-toolbars.rules -include $RULE_PATH/rpc.rules -include $RULE_PATH/rservices.rules -include $RULE_PATH/scada.rules -include $RULE_PATH/scan.rules -include $RULE_PATH/server-apache.rules -include $RULE_PATH/server-iis.rules -include $RULE_PATH/server-mail.rules -include $RULE_PATH/server-mssql.rules -include $RULE_PATH/server-mysql.rules -include $RULE_PATH/server-oracle.rules -include $RULE_PATH/server-other.rules -include $RULE_PATH/server-webapp.rules -include $RULE_PATH/shellcode.rules -include $RULE_PATH/smtp.rules -include $RULE_PATH/snmp.rules -include $RULE_PATH/specific-threats.rules -include $RULE_PATH/spyware-put.rules -include $RULE_PATH/sql.rules -include $RULE_PATH/telnet.rules -include $RULE_PATH/tftp.rules -include $RULE_PATH/virus.rules -include $RULE_PATH/voip.rules -include $RULE_PATH/web-activex.rules -include $RULE_PATH/web-attacks.rules -include $RULE_PATH/web-cgi.rules -include $RULE_PATH/web-client.rules -include $RULE_PATH/web-coldfusion.rules -include $RULE_PATH/web-frontpage.rules -include $RULE_PATH/web-iis.rules -include $RULE_PATH/web-misc.rules -include $RULE_PATH/web-php.rules -include $RULE_PATH/x11.rules +include $RULE_PATH/snort.rules ################################################### # Step #8: Customize your preprocessor and decoder alerts - restore selinux security context * template[/etc/snort/disablesid.conf] action create - create new file /etc/snort/disablesid.conf - update content in file /etc/snort/disablesid.conf from none to 94a187 --- /etc/snort/disablesid.conf 2016-01-22 23:16:06.759866317 +0000 +++ /tmp/chef-rendered-template20160122-9436-7cdfew 2016-01-22 23:16:06.759866317 +0000 @@ -1 +1,4 @@ + +# These all fail out with errors like: "FATAL ERROR: /etc/snort/rules/snort.rules(4635) !any is not allowed: !$DNS_SERVERS." 
    +1:2013353,1:2013354,1:2013355,1:2013357,1:2013358,1:2013359,1:2013360,1:2011802,1:2000328,1:2002087
    - restore selinux security context
  * template[/etc/snort/pulledpork.conf] action create
    - create new file /etc/snort/pulledpork.conf
    - update content in file /etc/snort/pulledpork.conf from none to 24fb2c
    --- /etc/snort/pulledpork.conf 2016-01-22 23:16:06.787866317 +0000
    +++ /tmp/chef-rendered-template20160122-9436-jy3qy1 2016-01-22 23:16:06.787866317 +0000
    @@ -1 +1,31 @@
    +rule_url=https://snort.org/downloads/community/|community-rules.tar.gz|Community
    +rule_url=https://www.snort.org/reg-rules/|snortrules-snapshot.tar.gz|796f26a2188c4c953ced38ff3ec899d8ae543350
    +rule_url=https://rules.emergingthreats.net/|emerging.rules.tar.gz|open-nogpl
    +rule_url=http://talosintel.com/feeds/ip-filter.blf|IPBLACKLIST|open
    +ignore=deleted.rules,experimental.rules,local.rules
    +temp_path=/tmp
    +rule_path=/etc/snort/rules/snort.rules
    +sid_msg=/etc/snort/sid-msg.map
    +sid_msg_version=1
    +sid_changelog=/var/log/snort/sid_changes.log
    +snort_path=/usr/sbin/snort
    +config_path=/etc/snort/snort.conf
    +sorule_path=/usr/lib/snort_dynamicrules/
    +distro=''
    +black_list=/etc/snort/rules/iplists/default.blacklist
    +IPRVersion=/etc/snort/rules/iplists
    +version=0.7.2
    +
    +# Here you can specify what rule modification files to run automatically.
    +# simply uncomment and specify the apt path.
    +# enablesid=/usr/local/etc/snort/enablesid.conf
    +# dropsid=/usr/local/etc/snort/dropsid.conf
    +disablesid=/etc/snort/disablesid.conf
    +# modifysid=/usr/local/etc/snort/modifysid.conf
    +
    +# What is the base ruleset that you want to use, please uncomment to use
    +# and see the README.RULESETS for a description of the options.
    +# Note that setting this value will disable all ET rulesets if you are
    +# Running such rulesets
    +# ips_policy=security
    - restore selinux security context
  * execute[set capabilities on snort] action run
    - execute /usr/sbin/setcap cap_net_raw,cap_net_admin=eip $(readlink -f /usr/sbin/snort)
  * git[/opt/bro/share/bro/site/scripts/rock] action sync
    - clone from https://github.com/CyberAnalyticDevTeam/rock_bro_scripts.git into /opt/bro/share/bro/site/scripts/rock
    - checkout ref 2ca9e1ef1d5958c0a3dc7e9658753d8a65a03239 branch master
  * git[/usr/local/pulledpork] action sync
    - clone from https://github.com/shirkdog/pulledpork.git into /usr/local/pulledpork
    - checkout ref 8b9441aeeb7e1477e5be415f27dbc4eb25dd9d59 branch master
  * execute[chmod_pulledpork] action run
    - execute chmod 755 /usr/local/pulledpork/pulledpork.pl
  * link[/usr/local/bin/pulledpork.pl] action create
    - create symlink at /usr/local/bin/pulledpork.pl to /usr/local/pulledpork/pulledpork.pl
  * directory[/usr/lib/snort_dynamicrules] action create
    - create new directory /usr/lib/snort_dynamicrules
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * directory[/etc/snort/rules/iplists] action create
    - create new directory /etc/snort/rules/iplists
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * file[/etc/snort/rules/local.rules] action create
    - create new file /etc/snort/rules/local.rules
    - restore selinux security context
  * file[/etc/snort/rules/white_list.rules] action create
    - create new file /etc/snort/rules/white_list.rules
    - restore selinux security context
  * file[/etc/snort/rules/black_list.rules] action create
    - create new file /etc/snort/rules/black_list.rules
    - restore selinux security context
  * directory[/data/snort] action create
    - create new directory /data/snort
    - change mode from '' to '0755'
    - change owner from '' to 'snort'
    - change group from '' to 'snort'
    - restore selinux security context
  * execute[snort_chcon] action run
    - execute chcon -v --type=snort_log_t /data/snort/
  * bash[run_pulledpork] action nothing (skipped due to action :nothing)
  * cron[pulledpork] action create
    - add crontab entry for cron[pulledpork]
  * ruby_block[determine_monitor_interface] action run
    - execute the ruby block determine_monitor_interface
  * execute[create_bro_topic] action run
    - execute sleep 10; /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic bro_raw
  * template[ifcfg-monif] action create
    - create new file /tmp/ifcfg_hacks.sh
    - update content in file /tmp/ifcfg_hacks.sh from none to 78b776
    --- /tmp/ifcfg_hacks.sh 2016-01-22 23:16:19.955866006 +0000
    +++ /tmp/chef-rendered-template20160122-9436-52svg4 2016-01-22 23:16:19.954866006 +0000
    @@ -1 +1,30 @@
    +#!/bin/bash
    +
    +MONIFACEFILE="/etc/sysconfig/network-scripts/ifcfg-eno33554984"
    +MONIFACE="eno33554984"
    +
    +if [[ -f $MONIFACEFILE ]]; then
    +  ORIGUUID=$(grep -E "^UUID" $MONIFACEFILE)
    +else
    +  ORIGUUID="#UUID=xxxxxxxxxxxxxxxxxxxxxxxxxx"
    +fi
    +
    +cat << EOF | tee $MONIFACEFILE
    +TYPE=Ethernet
    +BOOTPROTO=none
    +IPV4_FAILURE_FATAL=no
    +IPV6INIT=no
    +IPV6_FAILURE_FATAL=no
    +NAME=$MONIFACE
    +$ORIGUUID
    +DEVICE=$MONIFACE
    +ONBOOT=yes
    +NM_CONTROLLED=no
    +EOF
    +
    +DEFIFACEFILE="/etc/sysconfig/network-scripts/ifcfg-eno16777736"
    +
    +sed -i 's/^IPV6INIT=.*/IPV6INIT=no/g' $DEFIFACEFILE
    +
    +exit 0
    - change mode from '' to '0755'
    - restore selinux security context
  * template[/sbin/ifup-local] action create
    - create new file /sbin/ifup-local
    - update content in file /sbin/ifup-local from none to 7a8504
    --- /sbin/ifup-local 2016-01-22 23:16:19.986866006 +0000
    +++ /tmp/chef-rendered-template20160122-9436-1fnfxra 2016-01-22 23:16:19.985866006 +0000
    @@ -1 +1,56 @@
    +#!/bin/bash
    +# File: /sbin/ifup-local
    +#
    +# This script is run after normal sysconfig network-script configuration
    +# is performed on RHEL/CentOS-based systems.
    +#
    +# Parameters:
    +#   $1: network interface name
    +#
    +# Post ifup configuration for tuning capture interfaces
    +# This is compatible with the ixgbe driver, YMMV
    +
    +# Change this to something like /tmp/ifup-local.log for troubleshooting
    +#LOG=/dev/null
    +LOG=/tmp/ifup-local.log
    +
    +case $1 in
    +eno33554984)
    +
    +  for i in rx tx sg tso ufo gso gro lro rxvlan txvlan
    +  do
    +    ethtool -K $1 $i off &>$LOG
    +  done
    +
    +  ethtool -N $1 rx-flow-hash udp4 sdfn &>$LOG
    +  ethtool -N $1 rx-flow-hash udp6 sdfn &>$LOG
    +  ethtool -n $1 rx-flow-hash udp6 &>$LOG
    +  ethtool -n $1 rx-flow-hash udp4 &>$LOG
    +  ethtool -C $1 rx-usecs 10 &>$LOG
    +  ethtool -C $1 adaptive-rx off &>$LOG
    +  ethtool -G $1 rx 4096 &>$LOG
    +
    +  # Disable ipv6
    +  echo 1 > /proc/sys/net/ipv6/conf/$1/disable_ipv6 &>$LOG
    +  echo 0 > /proc/sys/net/ipv6/conf/$1/autoconf &>$LOG
    +
    +  # Set promiscuous mode
    +  ip link set $1 promisc on &>$LOG
    +
    +  # Just in case ipv6 is already on this interfaces, let's kill it
    +  ip addr show dev $1 | grep --silent inet6
    +
    +  if [ $? -eq 0 ]
    +  then
    +    ADDR=$(ip addr show dev $1 | grep inet6 | awk '{ print $2 }')
    +    ip addr del $ADDR dev $1 &>$LOG
    +  fi
    +
    +;;
    +
    +*)
    +  # No post commands needed for this interface
    +;;
    +
    +esac
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
    - restore selinux security context
  * template[/etc/stenographer/config] action create
    - update content in file /etc/stenographer/config from 5ea80c to 19efc6
    --- /etc/stenographer/config 2015-08-11 21:20:23.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-139529e 2016-01-22 23:16:20.023866005 +0000
    @@ -1,11 +1,12 @@
     {
       "Threads": [
    -    { "PacketsDirectory": "/path/to/thread0/packets/directory"
    -    , "IndexDirectory": "/path/to/thread0/index/directory"
    +    { "PacketsDirectory": "/data/stenographer/packets"
    +    , "IndexDirectory": "/data/stenographer/index"
    +    , "DiskFreePercentage": 20
        }
      ]
      , "StenotypePath": "/usr/bin/stenotype"
    -  , "Interface": "em1"
    +  , "Interface": "eno33554984"
      , "Port": 1234
      , "Host": "127.0.0.1"
      , "Flags": []
    - restore selinux security context
  * template[/etc/sysconfig/snort] action create
    - update content in file /etc/sysconfig/snort from 960638 to 9757cf
    --- /etc/sysconfig/snort 2015-11-18 18:59:14.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-1wbhpla 2016-01-22 23:16:20.056866004 +0000
    @@ -12,7 +12,7 @@
     # What interface should snort listen on? [Pick only 1 of the next 3!]
     # This is -i {interface} on the command line
     # This is the snort.conf config interface: {interface} directive
    -INTERFACE=eth0
    +INTERFACE=eno33554984
     #
     # The following two options are not directly supported on the command line
     # or in the conf file and assume the same Snort configuration for all
    @@ -56,7 +56,7 @@
     # Where should Snort log?
     # -l {/path/to/logdir}
     # config logdir: {/path/to/logdir}
    -LOGDIR=/var/log/snort
    +LOGDIR=/data/snort
     
     # How should Snort alert? Valid alert modes include fast, full, none, and
     # unsock. Fast writes alerts to the default "alert" file in a single-line,
    @@ -66,7 +66,7 @@
     # out over a UNIX socket to another process that attaches to that socket.
     # -A {alert-mode}
     # output alert_{type}: {options}
    -ALERTMODE=fast
    +# ALERTMODE=fast
     
     # Should Snort dump the application layer data when displaying packets in
     # verbose or packet logging mode.
    @@ -84,7 +84,7 @@
     # alerts normally.
     # -N
     # config nolog
    -NO_PACKET_LOG=0
    +NO_PACKET_LOG=1
     
     # Print out the receiving interface name in alerts.
     # -I
    @@ -100,7 +100,7 @@
     # To add a BPF filter to the command line uncomment the following variable
     # syntax corresponds to tcpdump(8)
    -#BPF="not host 192.168.1.1"
    +BPF="-m 0133 ip or not ip"
     # To use an external BPF filter file uncomment the following variable
     # syntax corresponds to tcpdump(8)
    - restore selinux security context
  * bash[run_pulledpork] action run
    - execute "bash"  "/tmp/chef-script20160122-9436-8t2x02"
  * template[/opt/bro/etc/node.cfg] action create
    - update content in file /opt/bro/etc/node.cfg from dd9a93 to 6c5178
    --- /opt/bro/etc/node.cfg 2015-09-06 19:43:34.000000000 +0000
    +++ /tmp/chef-rendered-template20160122-9436-ej1t39 2016-01-22 23:17:57.614863704 +0000
    @@ -1,38 +1,16 @@
    -# Example BroControl node configuration.
    -#
    -# This example has a standalone node ready to go except for possibly changing
    -# the sniffing interface.
    +[manager]
    +type=manager
    +host=localhost
    
    -# This is a complete standalone configuration. Most likely you will
    -# only need to change the interface.
    -[bro]
    -type=standalone
    +[proxy-1]
    +type=proxy
     host=localhost
    -interface=eth0
    
    -## Below is an example clustered configuration. If you use this,
    -## remove the [bro] node above.
    +[worker-1]
    +type=worker
    +host=localhost
    +interface=eno33554984
    +lb_method=pf_ring
    +lb_procs=2
    
    -#[manager]
    -#type=manager
    -#host=host1
    -#
    -#[proxy-1]
    -#type=proxy
    -#host=host1
    -#
    -#[worker-1]
    -#type=worker
    -#host=host2
    -#interface=eth0
    -#
    -#[worker-2]
    -#type=worker
    -#host=host3
    -#interface=eth0
    -#
    -#[worker-3]
    -#type=worker
    -#host=host4
    -#interface=eth0
    - change mode from '0664' to '0644'
    - change group from 'bro' to 'root'
    - restore selinux security context
  * execute[ifcfg_hacks] action run
    - execute /tmp/ifcfg_hacks.sh
  * execute[start_bro] action run
    ================================================================================
    Error executing action `run` on resource 'execute[start_bro]'
    ================================================================================

    Mixlib::ShellOut::ShellCommandFailed
    ------------------------------------
    Expected process to exit with [0], but received '1'
    ---- Begin output of /opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start ----
    STDOUT: creating policy directories ...
    installing site policies ...
    generating cluster-layout.bro ...
    generating local-networks.bro ...
    generating broctl-config.bro ...
    generating broctl-config.sh ...
    updating nodes ...
    manager scripts are ok.
    proxy-1 scripts are ok.
    worker-1-1 scripts are ok.
    worker-1-2 scripts failed.
    internal warning in /opt/bro/lib/bro/plugins/Kafka_KafkaWriter/scripts/Kafka/KafkaWriter/logs-to-kafka.bro, line 1: Discarded extraneous Broxygen comment: ROCK ##
    /opt/bro/share/broctl/scripts/check-config: line 48: 32775 Segmentation fault      "${bro}" "$@"
    STDERR:
    ---- End output of /opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start ----
    Ran /opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start returned 1

    Resource Declaration:
    ---------------------
    # In /root/.chef/local-mode-cache/cache/cookbooks/simplerock/recipes/default.rb

    444: execute 'start_bro' do
    445:   command '/opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start'
    446:   action :nothing
    447:   notifies :write, "log[branding]", :delayed
    448: end
    449:

    Compiled Resource:
    ------------------
    # Declared in /root/.chef/local-mode-cache/cache/cookbooks/simplerock/recipes/default.rb:444:in `from_file'

    execute("start_bro") do
      action [:nothing]
      retries 0
      retry_delay 2
      default_guard_interpreter :execute
      command "/opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start"
      backup 5
      returns 0
      declared_type :execute
      cookbook_name "simplerock"
      recipe_name "default"
    end

Running handlers:
[2016-01-22T23:18:06+00:00] ERROR: Running exception handlers
Running handlers complete
[2016-01-22T23:18:06+00:00] ERROR: Exception handlers complete
[2016-01-22T23:18:06+00:00] FATAL: Stacktrace dumped to /root/.chef/local-mode-cache/cache/chef-stacktrace.out
Chef Client failed. 130 resources updated in 576.181615364 seconds
[2016-01-22T23:18:06+00:00] ERROR: execute[start_bro] (simplerock::default line 444) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start ----
STDOUT: creating policy directories ...
installing site policies ...
generating cluster-layout.bro ...
generating local-networks.bro ...
generating broctl-config.bro ...
generating broctl-config.sh ...
updating nodes ...
manager scripts are ok.
proxy-1 scripts are ok.
worker-1-1 scripts are ok.
worker-1-2 scripts failed.
internal warning in /opt/bro/lib/bro/plugins/Kafka_KafkaWriter/scripts/Kafka/KafkaWriter/logs-to-kafka.bro, line 1: Discarded extraneous Broxygen comment: ROCK ##
/opt/bro/share/broctl/scripts/check-config: line 48: 32775 Segmentation fault      "${bro}" "$@"
STDERR:
---- End output of /opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start ----
Ran /opt/bro/bin/broctl install; /opt/bro/bin/broctl check && /opt/bro/bin/broctl start returned 1
[2016-01-22T23:18:06+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
[root@rock00 SimpleRock]#