Moloch uses a tiered system for configuration variables. A tiered system allows Moloch to share one config file for many machines. The ordering of sections within the config file doesn't matter.
Order of config variables:
- [optional] The section titled with the node name is used first. Moloch will always tag sessions with node:<node name>
- [optional] If a node has a nodeClass variable, the section titled with the nodeClass name is used next. Sessions will be tagged with node:<node class name>, which is useful when watching different network classes.
- The section titled "default" is used last.
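For example, a single shared config file might be tiered like this sketch (node name, class name, and values here are hypothetical):

```ini
# Used last: settings shared by every capture node
[default]
elasticsearch=http://localhost:9200
pcapDir=/data/moloch/raw

# Used second, for any node whose section sets nodeClass=dmz
[dmz]
bpf=not port 443

# Used first, for the node named capture01
[capture01]
nodeClass=dmz
interface=eth1
```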
|bpf||EMPTY||The bpf filter used to reduce traffic. Used both on live and file traffic.|
|certFile||EMPTY||Public certificate to use for https, if not set then http will be used. keyFile must also be set.|
|dontSaveBPFs||EMPTY||Semicolon ';' separated list of bpf filters which, when matched for a session, prevent the remaining pcap for that session from being saved. It is possible to specify the number of packets to save per filter by ending with a :num. For example dontSaveBPFs = port 22:5 will only save 5 packets for port 22 sessions. Currently only the initial packet is matched against the bpfs.|
|dontSaveTags||EMPTY||Semicolon ';' separated list of tags which, once set for a session, prevent the remaining pcap for that session from being saved. The initial packets WILL likely be saved for the session, since tags usually aren't set until after several packets. It is possible to specify the number of packets to save per filter by ending with a :num.|
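As a sketch, these two settings might be combined like so (the filters and the tag name below are hypothetical examples, not defaults):

```ini
# Save only 5 packets for ssh sessions, only the first packet for port 443
dontSaveBPFs=port 22:5;port 443:1
# Once this (hypothetical) tag is set, stop saving pcap for the session
dontSaveTags=known-scanner
```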
|dropGroup||EMPTY||Group to drop privileges to. The pcapDir must be writable by this group or to the user specified by dropUser|
|dropUser||EMPTY||User to drop privileges to. The pcapDir must be writable by this user or to the group specified by dropGroup|
|espTimeout||600||For ESP sessions, Moloch writes a session record after this many seconds of inactivity since the last session save.|
|elasticsearch||http://localhost:9200||A comma separated list of urls to use to connect to the Elasticsearch cluster. If not using a VIP, a different Elasticsearch node can be specified for each Moloch node. If Elasticsearch requires a user/password those can be placed in the url also, http://user:pass@hostname:port|
|freeSpaceG||5% (>= v0.12), 100 (< v0.12)||Delete pcap files when free space is lower than this. This does NOT delete the session records in the database. It is recommended this value is between 5% and 10% of the disk. Database deletes are done by the db.pl expire command. Can also be specified as a percentage.|
|geoipASNFile||EMPTY||(Pre 1.0) Path to the maxmind geoip ASN file. Download free version|
|geoipFile||EMPTY||(Pre 1.0) Path to the maxmind geoip country file. Download free version|
|geoLite2ASN||/data/moloch/etc/GeoLite2-ASN.mmdb||(>= 1.0) Path to the maxmind geoip ASN file. Download free version|
|geoLite2Country||/data/moloch/etc/GeoLite2-Country.mmdb||(>= 1.0) Path to the maxmind geoip country file. Download free version|
|httpRealm||Moloch||HTTP Digest Realm - Must be in the default section of configuration file. Changing the value will cause all previous stored passwords to no longer work.|
|icmpTimeout||10||For ICMP sessions, Moloch writes a session record after this many seconds of inactivity since the last session save.|
|interface||EMPTY||Semicolon ';' separated list of interfaces to listen on for live traffic. (Prior to 0.14 only 1 interface is allowed)|
|keyFile||EMPTY||Private certificate to use for https, if not set then http will be used. certFile must also be set.|
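To serve the viewer over https, both settings must point at a matched certificate/key pair; the paths below are placeholders:

```ini
certFile=/data/moloch/etc/viewer.crt
keyFile=/data/moloch/etc/viewer.key
```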
|magicMode||both (since 1.5.0), libmagic (0.16.1)||(Since 0.16.1) libmagic can be VERY slow. Less accurate "magicing" is available for http/smtp bodies.|
|maxFileSizeG||4||The max raw pcap file size in gigabytes. The disk should have room for at least 10*maxFileSizeG files.|
|maxFileTimeM||0||The max time in minutes between rotating pcap files. Useful if there is an external utility that needs to look for closed files every so many minutes. Setting to 0 means only use maxFileSizeG|
|maxPackets||10000||Moloch writes a session record after this many packets since the last save. Moloch is only tested at 10k, anything above is not recommended.|
|maxStreams||1500000||An approximate maximum number of active sessions Moloch/libnids will try to monitor|
|minPacketsSaveBPFs||EMPTY||(Since 0.14.2) Semicolon ';' separated list of bpf filters which, when matched for a session, cause the SPI data NOT to be saved to Elasticsearch. PCAP data is still saved, however. It is possible to specify the minimum number of packets required for SPI to be saved by ending with a :num. A use case is a scanning host inside the network that you only want to record if there is a conversation: "tcp and host 10.10.10.10:1".|
|ouiFile||EMPTY||Path of the file used to look up manufacturers from MAC addresses. Download free version|
|packetThreads||1||Number of threads to use to process packets AFTER the reader has received the packets. This also controls how many packet queues there are, since each thread has its own queue. Basically how much CPU to dedicate to parsing the packets. Increase this if you get errors about dropping packets or the packetQ overflowing.|
|serverSecret||Value of passwordSecret||The server-to-server shared key. It must be set in the [default] section of the config file, and all viewers in the Moloch cluster must have the same value. It can, and should, be changed periodically. Since 0.18.3.|
|passwordSecret||EMPTY||Server-to-server and password hash secret - must be in the [default] section of the config file, and all viewers in the Moloch cluster must have the same value. Since Elasticsearch is wide open by default, the stored password hashes are encrypted with this so a malicious person can't insert a working new account. It is also used for secure server-to-server communication if serverSecret is not set; if using 0.18.3 or later, please also set serverSecret to a different value. Don't set this if you do not want user authentication. Changing the value will make all previously stored passwords no longer work.|
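A minimal sketch of the two secrets; the values here are obviously placeholders to be replaced with random strings:

```ini
[default]
passwordSecret=ReplaceWithARandomString1
serverSecret=ReplaceWithADifferentRandomString2
```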
|parseCookieValue||false||Parse HTTP request cookie values, cookie keys are always parsed.|
|parseQSValue||false||Parse HTTP query string values, query string keys are always parsed.|
|parseSMB||true||Parse extra SMB traffic info|
|parseDNSRecordAll||false||(Since 1.6) Parse a full DNS record (query, answer, authoritative, and additional) and store various DNS information (look up hostname, name server IPs, mail exchange server IPs, and so on) into multiple ES fields|
|parseSMTP||true||Parse extra SMTP traffic info|
|parseSMTPHeaderAll||false||(Since 1.6) Parse ALL SMTP request headers not already parsed using the [headers-email] section|
|parseHTTPRequestHeaderAll||false||(Since 1.6) Parse ALL HTTP request headers not already parsed using the [headers-http-request] section|
|parseHTTPResponseHeaderAll||false||(Since 1.6) Parse ALL HTTP response headers not already parsed using the [headers-http-response] section|
|pcapDir||EMPTY||Semicolon separated list of directories to save pcap files to. The directory to save to is currently picked using round robin, need to add a smarter algorithm eventually.|
|pcapDirTemplate||EMPTY||(Since 0.14.2) When set, this strftime template is appended to pcapDir and allows multiple directories to be created based on time.|
|pcapDirAlgorithm||round-robin||When pcapDir is a list of directories, this determines how Moloch chooses which directory to use for each new pcap file. Possible values: round-robin (rotate sequentially), max-free-percent (choose the directory on the filesystem with the highest percentage of available space), max-free-bytes (choose the directory on the filesystem with the highest number of available bytes).|
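Combining pcapDir with pcapDirTemplate might look like the following (directory paths are hypothetical):

```ini
pcapDir=/data/pcap0;/data/pcap1
# strftime template appended to each pcapDir entry,
# bucketing files into per-day subdirectories
pcapDirTemplate=/%Y/%m/%d
```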
|parsersDir||/data/moloch/parsers ; ./parsers||Semicolon separated list of directories to load parsers from|
|pluginsDir||EMPTY||Semicolon separated list of directories to load plugins from|
|plugins||EMPTY||Semicolon separated list of plugins to load and the order to load them in. Must include the trailing .so|
|rootPlugins||EMPTY||Semicolon separated list of plugins to load as root and the order to load them in. Must include the trailing .so|
|rirFile||EMPTY||Path of the RIR assignments file. Download|
|rotateIndex||daily||Specifies how often to create a new index in Elasticsearch. Use daily or a form of hourly for busy live instances, weekly or monthly for research instances. When choosing a value, the goal is to have the average shard be around 30G. Prior to 1.5.0, changing the value will cause previous sessions to be unreachable through the interface; since 1.5.0 you can set queryAllIndices if you need to change the value. Prior to 1.5.0, if using the multi viewer then all Moloch clusters must have the same value.|
Possible values are: hourly, daily, weekly, monthly.
1.5.0 added hourly2, hourly3, hourly4, hourly6, hourly8, and hourly12, which bucket N hours together. So hourly3, for example, means each index holds 3 hours of data. hourly1 is the same as hourly, and hourly24 is the same as daily.
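For instance, a busy sensor might bucket three hours per index (the value is a judgment call aimed at keeping average shard size near 30G):

```ini
# Busy live capture: 3-hour index buckets
rotateIndex=hourly3
```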
|rulesFiles||EMPTY||(Since 0.19) A semicolon separated list of files that contain rules. These rules match against fields set and can set other fields or meta data about the sessions. See RulesFormat for the format of the files.|
|sctpTimeout||60||For SCTP sessions, Moloch writes a session record after this many seconds of inactivity since the last session save.|
|smtpIpHeaders||EMPTY||Semicolon separated list of SMTP Headers that have ips, need to have the terminating colon ':'|
|spiDataMaxIndices||1||Specify the max number of indices we calculate spidata for. Elasticsearch will blow up if we allow the spiData to search too many indices.|
|tcpSaveTimeout||480||For TCP sessions, Moloch writes a session record after this many seconds since the last session save, no matter if active or inactive.|
|tcpTimeout||480||For TCP sessions, Moloch writes a session record after this many seconds of inactivity since the last session save.|
|titleTemplate||_cluster_ - _page_ _-view_ _-expression_||
|udpTimeout||60||For UDP sessions, Moloch writes a session record after this many seconds of inactivity since the last session save.|
|userNameHeader||EMPTY||Header to use for determining the username to check in the database, instead of using http digest. Use this if you have apache or something else doing the auth. The Web Auth Header checkbox must be set for the user. The userNameHeader should be the lower case version of the header apache is setting.|
|viewerPlugins||EMPTY||Semicolon separated list of viewer plugins to load and the order to load them in. Must include the trailing .js|
|viewHost||EMPTY||The ip used to listen. See the `host` section of https://nodejs.org/docs/latest-v8.x/api/net.html#net_server_listen_port_host_backlog_callback|
|viewPort||8005||Port for viewer to listen on|
|viewUrl||http[s]://hostname:[viewport]||The URL used to access this viewer instance|
|webBasePath||/||The base url for Moloch web requests. Must end with a / or bad things will happen.|
|yara||EMPTY||Where to load Yara rules from.|
|yaraEveryPacket||TRUE||(Since 1.5.0) Apply yara to every packet or just the packets that are classified.|
|yaraFastMode||TRUE||(Since 1.5.0) Set the Yara Fast Mode flag.|
|autoGenerateId||false||(Since 1.6) Use Elasticsearch auto generated ids|
|compressES||false||Compress requests to ES, reduces ES bandwidth by ~80% at the cost of increased CPU. MUST have "http.compression: true" in elasticsearch.yml file|
|dbBulkSize||20000||Size of indexing request to send to Elasticsearch. Increase if monitoring a high bandwidth network.|
|dbFlushTimeout||5||Number of seconds before we force a flush to Elasticsearch|
|filenameOps||EMPTY||(Since 1.5.0) A semicolon separated list of operations that use the filename. Format is `fieldexpr=match%value` so if you wanted to set a tag based on part of filenames that start with gre use `tags=/gre-(.*)\\.pcap%gretest-\\1`. Notice two backslashes are required everywhere you want one because of ini formatting.|
|huntWarn||100000||(Since 1.6.0) Warn users when creating a hunt if more than this many sessions will be searched|
|huntLimit||1000000||(Since 1.6.0) Do not create hunts for non admin users if more than this many sessions will be searched|
|huntAdminLimit||10000000||(Since 1.6.0) Do not create hunts for admin users if more than this many sessions will be searched|
|includes||EMPTY||Semicolon ';' separated list of files to load for config values. Files are loaded in order and can replace values set in this file or previous files. Setting includes is only supported in the top level config file.|
|interfaceOps||EMPTY||(Since 1.5.0) A semicolon separated list of comma separated lists of operations to set for each interface. The semicolon list must have the same number of elements as the interface setting. The format is `fieldexpr=value`.|
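A sketch pairing two interfaces with one operation each (the interface names and tag values are hypothetical):

```ini
interface=eth1;eth2
# One comma separated op list per interface, in the same order
interfaceOps=tags=from-eth1;tags=from-eth2
```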
|isLocalViewRegExp||EMPTY||(Since 1.6.1) If the node matches the supplied regexp, then the node is considered local.|
|maxESConns||20||Max number of connections to Elasticsearch from capture process|
|maxESRequests||500||Max number of Elasticsearch requests outstanding in queue|
|maxMemPercentage||100||If Moloch exceeds this percentage of memory it will log an error and send itself a SIGSEGV that should produce a core file. Since 0.20.0|
|maxTcpOutOfOrderPackets||256||(Since 1.5.0) Max number of tcp packets to track while trying to reassemble the TCP stream|
|offlineFilenameRegex||(?i)\.(pcap|cap)$||Regexp to control which filenames are processed when using the -R option to moloch-capture. Since 0.11.3|
|pcapBufferSize||100000||pcap library buffer size, see man pcap_set_buffer_size|
|pcapReadMethod||libpcap||Specify how packets are read from network cards. Since 0.14.0|
|pcapWriteMethod||simple||Specify how packets are written to disk.|
|pcapWriteSize||262144||Buffer size when writing pcap files. Should be a multiple of the raid 5/xfs stripe size and multiple of 4096 if using direct/thread-direct pcapWriteMethod|
|prefix||EMPTY||It is possible for multiple Moloch clusters to live inside the same Elasticsearch cluster by giving each Moloch cluster a different prefix value. (Added 0.11.3)|
|saveUnknownPackets||EMPTY||(Since 1.5.2) Save unknown ether, ip, or corrupt packets into separate files. This variable takes a semicolon separated list of values, applied in order.|
|snapLen||16384||(Since 0.18.2) The maximum size of a packet Moloch will read off the interface. This can be changed to fix the "Moloch requires full packet captures" error. It is recommended that instead of changing this value, all the capture card's "offload" features are turned off, so that Moloch gets a picture of what's on the network instead of what the capture card has reassembled. For VMs, those features must be turned off on the physical interface and not the virtual interface. This setting can be used when changing those settings isn't possible or desired.|
|trackESP||FALSE||(Since 1.5.0) Add ESP sessions to Moloch, no decoding|
|usersElasticsearch||EMPTY||It is possible for multiple Moloch clusters to share the same users table, so that the same accounts and settings work across all clusters. (Added 0.11.3)|
|usersPrefix||[PREFIX] if set otherwise EMPTY||Like prefix but only for the users table.|
|valueAutoComplete||![multiES] otherwise true||Autocomplete field values in the search expression.|
|queryAllIndices||FALSE for viewer, TRUE for multiviewer||(Since 1.5.0) Always query all indices instead of trying to calculate which ones. Use this if searching across clusters that use different rotateIndex values.|
Tpacketv3 is the preferred reader for Moloch and can be used on most 3.x kernels. Configure moloch-capture to use tpacketv3 as the reader method with `pcapReadMethod=tpacketv3` in your configuration file. This is also known as afpacket.
|tpacketv3BlockSize||2097152||The block size in bytes used for reads from each interface. There are 120 blocks per interface.|
|tpacketv3NumThreads||2||The number of threads used to read packets from each interface. These threads take the packets from the AF packet interface and place them into the packet queues.|
```ini
[default]
pcapReadMethod=tpacketv3
tpacketv3BlockSize=2097152
interface=eth0
tpacketv3NumThreads=2
```
To use daq:
- load the daq plugin by changing the configuration file so it has reader-daq.so as a rootPlugins entry
- tell moloch-capture to use daq as the reader method with `pcapReadMethod=daq` in your configuration file
- optionally change `interface` to any special daq interface value
|daqModuleDirs||/usr/local/lib/daq||Directories where the daq modules live|
|daqModule||pcap||The daq module to use|
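Put together, a minimal daq sketch using the defaults documented above might be:

```ini
rootPlugins=reader-daq.so
pcapReadMethod=daq
daqModuleDirs=/usr/local/lib/daq
daqModule=pcap
```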
- We suggest you try tpacketv3 (afpacket) first if available on the host
- install the pfring package on all hosts that will run moloch-capture from http://packages.ntop.org/
- load the pfring plugin by changing the configuration file so it has reader-pfring.so as a rootPlugins entry
- configure moloch-capture to use pfring as the reader method with `pcapReadMethod=pfring` in your configuration file
- optionally change `interface` to any special pfring interface value
|pfringClusterId||0||The pfring cluster id|
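A minimal pfring sketch (the cluster id value here is arbitrary):

```ini
rootPlugins=reader-pfring.so
pcapReadMethod=pfring
pfringClusterId=1
```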
- install the snf package on all hosts that will run moloch-capture
- load the snf plugin by changing the configuration file so it has reader-snf.so as a rootPlugins entry
- configure moloch-capture to use snf as the reader method with `pcapReadMethod=snf` in your configuration file
- optionally change `interface` to any special snf interface value
|snfNumRings||1||Number of rings per interface|
|snfDataRingSize||0||The data ring size to use, 0 means use the SNF default|
|snfFlags||-1||Controls process-sharing (1), port aggregation (2), and packet duplication (3). (Default value uses SNF_FLAGS environment variable)|
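A minimal snf sketch (the ring count here is an arbitrary example, not a recommendation):

```ini
rootPlugins=reader-snf.so
pcapReadMethod=snf
snfNumRings=2
```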
|s3Region||us-east-1||S3 Region to send data to|
|s3Bucket||none||S3 Bucket to store pcap info|
|s3Compress||FALSE||Should traffic be compressed before sending|
|s3MaxConns||20||Max connections to S3|
|s3MaxRequests||500||Max number of outstanding S3 requests|
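Assuming the S3 writer is selected with pcapWriteMethod=s3, a sketch might look like this (the bucket name is a placeholder):

```ini
pcapWriteMethod=s3
s3Region=us-east-1
s3Bucket=example-moloch-pcap
s3Compress=true
```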
|logESRequests||false||Write to stdout Elasticsearch requests|
|logEventXPackets||50000||Write to stdout info every X packets. Set to -1 to never log status.|
|logFileCreation||false||Write to stdout file creation information|
|logUnknownProtocols||false||Write to stdout unknown IP protocols|
You can add custom fields to Moloch several ways, including the wise and tagger plugins. Since Moloch 1.5.2 the easiest way is to use a [custom-fields] section in the ini file. At capture startup it will check to make sure all those fields exist in the database. The format of the line is similar to that used in wise and tagger, except you use `expression=definition`. You will need to also create a [custom-views] section to display the data on the Sessions tab.
|field||The field expression, overrides the key in the ini file|
|count||false||Track number of items with a .cnt field auto created|
|friendly||fieldname||A SHORT description, used in SPI View|
|db||REQUIRED||The DB field name|
|group||Before first dot in field or general||Category for SPI view|
|help||fieldname||Help to display in about box or help page|
```ini
[custom-fields]
# Format is FieldExpr=definition
theexpression=db:theexpression
sample.md5=db:sample.md5;kind:lotermfield;friendly:Sample MD5;count:true;help:MD5 of the sample
```
In Moloch, "views" are how the SPI data is displayed in the Sessions tab. Usually there is a unique "view" for each category of data. You can add custom views to Moloch several ways, including the wise and tagger plugins. Since Moloch 1.5.2 the easiest way is to use a [custom-views] section in the ini file. At viewer startup, a new section will be created for each entry. The format of the line is `name=definition`. Viewer sorts all views by name when choosing the display order.
|fields||REQUIRED||A comma separated list of field expressions to display. They will be displayed in the order listed.|
|title||Defaults to name||The title to give the section on the Sessions tab|
|require||REQUIRED||The db session name that must be set for the section to be shown.|
```ini
[custom-views]
sample=title:Samples;require:sample;fields:sample.md5,sample.house

[custom-fields]
sample.md5=db:sample.md5;kind:lotermfield;friendly:Sample MD5;count:true;help:MD5 of the sample
sample.house=db:sample.house;kind:termfield;friendly:Sample House;count:true;help:House the sample lives in
```
The moloch-clusters section describes the various Moloch clusters that are available to forward traffic to, either manually or through the CRON functionality. Each line represents a single cluster; the name can be any unique string.
```ini
[moloch-clusters]
cluster1=url:https://moloch.example.com:8005;passwordSecret:password;name:Cluster
cluster2=url:https://cluster2.example.com:8005;passwordSecret:foo;name:Test Cluster
```
|url||The base url to use to contact cluster|
|passwordSecret||The passwordSecret for the remote cluster, if it is different from current cluster|
|name||Friendly name to display in UI|
The multi viewer is useful when you have multiple Moloch clusters that you want to search across. To use the multi viewer, an extra viewer process and a multies process must be started. The viewer process works like a normal viewer process, except instead of talking to an Elasticsearch server, it talks to a multies server that proxies the queries to all the real Elasticsearch servers. These two processes can share the same config file and node name section. The viewer part uses the SAME configuration values as above if you need to set anything.
A sample configuration for multi viewer. The elasticsearch variable points to the multies.js process.
```ini
[multi-viewer]
elasticsearch=127.0.0.1:8200
viewPort = 8009
viewHost = localhost
multiES = true
multiESPort = 8200
multiESHost = localhost
multiESNodes = escluster1.example.com:9200,prefix:PREFIX;escluster2.example.com:9200
```
You would then use this by starting both the multi viewer and multies. This is a sample for running them manually; for production you should set up startup scripts.
```
cd /data/moloch/viewer
/data/moloch/bin/node multies.js -n multi-viewer
/data/moloch/bin/node viewer.js -n multi-viewer
```
|multiES||false||This is the multiES node|
|multiESPort||8200||Port that multies.js should listen on|
|multiESHost||EMPTY||Host interface that multies.js should listen on|
|multiESNodes||EMPTY||Semicolon separated list of Elasticsearch nodes that MultiES should connect to. The first node listed will be considered the primary node and is used for users/views/queries. An optional "prefix:" element can follow each host if that cluster was setup with an Elasticsearch prefix.|
override-ips is a special section that overrides the MaxMind databases for the fields set; fields not set will still use MaxMind (for example, if you set tag but not country, MaxMind will still be used for the country). Spaces and capitalization are very important.
```ini
[override-ips]
10.1.0.0/16=tag:ny-office;country:USA;asn:AS0000
```
|tag||A single tag to set for matches|
|country||A 3 character country code to set for matches|
|asn||An ASN value to set for matches|
|rir||A RIR value to set for matches|
This section makes it easy to specify HTTP Request headers to index. They will be searchable in the UI using http.[HEADERNAME]
|unique||true by default, only index unique values|
|count||false by default, create a second field http.[HEADERNAME].cnt with the number items found|
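A hypothetical [headers-http-request] section might look like the following (the header choices are illustrative examples, not defaults):

```ini
[headers-http-request]
referer=type:string;count:true
x-forwarded-for=type:ip
```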
This section makes it easy to specify HTTP response headers to index. They will be searchable in the UI using http.[HEADERNAME]
```ini
[headers-http-response]
location=type:string
server=type:string
```
This section makes it easy to specify email headers to index. They will be searchable in the UI using email.[HEADERNAME]
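For instance (the header choice is an illustrative example, not a default):

```ini
[headers-email]
x-mailer=type:string;count:true
```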
Starting with Moloch 0.11.4 it is possible to configure right click actions on any SPI data fields. Currently the only actions supported are url opening actions, but other action types will be added in the future. Right click actions can be added based on either the field name or the category of the data. They can be added either in the configuration file or by WISE data sources that are enabled.
Configuration File Sample:
```ini
[right-click]
VTIP=url:https://www.virustotal.com/en/ip-address/%TEXT%/information/;name:Virus Total IP;category:ip
VTHOST=url:https://www.virustotal.com/en/domain/%HOST%/information/;name:Virus Total Host;category:host
VTURL=url:https://www.virustotal.com/latest-scan/%URL%;name:Virus Total URL;category:url
```
Each line in the right-click section defines a possible action.
|[key]=||The key must be unique and is also used as the right click menu name if the name field is missing|
|url||The url to open if selected. There are some basic URL substitutions defined below.|
|name||The menu text to display|
|category||If the field that is right clicked on has this category then display this menu item. All right click entries must have either a category or fields variable defined.|
|fields||A comma separated list of field names. If the field that is right clicked on has one the field expressions in the list then display this menu item. All right click entries must have either a category or fields variable defined.|
|regex||A regex to use on the right clicked text to extract a piece for the URL. The first matching group is substituted for %REGEX% in the url. If the regex doesn't match at all then the menu isn't displayed.|
|users||A comma separated list of user names that can see the right click item. If not set then all users can see the right click item.|
The possible URL Substitutions are
|%TEXT%||The text clicked on in raw form|
|%URL%||An URL extracted from the text clicked on|
|%HOST%||A hostname extracted from the text clicked on. Sometimes the same as %TEXT% sometimes a subset.|
|%REGEX%||The first regex group match|
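A sketch of the regex substitution; the entry name, URL, and pattern here are all hypothetical:

```ini
[right-click]
# Extract a 32-char hex md5 from the clicked text and substitute it for %REGEX%
HASHLOOKUP=url:https://hashdb.example.com/%REGEX%;name:Hash Lookup;category:md5;regex:^([a-f0-9]{32})
```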
The categories that Moloch uses are
|asn||An ASN field|
|country||A three letter country code|
|host||A domain or host name|
|ip||An ip address|
|md5||An MD5 of a payload, such as an http body or smtp attachment|
|port||The TCP/UDP port|
|rir||The Regional Internet Registry|
|user||A user name or email address|
(Since 0.16.1) This section allows you to specify ips or cidrs to drop from processing. This is different from a bpf filter, since the packets will actually reach moloch-capture (and be counted) but won't be fully processed. However, if you have many ranges/ips to drop, it can be more efficient than bpfs. It is also possible to allow ranges inside of dropped ranges using the "allow" keyword. The order added doesn't matter; searching always finds the best match.
```ini
[packet-drop-ips]
10.0.0.0/8=drop
10.10.0.0/16=allow
10.10.10.0/24=drop
10.10.10.10=allow
```
(Since version 0.17.0, 1.5.3 recommended) Moloch provides support for PCAP at rest encodings. Two forms of encodings are presently supported: "aes-256-ctr" and "xor-2048". Please note that xor-2048 does not provide the same level of protection as aes-256-ctr.
The current implementation is based around openssl.
The encoding that you wish to use is configured in the config file by setting the simpleEncoding variable. The simpleEncoding may be set either globally or per node.
Each file on disk is encoded by a unique data encryption key (dek).
The dek is encrypted using a key encryption key (kek) when stored.
The encrypted dek, the id of the kek, and the initialization vector (IV) are all stored per file in Elasticsearch.
Which kek is used when creating files is selected with the simpleKEKId variable. The simpleKEKId may be set either globally or per node.
The kek passwords that may be used should be placed in a [keks] section of the config file. There is one line for each kekId to kek mapping.
An easy way to create kek passwords is `openssl rand -base64 30`.
Remember, a kek is the password used to encrypt the dek, NOT the password used to encrypt the files. The dek is what is used to encrypt the files, and it is unique per file.
You MUST secure your config file.
You are not required to use the same keks on all nodes, however, you can if you wish. It is recommended that you rotate your keks occasionally (timing dependent on your risk tolerance) and create new keks to be used. Do NOT delete the old keks until all pcaps which have been encoded with those keks have been expired.
Currently it is not possible to re-encrypt a data encryption key, however, this should be possible in the future with a db.pl command.
```ini
[default]
pcapWriteMethod=simple
simpleEncoding=aes-256-ctr
simpleKEKId=kekid1

[keks]
kekid1=Randomkekpassword1
kekid2=Randomkekpassword2
```
Advantages:
- No need to mess with disk encryption
- Access to just the host doesn't give you access to the pcap files

Disadvantages:
- Normal pcap tools will no longer work
- If the files index in Elasticsearch gets corrupt, you cannot re-ingest the files
Moloch can generate netflow for all sessions it saves SPI data for. Add `netflow.so` to the `plugins=` line in the config file.
|netflowSNMPInput||0||What to fill in the input field|
|netflowSNMPOutput||0||What to fill in the output field|
|netflowVersion||5||Version of netflow to send: 1, 5, 7|
|netflowDestinations||EMPTY||Semicolon ';' separated list of host:port destinations to send the netflow|
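A minimal sketch of the netflow plugin settings (the destination host and port are hypothetical):

```ini
plugins=netflow.so
netflowVersion=5
netflowDestinations=collector.example.com:2055
```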
Since 1.5 Moloch can enrich sessions with Suricata alerts.
Suricata and Moloch must be running on the same machine and looking at the same traffic for the plugin to work.
Sessions that have been enriched will have several new fields, all starting with suricata, and will be displayed in a Suricata section of the standard Moloch session UI.
Moloch matches sessions based on the 5 tuple from the alert.json or eve.json file, only using items with event_type of alert.
A very simple query to find all sessions that have Suricata data is `suricata.signature == EXISTS!`.
Note, there isn't a special Suricata UI inside Moloch; this just adds new fields to Moloch sessions like wise or tagger do. The Suricata enrichment is done by moloch-capture, so moloch-capture must see the traffic. To enable it, add suricata.so to the plugins= line in the config file.
|suricataAlertFile||REQUIRED||The full path to either the alert.json or eve.json file, make sure the `dropUser` or `dropGroup` can open the file|
|suricataExpireMinutes||60||(Since 1.5.1) The number of minutes to keep Suricata alerts in memory before expiring them based on the Suricata alert timestamp. For example if a Suricata alert has a timestamp of 1am, the default is to keep looking for matching traffic until 2am (60 minutes).|
```ini
# Add suricata.so to your plugins line, or add a new plugins line
plugins=suricata.so
# suricataAlertFile should be the full path to your alert.json or eve.json file
suricataAlertFile=/nids/suricata/eve.json
```
Sample config items for max performance with 0.16.1 or higher. Most of the defaults are fine. Reading the Moloch FAQ entry "Why am I dropping packets" and https://github.com/pevma/SEPTun or https://github.com/pevma/SEPTun-Mark-II may be helpful.
```ini
# MOST IMPORTANT, use basic magicMode, libmagic kills performance
magicMode=basic

pcapReadMethod=tpacketv3
tpacketv3BlockSize=8388608
# Increase by 1 if still getting Input Drops
tpacketv3NumThreads=2

# Defaults
pcapWriteMethod=simple
pcapWriteSize=2560000

# Start with 5 packet threads, increase by 1 if getting thread drops.
# Should be about 1.5 x Gbps that need to be captured
packetThreads=5

# Increase the size of ES messages and compress them for lower traffic rates
dbBulkSize=4000000
compressES=true

# Set to number of packets a second, if still overflowing try 400k
maxPacketsInQueue=300000

# Uncomment to disable features you don't need
# parseQSValue=false
# parseCookieValue=false
```
The following rules can help greatly reduce the number of SPI sessions being written to disk
```yaml
---
version: 1
rules:
  - name: "Dont write tls packets or check yara after 10 packets, still save SPI"
    when: "fieldSet"
    fields:
      protocols:
        - tls
    ops:
      _stopYara: 1
      _maxPacketsToSave: 10

  - name: "Dont save SPI sessions to ES with only 1 src packet"
    when: "beforeFinalSave"
    fields:
      packets.src: 1
      packets.dst: 0
      tcpflags.syn: 1
    ops:
      _dontSaveSPI: 1

  - name: "Dont save SPI data for listed hostnames tracked by dst ip:port, use on cloud destinations"
    when: "fieldSet"
    fields:
      host.http:
        - ad.doubleclick.net
      protocols:
        - tls
    ops:
      _dontSaveSPI: 1
      _maxPacketsToSave: 1
      _dropByDst: 10
```