Alerts not displaying on Wazuh Dashboard after deployment of Wazuh with Puppet #19607

Closed
thony4uu opened this issue Oct 11, 2023 · 2 comments · Fixed by wazuh/wazuh-puppet#797

Comments

@thony4uu
Member

| Wazuh version | Component | Install type | Install method | Platform |
| --- | --- | --- | --- | --- |
| 4.6.0 RC 1 | Wazuh component | Wazuh Dashboard and Wazuh Indexer | Puppet | Ubuntu 18.04 |

While testing this issue, I deployed a single-node Wazuh stack with Puppet and found that alerts are not showing on the Wazuh dashboard, even though they are being generated in alerts.json and alerts.log. On the first login, the dashboard showed an error that no template was found for the alerts index pattern, and when I later accessed the dashboard and checked the indices, there were no wazuh-alerts indices.

(Screenshots: the dashboard error stating that no template was found for the alerts index pattern, and the indices view showing no wazuh-alerts indices.)
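A quick way to confirm the symptom from the indexer side is to query the template and the index list directly. This is a minimal sketch; it assumes the default https://localhost:9200 endpoint and the admin:admin credentials used by this Puppet deployment:

```bash
# Check whether Filebeat loaded the "wazuh" index template into the indexer
curl -sk -u admin:admin 'https://localhost:9200/_template/wazuh?pretty'

# List wazuh-alerts-* indices; in this case nothing is returned
curl -sk -u admin:admin 'https://localhost:9200/_cat/indices/wazuh-alerts-*?v'
```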

wazuh@server:~$ sudo cat /var/ossec/logs/alerts/alerts.json | grep "agent"
{"timestamp":"2023-10-11T09:52:04.808-0700","rule":{"level":3,"description":"Wazuh server started.","id":"502","firedtimes":1,"mail":false,"groups":["ossec"],"pci_dss":["10.6.1"],"gpg13":["10.1"],"gdpr":["IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.6"],"tsc":["CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043124.0","full_log":"ossec: Manager started.","decoder":{"name":"ossec"},"location":"wazuh-monitord"}
{"timestamp":"2023-10-11T09:52:22.774-0700","rule":{"level":4,"description":"Puppet Agent: Error","id":"80059","firedtimes":1,"mail":false,"groups":["puppet"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043142.242","full_log":"Oct 11 09:52:21 server puppet-agent[1657]: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Could not find node statement with name 'default' or 'server.localdomain' on node server.localdomain","predecoder":{"program_name":"puppet-agent","timestamp":"Oct 11 09:52:21","hostname":"server"},"decoder":{"name":"puppet-agent"},"location":"/var/log/syslog"}
{"timestamp":"2023-10-11T09:52:26.782-0700","rule":{"level":3,"description":"PAM: Login session opened.","id":"5501","mitre":{"id":["T1078"],"tactic":["Defense Evasion","Persistence","Privilege Escalation","Initial Access"],"technique":["Valid Accounts"]},"firedtimes":1,"mail":false,"groups":["pam","syslog","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043146.621","full_log":"Oct 11 09:52:25 server systemd: pam_unix(systemd-user:session): session opened for user wazuh by (uid=0)","predecoder":{"program_name":"systemd","timestamp":"Oct 11 09:52:25","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"wazuh","uid":"0"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:53:06.839-0700","rule":{"level":4,"description":"Puppet Agent: Error","id":"80059","firedtimes":2,"mail":false,"groups":["puppet"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043186.1037","full_log":"Oct 11 09:53:05 server puppet-agent[1657]: (/Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/wazuh-template.json]) Dependency Exec[cleanup /etc/filebeat/wazuh-template.json] has failures: true","predecoder":{"program_name":"puppet-agent","timestamp":"Oct 11 09:53:05","hostname":"server"},"decoder":{"name":"puppet-agent"},"location":"/var/log/syslog"}
{"timestamp":"2023-10-11T09:53:06.839-0700","rule":{"level":4,"description":"Puppet Agent: Error","id":"80059","firedtimes":3,"mail":false,"groups":["puppet"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043186.1387","full_log":"Oct 11 09:53:05 server puppet-agent[1657]: (/Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/wazuh-template.json]) Skipping because of failed dependencies","predecoder":{"program_name":"puppet-agent","timestamp":"Oct 11 09:53:05","hostname":"server"},"decoder":{"name":"puppet-agent"},"location":"/var/log/syslog"}
{"timestamp":"2023-10-11T09:53:06.839-0700","rule":{"level":4,"description":"Puppet Agent: Error","id":"80059","firedtimes":4,"mail":false,"groups":["puppet"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043186.1699","full_log":"Oct 11 09:53:05 server puppet-agent[1657]: (/Stage[main]/Wazuh::Filebeat_oss/Service[filebeat]) Skipping because of failed dependencies","predecoder":{"program_name":"puppet-agent","timestamp":"Oct 11 09:53:05","hostname":"server"},"decoder":{"name":"puppet-agent"},"location":"/var/log/syslog"}
{"timestamp":"2023-10-11T09:53:46.882-0700","rule":{"level":3,"description":"PAM: Login session opened.","id":"5501","mitre":{"id":["T1078"],"tactic":["Defense Evasion","Persistence","Privilege Escalation","Initial Access"],"technique":["Valid Accounts"]},"firedtimes":2,"mail":false,"groups":["pam","syslog","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043226.1989","full_log":"Oct 11 09:53:46 server pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000)","predecoder":{"program_name":"pkexec","timestamp":"Oct 11 09:53:46","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root","uid":"1000"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:53:54.890-0700","rule":{"level":3,"description":"PAM: Login session opened.","id":"5501","mitre":{"id":["T1078"],"tactic":["Defense Evasion","Persistence","Privilege Escalation","Initial Access"],"technique":["Valid Accounts"]},"firedtimes":3,"mail":false,"groups":["pam","syslog","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043234.2405","full_log":"Oct 11 09:53:54 server sudo: pam_unix(sudo:session): session opened for user root by (uid=0)","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:53:54","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root","uid":"0"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:53:54.890-0700","rule":{"level":3,"description":"Successful sudo to ROOT executed.","id":"5402","mitre":{"id":["T1548.003"],"tactic":["Privilege Escalation","Defense Evasion"],"technique":["Sudo and Sudo Caching"]},"firedtimes":1,"mail":false,"groups":["syslog","sudo"],"pci_dss":["10.2.5","10.2.2"],"gpg13":["7.6","7.8","7.13"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7","AC.6"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043234.2809","full_log":"Oct 11 09:53:54 server sudo:    wazuh : TTY=pts/0 ; PWD=/home/wazuh ; USER=root ; COMMAND=/bin/systemctl status wazuh-manager","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:53:54","hostname":"server"},"decoder":{"parent":"sudo","name":"sudo","ftscomment":"First time user executed the sudo command"},"data":{"srcuser":"wazuh","dstuser":"root","tty":"pts/0","pwd":"/home/wazuh","command":"/bin/systemctl status wazuh-manager"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:53:58.895-0700","rule":{"level":3,"description":"PAM: Login session closed.","id":"5502","firedtimes":1,"mail":false,"groups":["pam","syslog"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043238.3340","full_log":"Oct 11 09:53:58 server sudo: pam_unix(sudo:session): session closed for user root","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:53:58","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:54:06.903-0700","rule":{"level":3,"description":"Successful sudo to ROOT executed.","id":"5402","mitre":{"id":["T1548.003"],"tactic":["Privilege Escalation","Defense Evasion"],"technique":["Sudo and Sudo Caching"]},"firedtimes":2,"mail":false,"groups":["syslog","sudo"],"pci_dss":["10.2.5","10.2.2"],"gpg13":["7.6","7.8","7.13"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7","AC.6"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043246.3703","full_log":"Oct 11 09:54:06 server sudo:    wazuh : TTY=pts/0 ; PWD=/home/wazuh ; USER=root ; COMMAND=/bin/systemctl status wazuh-indexer","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:54:06","hostname":"server"},"decoder":{"parent":"sudo","name":"sudo","ftscomment":"First time user executed the sudo command"},"data":{"srcuser":"wazuh","dstuser":"root","tty":"pts/0","pwd":"/home/wazuh","command":"/bin/systemctl status wazuh-indexer"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:54:06.903-0700","rule":{"level":3,"description":"PAM: Login session opened.","id":"5501","mitre":{"id":["T1078"],"tactic":["Defense Evasion","Persistence","Privilege Escalation","Initial Access"],"technique":["Valid Accounts"]},"firedtimes":4,"mail":false,"groups":["pam","syslog","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043246.4234","full_log":"Oct 11 09:54:06 server sudo: pam_unix(sudo:session): session opened for user root by (uid=0)","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:54:06","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root","uid":"0"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:54:10.906-0700","rule":{"level":3,"description":"PAM: Login session closed.","id":"5502","firedtimes":2,"mail":false,"groups":["pam","syslog"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043250.4638","full_log":"Oct 11 09:54:09 server sudo: pam_unix(sudo:session): session closed for user root","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:54:09","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:54:16.913-0700","rule":{"level":3,"description":"PAM: Login session opened.","id":"5501","mitre":{"id":["T1078"],"tactic":["Defense Evasion","Persistence","Privilege Escalation","Initial Access"],"technique":["Valid Accounts"]},"firedtimes":5,"mail":false,"groups":["pam","syslog","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043256.5001","full_log":"Oct 11 09:54:16 server sudo: pam_unix(sudo:session): session opened for user root by (uid=0)","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:54:16","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root","uid":"0"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:54:16.913-0700","rule":{"level":3,"description":"Successful sudo to ROOT executed.","id":"5402","mitre":{"id":["T1548.003"],"tactic":["Privilege Escalation","Defense Evasion"],"technique":["Sudo and Sudo Caching"]},"firedtimes":3,"mail":false,"groups":["syslog","sudo"],"pci_dss":["10.2.5","10.2.2"],"gpg13":["7.6","7.8","7.13"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7","AC.6"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043256.5405","full_log":"Oct 11 09:54:16 server sudo:    wazuh : TTY=pts/0 ; PWD=/home/wazuh ; USER=root ; COMMAND=/bin/systemctl status wazuh-dashboard","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:54:16","hostname":"server"},"decoder":{"parent":"sudo","name":"sudo","ftscomment":"First time user executed the sudo command"},"data":{"srcuser":"wazuh","dstuser":"root","tty":"pts/0","pwd":"/home/wazuh","command":"/bin/systemctl status wazuh-dashboard"},"location":"/var/log/auth.log"}
{"timestamp":"2023-10-11T09:54:24.921-0700","rule":{"level":3,"description":"PAM: Login session closed.","id":"5502","firedtimes":3,"mail":false,"groups":["pam","syslog"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"server"},"manager":{"name":"server"},"id":"1697043264.5940","full_log":"Oct 11 09:54:23 server sudo: pam_unix(sudo:session): session closed for user root","predecoder":{"program_name":"sudo","timestamp":"Oct 11 09:54:23","hostname":"server"},"decoder":{"parent":"pam","name":"pam"},"data":{"dstuser":"root"},"location":"/var/log/auth.log"}

Filebeat Service Status

wazuh@ubuntu:~$ sudo systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2023-10-11 10:24:22 PDT; 3h 44min ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 62549 (filebeat)
    Tasks: 8 (limit: 3439)
   CGroup: /system.slice/filebeat.service
           └─62549 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/

Oct 11 14:07:54 server1 filebeat[62549]: 2023-10-11T14:07:54.755-0700        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Oct 11 14:07:54 server1 filebeat[62549]: 2023-10-11T14:07:54.755-0700        INFO        [publisher]        pipeline/retry.go:223          done
Oct 11 14:07:54 server1 filebeat[62549]: 2023-10-11T14:07:54.757-0700        INFO        [esclientleg]        eslegclient/connection.go:314        Attempting to connect to Elasticse
Oct 11 14:07:54 server1 filebeat[62549]: 2023-10-11T14:07:54.760-0700        INFO        template/load.go:183        Existing template will be overwritten, as overwrite is enabled.
Oct 11 14:08:54 server1 filebeat[62549]: 2023-10-11T14:08:54.399-0700        ERROR        [publisher_pipeline_output]        pipeline/output.go:154        Failed to connect to backo
Oct 11 14:08:54 server1 filebeat[62549]: 2023-10-11T14:08:54.399-0700        INFO        [publisher_pipeline_output]        pipeline/output.go:145        Attempting to reconnect to 
Oct 11 14:08:54 server1 filebeat[62549]: 2023-10-11T14:08:54.400-0700        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Oct 11 14:08:54 server1 filebeat[62549]: 2023-10-11T14:08:54.400-0700        INFO        [publisher]        pipeline/retry.go:223          done
Oct 11 14:08:54 server1 filebeat[62549]: 2023-10-11T14:08:54.402-0700        INFO        [esclientleg]        eslegclient/connection.go:314        Attempting to connect to Elasticse
Oct 11 14:08:54 server1 filebeat[62549]: 2023-10-1
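The repeated "Failed to connect" errors above indicate that Filebeat never manages to ship events or load the template into the indexer. A quick sanity check, sketched here under the assumption of the stock Wazuh filebeat-oss setup on this host:

```bash
# Validate the Filebeat configuration and test the connection to the indexer output
sudo filebeat test config
sudo filebeat test output

# Follow the full Filebeat log; the systemctl view above truncates long lines
sudo journalctl -u filebeat -f
```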

Wazuh Manager Service Status

wazuh@ubuntu:~$ sudo systemctl status wazuh-manager
● wazuh-manager.service - Wazuh manager
   Loaded: loaded (/usr/lib/systemd/system/wazuh-manager.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2023-10-10 16:17:53 PDT; 21h ago
    Tasks: 122 (limit: 3439)
   CGroup: /system.slice/wazuh-manager.service
           ├─52390 /var/ossec/framework/python/bin/python3 /var/ossec/api/scripts/wazuh-apid.py
           ├─52429 /var/ossec/bin/wazuh-authd
           ├─52442 /var/ossec/bin/wazuh-db
           ├─52467 /var/ossec/bin/wazuh-execd
           ├─52470 /var/ossec/framework/python/bin/python3 /var/ossec/api/scripts/wazuh-apid.py
           ├─52473 /var/ossec/framework/python/bin/python3 /var/ossec/api/scripts/wazuh-apid.py
           ├─52476 /var/ossec/framework/python/bin/python3 /var/ossec/api/scripts/wazuh-apid.py
           ├─52490 /var/ossec/bin/wazuh-analysisd
           ├─52532 /var/ossec/bin/wazuh-syscheckd
           ├─52546 /var/ossec/bin/wazuh-remoted
           ├─52578 /var/ossec/bin/wazuh-logcollector
           ├─52595 /var/ossec/bin/wazuh-monitord
           └─52606 /var/ossec/bin/wazuh-modulesd

Oct 10 16:17:46 server1 env[52331]: Started wazuh-db...
Oct 10 16:17:47 server1 env[52331]: Started wazuh-execd...
Oct 10 16:17:48 server1 env[52331]: Started wazuh-analysisd...
Oct 10 16:17:48 server1 env[52331]: Started wazuh-syscheckd...
Oct 10 16:17:49 server1 env[52331]: Started wazuh-remoted...
Oct 10 16:17:49 server1 env[52331]: Started wazuh-logcollector...
Oct 10 16:17:50 server1 env[52331]: Started wazuh-monitord...
Oct 10 16:17:51 server1 env[52331]: Started wazuh-modulesd...
Oct 10 16:17:53 server1 env[52331]: Completed.
Oct 10 16:17:53 server1 systemd[1]: Started Wazuh manager.
wazuh@ubuntu:~$ 

Wazuh Indexer Service Status

wazuh@ubuntu:~$ sudo systemctl status wazuh-indexer
● wazuh-indexer.service - Wazuh-indexer
   Loaded: loaded (/usr/lib/systemd/system/wazuh-indexer.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2023-10-10 16:44:36 PDT; 21h ago
     Docs: https://documentation.wazuh.com
 Main PID: 55145 (java)
    Tasks: 71 (limit: 3439)
   CGroup: /system.slice/wazuh-indexer.service
           └─55145 /usr/share/wazuh-indexer/jdk/bin/java -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch 

Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at org.opensearch.jobscheduler.sweeper.JobSweeper.lambda$initBackgroundSweep$10(JobSweeper.java:298)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at org.opensearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:239)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:806)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
Oct 11 06:39:55 server1 systemd-entrypoint[55145]:         at java.base/java.lang.Thread.run(Thread.java:833)

Wazuh Dashboard Service Status

wazuh@ubuntu:~$ sudo systemctl status wazuh-dashboard
● wazuh-dashboard.service - wazuh-dashboard
   Loaded: loaded (/etc/systemd/system/wazuh-dashboard.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2023-10-10 16:47:23 PDT; 21h ago
 Main PID: 55760 (node)
    Tasks: 11 (limit: 3439)
   CGroup: /system.slice/wazuh-dashboard.service
           └─55760 /usr/share/wazuh-dashboard/node/bin/node --no-warnings --max-http-header-size=65536 --unhandled-rejections=warn /usr/share/wazuh-dashboard/src/cli/dist -c /etc/wa

Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"get","statusCode":200,"req":{"url":"/ui/
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"get","statusCode":200,"req":{"url":"/460
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"get","statusCode":200,"req":{"url":"/ui/
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"post","statusCode":200,"req":{"url":"/in
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"post","statusCode":200,"req":{"url":"/in
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"post","statusCode":200,"req":{"url":"/in
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"post","statusCode":200,"req":{"url":"/el
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"post","statusCode":200,"req":{"url":"/in
Oct 11 13:25:17 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:17Z","tags":[],"pid":55760,"method":"post","statusCode":200,"req":{"url":"/in
Oct 11 13:25:18 server1 opensearch-dashboards[55760]: {"type":"response","@timestamp":"2023-10-11T20:25:18Z","tags":[],"pid":55760,"method":"get","statusCode":200,"req":{"url":"/ui/
@vcerenu
Member

vcerenu commented Oct 12, 2023

The error is generated because the absolute path of the commands is wrong. A path parameter will be added with all the possible locations of the necessary binaries.
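To illustrate the class of failure being described (a minimal sketch run with puppet apply; the resource title and command below are invented for demonstration and are not taken from wazuh-puppet):

```bash
# An Exec whose command points at a wrong absolute path fails on platforms
# where the binary lives elsewhere (the bogus path here is deliberate):
sudo puppet apply -e 'exec { "demo-cleanup": command => "/wrong/path/rm -f /tmp/demo.json" }'
#   Error: ... Could not find command '/wrong/path/rm'

# Passing a path parameter with the candidate directories lets Puppet resolve
# an unqualified command by itself, which is what adding path to the module's
# Exec resources accomplishes:
sudo puppet apply -e 'exec { "demo-cleanup":
  command => "rm -f /tmp/demo.json",
  path    => ["/bin", "/usr/bin", "/sbin", "/usr/sbin"],
}'
```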

@vcerenu
Member

vcerenu commented Oct 12, 2023

Log for deployed stack:

root@ip-172-31-2-18:/# puppet agent -t
Info: Using environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for ip-172-31-2-18.us-west-1.compute.internal
Info: Applying configuration version '1697118102'
Notice: /Stage[main]/Wazuh::Indexer/Package[wazuh-indexer]/ensure: created (corrective)
Info: /Stage[main]/Wazuh::Indexer/Package[wazuh-indexer]: Scheduling refresh of Exec[set recusive ownership of /etc/wazuh-indexer]
Info: /Stage[main]/Wazuh::Indexer/Package[wazuh-indexer]: Scheduling refresh of Exec[set recusive ownership of /usr/share/wazuh-indexer]
Info: /Stage[main]/Wazuh::Indexer/Package[wazuh-indexer]: Scheduling refresh of Exec[set recusive ownership of /var/lib/wazuh-indexer]
Notice: /Stage[main]/Wazuh::Indexer/Exec[ensure full path of /etc/wazuh-indexer/certs]/returns: executed successfully (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs]/owner: owner changed 'root' to 'wazuh-indexer' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs]/group: group changed 'root' to 'wazuh-indexer' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs]/mode: mode changed '0755' to '0500' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs/indexer.pem]/ensure: defined content as '{sha256}3988772e8ad6255be2637f772265c8d5fda17b34dafea644170a63e978fdd5db' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs/indexer-key.pem]/ensure: defined content as '{sha256}7adac83960b70c122218659849654994532549e31a44aae8ad5fd7f64920aeb2' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs/root-ca.pem]/ensure: defined content as '{sha256}2c811e3cd703ecdaf5c981a8ee6ca5c142487997221e7b70fde8c6533cf88a99' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs/admin.pem]/ensure: defined content as '{sha256}54e0e3cdf1beae4ba5fc8f812882657afedfc6b4f09771d99a6d870e4568c5a2' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[/etc/wazuh-indexer/certs/admin-key.pem]/ensure: defined content as '{sha256}f1363e016b7a7818821a922ace2a0346fc2ff714da49de300f846b3203ea50e4' (corrective)
Notice: /Stage[main]/Wazuh::Indexer/File[configuration file]/content: 
--- /etc/wazuh-indexer/opensearch.yml	2023-10-06 14:20:28.000000000 +0000
+++ /tmp/puppet-file20231012-21041-11okrcv	2023-10-12 13:42:52.356239401 +0000
@@ -2,16 +2,10 @@
 node.name: "node-1"
 cluster.initial_master_nodes:
 - "node-1"
-#- "node-2"
-#- "node-3"
 cluster.name: "wazuh-cluster"
-#discovery.seed_hosts:
-#  - "node-1-ip"
-#  - "node-2-ip"
-#  - "node-3-ip"
-node.max_local_storage_nodes: "3"
-path.data: /var/lib/wazuh-indexer
-path.logs: /var/log/wazuh-indexer
+node.max_local_storage_nodes: "1"
+path.data: "/var/lib/wazuh-indexer"
+path.logs: "/var/log/wazuh-indexer"
 
 plugins.security.ssl.http.pemcert_filepath: /etc/wazuh-indexer/certs/indexer.pem
 plugins.security.ssl.http.pemkey_filepath: /etc/wazuh-indexer/certs/indexer-key.pem
@@ -29,14 +23,12 @@
 plugins.security.enable_snapshot_restore_privilege: true
 plugins.security.nodes_dn:
 - "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
-#- "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
-#- "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
 plugins.security.restapi.roles_enabled:
 - "all_access"
 - "security_rest_api_access"
 
 plugins.security.system_indices.enabled: true
-plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
+plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
 
 ### Option to allow Filebeat-oss 7.10.2 to work ###
-compatibility.override_main_response_version: true
\ No newline at end of file
+compatibility.override_main_response_version: true

Notice: /Stage[main]/Wazuh::Indexer/File[configuration file]/content: 

Notice: /Stage[main]/Wazuh::Indexer/File[configuration file]/content: content changed '{sha256}d95d40b8ee093f122d8015d4a267eddbd92ba3e323c70f2ac7ab7d8ff9e584fe' to '{sha256}9e09ebb124ae8798fbd3a7643bdb0f29a4376d50ca9cb6b9d137a76f18b55e65' (corrective)
Info: /Stage[main]/Wazuh::Indexer/File[configuration file]: Scheduling refresh of Service[wazuh-indexer]
Notice: /Stage[main]/Wazuh::Indexer/Exec[set recusive ownership of /etc/wazuh-indexer]: Triggered 'refresh' from 1 event
Info: /Stage[main]/Wazuh::Indexer/Exec[set recusive ownership of /etc/wazuh-indexer]: Scheduling refresh of Service[wazuh-indexer]
Notice: /Stage[main]/Wazuh::Indexer/Exec[set recusive ownership of /usr/share/wazuh-indexer]: Triggered 'refresh' from 1 event
Info: /Stage[main]/Wazuh::Indexer/Exec[set recusive ownership of /usr/share/wazuh-indexer]: Scheduling refresh of Service[wazuh-indexer]
Notice: /Stage[main]/Wazuh::Indexer/Exec[set recusive ownership of /var/lib/wazuh-indexer]: Triggered 'refresh' from 1 event
Info: /Stage[main]/Wazuh::Indexer/Exec[set recusive ownership of /var/lib/wazuh-indexer]: Scheduling refresh of Service[wazuh-indexer]
Notice: /Stage[main]/Wazuh::Indexer/Service[wazuh-indexer]/ensure: ensure changed 'stopped' to 'running' (corrective)
Info: /Stage[main]/Wazuh::Indexer/Service[wazuh-indexer]: Unscheduling refresh on Service[wazuh-indexer]
Notice: /Stage[main]/Wazuh::Dashboard/Package[wazuh-dashboard]/ensure: created (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/Exec[ensure full path of /etc/wazuh-dashboard/certs]/returns: executed successfully (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/certs]/owner: owner changed 'root' to 'wazuh-dashboard' (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/certs]/group: group changed 'root' to 'wazuh-dashboard' (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/certs]/mode: mode changed '0755' to '0500' (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/certs/dashboard.pem]/ensure: defined content as '{sha256}02da9c9d4f5bba0783f67d604196521151c9b399d427cd0ce54ec9d63dc49c65' (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/certs/dashboard-key.pem]/ensure: defined content as '{sha256}b9bad3467b133fa5185bbf27b473916a839c8806652cffa159b0c3f68445e862' (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/certs/root-ca.pem]/ensure: defined content as '{sha256}2c811e3cd703ecdaf5c981a8ee6ca5c142487997221e7b70fde8c6533cf88a99' (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/opensearch_dashboards.yml]/content: 
--- /etc/wazuh-dashboard/opensearch_dashboards.yml	2023-10-06 20:04:47.000000000 +0000
+++ /tmp/puppet-file20231012-21041-1el4kwb	2023-10-12 13:43:55.028162230 +0000
@@ -2,9 +2,9 @@
 server.port: 443
 opensearch.hosts: https://localhost:9200
 opensearch.ssl.verificationMode: certificate
-#opensearch.username:
-#opensearch.password:
-opensearch.requestHeadersAllowlist: ["securitytenant","Authorization"]
+opensearch.username: kibanaserver
+opensearch.password: kibanaserver
+opensearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
 opensearch_security.multitenancy.enabled: false
 opensearch_security.readonly_mode.roles: ["kibana_read_only"]
 server.ssl.enabled: true
@@ -12,4 +12,3 @@
 server.ssl.certificate: "/etc/wazuh-dashboard/certs/dashboard.pem"
 opensearch.ssl.certificateAuthorities: ["/etc/wazuh-dashboard/certs/root-ca.pem"]
 uiSettings.overrides.defaultRoute: /app/wazuh
-

Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/opensearch_dashboards.yml]/content: 

Notice: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/opensearch_dashboards.yml]/content: content changed '{sha256}a706129e58858490e14dab0a11d0b6ec3c4afb7e89fbbebda8f327570879fc5d' to '{sha256}e7dcefa3591e0557500ae74c953d8d60ebbb5fd6e934bd7673443cbce47ef8a6' (corrective)
Info: /Stage[main]/Wazuh::Dashboard/File[/etc/wazuh-dashboard/opensearch_dashboards.yml]: Scheduling refresh of Service[wazuh-dashboard]
Notice: /Stage[main]/Wazuh::Dashboard/File[/usr/share/wazuh-dashboard/data/wazuh/]/ensure: created (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/usr/share/wazuh-dashboard/data/wazuh/config]/ensure: created (corrective)
Notice: /Stage[main]/Wazuh::Dashboard/File[/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml]/ensure: defined content as '{sha256}797b2fd0b16994267394ffb667430d1e7d7f86d1789e27b6946f85e10376be8f' (corrective)
Info: /Stage[main]/Wazuh::Dashboard/File[/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml]: Scheduling refresh of Service[wazuh-dashboard]
Notice: /Stage[main]/Wazuh::Dashboard/Service[wazuh-dashboard]/ensure: ensure changed 'stopped' to 'running' (corrective)
Info: /Stage[main]/Wazuh::Dashboard/Service[wazuh-dashboard]: Unscheduling refresh on Service[wazuh-dashboard]
Notice: /Stage[main]/Wazuh::Manager/Package[wazuh-manager]/ensure: created (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]/content: 
--- /var/ossec/etc/shared/default/agent.conf	2023-10-06 14:23:25.000000000 +0000
+++ /tmp/puppet-file20231012-21041-1muv045	2023-10-12 13:44:45.444101313 +0000
@@ -2,4 +2,4 @@
 
   <!-- Shared agent configuration here -->
 
-</agent_config>
+</agent_config>
\ No newline at end of file

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]/content: 

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]/content: content changed '{sha256}d76908d51018ec72afc1a7e17fbc3971c6a812446fd930fdba5ed66f1af47ed0' to '{sha256}ea2cf84c0fdc6dd290d7cba0ad0eac63850d56203aeb882568f69f22d98dccf9' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]/owner: owner changed 'wazuh' to 'root' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]/mode: mode changed '0660' to '0640' (corrective)
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/shared/default/agent.conf]: Scheduling refresh of Service[wazuh-manager]
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]/content: 
--- /var/ossec/etc/rules/local_rules.xml	2023-10-06 14:23:25.000000000 +0000
+++ /tmp/puppet-file20231012-21041-v6mwwp	2023-10-12 13:44:45.476101273 +0000
@@ -1,14 +1,12 @@
-<!-- Local rules -->
-
 <!-- Modify it at your will. -->
-<!-- Copyright (C) 2015, Wazuh Inc. -->
 
-<!-- Example -->
 <group name="local,syslog,sshd,">
 
-  <!--
-  Dec 10 01:02:02 host sshd[1234]: Failed none for root from 1.1.1.1 port 1066 ssh2
-  -->
+  <!-- Note that rule id 5711 is defined at the ssh_rules file
+    -  as a ssh failed login. This is just an example
+    -  since ip 1.1.1.1 shouldn't be used anywhere.
+    -  Level 0 means ignore.
+    -->
   <rule id="100001" level="5">
     <if_sid>5716</if_sid>
     <srcip>1.1.1.1</srcip>
@@ -16,4 +14,28 @@
     <group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,</group>
   </rule>
 
-</group>
+
+  <!-- This example will ignore ssh failed logins for the user name XYZABC.
+    -->
+  <!--
+  <rule id="100020" level="0">
+    <if_sid>5711</if_sid>
+    <user>XYZABC</user>
+    <description>Example of rule that will ignore sshd </description>
+    <description>failed logins for user XYZABC.</description>
+  </rule>
+  -->
+
+
+  <!-- Specify here a list of rules to ignore. -->
+  <!--
+  <rule id="100030" level="0">
+    <if_sid>12345, 23456, xyz, abc</if_sid>
+    <description>List of rules to be ignored.</description>
+  </rule>
+  -->
+
+</group> <!-- SYSLOG,LOCAL -->
+
+
+<!-- EOF -->

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]/content: 

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]/content: content changed '{sha256}991dc926bd2e3aec88bd79be1c8b458777f64f489b3e6524e682ac33620425f4' to '{sha256}4b0ffe3d22c782a75fa5559839751959cc9cb33256ca06efcca298cb0109a342' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]/owner: owner changed 'wazuh' to 'root' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]/mode: mode changed '0660' to '0640' (corrective)
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/rules/local_rules.xml]: Scheduling refresh of Service[wazuh-manager]
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]/content: 
--- /var/ossec/etc/decoders/local_decoder.xml	2023-10-06 14:23:25.000000000 +0000
+++ /tmp/puppet-file20231012-21041-1xbn3ad	2023-10-12 13:44:45.504101240 +0000
@@ -1,8 +1,6 @@
 <!-- Local Decoders -->
 
 <!-- Modify it at your will. -->
-<!-- Copyright (C) 2015, Wazuh Inc. -->
-
 <!--
   - Allowed static fields:
   - location   - where the log came from (only on FTS)

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]/content: 

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]/content: content changed '{sha256}21f5e1ff2ea096f2b1b6acdc1fc25bcac46734614b253f6ad1352d9c2a1c5c13' to '{sha256}7e45d35ee7a35a68fe13cd5e3f7f69ec2776322cd2d3fa42bb474ba06279aecc' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]/owner: owner changed 'wazuh' to 'root' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]/mode: mode changed '0660' to '0640' (corrective)
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/etc/decoders/local_decoder.xml]: Scheduling refresh of Service[wazuh-manager]
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/bin/.process_list]/ensure: defined content as '{sha256}5309904b42512c478b2da5e23cf756e3733d61834a9749e549af895f5d5b478c' (corrective)
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/bin/.process_list]: Scheduling refresh of Service[wazuh-manager]
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/api/configuration/api.yaml]/content: 
--- /var/ossec/api/configuration/api.yaml	2023-10-06 14:23:25.000000000 +0000
+++ /tmp/puppet-file20231012-21041-obdo82	2023-10-12 13:44:45.536101202 +0000
@@ -1,74 +1,51 @@
-# USE THIS FILE AS A TEMPLATE. UNCOMMENT LINES TO APPLY CUSTOM CONFIGURATION
-
-# host: 0.0.0.0
-# port: 55000
-
-# Advanced configuration
-
-# https:
-#  enabled: yes
-#  key: "server.key"
-#  cert: "server.crt"
-#  use_ca: False
-#  ca: "ca.crt"
-#  ssl_protocol: "TLSv1.2"
-#  ssl_ciphers: ""
-
-# Modify API's intervals (time in seconds)
-# intervals:
-#   request_timeout: 10
-
-# Logging configuration
-# Values for API log level: disabled, info, warning, error, debug, debug2 (each level includes the previous level).
-# Values for API log max_size: <value><unit>. Valid units: K (kilobytes), M (megabytes)
-# Enabling the API log max_size will disable the time based rotation (on midnight)
-# logs:
-#  level: "info"
-#  format: "plain"
-#  max_size:
-#    enabled: False
-#    size: "1M"
-
-# Cross-origin resource sharing: https://github.com/aio-libs/aiohttp-cors#usage
-# cors:
-#  enabled: no
-#  source_route: "*"
-#  expose_headers: "*"
-#  allow_headers: "*"
-#  allow_credentials: no
-
-# Cache (time in seconds)
-# cache:
-#  enabled: yes
-#  time: 0.750
-
-# Access parameters
-# access:
-#  max_login_attempts: 50
-#  block_time: 300
-#  max_request_per_minute: 300
-
-# Drop privileges (Run as wazuh user)
-# drop_privileges: yes
-
-# Enable features under development
-# experimental_features: no
-
-# Maximum body size that the API can accept, in bytes (0 -> limitless)
-# max_upload_size: 10485760
-
-# Uploadable Wazuh configuration sections
-# upload_configuration:
-#   remote_commands:
-#     localfile:
-#       allow: yes
-#       exceptions: []
-#     wodle_command:
-#       allow: yes
-#       exceptions: []
-#   limits:
-#     eps:
-#       allow: yes
-#   agents:
-#     allow_higher_versions:
-#       allow: yes
+#
+# Wazuh API configuration file
+# Copyright (C) 2015, Wazuh Inc.
+#
+host: 0.0.0.0
+port: 55000
+# Advanced configuration
+https:
+  enabled: yes
+  key: server.key
+  cert: server.crt
+  use_ca: False
+  ca: ca.crt
+  ssl_protocol: TLSv1.2
+  ssl_ciphers: ""
+# Logging configuration
+# Values for API log level: disabled, info, warning, error, debug, debug2 (each level includes the previous level).
+logs:
+  level: info
+# Cross-origin resource sharing: https://github.com/aio-libs/aiohttp-cors#usage
+cors:
+  enabled: no
+  source_route: "*"
+  expose_headers: "*"
+  allow_headers: "*"
+  allow_credentials: no
+# Cache (time in seconds)
+cache:
+  enabled: yes
+  time: 0.750
+# Access parameters
+access:
+  max_login_attempts: 5
+  block_time: 300
+  max_request_per_minute: 300
+# Drop privileges (Run as ossec user)
+drop_privileges: yes
+# Enable features under development
+experimental_features: no
+# Enable remote commands
+upload_configuration:
+  remote_commands:
+    localfile:
+      allow: yes
+      exceptions: []
+    wodle_command:
+      allow: yes
+      exceptions: []
+  limits:
+    eps:
+      allow: yes

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/api/configuration/api.yaml]/content: 

Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/api/configuration/api.yaml]/content: content changed '{sha256}f40da38299971720e6a418d25d30fe66b93ed33a2279185aaeda510e3d5c6a95' to '{sha256}6feac61594606155588fe4827f86b8b6653d378f3913a7c6c217b4e5d9a7ec0c' (corrective)
Notice: /Stage[main]/Wazuh::Manager/File[/var/ossec/api/configuration/api.yaml]/mode: mode changed '0660' to '0640' (corrective)
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/api/configuration/api.yaml]: Scheduling refresh of Service[wazuh-manager]
Info: /Stage[main]/Wazuh::Manager/File[/var/ossec/api/configuration/api.yaml]: Scheduling refresh of Service[wazuh-manager]
Notice: /Stage[main]/Wazuh::Filebeat_oss/Package[filebeat]/ensure: created (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/filebeat.yml]/content: 
--- /etc/filebeat/filebeat.yml	2021-01-12 22:10:03.000000000 +0000
+++ /tmp/puppet-file20231012-21041-lm3ztk	2023-10-12 13:44:50.064095779 +0000
@@ -1,270 +1,34 @@
-###################### Filebeat Configuration Example #########################
+# Wazuh - Filebeat configuration file
+filebeat.modules:
+  - module: wazuh
+    alerts:
+      enabled: true
+    archives:
+      enabled: false
+
+setup.template.json.enabled: true
+setup.template.json.path: "/etc/filebeat/wazuh-template.json"
+setup.template.json.name: "wazuh"
+setup.template.overwrite: true
 
-# This file is an example configuration file highlighting only the most common
-# options. The filebeat.reference.yml file from the same directory contains all the
-# supported options with more comments. You can use it as a reference.
-#
-# You can find the full configuration reference here:
-# https://www.elastic.co/guide/en/beats/filebeat/index.html
-
-# For more available modules and options, please see the filebeat.reference.yml sample
-# configuration file.
-
-# ============================== Filebeat inputs ===============================
-
-filebeat.inputs:
-
-# Each - is an input. Most options can be set at the input level, so
-# you can use different inputs for various configurations.
-# Below are the input specific configurations.
-
-- type: log
-
-  # Change to true to enable this input configuration.
-  enabled: false
-
-  # Paths that should be crawled and fetched. Glob based paths.
-  paths:
-    - /var/log/*.log
-    #- c:\programdata\elasticsearch\logs\*
-
-  # Exclude lines. A list of regular expressions to match. It drops the lines that are
-  # matching any regular expression from the list.
-  #exclude_lines: ['^DBG']
-
-  # Include lines. A list of regular expressions to match. It exports the lines that are
-  # matching any regular expression from the list.
-  #include_lines: ['^ERR', '^WARN']
-
-  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
-  # are matching any regular expression from the list. By default, no files are dropped.
-  #exclude_files: ['.gz$']
-
-  # Optional additional fields. These fields can be freely picked
-  # to add additional information to the crawled log files for filtering
-  #fields:
-  #  level: debug
-  #  review: 1
-
-  ### Multiline options
-
-  # Multiline can be used for log messages spanning multiple lines. This is common
-  # for Java Stack Traces or C-Line Continuation
-
-  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
-  #multiline.pattern: ^\[
-
-  # Defines if the pattern set under pattern should be negated or not. Default is false.
-  #multiline.negate: false
-
-  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
-  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
-  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
-  #multiline.match: after
-
-# filestream is an experimental input. It is going to replace log input in the future.
-- type: filestream
-
-  # Change to true to enable this input configuration.
-  enabled: false
-
-  # Paths that should be crawled and fetched. Glob based paths.
-  paths:
-    - /var/log/*.log
-    #- c:\programdata\elasticsearch\logs\*
-
-  # Exclude lines. A list of regular expressions to match. It drops the lines that are
-  # matching any regular expression from the list.
-  #exclude_lines: ['^DBG']
-
-  # Include lines. A list of regular expressions to match. It exports the lines that are
-  # matching any regular expression from the list.
-  #include_lines: ['^ERR', '^WARN']
-
-  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
-  # are matching any regular expression from the list. By default, no files are dropped.
-  #prospector.scanner.exclude_files: ['.gz$']
-
-  # Optional additional fields. These fields can be freely picked
-  # to add additional information to the crawled log files for filtering
-  #fields:
-  #  level: debug
-  #  review: 1
-
-# ============================== Filebeat modules ==============================
-
-filebeat.config.modules:
-  # Glob pattern for configuration loading
-  path: ${path.config}/modules.d/*.yml
-
-  # Set to true to enable config reloading
-  reload.enabled: false
-
-  # Period on which files under path should be checked for changes
-  #reload.period: 10s
-
-# ======================= Elasticsearch template setting =======================
-
-setup.template.settings:
-  index.number_of_shards: 1
-  #index.codec: best_compression
-  #_source.enabled: false
-
-
-# ================================== General ===================================
-
-# The name of the shipper that publishes the network data. It can be used to group
-# all the transactions sent by a single shipper in the web interface.
-#name:
-
-# The tags of the shipper are included in their own field with each
-# transaction published.
-#tags: ["service-X", "web-tier"]
-
-# Optional fields that you can specify to add additional information to the
-# output.
-#fields:
-#  env: staging
-
-# ================================= Dashboards =================================
-# These settings control loading the sample dashboards to the Kibana index. Loading
-# the dashboards is disabled by default and can be enabled either by setting the
-# options here or by using the `setup` command.
-#setup.dashboards.enabled: false
-
-# The URL from where to download the dashboards archive. By default this URL
-# has a value which is computed based on the Beat name and version. For released
-# versions, this URL points to the dashboard archive on the artifacts.elastic.co
-# website.
-#setup.dashboards.url:
-
-# =================================== Kibana ===================================
-
-# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
-# This requires a Kibana endpoint configuration.
-setup.kibana:
-
-  # Kibana Host
-  # Scheme and port can be left out and will be set to the default (http and 5601)
-  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
-  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
-  #host: "localhost:5601"
-
-  # Kibana Space ID
-  # ID of the Kibana Space into which the dashboards should be loaded. By default,
-  # the Default Space will be used.
-  #space.id:
-
-# =============================== Elastic Cloud ================================
-
-# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
-
-# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
-# `setup.kibana.host` options.
-# You can find the `cloud.id` in the Elastic Cloud web UI.
-#cloud.id:
-
-# The cloud.auth setting overwrites the `output.elasticsearch.username` and
-# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
-#cloud.auth:
-
-# ================================== Outputs ===================================
-
-# Configure what output to use when sending the data collected by the beat.
-
-# ---------------------------- Elasticsearch Output ----------------------------
+# Send events directly to Indexer
 output.elasticsearch:
-  # Array of hosts to connect to.
-  hosts: ["localhost:9200"]
-
-  # Protocol - either `http` (default) or `https`.
-  #protocol: "https"
-
-  # Authentication credentials - either API key or username/password.
-  #api_key: "id:api_key"
-  #username: "elastic"
-  #password: "changeme"
-
-# ------------------------------ Logstash Output -------------------------------
-#output.logstash:
-  # The Logstash hosts
-  #hosts: ["localhost:5044"]
-
-  # Optional SSL. By default is off.
-  # List of root certificates for HTTPS server verifications
-  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
-
-  # Certificate for SSL client authentication
-  #ssl.certificate: "/etc/pki/client/cert.pem"
-
-  # Client Certificate Key
-  #ssl.key: "/etc/pki/client/cert.key"
-
-# ================================= Processors =================================
-processors:
-  - add_host_metadata:
-      when.not.contains.tags: forwarded
-  - add_cloud_metadata: ~
-  - add_docker_metadata: ~
-  - add_kubernetes_metadata: ~
-
-# ================================== Logging ===================================
-
-# Sets log level. The default log level is info.
-# Available log levels are: error, warning, info, debug
-#logging.level: debug
-
-# At debug level, you can selectively enable logging only for some components.
-# To enable all selectors use ["*"]. Examples of other selectors are "beat",
-# "publish", "service".
-#logging.selectors: ["*"]
-
-# ============================= X-Pack Monitoring ==============================
-# Filebeat can export internal metrics to a central Elasticsearch monitoring
-# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
-# reporting is disabled by default.
-
-# Set to true to enable the monitoring reporter.
-#monitoring.enabled: false
-
-# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
-# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
-# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
-#monitoring.cluster_uuid:
-
-# Uncomment to send the metrics to Elasticsearch. Most settings from the
-# Elasticsearch output are accepted here as well.
-# Note that the settings should point to your Elasticsearch *monitoring* cluster.
-# Any setting that is not set is automatically inherited from the Elasticsearch
-# output configuration, so if you have the Elasticsearch output configured such
-# that it is pointing to your Elasticsearch monitoring cluster, you can simply
-# uncomment the following line.
-#monitoring.elasticsearch:
-
-# ============================== Instrumentation ===============================
-
-# Instrumentation support for the filebeat.
-#instrumentation:
-    # Set to true to enable instrumentation of filebeat.
-    #enabled: false
-
-    # Environment in which filebeat is running on (eg: staging, production, etc.)
-    #environment: ""
-
-    # APM Server hosts to report instrumentation results to.
-    #hosts:
-    #  - http://localhost:8200
-
-    # API Key for the APM Server(s).
-    # If api_key is set then secret_token will be ignored.
-    #api_key:
-
-    # Secret token for the APM Server(s).
-    #secret_token:
-
-
-# ================================= Migration ==================================
-
-# This allows to enable 6.7 migration aliases
-#migration.6_to_7.enabled: true
-
+  hosts: ["https://127.0.0.1:9200"]
+  username: admin
+  password: admin
+  protocol: https
+  ssl.certificate_authorities:
+    - /etc/filebeat/certs/root-ca.pem
+  ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
+  ssl.key: "/etc/filebeat/certs/filebeat-key.pem"
+
+setup.ilm.enabled: false
+
+logging.metrics.enabled: false
+
+seccomp:
+  default_action: allow
+  syscalls:
+  - action: allow
+    names:
+    - rseq

Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/filebeat.yml]/content: 

Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/filebeat.yml]/content: content changed '{sha256}d4d7b4d818401d90b4425814dcf01ad4e1d7b6ec51acb9b926b4ff1c0673a02e' to '{sha256}dbb85d6fd9d8401b09f9c0aeb514b0f0adf970b6af1dbc7997e5b00aa872a4c2' (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/filebeat.yml]/mode: mode changed '0600' to '0640' (corrective)
Info: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/filebeat.yml]: Scheduling refresh of Service[filebeat]
Info: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/filebeat.yml]: Scheduling refresh of Service[filebeat]
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/wazuh-template.json]/ensure: defined content as '{mtime}2023-10-12 13:44:50 +0000' (corrective)
Info: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/wazuh-template.json]: Scheduling refresh of Service[filebeat]
Notice: /Stage[main]/Wazuh::Filebeat_oss/Archive[/tmp/wazuh-filebeat-0.2.tar.gz]/ensure: download archive from https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.2.tar.gz to /tmp/wazuh-filebeat-0.2.tar.gz and extracted in /usr/share/filebeat/module with cleanup (corrective)
Info: /Stage[main]/Wazuh::Filebeat_oss/Archive[/tmp/wazuh-filebeat-0.2.tar.gz]: Scheduling refresh of Service[filebeat]
Notice: /Stage[main]/Wazuh::Filebeat_oss/Exec[ensure full path of /etc/filebeat/certs]/returns: executed successfully (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/certs]/mode: mode changed '0755' to '0500' (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/certs/filebeat.pem]/ensure: defined content as '{sha256}7957016e16d896c3da8e2d83238868e966dcd1442377803b46cc758c0edeed50' (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/certs/filebeat-key.pem]/ensure: defined content as '{sha256}a7ccf9a71ef587404aee9a7916fcba57458d96a701b08201001229a7091ea010' (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/File[/etc/filebeat/certs/root-ca.pem]/ensure: defined content as '{sha256}2c811e3cd703ecdaf5c981a8ee6ca5c142487997221e7b70fde8c6533cf88a99' (corrective)
Notice: /Stage[main]/Wazuh::Filebeat_oss/Service[filebeat]: Triggered 'refresh' from 4 events
Notice: /Stage[main]/Wazuh::Manager/Concat[manager_ossec.conf]/File[/var/ossec/etc/ossec.conf]/content: 
--- /var/ossec/etc/ossec.conf	2023-10-12 13:44:08.580145760 +0000
+++ /tmp/puppet-file20231012-21041-1wb33g0	2023-10-12 13:44:50.480095280 +0000
@@ -1,23 +1,15 @@
-<!--
-  Wazuh - Manager - Default configuration for ubuntu 18.04
-  More info at: https://documentation.wazuh.com
-  Mailing list: https://groups.google.com/forum/#!forum/wazuh
--->
-
 <ossec_config>
   <global>
     <jsonout_output>yes</jsonout_output>
     <alerts_log>yes</alerts_log>
     <logall>no</logall>
     <logall_json>no</logall_json>
-    <email_notification>no</email_notification>
-    <smtp_server>smtp.example.wazuh.com</smtp_server>
-    <email_from>wazuh@example.wazuh.com</email_from>
-    <email_to>recipient@example.wazuh.com</email_to>
-    <email_maxperhour>12</email_maxperhour>
-    <email_log_source>alerts.log</email_log_source>
     <agents_disconnection_time>10m</agents_disconnection_time>
     <agents_disconnection_alert_time>0</agents_disconnection_alert_time>
+    <email_notification>no</email_notification>
+    <white_list>127.0.0.1</white_list>
+    <white_list>^localhost.localdomain$</white_list>
+    <white_list>10.0.0.2</white_list>
   </global>
 
   <alerts>
@@ -25,7 +17,6 @@
     <email_alert_level>12</email_alert_level>
   </alerts>
 
-  <!-- Choose between "plain", "json", or "plain,json" for the format of internal logs -->
   <logging>
     <log_format>plain</log_format>
   </logging>
@@ -37,8 +28,9 @@
     <queue_size>131072</queue_size>
   </remote>
 
-  <!-- Policy monitoring -->
-  <rootcheck>
+
+
+<rootcheck>
     <disabled>no</disabled>
     <check_files>yes</check_files>
     <check_trojans>yes</check_trojans>
@@ -47,210 +39,220 @@
     <check_pids>yes</check_pids>
     <check_ports>yes</check_ports>
     <check_if>yes</check_if>
-
-    <!-- Frequency that rootcheck is executed - every 12 hours -->
     <frequency>43200</frequency>
-
-    <rootkit_files>etc/rootcheck/rootkit_files.txt</rootkit_files>
-    <rootkit_trojans>etc/rootcheck/rootkit_trojans.txt</rootkit_trojans>
-
+    <rootkit_files>/var/ossec/etc/rootcheck/rootkit_files.txt</rootkit_files>
+    <rootkit_trojans>/var/ossec/etc/rootcheck/rootkit_trojans.txt</rootkit_trojans>
     <skip_nfs>yes</skip_nfs>
-  </rootcheck>
+</rootcheck>
 
-  <wodle name="cis-cat">
+<wodle name="open-scap">
     <disabled>yes</disabled>
     <timeout>1800</timeout>
     <interval>1d</interval>
     <scan-on-start>yes</scan-on-start>
 
+</wodle>
+<wodle name="cis-cat">    
+    <disabled>yes</disabled>
+    <timeout>1800</timeout>
+    <interval>1d</interval>
+    <scan-on-start>yes</scan-on-start>
     <java_path>wodles/java</java_path>
     <ciscat_path>wodles/ciscat</ciscat_path>
-  </wodle>
+</wodle>
 
-  <!-- Osquery integration -->
-  <wodle name="osquery">
+
+<wodle name="osquery">
     <disabled>yes</disabled>
     <run_daemon>yes</run_daemon>
-    <log_path>/var/log/osquery/osqueryd.results.log</log_path>
+      <log_path>/var/log/osquery/osqueryd.results.log</log_path>
     <config_path>/etc/osquery/osquery.conf</config_path>
     <add_labels>yes</add_labels>
-  </wodle>
+</wodle>
 
-  <!-- System inventory -->
-  <wodle name="syscollector">
-    <disabled>no</disabled>
-    <interval>1h</interval>
-    <scan_on_start>yes</scan_on_start>
-    <hardware>yes</hardware>
-    <os>yes</os>
-    <network>yes</network>
-    <packages>yes</packages>
-    <ports all="no">yes</ports>
-    <processes>yes</processes>
-
-    <!-- Database synchronization settings -->
-    <synchronization>
-      <max_eps>10</max_eps>
-    </synchronization>
-  </wodle>
+  
+<wodle name="syscollector">
+  <disabled>no</disabled>
+  <interval>1h</interval>
+  <scan_on_start>yes</scan_on_start>
+  <hardware>yes</hardware>
+  <os>yes</os>
+  <network>yes</network>
+  <packages>yes</packages>
+  <ports all="no">yes</ports>
+  <processes>yes</processes>
+</wodle>
 
-  <sca>
+<sca>
     <enabled>yes</enabled>
     <scan_on_start>yes</scan_on_start>
     <interval>12h</interval>
     <skip_nfs>yes</skip_nfs>
+  
   </sca>
+    
+  
+<vulnerability-detector>
+  <enabled>no</enabled>
+  <interval>5m</interval>
+  <run_on_start>yes</run_on_start>
+  <min_full_scan_interval>6h</min_full_scan_interval>
 
-  <vulnerability-detector>
+  <provider name="canonical">
     <enabled>no</enabled>
-    <interval>5m</interval>
-    <min_full_scan_interval>6h</min_full_scan_interval>
-    <run_on_start>yes</run_on_start>
+    
+    
+    <os>trusty</os>
+    
+    <os>xenial</os>
+    
+    <os>bionic</os>
+    
+    <os>focal</os>
+    
+    <os>jammy</os>
+    
+    
+    <update_interval>1h</update_interval>
+  </provider>
 
-    <!-- Ubuntu OS vulnerabilities -->
-    <provider name="canonical">
-      <enabled>no</enabled>
-      <os>trusty</os>
-      <os>xenial</os>
-      <os>bionic</os>
-      <os>focal</os>
-      <os>jammy</os>
-      <update_interval>1h</update_interval>
-    </provider>
-
-    <!-- Debian OS vulnerabilities -->
-    <provider name="debian">
-      <enabled>no</enabled>
-      <os>buster</os>
-      <os>bullseye</os>
-      <os>bookworm</os>
-      <update_interval>1h</update_interval>
-    </provider>
 
-    <!-- RedHat OS vulnerabilities -->
-    <provider name="redhat">
-      <enabled>no</enabled>
-      <os>5</os>
-      <os>6</os>
-      <os>7</os>
-      <os>8</os>
-      <os>9</os>
-      <update_interval>1h</update_interval>
-    </provider>
+  <provider name="debian">
+    <enabled>no</enabled>
+    
+    
+    <os>buster</os>
+    
+    <os>bullseye</os>
+    
+    
+    <update_interval>1h</update_interval>
+  </provider>
 
-    <!-- Amazon Linux OS vulnerabilities -->
-    <provider name="alas">
-      <enabled>no</enabled>
-      <os>amazon-linux</os>
-      <os>amazon-linux-2</os>
-      <os>amazon-linux-2022</os>
-      <update_interval>1h</update_interval>
-    </provider>
 
-    <!-- SUSE OS vulnerabilities -->
-    <provider name="suse">
-      <enabled>no</enabled>
-      <os>11-server</os>
-      <os>11-desktop</os>
-      <os>12-server</os>
-      <os>12-desktop</os>
-      <os>15-server</os>
-      <os>15-desktop</os>
-      <update_interval>1h</update_interval>
-    </provider>
+  <provider name="redhat">
+    <enabled>no</enabled>
+    <update_interval>1h</update_interval>
+    
+    
+    <os>5</os>
+    
+    <os>6</os>
+    
+    <os>7</os>
+    
+    <os>8</os>
+    
+    <os>9</os>
+    
+    
+  </provider>
 
-    <!-- Arch OS vulnerabilities -->
-    <provider name="arch">
-      <enabled>no</enabled>
-      <update_interval>1h</update_interval>
-    </provider>
 
-    <!-- Alma Linux OS vulnerabilities -->
-    <provider name="almalinux">
-      <enabled>no</enabled>
-      <os>8</os>
-      <os>9</os>
-      <update_interval>1h</update_interval>
-    </provider>
+  <provider name="nvd">
+    <enabled>no</enabled>
+    <update_interval>1h</update_interval>
+  </provider>
 
-    <!-- Windows OS vulnerabilities -->
-    <provider name="msu">
-      <enabled>yes</enabled>
-      <update_interval>1h</update_interval>
-    </provider>
 
-    <!-- Aggregate vulnerabilities -->
-    <provider name="nvd">
-      <enabled>yes</enabled>
-      <update_interval>1h</update_interval>
-    </provider>
+  <provider name="arch">
+    <enabled>no</enabled>
+    <update_interval>1h</update_interval>
+  </provider>
 
-  </vulnerability-detector>
 
-  <!-- File integrity monitoring -->
-  <syscheck>
-    <disabled>no</disabled>
+  <provider name="alas">
+    <enabled>no</enabled>
+    <update_interval>1h</update_interval>
+    
+    
+    <os>amazon-linux</os>
+    
+    <os>amazon-linux-2</os>
+    
+    
+    </provider>
 
-    <!-- Frequency that syscheck is executed default every 12 hours -->
-    <frequency>43200</frequency>
 
-    <scan_on_start>yes</scan_on_start>
+  <provider name="suse">
+    <enabled>no</enabled>
+    <update_interval>1h</update_interval>
+    
+    
+    <os>11-server</os>
+    
+    <os>11-desktop</os>
+    
+    <os>12-server</os>
+    
+    <os>12-desktop</os>
+    
+    <os>15-server</os>
+    
+    <os>15-desktop</os>
+    
+    
+    </provider>
 
-    <!-- Generate alert when new file detected -->
-    <alert_new_files>yes</alert_new_files>
 
-    <!-- Don't ignore files that change more than 'frequency' times -->
-    <auto_ignore frequency="10" timeframe="3600">no</auto_ignore>
+  <provider name="msu">
+    <enabled>no</enabled>
+    <update_interval>1h</update_interval>
+  </provider>
 
-    <!-- Directories to check  (perform all possible verifications) -->
-    <directories>/etc,/usr/bin,/usr/sbin</directories>
-    <directories>/bin,/sbin,/boot</directories>
-
-    <!-- Files/directories to ignore -->
-    <ignore>/etc/mtab</ignore>
-    <ignore>/etc/hosts.deny</ignore>
-    <ignore>/etc/mail/statistics</ignore>
-    <ignore>/etc/random-seed</ignore>
-    <ignore>/etc/random.seed</ignore>
-    <ignore>/etc/adjtime</ignore>
-    <ignore>/etc/httpd/logs</ignore>
-    <ignore>/etc/utmpx</ignore>
-    <ignore>/etc/wtmpx</ignore>
-    <ignore>/etc/cups/certs</ignore>
-    <ignore>/etc/dumpdates</ignore>
-    <ignore>/etc/svc/volatile</ignore>
 
-    <!-- File types to ignore -->
-    <ignore type="sregex">.log$|.swp$</ignore>
+  <provider name="almalinux">
+    <enabled>no</enabled>
+    
+    
+    <os>8</os>
+    
+    <os>9</os>
+    
+    
+    <update_interval>1h</update_interval>
+  </provider>
+
+</vulnerability-detector>
+
+<syscheck>
+  <disabled>no</disabled>
+  <frequency>43200</frequency>
+  <scan_on_start>yes</scan_on_start>
+  <auto_ignore frequency="10" timeframe="3600">no</auto_ignore>
+  <process_priority>10</process_priority>
+  <synchronization>
+    <enabled>yes</enabled>
+    <interval>5m</interval>
+    <max_interval>1h</max_interval>
+    <max_eps>10</max_eps>
+  </synchronization>
+
+  <directories check_all="yes" >/etc,/usr/bin,/usr/sbin</directories>
+  <directories check_all="yes" >/bin,/sbin,/boot</directories>
+  <ignore>/etc/mtab</ignore>
+  <ignore>/etc/hosts.deny</ignore>
+  <ignore>/etc/mail/statistics</ignore>
+  <ignore>/etc/random-seed</ignore>
+  <ignore>/etc/random.seed</ignore>
+  <ignore>/etc/adjtime</ignore>
+  <ignore>/etc/httpd/logs</ignore>
+  <ignore>/etc/utmpx</ignore>
+  <ignore>/etc/wtmpx</ignore>
+  <ignore>/etc/cups/certs</ignore>
+  <ignore>/etc/dumpdates</ignore>
+  <ignore>/etc/svc/volatile</ignore>
+  <ignore>/sys/kernel/security</ignore>
+  <ignore>/sys/kernel/debug</ignore>
+  <ignore>/dev/core</ignore>
+  <ignore type="sregex">^/proc</ignore>
+  <ignore type="sregex">.log$|.swp$</ignore>
+  <nodiff>/etc/ssl/private.key</nodiff>
+  <skip_nfs>yes</skip_nfs>
+</syscheck>
 
-    <!-- Check the file, but never compute the diff -->
-    <nodiff>/etc/ssl/private.key</nodiff>
 
-    <skip_nfs>yes</skip_nfs>
-    <skip_dev>yes</skip_dev>
-    <skip_proc>yes</skip_proc>
-    <skip_sys>yes</skip_sys>
-
-    <!-- Nice value for Syscheck process -->
-    <process_priority>10</process_priority>
-
-    <!-- Maximum output throughput -->
-    <max_eps>100</max_eps>
-
-    <!-- Database synchronization settings -->
-    <synchronization>
-      <enabled>yes</enabled>
-      <interval>5m</interval>
-      <max_eps>10</max_eps>
-    </synchronization>
-  </syscheck>
 
-  <!-- Active response -->
-  <global>
-    <white_list>127.0.0.1</white_list>
-    <white_list>^localhost.localdomain$</white_list>
-    <white_list>127.0.0.53</white_list>
-  </global>
 
   <command>
     <name>disable-account</name>
@@ -259,8 +261,8 @@
   </command>
 
   <command>
-    <name>restart-wazuh</name>
-    <executable>restart-wazuh</executable>
+    <name>restart-ossec</name>
+    <executable>restart-ossec</executable>
   </command>
 
   <command>
@@ -283,118 +285,124 @@
 
   <command>
     <name>win_route-null</name>
-    <executable>route-null.exe</executable>
+    <executable>route-null</executable>
+    <timeout_allowed>yes</timeout_allowed>
+  </command>
+
+  <command>
+    <name>win_route-null-2012</name>
+    <executable>route-null-2012</executable>
     <timeout_allowed>yes</timeout_allowed>
   </command>
 
   <command>
     <name>netsh</name>
-    <executable>netsh.exe</executable>
+    <executable>netsh</executable>
     <timeout_allowed>yes</timeout_allowed>
   </command>
 
-  <!--
-  <active-response>
-    active-response options here
-  </active-response>
-  -->
+  <command>
+    <name>netsh-win-2016</name>
+    <executable>netsh-win-2016</executable>
+    <timeout_allowed>yes</timeout_allowed>
+  </command>
+
+  
+  <localfile>
+    <log_format>syslog</log_format>
+    <location>/var/log/syslog</location>
+  </localfile>
+  <localfile>
+    <log_format>syslog</log_format>
+    <location>/var/log/dpkg.log</location>
+  </localfile>
+  <localfile>
+    <log_format>syslog</log_format>
+    <location>/var/log/kern.log</location>
+  </localfile>
+  <localfile>
+    <log_format>syslog</log_format>
+    <location>/var/log/auth.log</location>
+  </localfile>
+  <localfile>
+    <log_format>syslog</log_format>
+    <location>/var/ossec/logs/active-responses.log</location>
+  </localfile>
 
-  <!-- Log analysis -->
   <localfile>
     <log_format>command</log_format>
     <command>df -P</command>
     <frequency>360</frequency>
   </localfile>
-
   <localfile>
     <log_format>full_command</log_format>
     <command>netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d</command>
     <alias>netstat listening ports</alias>
     <frequency>360</frequency>
   </localfile>
-
   <localfile>
     <log_format>full_command</log_format>
     <command>last -n 20</command>
     <frequency>360</frequency>
   </localfile>
 
-  <ruleset>
-    <!-- Default ruleset -->
+
+
+
+<ruleset>
+  <!-- Default ruleset -->
     <decoder_dir>ruleset/decoders</decoder_dir>
     <rule_dir>ruleset/rules</rule_dir>
     <rule_exclude>0215-policy_rules.xml</rule_exclude>
-    <list>etc/lists/audit-keys</list>
-    <list>etc/lists/amazon/aws-eventnames</list>
-    <list>etc/lists/security-eventchannel</list>
-
-    <!-- User-defined ruleset -->
+  <list>etc/lists/audit-keys</list>
+  <list>etc/lists/amazon/aws-eventnames</list>
+  <list>etc/lists/security-eventchannel</list>
+  
+  <!-- User-defined ruleset -->
     <decoder_dir>etc/decoders</decoder_dir>
     <rule_dir>etc/rules</rule_dir>
-  </ruleset>
+</ruleset>
 
-  <rule_test>
-    <enabled>yes</enabled>
-    <threads>1</threads>
-    <max_sessions>64</max_sessions>
-    <session_timeout>15m</session_timeout>
-  </rule_test>
 
-  <!-- Configuration for wazuh-authd -->
-  <auth>
-    <disabled>no</disabled>
-    <port>1515</port>
-    <use_source_ip>no</use_source_ip>
-    <purge>yes</purge>
-    <use_password>no</use_password>
-    <ciphers>HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH</ciphers>
-    <!-- <ssl_agent_ca></ssl_agent_ca> -->
-    <ssl_verify_host>no</ssl_verify_host>
-    <ssl_manager_cert>etc/sslmanager.cert</ssl_manager_cert>
-    <ssl_manager_key>etc/sslmanager.key</ssl_manager_key>
-    <ssl_auto_negotiate>no</ssl_auto_negotiate>
-  </auth>
-
-  <cluster>
-    <name>wazuh</name>
-    <node_name>node01</node_name>
-    <node_type>master</node_type>
-    <key></key>
-    <port>1516</port>
-    <bind_addr>0.0.0.0</bind_addr>
-    <nodes>
-        <node>NODE_IP</node>
-    </nodes>
-    <hidden>no</hidden>
-    <disabled>yes</disabled>
-  </cluster>
 
-</ossec_config>
 
-<ossec_config>
-  <localfile>
-    <log_format>syslog</log_format>
-    <location>/var/ossec/logs/active-responses.log</location>
-  </localfile>
+<!-- Client Authentication Settings -->
+<auth>
+  <disabled>no</disabled>
+  <port>1515</port>
+  <use_source_ip>yes</use_source_ip>
+  <force>
+    <enabled>yes</enabled>
+    <key_mismatch>yes</key_mismatch>  
+    <disconnected_time enabled="yes">1h</disconnected_time>
+    <after_registration_time>1h</after_registration_time>
+  </force>
+  <purge>yes</purge>
+  <use_password>no</use_password>
+  <limit_maxagents>yes</limit_maxagents>
+  <ciphers>HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH</ciphers>
+  <ssl_verify_host>no</ssl_verify_host>
+  <ssl_manager_cert>/var/ossec/etc/sslmanager.cert</ssl_manager_cert>
+  <ssl_manager_key>/var/ossec/etc/sslmanager.key</ssl_manager_key>
+  <ssl_auto_negotiate>no</ssl_auto_negotiate>
+</auth>
 
-  <localfile>
-    <log_format>syslog</log_format>
-    <location>/var/log/auth.log</location>
-  </localfile>
 
-  <localfile>
-    <log_format>syslog</log_format>
-    <location>/var/log/syslog</location>
-  </localfile>
 
-  <localfile>
-    <log_format>syslog</log_format>
-    <location>/var/log/dpkg.log</location>
-  </localfile>
+  
+<cluster>
+ <name>wazuh</name>
+ <node_name>node01</node_name>
+ <node_type>master</node_type>
+ <key>KEY</key>
+ <port>1516</port>
+ <bind_addr>0.0.0.0</bind_addr>
+ <nodes>
+    <node>NODE_IP</node>
+   </nodes>
+ <hidden>no</hidden>
+ <disabled>yes</disabled>
+</cluster>
 
-  <localfile>
-    <log_format>syslog</log_format>
-    <location>/var/log/kern.log</location>
-  </localfile>
 
 </ossec_config>

Info: Computing checksum on file /var/ossec/etc/ossec.conf
Info: /Stage[main]/Wazuh::Manager/Concat[manager_ossec.conf]/File[/var/ossec/etc/ossec.conf]: Filebucketed /var/ossec/etc/ossec.conf to puppet with sum f287985d6a30862ca8f9cf00bf04c4cdf45663e49c2c7a56210d5df3ddc7e2c0
Notice: /Stage[main]/Wazuh::Manager/Concat[manager_ossec.conf]/File[/var/ossec/etc/ossec.conf]/content: 

Notice: /Stage[main]/Wazuh::Manager/Concat[manager_ossec.conf]/File[/var/ossec/etc/ossec.conf]/content: content changed '{sha256}f287985d6a30862ca8f9cf00bf04c4cdf45663e49c2c7a56210d5df3ddc7e2c0' to '{sha256}c1318b647fb0688218e6e3c3c7fa61fb587f8afab5854db36bb69228dd7ce638' (corrective)
Notice: /Stage[main]/Wazuh::Manager/Concat[manager_ossec.conf]/File[/var/ossec/etc/ossec.conf]/mode: mode changed '0660' to '0640' (corrective)
Info: Concat[manager_ossec.conf]: Scheduling refresh of Service[wazuh-manager]
Notice: /Stage[main]/Wazuh::Manager/Service[wazuh-manager]/ensure: ensure changed 'stopped' to 'running' (corrective)
Info: /Stage[main]/Wazuh::Manager/Service[wazuh-manager]: Unscheduling refresh on Service[wazuh-manager]
Notice: Applied catalog in 204.82 seconds
root@ip-172-31-2-18:/#
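
After the Puppet run completes, a quick way to confirm whether Filebeat actually loaded the Wazuh template and whether any wazuh-alerts indices were created is to query the indexer directly. The commands below are a suggested check, not part of the Puppet output above; the admin user and the https://localhost:9200 endpoint are assumptions and should be replaced with the credentials and address used in this deployment.

# Verify Filebeat can reach the configured output (Wazuh indexer)
filebeat test output

# List any existing wazuh-alerts indices (assumed endpoint/credentials)
curl -k -u admin:<ADMIN_PASSWORD> "https://localhost:9200/_cat/indices/wazuh-alerts-*?v"

# Check whether the legacy index template named "wazuh" was loaded
curl -k -u admin:<ADMIN_PASSWORD> "https://localhost:9200/_template/wazuh?pretty"

If the template query returns an empty object or the indices listing is empty, that is consistent with the dashboard error about the missing alerts index pattern and points at the wazuh-template.json cleanup failure seen earlier in the log.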
