
Improve events ingestion pipeline #150

Closed
AlexRuiz7 opened this issue Feb 9, 2024 · 10 comments
Labels: level/task (Task issue), request/operational (Operational requests), type/research (Research issue)


AlexRuiz7 commented Feb 9, 2024

Description

In 4.8.0, we are adding automatic, policy-managed index rotation to the indexer. This feature is set up during the indexer's initialization (the indexer-init.sh and/or indexer-ism-init.sh scripts), but due to race conditions Filebeat sometimes starts indexing before everything is properly configured, creating an index that doesn't match any of our index templates and settings.

We need to investigate whether it is possible to prevent Filebeat from indexing until the wazuh-alerts alias has been created, so that events end up where they should.

A good approach is to stop granting admin privileges to Filebeat, so it can't create indices (or perform any other administrative operations), and grant it only write or index privileges instead. See https://opensearch.org/docs/latest/security/access-control/default-action-groups/#index-level.

This would require changes to our documentation, but it would restrict Filebeat's permissions to exactly what it needs, giving us better control over it.
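As a sketch of how such a role could be provisioned over the REST API (assuming a default OpenSearch Security setup on https://127.0.0.1:9200 with admin:admin credentials; the role, user and index-pattern names here are illustrative):

```shell
# Sketch: create a write-only Filebeat role via the OpenSearch Security API.
# DRY_RUN=1 (default) only prints the requests instead of sending them.
DRY_RUN="${DRY_RUN:-1}"
ES="https://127.0.0.1:9200"

req() { # usage: req METHOD PATH JSON_BODY
  if [ "$DRY_RUN" = "1" ]; then
    echo "curl -k -u admin:admin -X$1 $ES$2 -d '$3'"
  else
    curl -k -u admin:admin -X"$1" "$ES$2" -H 'Content-Type: application/json' -d "$3"
  fi
}

# Role: cluster monitoring + pipeline management, index-only access to alerts
req PUT /_plugins/_security/api/roles/filebeat_writer \
  '{"cluster_permissions":["cluster_monitor","cluster_manage_pipelines"],"index_permissions":[{"index_patterns":["wazuh-alerts-4.x-*"],"allowed_actions":["index"]}]}'
# Internal user and role mapping
req PUT /_plugins/_security/api/internalusers/filebeat '{"password":"filebeat"}'
req PUT /_plugins/_security/api/rolesmapping/filebeat_writer '{"users":["filebeat"]}'
```

With DRY_RUN unset, the same script applies the changes to a running cluster.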

echo admin | filebeat keystore add username --stdin --force
echo admin | filebeat keystore add password --stdin --force
AlexRuiz7 added the level/task, request/operational and type/research labels Feb 9, 2024
wazuhci moved this to Backlog in Release 4.8.0 Feb 9, 2024

f-galland commented Feb 9, 2024

We can disallow creation of indices altogether:

wazuhci moved this from Backlog to In progress in Release 4.8.0 Feb 9, 2024

f-galland commented Feb 9, 2024

The following disables new index creation for wazuh-alerts-*:

PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": "-wazuh-alerts-*"
  }
}

AlexRuiz7 self-assigned this Feb 12, 2024

AlexRuiz7 commented Feb 12, 2024

It's possible to prevent Filebeat from creating indices by using a dedicated user with only the necessary permissions, instead of the admin user. I followed the official documentation.

  1. Create a new role named filebeat_writer with the following permissions:
    • cluster_monitor and cluster_manage_pipelines as Cluster permissions
    • index permission on wazuh-alerts-4.x-* index pattern as Index Permissions
  2. Map the role to a new user named filebeat
  3. Update Filebeat's keystore with the new credentials
    echo filebeat | filebeat keystore add username --stdin --force
    echo filebeat | filebeat keystore add password --stdin --force
  4. Run filebeat test output
    elasticsearch: https://127.0.0.1:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 127.0.0.1
        dial up... OK
      TLS...
        security: server's certificate chain verification is enabled
        handshake... OK
        TLS version: TLSv1.2
        dial up... OK
      talk to server... OK
      version: 7.10.2

To verify that Filebeat cannot create new indices but can index data into existing ones, edit /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json and change

    {
      "set": {
        "field": "_index",
        "value": "wazuh-alerts"
      }
    },

to any other value to use as the index name (the index must not exist). For example:

    {
      "set": {
        "field": "_index",
        "value": "wazuh-alerts-filebeat"
      }
    },

Then run filebeat setup --pipelines and check the logs at /var/log/filebeat/filebeat.

2024-02-12T12:16:43.583Z        WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc16aa0eaa157356c, ext:142083953443, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"d6947a3e-cb55-4b30-83b0-d6b7cf497b57","hostname":"ubuntu2204.localdomain","id":"6ddfe3f0-12ff-45f8-acb1-3cf8dcc850e5","name":"ubuntu2204.localdomain","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"ubuntu2204.localdomain"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":489837},"message":"{\"timestamp\":\"2024-02-12T12:16:42.373+0000\",\"rule\":{\"level\":7,\"description\":\"Host-based anomaly detection event (rootcheck).\",\"id\":\"510\",\"firedtimes\":2,\"mail\":false,\"groups\":[\"ossec\",\"rootcheck\"],\"pci_dss\":[\"10.6.1\"],\"gdpr\":[\"IV_35.7.d\"]},\"agent\":{\"id\":\"000\",\"name\":\"ubuntu2204.localdomain\"},\"manager\":{\"name\":\"ubuntu2204.localdomain\"},\"id\":\"1707740202.770562\",\"full_log\":\"Trojaned version of file '/usr/bin/diff' detected. 
Signature used: 'bash|^/bin/sh|file\\\\.h|proc\\\\.h|/dev/[^n]|^/bin/.*sh' (Generic).\",\"decoder\":{\"name\":\"rootcheck\"},\"data\":{\"title\":\"Trojaned version of file detected.\",\"file\":\"/usr/bin/diff\"},\"location\":\"rootcheck\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3165986-64768", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00057cd00), Source:"/var/ossec/logs/alerts/alerts.json", Offset:490476, Timestamp:time.Time{wall:0xc16aa0d3a10639f2, ext:50078646264, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x304f22, Device:0xfd00}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=403): {"type":"security_exception","reason":"no permissions for [indices:admin/create] and User [name=filebeat, backend_roles=[], requestedTenant=null]"}

Now, if we revert the pipeline back to wazuh-alerts and restart the manager, we see that documents are indexed without errors.

(Before and after screenshots omitted.)

2024-02-12T12:35:12.338Z        INFO    [beat]  instance/beat.go:993    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.14.12"}}}
2024-02-12T12:35:12.340Z        INFO    [beat]  instance/beat.go:997    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2024-02-12T11:20:13Z","containerized":false,"name":"ubuntu2204.localdomain","ip":["127.0.0.1/8","192.168.121.110/24","192.168.56.10/24"],"kernel_version":"5.15.0-91-generic","mac":["52:54:00:9a:04:01","52:54:00:9f:8e:77"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"22.04.3 LTS (Jammy Jellyfish)","major":22,"minor":4,"patch":3,"codename":"jammy"},"timezone":"UTC","timezone_offset_sec":0,"id":"cb18148f883f4f28a5e519d9a393c892"}}}
2024-02-12T12:35:12.341Z        INFO    [beat]  instance/beat.go:1026   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/home/vagrant/wazuh-install-files", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 61824, "ppid": 56769, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2024-02-12T12:35:12.000Z"}}}
2024-02-12T12:35:12.341Z        INFO    instance/beat.go:299    Setup Beat: filebeat; Version: 7.10.2
2024-02-12T12:35:12.341Z        INFO    eslegclient/connection.go:99    elasticsearch url: https://127.0.0.1:9200
2024-02-12T12:35:12.342Z        INFO    [publisher]     pipeline/module.go:113  Beat name: ubuntu2204.localdomain
2024-02-12T12:35:12.347Z        INFO    beater/filebeat.go:117  Enabled modules/filesets:  (), wazuh (alerts)
2024-02-12T12:35:12.348Z        INFO    eslegclient/connection.go:99    elasticsearch url: https://127.0.0.1:9200
2024-02-12T12:35:12.360Z        INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.10.2
2024-02-12T12:35:12.416Z        INFO    fileset/pipelines.go:143        Elasticsearch pipeline with ID 'filebeat-7.10.2-wazuh-alerts-pipeline' loaded

Notes: I also added setup.ilm.check_exists: false to filebeat.yml, as stated in the documentation. Moreover, the cluster permission read_pipeline does not exist in OpenSearch, but cluster_manage_pipelines does. It may grant more permissions than strictly required; we can fine-tune that later by creating a custom action group.
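As a sketch, such an action group could look like the following (the group name is hypothetical, and the action names are taken from OpenSearch's ingest pipeline APIs; verify them against the deployed version before use):

```shell
# Hypothetical action group limited to reading/writing ingest pipelines,
# instead of the broader cluster_manage_pipelines group.
AG_BODY='{"allowed_actions":["cluster:admin/ingest/pipeline/get","cluster:admin/ingest/pipeline/put"]}'
echo "$AG_BODY"
# Applied with (requires a running cluster):
# curl -k -u admin:admin -XPUT "https://127.0.0.1:9200/_plugins/_security/api/actiongroups/manage_own_pipelines" \
#   -H 'Content-Type: application/json' -d "$AG_BODY"
```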


AlexRuiz7 commented Feb 12, 2024

We can apply these settings by default by adding each chunk of code to the corresponding file:

# roles.yml
filebeat_writer:
  reserved: true
  hidden: false
  cluster_permissions: 
    - "cluster_monitor"
    - "cluster_manage_pipelines"
  index_permissions:
  - index_patterns:
    - "wazuh-alerts-4.x-*"
    - "wazuh-archives-4.x-*"
    allowed_actions:
    - "index"

# internal_users.yml
filebeat:
  reserved: true
  hidden: false
  hash: $2y$12$Xiobfelar.b0WAaVXz8Vx.go8Zfu..Oieh/ctlA8lX2s5uGsMeE9S

# roles_mapping.yml
filebeat_writer:
  reserved: true
  hidden: false
  users:
  - "filebeat"

# filebeat.yml
setup.ilm.check_exists: false

Note: the credentials are filebeat:filebeat. The password is hashed using this tool.

I've tested these settings; the role and user are created successfully. Also, Filebeat holds the events and retries indefinitely until the index exists. Testing this is as simple as removing the wazuh-alerts-4.x-* indices and re-initializing the cluster with the /usr/share/wazuh-indexer/bin/indexer-init.sh script.
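For reference, the bcrypt hash for internal_users.yml can be generated offline with the Security plugin's bundled hash tool (the path below assumes the wazuh-indexer package layout; verify it locally):

```shell
# Sketch: generate a bcrypt hash for the 'filebeat' password.
# The tool path is an assumption based on the OpenSearch Security plugin layout.
HASH_TOOL="/usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh"
if [ -x "$HASH_TOOL" ]; then
  "$HASH_TOOL" -p filebeat   # prints a $2y$12$... bcrypt hash
else
  echo "hash.sh not found at $HASH_TOOL; adjust the path for your install"
fi
```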

AlexRuiz7 commented:

Commenting out setup.ilm.check_exists: false has no apparent side effects. However, it would be better if we test it out on a fresh installation.


AlexRuiz7 commented Feb 12, 2024

If we include this in the next stage of 4.8.0, the plan would be:

  • Update indexer files in wazuh-packages (+ propagation to all deployment methods).
  • Update Filebeat credentials keystore in the different deployment methods.
  • Update Filebeat credentials keystore in wazuh-documentation.
  • Update filebeat module (optional but recommended).
  • Update indexer files in wazuh-indexer (towards 4.9.0).

I blocked this issue for discussion with management. cc: @gdiazlo

wazuhci moved this from In progress to Blocked in Release 4.8.0 Feb 12, 2024
AlexRuiz7 commented:

We are unable to verify that Filebeat retries indefinitely and doesn't drop events. We've found several resources affirming so; however, it looks like the actual dropping of events takes place on the indexer side, which rejects the event for its own reasons. If the indexer replies with any status code, Filebeat considers the delivery complete, since it could reach the output. As a result, the event is delivered but not indexed.

We have been able to confirm this behavior by stopping the indexer and watching Filebeat retry to reach it.

f-galland commented:

To verify that disabling auto_create_index won't affect Filebeat, its ingestion pipeline, or the indexer initialization routines, we ran the following tests:

  • Disabled auto_create_index on wazuh-alerts-*:
root@ubuntu2204:~# curl -k -u admin:admin -XPUT "https://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "action.auto_create_index": "-wazuh-alerts-*"
  }
}'
{
  "acknowledged" : true,
  "persistent" : {
    "action" : {
      "auto_create_index" : "-wazuh-alerts-*"
    }
  },
  "transient" : { }
}
  • Deleted Filebeat's ingest pipeline:
root@ubuntu2204:~# curl -k -u admin:admin -XDELETE "https://127.0.0.1:9200/_ingest/pipeline/filebeat-7.10.2-wazuh-alerts-pipeline?pretty"
{
  "acknowledged" : true
}
  • Deleted the wazuh-alerts alias from its indices:
root@ubuntu2204:~# curl -k -u admin:admin -XDELETE "https://127.0.0.1:9200/wazuh-alerts-4.x-2024.02.14-000001/_alias/wazuh-alerts?pretty"
{
  "acknowledged" : true
}

root@ubuntu2204:~# curl -k -u admin:admin -XGET "https://127.0.0.1:9200/_cat/aliases?pretty"
.kibana                                     .kibana_1                                          - - - -
wazuh-archives                              wazuh-archives-4.x-2024.02.14-000001               - - - true
.opendistro-ism-managed-index-history-write .opendistro-ism-managed-index-history-2024.02.14-1 - - - -
  • Deleted the indices themselves:
root@ubuntu2204:~# curl -k -u admin:admin -XDELETE "https://127.0.0.1:9200/wazuh-alerts-4.x-*?pretty"
{
  "acknowledged" : true
}
  • Filebeat's attempt to index a new event is rejected by the Wazuh indexer:
root@ubuntu2204:~# log='{"timestamp":"2024-02-14T13:49:15.552-0300","rule":{"level":3,"description":"sshd: authentication success.","id":"5715","mitre":{"id":["T1078","T1021"],"tactic":["Defense Evasion","Persistence","Privilege Escalation","Initial Access","Lateral Movement"],"technique":["Valid Accounts","Remote Services"]},"firedtimes":53,"mail":false,"groups":["syslog","sshd","authentication_success"],"gdpr":["IV_32.2"],"gpg13":["7.1","7.2"],"hipaa":["164.312.b"],"nist_800_53":["AU.14","AC.7"],"pci_dss":["10.2.5"],"tsc":["CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"ubuntu2204"},"manager":{"name":"ubuntu2204"},"id":"9999999999.9999999","full_log":"Jan 30 11:19:16 ubuntu2204 sshd[221325]: Accepted publickey for root from 192.168.83.5 port 54094 ssh2: RSA SHA256:000000000000000/000000/00000000000000000000","predecoder":{"program_name":"sshd","timestamp":"Jan 30 11:19:16","hostname":"ubuntu2204"},"decoder":{"parent":"sshd","name":"sshd"},"data":{"srcip":"192.168.83.5","srcport":"54094","dstuser":"root"},"location":"syslog"}'

root@ubuntu2204:~# echo $log >> /var/ossec/logs/alerts/alerts.json 

root@ubuntu2204:~# grep '9999999999.9999999' /var/log/filebeat/filebeat*
/var/log/filebeat/filebeat.1:  "message": "{\"timestamp\":\"2024-02-14T13:49:15.552-0300\",\"rule\":{\"level\":3,\"description\":\"sshd: authentication success.\",\"id\":\"5715\",\"mitre\":{\"id\":[\"T1078\",\"T1021\"],\"tactic\":[\"Defense Evasion\",\"Persistence\",\"Privilege Escalation\",\"Initial Access\",\"Lateral Movement\"],\"technique\":[\"Valid Accounts\",\"Remote Services\"]},\"firedtimes\":53,\"mail\":false,\"groups\":[\"syslog\",\"sshd\",\"authentication_success\"],\"gdpr\":[\"IV_32.2\"],\"gpg13\":[\"7.1\",\"7.2\"],\"hipaa\":[\"164.312.b\"],\"nist_800_53\":[\"AU.14\",\"AC.7\"],\"pci_dss\":[\"10.2.5\"],\"tsc\":[\"CC6.8\",\"CC7.2\",\"CC7.3\"]},\"agent\":{\"id\":\"000\",\"name\":\"ubuntu2204\"},\"manager\":{\"name\":\"ubuntu2204\"},\"id\":\"9999999999.9999999\",\"full_log\":\"Jan 30 11:19:16 ubuntu2204 sshd[221325]: Accepted publickey for root from 192.168.83.5 port 54094 ssh2: RSA SHA256:000000000000000/000000/00000000000000000000\",\"predecoder\":{\"program_name\":\"sshd\",\"timestamp\":\"Jan 30 11:19:16\",\"hostname\":\"ubuntu2204\"},\"decoder\":{\"parent\":\"sshd\",\"name\":\"sshd\"},\"data\":{\"srcip\":\"192.168.83.5\",\"srcport\":\"54094\",\"dstuser\":\"root\"},\"location\":\"syslog\"}",
/var/log/filebeat/filebeat.1:2024-02-14T15:29:43.306-0300	WARN	[elasticsearch]	elasticsearch/client.go:408	Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc16b5f85906b7957, ext:3635343387376, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"f2778bdf-53c5-462b-b68b-a392e9299298","hostname":"ubuntu2204","id":"54329818-a92c-48b1-a99b-7c91822aec19","name":"ubuntu2204","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"ubuntu2204"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":877454},"message":"{\"timestamp\":\"2024-02-14T13:49:15.552-0300\",\"rule\":{\"level\":3,\"description\":\"sshd: authentication success.\",\"id\":\"5715\",\"mitre\":{\"id\":[\"T1078\",\"T1021\"],\"tactic\":[\"Defense Evasion\",\"Persistence\",\"Privilege Escalation\",\"Initial Access\",\"Lateral Movement\"],\"technique\":[\"Valid Accounts\",\"Remote Services\"]},\"firedtimes\":53,\"mail\":false,\"groups\":[\"syslog\",\"sshd\",\"authentication_success\"],\"gdpr\":[\"IV_32.2\"],\"gpg13\":[\"7.1\",\"7.2\"],\"hipaa\":[\"164.312.b\"],\"nist_800_53\":[\"AU.14\",\"AC.7\"],\"pci_dss\":[\"10.2.5\"],\"tsc\":[\"CC6.8\",\"CC7.2\",\"CC7.3\"]},\"agent\":{\"id\":\"000\",\"name\":\"ubuntu2204\"},\"manager\":{\"name\":\"ubuntu2204\"},\"id\":\"9999999999.9999999\",\"full_log\":\"Jan 30 11:19:16 ubuntu2204 sshd[221325]: Accepted publickey for root from 192.168.83.5 port 54094 ssh2: RSA SHA256:000000000000000/000000/00000000000000000000\",\"predecoder\":{\"program_name\":\"sshd\",\"timestamp\":\"Jan 30 
11:19:16\",\"hostname\":\"ubuntu2204\"},\"decoder\":{\"parent\":\"sshd\",\"name\":\"sshd\"},\"data\":{\"srcip\":\"192.168.83.5\",\"srcport\":\"54094\",\"dstuser\":\"root\"},\"location\":\"syslog\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::806327-64768", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000400340), Source:"/var/ossec/logs/alerts/alerts.json", Offset:878479, Timestamp:time.Time{wall:0xc16b5f72cfd9e98c, ext:3560333847846, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0xc4db7, Device:0xfd00}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=404): {"type":"index_not_found_exception","reason":"no such index [wazuh-alerts] and [action.auto_create_index] ([-wazuh-alerts-*]) doesn't match","index":"wazuh-alerts","index_uuid":"_na_"}
  • Re-ran the indexer initialization:
root@ubuntu2204:~# /usr/share/wazuh-indexer/bin/indexer-ism-init.sh -i localhost -p admin
Will create 'wazuh' index template
 SUCC: 'wazuh' template created or updated
Will create 'ism_history_indices' index template
 SUCC: 'ism_history_indices' template created or updated
Will disable replicas for 'plugins.index_state_management.history' indices
 SUCC: cluster's settings saved
Will create index templates to configure the alias
 SUCC: 'wazuh-alerts' template created or updated
 SUCC: 'wazuh-archives' template created or updated
Will create the 'rollover_policy' policy
  INFO: policy 'rollover_policy' already exists. Skipping policy creation
Will create initial indices for the aliases
  SUCC: 'wazuh-alerts' write index created
  INFO: 'wazuh-archives' write index already exists. Skipping write index creation
SUCC: Indexer ISM initialization finished successfully.

root@ubuntu2204:~# curl -k -u admin:admin -XGET "https://127.0.0.1:9200/_cat/aliases?pretty"
wazuh-alerts                                wazuh-alerts-4.x-2024.02.14-000001                 - - - true
.kibana                                     .kibana_1                                          - - - -
wazuh-archives                              wazuh-archives-4.x-2024.02.14-000001               - - - true
.opendistro-ism-managed-index-history-write .opendistro-ism-managed-index-history-2024.02.14-1 - - - -
  • Reloaded the ingest pipeline:
root@ubuntu2204:~# filebeat setup --pipelines
Loaded Ingest pipelines

root@ubuntu2204:~# curl -k -u admin:admin -XGET "https://127.0.0.1:9200/_ingest/pipeline?pretty"
{
  "filebeat-7.10.2-wazuh-alerts-pipeline" : {
    "description" : "Wazuh alerts pipeline",
    "processors" : [
      {
        "json" : {
          "field" : "message",
          "add_to_root" : true
        }
      },
      {
        "set" : {
          "field" : "data.aws.region",
          "value" : "{{data.aws.awsRegion}}",
          "override" : false,
          "ignore_failure" : true,
          "ignore_empty_value" : true
        }
      },
      {
        "set" : {
          "field" : "data.aws.accountId",
          "value" : "{{data.aws.aws_account_id}}",
          "override" : false,
          "ignore_failure" : true,
          "ignore_empty_value" : true
        }
      },
      {
        "geoip" : {
          "field" : "data.srcip",
          "target_field" : "GeoLocation",
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "geoip" : {
          "field" : "data.win.eventdata.ipAddress",
          "target_field" : "GeoLocation",
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "geoip" : {
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "data.aws.sourceIPAddress",
          "target_field" : "GeoLocation"
        }
      },
      {
        "geoip" : {
          "target_field" : "GeoLocation",
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "data.aws.client_ip"
        }
      },
      {
        "geoip" : {
          "field" : "data.aws.service.action.networkConnectionAction.remoteIpDetails.ipAddressV4",
          "target_field" : "GeoLocation",
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "geoip" : {
          "field" : "data.aws.httpRequest.clientIp",
          "target_field" : "GeoLocation",
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "geoip" : {
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "data.gcp.jsonPayload.sourceIP",
          "target_field" : "GeoLocation"
        }
      },
      {
        "geoip" : {
          "field" : "data.office365.ClientIP",
          "target_field" : "GeoLocation",
          "properties" : [
            "city_name",
            "country_name",
            "region_name",
            "location"
          ],
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "date" : {
          "field" : "timestamp",
          "target_field" : "@timestamp",
          "formats" : [
            "ISO8601"
          ],
          "ignore_failure" : false
        }
      },
      {
        "set" : {
          "field" : "_index",
          "value" : "wazuh-alerts"
        }
      },
      {
        "remove" : {
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "message"
        }
      },
      {
        "remove" : {
          "field" : "ecs",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "beat",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "input_type"
        }
      },
      {
        "remove" : {
          "field" : "tags",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "count",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "@version"
        }
      },
      {
        "remove" : {
          "field" : "log",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "offset",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "type",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "host",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "fields",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "event",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "fileset",
          "ignore_missing" : true,
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "ignore_missing" : true,
          "ignore_failure" : true,
          "field" : "service"
        }
      }
    ],
    "on_failure" : [
      {
        "drop" : { }
      }
    ]
  }
}
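For completeness, the auto_create_index restriction used in these tests can be reverted by setting the persistent value back to null, which restores the cluster default (same assumptions as above: local cluster, admin:admin credentials):

```shell
# Sketch: reset action.auto_create_index to its cluster default.
BODY='{"persistent":{"action.auto_create_index":null}}'
echo "$BODY"
# curl -k -u admin:admin -XPUT "https://127.0.0.1:9200/_cluster/settings?pretty" \
#   -H 'Content-Type: application/json' -d "$BODY"
```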

f-galland commented:

Regarding @AlexRuiz7's considerations on max_retries above, we observed that the setting only makes Filebeat retry its attempts to index an event on network issues, not on 404 errors such as the one we are getting from the indexer when auto_create_index disallows the index pattern.

Below is a debug log excerpt of Filebeat (re-)trying to connect to an unreachable Wazuh indexer:

2024-02-14T14:44:18.085-0300	ERROR	[elasticsearch]	elasticsearch/client.go:224	failed to perform any bulk index operations: Post "https://127.0.0.1:9200/_bulk": dial tcp 127.0.0.1:9200: connect: connection refused
2024-02-14T14:44:18.085-0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2024-02-14T14:44:18.085-0300	INFO	[publisher]	pipeline/retry.go:223	  done
2024-02-14T14:44:19.817-0300	ERROR	[publisher_pipeline_output]	pipeline/output.go:180	failed to publish events: Post "https://127.0.0.1:9200/_bulk": dial tcp 127.0.0.1:9200: connect: connection refused
2024-02-14T14:44:19.818-0300	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(elasticsearch(https://127.0.0.1:9200/))
2024-02-14T14:44:19.818-0300	DEBUG	[esclientleg]	eslegclient/connection.go:290	ES Ping(url=https://127.0.0.1:9200/)
2024-02-14T14:44:19.818-0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2024-02-14T14:44:19.818-0300	INFO	[publisher]	pipeline/retry.go:223	  done
2024-02-14T14:44:19.818-0300	DEBUG	[esclientleg]	eslegclient/connection.go:294	Ping request failed with: Get "https://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
2024-02-14T14:44:20.085-0300	DEBUG	[harvester]	log/log.go:107	End of file reached: /var/ossec/logs/alerts/alerts.json; Backoff now.
2024-02-14T14:44:22.845-0300	ERROR	[publisher_pipeline_output]	pipeline/output.go:154	Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200/)): Get "https://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
2024-02-14T14:44:22.846-0300	INFO	[publisher_pipeline_output]	pipeline/output.go:145	Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200/)) with 1 reconnect attempt(s)
2024-02-14T14:44:22.846-0300	DEBUG	[esclientleg]	eslegclient/connection.go:290	ES Ping(url=https://127.0.0.1:9200/)
2024-02-14T14:44:22.846-0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2024-02-14T14:44:22.846-0300	INFO	[publisher]	pipeline/retry.go:223	  done
2024-02-14T14:44:22.846-0300	DEBUG	[esclientleg]	eslegclient/connection.go:294	Ping request failed with: Get "https://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
2024-02-14T14:44:24.085-0300	DEBUG	[harvester]	log/log.go:107	End of file reached: /var/ossec/logs/alerts/alerts.json; Backoff now.
2024-02-14T14:44:27.083-0300	DEBUG	[input]	input/input.go:139	Run input
2024-02-14T14:44:27.083-0300	DEBUG	[input]	log/input.go:205	Start next scan
2024-02-14T14:44:27.083-0300	DEBUG	[input]	log/input.go:439	Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2024-02-14T14:44:27.083-0300	DEBUG	[input]	log/input.go:530	Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 872060
2024-02-14T14:44:27.083-0300	DEBUG	[input]	log/input.go:582	Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2024-02-14T14:44:27.083-0300	DEBUG	[input]	log/input.go:226	input states cleaned up. Before: 1, After: 1, Pending: 0
2024-02-14T14:44:28.816-0300	ERROR	[publisher_pipeline_output]	pipeline/output.go:154	Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200/)): Get "https://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
2024-02-14T14:44:28.816-0300	INFO	[publisher_pipeline_output]	pipeline/output.go:145	Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200/)) with 2 reconnect attempt(s)
2024-02-14T14:44:28.816-0300	DEBUG	[esclientleg]	eslegclient/connection.go:290	ES Ping(url=https://127.0.0.1:9200/)
2024-02-14T14:44:28.817-0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2024-02-14T14:44:28.817-0300	INFO	[publisher]	pipeline/retry.go:223	  done
2024-02-14T14:44:28.817-0300	DEBUG	[esclientleg]	eslegclient/connection.go:294	Ping request failed with: Get "https://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused

However, when Filebeat tries and fails to create a new index, there is no retry attempt:

(status=404): {"type":"index_not_found_exception","reason":"no such index [wazuh-alerts] and [action.auto_create_index] ([-wazuh-alerts-*]) doesn't match","index":"wazuh-alerts","index_uuid":"_na_"}

AlexRuiz7 commented:

We have discussed both solutions internally with the Wazuh development directors, and we are retiring the rollover+alias project from the 4.8.0 release, as both solutions represent a breaking change to backwards compatibility. We'll revisit this for the next major release, 5.0.0, probably in the form of a plugin.

Nevertheless, we've gathered very valuable information, which will surely be put into practice in the future.

As a result, I'm closing this research issue as completed.
