
[BUG] [alerting_exception] analyzer [analyzer_keyword] has not been configured in mappings #961

Closed
paasi6666 opened this issue Jun 7, 2023 · 10 comments
Labels
bug Something isn't working

Comments

@paasi6666

What is the bug?
When defining a new monitor (under Alerting) and selecting the type 'Per document monitor', saving the monitor fails with the following error:
(screenshot: alert_monitoring)

Furthermore, when "testing" the query it times out:
(screenshot: alert_monitoring_timeout)

The query looks like this:
(screenshot of the query)

and, as shown below, the same query works in Discover:
(screenshot of the Discover results)

How can one reproduce the bug?
Steps to reproduce the behavior:

  1. Go to Alerting>Monitors>Create monitor
  2. Select 'Per document monitor', select any index and choose a query
  3. Go to 'Preview query and performance' and wait.
  4. Try to save the monitor
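
For reference, the same monitor can also be created via the Alerting REST API, which should reproduce the error without the UI. This is only a rough sketch: the monitor name, index, and query string are placeholders, and the trigger section is left empty for brevity, so the exact body may need adjusting.

```
POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "monitor_type": "doc_level_monitor",
  "name": "example-doc-monitor",
  "enabled": true,
  "schedule": { "period": { "interval": 1, "unit": "MINUTES" } },
  "inputs": [
    {
      "doc_level_input": {
        "description": "example query (placeholder)",
        "indices": ["rpz_0"],
        "queries": [
          { "id": "q1", "name": "q1", "query": "query_action:\"deny\"", "tags": [] }
        ]
      }
    }
  ],
  "triggers": []
}
```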

What is the expected behavior?
I don't know; the feature has never worked for me.

What is your host/environment?

  • OS: CentOS 7
  • OpenSearch version: 2.7.0
  • OpenSearch Dashboards version: 2.7.0

NOTE
We are ingesting the logs using Graylog.

@paasi6666 paasi6666 added bug Something isn't working untriaged labels Jun 7, 2023
@paasi6666 paasi6666 changed the title [BUG] [alerting_exception] analyzer [analyzer_keyword [BUG] [alerting_exception] analyzer [analyzer_keyword] has not been configured in mappings Jun 7, 2023
@AWSHurneyt AWSHurneyt transferred this issue from opensearch-project/security-analytics Jun 8, 2023
@AWSHurneyt
Collaborator

Transferred this issue to the Alerting plugin repository, as Alerting owns development and maintenance of document-level monitors.

@lezzago
Member

lezzago commented Jun 9, 2023

@paasi6666 Could you please share your index mapping for us to reproduce the issue?

@paasi6666
Author

Sure. Note that the mapping is generated by Graylog.

{
  "rpz_0": {
    "mappings": {
      "dynamic_templates": [
        {
          "internal_fields": {
            "match": "gl2_*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
        {
          "store_generic": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        }
      ],
      "properties": {
        "@metadata_beat": {
          "type": "keyword"
        },
        "@metadata_type": {
          "type": "keyword"
        },
        "@metadata_version": {
          "type": "keyword"
        },
        "@timestamp": {
          "type": "date"
        },
        "agent_ephemeral_id": {
          "type": "keyword"
        },
        "agent_name": {
          "type": "keyword"
        },
        "beats_type": {
          "type": "keyword"
        },
        "client_id": {
          "type": "keyword"
        },
        "event_action": {
          "type": "keyword"
        },
        "full_message": {
          "type": "text",
          "analyzer": "standard"
        },
        "gl2_accounted_message_size": {
          "type": "long"
        },
        "gl2_message_id": {
          "type": "keyword"
        },
        "gl2_processing_error": {
          "type": "keyword"
        },
        "gl2_processing_timestamp": {
          "type": "date",
          "format": "uuuu-MM-dd HH:mm:ss.SSS"
        },
        "gl2_receive_timestamp": {
          "type": "date",
          "format": "uuuu-MM-dd HH:mm:ss.SSS"
        },
        "gl2_remote_ip": {
          "type": "keyword"
        },
        "gl2_remote_port": {
          "type": "long"
        },
        "gl2_source_input": {
          "type": "keyword"
        },
        "gl2_source_node": {
          "type": "keyword"
        },
        "host_name": {
          "type": "keyword"
        },
        "hostname": {
          "type": "keyword"
        },
        "log_file_path": {
          "type": "keyword"
        },
        "log_offset": {
          "type": "long"
        },
        "loglevel": {
          "type": "keyword"
        },
        "message": {
          "type": "text",
          "analyzer": "standard"
        },
        "query_action": {
          "type": "keyword"
        },
        "query_class": {
          "type": "keyword"
        },
        "query_name": {
          "type": "keyword"
        },
        "query_type": {
          "type": "keyword"
        },
        "rpz_category": {
          "type": "keyword"
        },
        "rpz_message": {
          "type": "keyword"
        },
        "rpz_zone": {
          "type": "keyword"
        },
        "source": {
          "type": "text",
          "analyzer": "analyzer_keyword",
          "fielddata": true
        },
        "source_ip": {
          "type": "keyword"
        },
        "source_port": {
          "type": "keyword"
        },
        "streams": {
          "type": "keyword"
        },
        "timestamp": {
          "type": "date",
          "format": "uuuu-MM-dd HH:mm:ss.SSS"
        },
        "url_domain": {
          "type": "keyword"
        },
        "url_short": {
          "type": "keyword"
        }
      }
    }
  }
}

@paasi6666
Author

paasi6666 commented Jun 12, 2023

@lezzago After taking a look at the index mapping, I see the issue:

"source": {
  "type": "text",
  "analyzer": "analyzer_keyword",
  "fielddata": true
},

How do I update this to work as intended?
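
One possible workaround (a sketch only, not verified against Graylog's index rotation and templates): reindex into a new index where `source` is a plain `keyword` field with a `lowercase` normalizer, which gives the same case-insensitive exact matching without a custom analyzer. The index name `rpz_1` and the normalizer name `lowercase_normalizer` are assumptions.

```
PUT rpz_1
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_normalizer": {
          "type": "custom",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "source": {
        "type": "keyword",
        "normalizer": "lowercase_normalizer"
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "rpz_0" },
  "dest":   { "index": "rpz_1" }
}
```

Note that if Graylog rewrites the mapping on index rotation, the corresponding index template would also need to be changed.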

@paasi6666
Author

@lezzago The settings of the index are as follows:

GET rpz_0/_settings
{
  "rpz_0": {
    "settings": {
      "index": {
        "number_of_shards": "4",
        "provided_name": "rpz_0",
        "creation_date": "1649938793819",
        "analysis": {
          "analyzer": {
            "analyzer_keyword": {
              "filter": "lowercase",
              "tokenizer": "keyword"
            }
          }
        },
        "number_of_replicas": "0",
        "uuid": "e8NRlQCHQfau984C3QGMPQ",
        "version": {
          "created": "7100299",
          "upgraded": "136287827"
        }
      }
    }
  }
}
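
For illustration, the behavior of this analyzer can be checked with the `_analyze` API: the `keyword` tokenizer emits the whole value as a single token, and the `lowercase` filter lowercases it, so the field behaves like a case-insensitive keyword. A sketch:

```
GET rpz_0/_analyze
{
  "analyzer": "analyzer_keyword",
  "text": "Some-Domain.Example.COM"
}
```

This should return a single token, `some-domain.example.com`.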

@pawelw1

pawelw1 commented Jun 21, 2023

This issue has been reported in the OpenSearch forum https://forum.opensearch.org/t/alerting-exception-analyzer-analyzer-keyword-has-not-been-configured-in-mappings/14777.

I've tested this issue with an example from the Elastic documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/analyzer.html

The result was exactly the same as reported in the forum case.

opensearch-node1_2.6.0   | org.opensearch.alerting.util.AlertingException: analyzer [my_analyzer] has not been configured in mappings
opensearch-node1_2.6.0   |      at org.opensearch.alerting.util.AlertingException$Companion.wrap(AlertingException.kt:70) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0   |      at org.opensearch.alerting.util.DocLevelMonitorQueries.updateQueryIndexMappings(DocLevelMonitorQueries.kt:369) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0   |      at org.opensearch.alerting.util.DocLevelMonitorQueries.access$updateQueryIndexMappings(DocLevelMonitorQueries.kt:45) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0   |      at org.opensearch.alerting.util.DocLevelMonitorQueries$updateQueryIndexMappings$1.invokeSuspend(DocLevelMonitorQueries.kt) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0   |      at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.6.10.jar:1.6.10-release-923(1.6.10)]
opensearch-node1_2.6.0   |      at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:285) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0   |      at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0   |      at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0   |      at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:742) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0   | Caused by: java.lang.Exception: java.lang.IllegalArgumentException: analyzer [my_analyzer] has not been configured in mappings
opensearch-node1_2.6.0   |      ... 9 more
opensearch-node1_2.6.0   | [2023-06-21T15:04:14,379][ERROR][o.o.a.u.AlertingException] [opensearch-node1] Alerting error: AlertingException[analyzer [my_analyzer] has not been configured in mappings]; nested: Exception[java.lang.IllegalArgumentException: analyzer [my_analyzer] has not been configured in mappings];

I've tested that analyzer with the examples from the link above, and it works with no issues, but for some reason the Alerting plugin can't see the definition of that analyzer when creating a monitor.
Looking at the logs, it seems the Alerting plugin triggers the error as soon as the index is selected in the monitor, before the create button is even hit. I assume validation runs before the monitor is created.

@petardz
Contributor

petardz commented Jun 21, 2023

This is likely because the Alerting plugin doesn't copy the analyzer definition from the source index settings to the queryIndex settings.

@paasi6666
Author

paasi6666 commented Jun 22, 2023

Missing feature or bug? Maybe an OpenSearch developer can take a look at this?

@eirsep
Member

eirsep commented Dec 15, 2023

Analyzer updates are static config changes to an index: they require closing the index > applying the analyzer setting change > re-opening the index.

Closing the alerting query index is not possible, because all monitors share the query index and monitors run in parallel. We cannot support this in the current architecture.
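
For context, this is what a static analysis-settings change looks like on an ordinary index (the index name `my-index` is a placeholder); the close step is exactly what cannot be done on the shared query index:

```
POST my-index/_close

PUT my-index/_settings
{
  "analysis": {
    "analyzer": {
      "analyzer_keyword": {
        "tokenizer": "keyword",
        "filter": ["lowercase"]
      }
    }
  }
}

POST my-index/_open
```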

@eirsep eirsep closed this as completed Dec 15, 2023
@mvanderlee

@eirsep So any index with a custom analyzer or normalizer simply cannot have a monitor configured on it? And this is acceptable to the OpenSearch team?!
Since this is tied to security detectors as well, it should be considered a major bug, not to be closed lightly!

opensearch-project/security-analytics#697

7 participants