forked from elastic/elasticsearch
Rebase from fork #2
Merged
Conversation
The boost factor doesn't seem to be needed and can be removed.
This reverts commit 27346a0.
If the master is stepping or shutting down, the error-level logging can cause quite a bit of noise.
Today the `DisruptibleMockTransport` always allows a connection to a node to be established, and then fails requests sent to that node such as the subsequent handshake. Since #42342, we log handshake failures on an open connection as a warning, and this makes the test logs rather noisy. This change fails the connection attempt first, avoiding these unrealistic warnings.
Relates to #43271
AggregatorTestCase will NPE if only a single, null MappedFieldType is provided (which is required to simulate an unmapped field). While it's possible to test unmapped fields by supplying other, non-related field types... that's clunky and unnecessary. AggregatorTestCase just needs to filter out null field types when setting up.
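The filtering step described above can be sketched as follows. This is a hypothetical simplification (using String as a stand-in for MappedFieldType, with an assumed method name), not the actual AggregatorTestCase code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

// Sketch: drop null entries before setup, so supplying a single null
// to simulate an unmapped field no longer triggers an NPE.
public class FieldTypeFilter {
    public static List<String> nonNullFieldTypes(String... fieldTypes) {
        return Arrays.stream(fieldTypes)
            .filter(Objects::nonNull) // nulls represent unmapped fields
            .collect(Collectors.toList());
    }
}
```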
With async durability, it does not hold true anymore after #43205. This is fine.
This removes the previous Painless API Doc Generator prior to contexts existing. It has been replaced with the new doc generator that uses the documentation rest API.
Helps with tests that do async translog syncing
Let's log the state of the thread to find out if it's dead-locked or just stuck after being suspended.
Relates #43392
… dotted path (#43170)

Given a nested structure composed of Lists and Maps, getByPath will return the value keyed by path. getByPath is a method on Lists and Maps. The path is made up of string Map keys and integer List indices separated by dots. An optional third argument returns a default value if the path lookup fails due to a missing value. E.g.

['key0': ['a', 'b'], 'key1': ['c', 'd']].getByPath('key1') = ['c', 'd']
['key0': ['a', 'b'], 'key1': ['c', 'd']].getByPath('key1.0') = 'c'
['key0': ['a', 'b'], 'key1': ['c', 'd']].getByPath('key2', 'x') = 'x'
[['key0': 'value0'], ['key1': 'value1']].getByPath('1.key1') = 'value1'

Throws IllegalArgumentException if an item cannot be found and a default is not given. Throws NumberFormatException if a path element operating on a List is not an integer. Fixes #42769
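The traversal described above can be sketched in plain Java. This is a hypothetical re-implementation of the semantics, not the Painless code; one simplification is that it always takes a default value instead of throwing IllegalArgumentException when none is given:

```java
import java.util.List;
import java.util.Map;

// Walk a nested List/Map structure along a dot-separated path:
// string elements index Maps, integer elements index Lists.
public class PathLookup {
    public static Object getByPath(Object root, String path, Object defaultValue) {
        Object current = root;
        for (String element : path.split("\\.")) {
            if (current instanceof Map) {
                Map<?, ?> map = (Map<?, ?>) current;
                if (!map.containsKey(element)) {
                    return defaultValue;
                }
                current = map.get(element);
            } else if (current instanceof List) {
                // throws NumberFormatException for non-integer elements
                int index = Integer.parseInt(element);
                List<?> list = (List<?>) current;
                if (index < 0 || index >= list.size()) {
                    return defaultValue;
                }
                current = list.get(index);
            } else {
                return defaultValue; // path descends below a leaf value
            }
        }
        return current;
    }
}
```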
Today when searching for an exclusive range, the java date math parser rounds up the value with the granularity of the operation. So when searching for values greater than "now-2M", the parser rounds the operation up to "now-1M". This behavior was introduced when we migrated to java date, but it looks like a bug: the joda math parser rounds up values only when an explicit rounding is used. So "now/M" is rounded to "now-1ms" (minus 1ms to get the largest inclusive value) in the joda parser if the result should be exclusive, but no rounding is applied if the input is a simple operation like "now-1M". This change restores the joda behavior in order to have consistent parsing in all versions. Closes #43277
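The restored behavior can be illustrated with a toy sketch (hypothetical names, not the actual parser; only the two example expressions are handled): an exclusive upper bound is only rounded when the expression ends in an explicit rounding, then pulled back by 1ms.

```java
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

// Toy illustration of the exclusive-upper-bound rule described above.
public class DateMathRounding {
    public static ZonedDateTime exclusiveUpperBound(ZonedDateTime now, String expr) {
        if (expr.equals("now/M")) {
            // explicit rounding: round up to the start of the next month,
            // then subtract 1ms so the largest value in the month stays inclusive
            return now.withDayOfMonth(1)
                .truncatedTo(ChronoUnit.DAYS)
                .plusMonths(1)
                .minusNanos(1_000_000);
        }
        if (expr.equals("now-2M")) {
            return now.minusMonths(2); // simple operation: no rounding applied
        }
        throw new IllegalArgumentException("unsupported expression: " + expr);
    }
}
```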
This commit tweaks the docs for secure settings to ensure the user is aware that adding non-secure settings to the keystore will result in Elasticsearch not starting. Fixes #43328 Co-Authored-By: James Rodewig <james.rodewig@elastic.co>
Removes `@TestLogging` annotations in `*DisruptionIT` tests, so that the only tests with annotations are those with open issues. Also adds links to the open issues in the remaining cases. Relates #43403
…ilarity (#43436) If DocumentLevelSecurity is enabled, SecurityIndexSearcherWrapper doesn't carry over the cache, cache policy, and similarity from the incoming searcher.
The types exists action was kept around only so as not to break the transport client, as RestGetMappingAction has its own logic that ties into the unified HEAD request handling. Now that the transport client is removed, we can also remove TransportTypesExistsAction along with its corresponding request, request builder, response, and action.
This change explains why Painless doesn't natively support datetime now, and gives examples of how to create a version of now through user-defined parameters.
All valid licenses permit security, and the only license state where we don't support security is when there is a missing license. However, for safety we should attach the system (or xpack/security) user to internally originated actions even if the license is missing (or, more strictly, doesn't support security). This allows all nodes to communicate and send internal actions (shard state, handshake/pings, etc) even if a license is transitioning between a broken state and a valid state. Relates: #42215
Document level security was depending on the shared "BitsetFilterCache" which (by design) never expires its entries. However, when using DLS queries - particularly templated ones - the number (and memory usage) of generated bitsets can be significant. This change introduces a new cache specifically for BitSets used in DLS queries, that has memory usage constraints and access time expiry. The whole cache is automatically cleared if the role cache is cleared. Individual bitsets are cleared when the corresponding lucene index reader is closed. The cache defaults to 50MB, and entries expire if unused for 7 days.
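A cache combining a size bound with access-time expiry, as described above, can be sketched with a minimal example. This is hypothetical (entry count and milliseconds instead of 50MB and 7 days, and no reader-close hooks), not the Elasticsearch implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded cache whose entries also expire when unused for longer than a TTL.
public class BoundedExpiringCache<K, V> {
    private static final class CacheEntry<V> {
        final V value;
        long lastAccessMillis;
        CacheEntry(V value, long now) { this.value = value; this.lastAccessMillis = now; }
    }

    private final long ttlMillis;
    private final LinkedHashMap<K, CacheEntry<V>> map;

    public BoundedExpiringCache(int maxEntries, long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // access-ordered map: once over capacity, the least recently used entry goes
        this.map = new LinkedHashMap<K, CacheEntry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, CacheEntry<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized V get(K key, long nowMillis) {
        CacheEntry<V> entry = map.get(key);
        if (entry == null) {
            return null;
        }
        if (nowMillis - entry.lastAccessMillis > ttlMillis) {
            map.remove(key); // expired: unused for longer than the TTL
            return null;
        }
        entry.lastAccessMillis = nowMillis;
        return entry.value;
    }

    public synchronized void put(K key, V value, long nowMillis) {
        map.put(key, new CacheEntry<>(value, nowMillis));
    }

    public synchronized void clear() {
        map.clear(); // analogous to clearing the whole cache with the role cache
    }
}
```

Time is passed in explicitly here to keep the sketch testable; a real cache would read the clock itself.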
Adds a new "/_security/privilege/_builtin" endpoint so that builtin index and cluster privileges can be retrieved via the REST API. Resolves: #29771
This commit changes NoOpEngine so that it refreshes its translog stats once translog is trimmed. Relates #43156
This change fixes the name of the index_prefix sub field when the `index_prefix` option is set on a text field that is nested under an object or a multi-field. We didn't use the full path of the parent field to set the index_prefix field name, so the field was registered under the wrong name. This doesn't break queries, since we always retrieve the prefix field through its parent field, but it breaks other APIs like _field_caps, which tries to find the parent of the `index_prefix` field in the mapping but fails. Closes #43741
This introduces a `failed` state to which the data frame analytics persistent task is set when something unexpected fails. It could be the process crashing, the results processor hitting some error, etc. The failure message is then captured and set on the task state. From there, it becomes available via the _stats API as `failure_reason`. The df-analytics stop API now has a `force` boolean parameter. This allows the user to call it for a failed task in order to reset it to `stopped` after we have ensured the failure has been communicated to the user. This commit also adds the analytics version in the persistent task params as this allows us to prevent tasks from running on unsuitable nodes in the future.
This adds the ability to execute an action for each element that occurs in an array, for example you could send a dedicated slack action for each search hit returned from a search. There is also a limit for the number of actions executed, which is hardcoded to 100 right now, to prevent having watches run forever. The watch history logs each action result and the total number of actions that were executed. Relates #34546
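The per-element execution with a hard cap can be sketched as follows (hypothetical names, not the Watcher code):

```java
import java.util.List;
import java.util.function.Consumer;

// Run an action once per array element, but stop at a hard limit so a
// watch cannot run forever on a huge result set.
public class ForEachAction {
    public static final int MAX_ITERATIONS = 100;

    public static <T> int executeForEach(List<T> elements, Consumer<T> action) {
        int executed = 0;
        for (T element : elements) {
            if (executed >= MAX_ITERATIONS) {
                break; // hard limit reached; remaining elements are skipped
            }
            action.accept(element);
            executed++;
        }
        return executed; // total actions executed, as recorded in watch history
    }
}
```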
This change adds the new endpoint that allows reloading of search analyzers to the high-level java rest client. Relates to #43313
Now that the transport client has been removed, the client transport profile filter can be removed from security. This filter prevented node actions from being executed using a transport client.
This commit moves the config that stores Cors options into the server package. Currently both nio and netty modules must have a copy of this config. Moving it into server allows one copy and the tests to be in a common location.
When profiling a call to `AllocationService#reroute()` in a large cluster containing allocation filters of the form `node-name-*` I observed a nontrivial amount of time spent in `Regex#simpleMatch` due to these allocation filters. Patterns ending in a wildcard are not uncommon, and this change treats them as a special case in `Regex#simpleMatch` in order to shave a bit of time off this calculation. It also uses `String#regionMatches()` to avoid an allocation in the case that the pattern's only wildcard is at the start.

Microbenchmark results before this change:

Result "org.elasticsearch.common.regex.RegexStartsWithBenchmark.performSimpleMatch":
  1113.839 ±(99.9%) 6.338 ns/op [Average]
  (min, avg, max) = (1102.388, 1113.839, 1135.783), stdev = 9.486
  CI (99.9%): [1107.502, 1120.177] (assumes normal distribution)

Microbenchmark results with this change applied:

Result "org.elasticsearch.common.regex.RegexStartsWithBenchmark.performSimpleMatch":
  433.190 ±(99.9%) 0.644 ns/op [Average]
  (min, avg, max) = (431.518, 433.190, 435.456), stdev = 0.964
  CI (99.9%): [432.546, 433.833] (assumes normal distribution)

The microbenchmark in question was:

    @Fork(3)
    @Warmup(iterations = 10)
    @Measurement(iterations = 10)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Benchmark)
    @SuppressWarnings("unused") // invoked by benchmarking framework
    public class RegexStartsWithBenchmark {

        private static final String testString = "abcdefghijklmnopqrstuvwxyz";
        private static final String[] patterns;

        static {
            patterns = new String[testString.length() + 1];
            for (int i = 0; i <= testString.length(); i++) {
                patterns[i] = testString.substring(0, i) + "*";
            }
        }

        @Benchmark
        public void performSimpleMatch() {
            for (int i = 0; i < patterns.length; i++) {
                Regex.simpleMatch(patterns[i], testString);
            }
        }
    }
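The two fast paths described above can be sketched as follows. This is a hypothetical simplification, not the actual `Regex#simpleMatch` implementation; in particular the general fallback here naively converts the pattern to a regex and ignores escaping:

```java
// Wildcard matching with fast paths for a single leading or trailing '*'.
public class SimpleMatch {
    public static boolean simpleMatch(String pattern, String str) {
        int firstStar = pattern.indexOf('*');
        if (firstStar == -1) {
            return pattern.equals(str); // no wildcard: exact match
        }
        if (firstStar == pattern.length() - 1) {
            // e.g. "node-name-*": a pure prefix check
            return str.startsWith(pattern.substring(0, pattern.length() - 1));
        }
        if (firstStar == 0 && pattern.indexOf('*', 1) == -1) {
            // e.g. "*-data": suffix check via regionMatches, avoiding the
            // allocation of pattern.substring(1)
            int suffixLen = pattern.length() - 1;
            return str.length() >= suffixLen
                && str.regionMatches(str.length() - suffixLen, pattern, 1, suffixLen);
        }
        // general case: simplified regex fallback
        return str.matches(pattern.replace("*", ".*"));
    }
}
```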
Tracked in #43924
IndexAnalyzers has a close() method that should iterate through all its wrapped analyzers and close each one in turn. However, instead of delegating to the analyzers' close() methods, it instead wraps them in a Closeable interface, which just returns a list of the analyzers. In addition, whitespace normalizers are ignored entirely.
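A sketch of the fix (names assumed, not the actual IndexAnalyzers class): close() delegates to each wrapped analyzer's own close() method, and whitespace normalizers are included instead of being ignored.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Closing a container means actually closing everything it wraps.
public class IndexAnalyzersSketch implements Closeable {
    private final List<Closeable> analyzers = new ArrayList<>();
    private final List<Closeable> whitespaceNormalizers = new ArrayList<>();

    public void register(Closeable analyzer) {
        analyzers.add(analyzer);
    }

    public void registerWhitespaceNormalizer(Closeable normalizer) {
        whitespaceNormalizers.add(normalizer);
    }

    @Override
    public void close() {
        List<Closeable> all = new ArrayList<>(analyzers);
        all.addAll(whitespaceNormalizers); // previously ignored entirely
        IOException first = null;
        for (Closeable analyzer : all) {
            try {
                analyzer.close(); // delegate, rather than wrap in a no-op Closeable
            } catch (IOException e) {
                if (first == null) first = e; else first.addSuppressed(e);
            }
        }
        if (first != null) {
            throw new UncheckedIOException(first);
        }
    }
}
```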
* [ML][Data Frame] Adding bwc tests for pivot transform
* adding continuous transforms
* adding continuous dataframes to bwc
* adding continuous data frame tests
* Adding rolling upgrade tests for continuous df
* Fixing test
* Adjusting indices used in BWC, and handling NPE for seq_no_stats
* updating and muting specific bwc test
…3718) Provides a hook for aggregations to introspect the `ValuesSourceType` for a user supplied Missing value on an unmapped field, when the type would otherwise be `ANY`. Mapped field behavior is unchanged, and still applies the `ValuesSourceType` of the field. This PR just provides the hook for doing this, no existing aggregations have their behavior changed.
Relates #43927.
After backport to 7.3, adjust yml tests. Relates to #43444
AggregatorFactory was generic over itself, but it doesn't appear we use this functionality anywhere (e.g. to allow the super class to declare arguments/return types generically for subclasses to override). Most places use a wildcard constraint, and even when a concrete type was specified it wasn't used. But since AggFactories are widely used, this led to the generic touching many pieces of code and making type signatures fairly complex.
Bukhtawar pushed a commit that referenced this pull request on May 27, 2020
* StartsWith is case sensitive aware
* Added case sensitivity to EQL configuration
* case_sensitive parameter can be specified when running queries (default is case insensitive)
* Added STARTS_WITH function to SQL as well
* Add case sensitive aware queryfolder tests
* Address reviews
* Address review #2