Merge remote-tracking branch 'origin/main' into remote_upload_hook
ankitkala committed May 15, 2023
2 parents e2b1a3b + e44b3f1 commit 21e6838
Showing 40 changed files with 606 additions and 211 deletions.
11 changes: 8 additions & 3 deletions CHANGELOG.md
@@ -41,10 +41,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Bump `org.apache.commons:commons-compress` from 1.22 to 1.23.0
- Bump `org.apache.commons:commons-configuration2` from 2.8.0 to 2.9.0
- Bump `com.netflix.nebula:nebula-publishing-plugin` from 19.2.0 to 20.3.0
- Bump `org.apache.commons:commons-compress` from 1.22 to 1.23.0
- Bump `com.diffplug.spotless` from 6.17.0 to 6.18.0
- Bump `io.opencensus:opencensus-api` from 0.18.0 to 0.31.1 ([#7291](https://github.com/opensearch-project/OpenSearch/pull/7291))
- Bump `com.azure:azure-core` from 1.34.0 to 1.39.0

### Changed
- [CCR] Add getHistoryOperationsFromTranslog method to fetch the history snapshot from translogs ([#3948](https://github.com/opensearch-project/OpenSearch/pull/3948))
@@ -77,6 +75,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Fix compression support for h2c protocol ([#4944](https://github.com/opensearch-project/OpenSearch/pull/4944))
- Support OpenSSL Provider with default Netty allocator ([#5460](https://github.com/opensearch-project/OpenSearch/pull/5460))
- Replaces ZipInputStream with ZipFile to fix Zip Slip vulnerability ([#7230](https://github.com/opensearch-project/OpenSearch/pull/7230))
- Add missing validation/parsing of SearchBackpressureMode of SearchBackpressureSettings ([#7541](https://github.com/opensearch-project/OpenSearch/pull/7541))

### Security

@@ -89,6 +88,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Add descending order search optimization through reverse segment read. ([#7244](https://github.com/opensearch-project/OpenSearch/pull/7244))
- Add 'unsigned_long' numeric field type ([#6237](https://github.com/opensearch-project/OpenSearch/pull/6237))
- Add back primary shard preference for queries ([#7375](https://github.com/opensearch-project/OpenSearch/pull/7375))
- Add descending order search optimization through reverse segment read. ([#7244](https://github.com/opensearch-project/OpenSearch/pull/7244))
- Adds ExtensionsManager.lookupExtensionSettingsById ([#7466](https://github.com/opensearch-project/OpenSearch/pull/7466))

### Dependencies
- Bump `com.netflix.nebula:gradle-info-plugin` from 12.0.0 to 12.1.0
@@ -102,21 +103,25 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Bump `commons-io:commons-io` from 2.7 to 2.11.0
- Bump `org.apache.shiro:shiro-core` from 1.9.1 to 1.11.0 ([#7397](https://github.com/opensearch-project/OpenSearch/pull/7397))
- Bump `jetty-server` in hdfs-fixture from 9.4.49.v20220914 to 9.4.51.v20230217 ([#7405](https://github.com/opensearch-project/OpenSearch/pull/7405))
- Bump `com.networknt:json-schema-validator` from 1.0.78 to 1.0.81 (#7460)
- OpenJDK Update (April 2023 Patch releases) ([#7448](https://github.com/opensearch-project/OpenSearch/pull/7448))
- Bump `org.apache.commons:commons-compress` from 1.22 to 1.23.0 (#7462)
- Bump `com.azure:azure-core` from 1.34.0 to 1.39.0
- Bump `com.networknt:json-schema-validator` from 1.0.78 to 1.0.81 (#7460)
- Bump Apache Lucene to 9.6.0 ([#7505](https://github.com/opensearch-project/OpenSearch/pull/7505))
- Bump `com.google.cloud:google-cloud-core-http` from 1.93.3 to 2.17.0 (#7488)

### Changed
- Enable `./gradlew build` on MacOS by disabling bwc tests ([#7303](https://github.com/opensearch-project/OpenSearch/pull/7303))
- Moved concurrent-search from sandbox plugin to server module behind feature flag ([#7203](https://github.com/opensearch-project/OpenSearch/pull/7203))
- Allow access to indices cache clear APIs for read only indexes ([#7303](https://github.com/opensearch-project/OpenSearch/pull/7303))

### Deprecated

### Removed

### Fixed
- Replaces ZipInputStream with ZipFile to fix Zip Slip vulnerability ([#7230](https://github.com/opensearch-project/OpenSearch/pull/7230))
- Add missing validation/parsing of SearchBackpressureMode of SearchBackpressureSettings ([#7541](https://github.com/opensearch-project/OpenSearch/pull/7541))

### Security

9 changes: 5 additions & 4 deletions DEVELOPER_GUIDE.md
@@ -614,8 +614,9 @@ Pass a list of files or directories to limit your search.

### Lucene Snapshots

The Github workflow in [lucene-snapshots.yml](.github/workflows/lucene-snapshots.yml) is a Github worfklow executable by maintainers to build a top-down snapshot build of lucene.
The Github workflow in [lucene-snapshots.yml](.github/workflows/lucene-snapshots.yml) is a GitHub workflow executable by maintainers to build a top-down snapshot build of Lucene.
These snapshots are available to test compatibility with upcoming changes to Lucene by updating the version at [version.properties](buildsrc/version.properties) with the `version-snapshot-sha` version. Example: `lucene = 10.0.0-snapshot-2e941fc`.
Note that these snapshots do not follow the Maven [naming convention](https://maven.apache.org/guides/getting-started/index.html#what-is-a-snapshot-version) with a (case sensitive) SNAPSHOT suffix, so these artifacts are considered "releases" by build systems such as the `mavenContent` repository filter in Gradle or `releases` artifact policies in Maven.

### Flaky Tests

@@ -626,6 +627,6 @@ If you encounter a build/test failure in CI that is unrelated to the change in y
1. Follow failed CI links, and locate the failing test(s).
2. Copy-paste the failure into a comment of your PR.
3. Search through [issues](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aopen+is%3Aissue+label%3A%22flaky-test%22) using the name of the failed test for whether this is a known flaky test.
5. If an existing issue is found, paste a link to the known issue in a comment to your PR.
6. If no existing issue is found, open one.
7. Retry CI via the GitHub UX or by pushing an update to your PR.
4. If an existing issue is found, paste a link to the known issue in a comment to your PR.
5. If no existing issue is found, open one.
6. Retry CI via the GitHub UX or by pushing an update to your PR.
2 changes: 1 addition & 1 deletion libs/core/src/main/java/org/opensearch/Version.java
@@ -87,7 +87,7 @@ public class Version implements Comparable<Version>, ToXContentFragment {
public static final Version V_2_6_1 = new Version(2060199, org.apache.lucene.util.Version.LUCENE_9_5_0);
public static final Version V_2_7_0 = new Version(2070099, org.apache.lucene.util.Version.LUCENE_9_5_0);
public static final Version V_2_7_1 = new Version(2070199, org.apache.lucene.util.Version.LUCENE_9_5_0);
public static final Version V_2_8_0 = new Version(2080099, org.apache.lucene.util.Version.LUCENE_9_5_0);
public static final Version V_2_8_0 = new Version(2080099, org.apache.lucene.util.Version.LUCENE_9_6_0);
public static final Version V_3_0_0 = new Version(3000099, org.apache.lucene.util.Version.LUCENE_9_6_0);
public static final Version CURRENT = V_3_0_0;

New file: org/opensearch/core/common/io/stream/BaseStreamInput.java
@@ -0,0 +1,19 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/
package org.opensearch.core.common.io.stream;

import java.io.InputStream;

/**
* Foundation class for reading core types off the transport stream
*
* todo: refactor {@code StreamInput} primitive readers to this class
*
* @opensearch.internal
*/
public abstract class BaseStreamInput extends InputStream {}
New file: org/opensearch/core/common/io/stream/BaseStreamOutput.java
@@ -0,0 +1,19 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/
package org.opensearch.core.common.io.stream;

import java.io.OutputStream;

/**
* Foundation class for writing core types over the transport stream
*
* todo: refactor {@code StreamOutput} primitive writers to this class
*
* @opensearch.internal
*/
public abstract class BaseStreamOutput extends OutputStream {}
New file: org/opensearch/core/common/io/stream/BaseWriteable.java
@@ -0,0 +1,130 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/
package org.opensearch.core.common.io.stream;

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
* Implementers can be written to a {@code StreamOutput} and read from a {@code StreamInput}. This allows them to be "thrown
* across the wire" using OpenSearch's internal protocol. If the implementer also implements equals and hashCode then a copy made by
* serializing and deserializing must be equal and have the same hashCode. It isn't required that such a copy be entirely unchanged.
*
* @opensearch.internal
*/
public interface BaseWriteable<S extends BaseStreamOutput> {
/**
* A WriteableRegistry registers {@link Writer} methods for writing data types over a
* {@link BaseStreamOutput} channel and {@link Reader} methods for reading data from a
* {@link BaseStreamInput} channel.
*
* @opensearch.internal
*/
class WriteableRegistry {
private static final Map<Class<?>, Writer<? extends BaseStreamOutput, ?>> WRITER_REGISTRY = new ConcurrentHashMap<>();
private static final Map<Byte, Reader<? extends BaseStreamInput, ?>> READER_REGISTRY = new ConcurrentHashMap<>();

/**
* registers a streamable writer
*
* @opensearch.internal
*/
public static <W extends Writer<? extends BaseStreamOutput, ?>> void registerWriter(final Class<?> clazz, final W writer) {
if (WRITER_REGISTRY.containsKey(clazz)) {
throw new IllegalArgumentException("Streamable writer already registered for type [" + clazz.getName() + "]");
}
WRITER_REGISTRY.put(clazz, writer);
}

/**
* registers a streamable reader
*
* @opensearch.internal
*/
public static <R extends Reader<? extends BaseStreamInput, ?>> void registerReader(final byte ordinal, final R reader) {
if (READER_REGISTRY.containsKey(ordinal)) {
throw new IllegalArgumentException("Streamable reader already registered for ordinal [" + (int) ordinal + "]");
}
READER_REGISTRY.put(ordinal, reader);
}

/**
* Returns the registered writer keyed by the class type
*/
@SuppressWarnings("unchecked")
public static <W extends Writer<? extends BaseStreamOutput, ?>> W getWriter(final Class<?> clazz) {
return (W) WRITER_REGISTRY.get(clazz);
}

/**
* Returns the registered reader keyed by the unique ordinal
*/
@SuppressWarnings("unchecked")
public static <R extends Reader<? extends BaseStreamInput, ?>> R getReader(final byte b) {
return (R) READER_REGISTRY.get(b);
}
}

/**
* Write this into the {@linkplain BaseStreamOutput}.
*/
void writeTo(final S out) throws IOException;

/**
* Reference to a method that can write some object to a {@link BaseStreamOutput}.
* <p>
* By convention this is a method from {@link BaseStreamOutput} itself (e.g., {@code StreamOutput#writeString}). If the value can be
* {@code null}, then the "optional" variant of methods should be used!
* <p>
* Most classes should implement {@code Writeable} and the {@code Writeable#writeTo(BaseStreamOutput)} method should <em>use</em>
* {@link BaseStreamOutput} methods directly or this indirectly:
* <pre><code>
* public void writeTo(StreamOutput out) throws IOException {
* out.writeVInt(someValue);
* out.writeMapOfLists(someMap, StreamOutput::writeString, StreamOutput::writeString);
* }
* </code></pre>
*/
@FunctionalInterface
interface Writer<S extends BaseStreamOutput, V> {

/**
* Write {@code V}-type {@code value} to the {@code out}put stream.
*
* @param out Output to write the {@code value} too
* @param value The value to add
*/
void write(final S out, V value) throws IOException;
}

/**
* Reference to a method that can read some object from a stream. By convention this is a constructor that takes
* {@linkplain BaseStreamInput} as an argument for most classes and a static method for things like enums. Returning null from one of these
* is always wrong - for that we use methods like {@code StreamInput#readOptionalWriteable(Reader)}.
* <p>
* As most classes will implement this via a constructor (or a static method in the case of enumerations), it's something that should
* look like:
* <pre><code>
* public MyClass(final StreamInput in) throws IOException {
* this.someValue = in.readVInt();
* this.someMap = in.readMapOfLists(StreamInput::readString, StreamInput::readString);
* }
* </code></pre>
*/
@FunctionalInterface
interface Reader<S extends BaseStreamInput, V> {

/**
* Read {@code V}-type value from a stream.
*
* @param in Input to read the value from
*/
V read(final S in) throws IOException;
}
}
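
To make the intended use of the new registry concrete, here is a minimal, hypothetical sketch; it is not part of this commit. It assumes the existing `org.opensearch.common.io.stream.StreamInput`/`StreamOutput` classes extend the new `BaseStreamInput`/`BaseStreamOutput` (the refactoring the todo notes above point toward) and keep their `readLong`/`writeLong` primitives; the class name `ExampleToken` and the ordinal `42` are invented for illustration.

import java.io.IOException;

import org.opensearch.common.io.stream.StreamInput;
import org.opensearch.common.io.stream.StreamOutput;
import org.opensearch.core.common.io.stream.BaseWriteable;

/**
 * Hypothetical example type (not part of this commit) showing how a class could
 * implement BaseWriteable and register itself with WriteableRegistry.
 */
public final class ExampleToken implements BaseWriteable<StreamOutput> {
    private final long id;

    public ExampleToken(long id) {
        this.id = id;
    }

    // By convention, deserialization is a constructor that reads from the stream.
    public ExampleToken(StreamInput in) throws IOException {
        this.id = in.readLong();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeLong(id);
    }

    // One-time registration; the ordinal (42) is an arbitrary example value and
    // must be unique across all registered readers.
    public static void register() {
        BaseWriteable.Writer<StreamOutput, ExampleToken> writer = (out, value) -> value.writeTo(out);
        BaseWriteable.Reader<StreamInput, ExampleToken> reader = ExampleToken::new;
        BaseWriteable.WriteableRegistry.registerWriter(ExampleToken.class, writer);
        BaseWriteable.WriteableRegistry.registerReader((byte) 42, reader);
    }
}

Serialization code could then retrieve the handlers with `WriteableRegistry.getWriter(ExampleToken.class)` and `WriteableRegistry.getReader((byte) 42)`.
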
New file: org/opensearch/core/common/io/stream/package-info.java
@@ -0,0 +1,9 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/
/** Core transport stream classes */
package org.opensearch.core.common.io.stream;
@@ -33,7 +33,6 @@

import org.opensearch.common.io.stream.StreamInput;
import org.opensearch.common.io.stream.StreamOutput;
import org.opensearch.common.io.stream.Writeable;
import org.opensearch.common.util.LongObjectPagedHashMap;
import org.opensearch.core.xcontent.XContentBuilder;
import org.opensearch.search.aggregations.InternalAggregation;
@@ -69,7 +68,7 @@ protected BaseGeoGrid(String name, int requiredSize, List<BaseGeoGridBucket> buc
this.buckets = buckets;
}

protected abstract Writeable.Reader<B> getBucketReader();
protected abstract Reader<B> getBucketReader();

/**
* Read from a stream.
@@ -76,7 +76,7 @@ protected InternalGeoHashGridBucket createBucket(long hashAsLong, long docCount,
}

@Override
protected Reader getBucketReader() {
protected Reader<InternalGeoHashGridBucket> getBucketReader() {
return InternalGeoHashGridBucket::new;
}

@@ -76,7 +76,7 @@ protected InternalGeoTileGridBucket createBucket(long hashAsLong, long docCount,
}

@Override
protected Reader getBucketReader() {
protected Reader<InternalGeoTileGridBucket> getBucketReader() {
return InternalGeoTileGridBucket::new;
}

@@ -69,3 +69,32 @@
include_defaults: true

- match: {defaults.node.attr.testattr: "test"}

---
"Test set search backpressure mode":

  - do:
      cluster.put_settings:
        body:
          persistent:
            search_backpressure.mode: "monitor_only"

  - match: {persistent: {search_backpressure: {mode: "monitor_only"}}}

---
"Test set invalid search backpressure mode":

  - skip:
      version: "- 2.9.99"
      reason: "Parsing and validation of SearchBackpressureMode does not exist in versions < 3.0"

  - do:
      catch: bad_request
      cluster.put_settings:
        body:
          persistent:
            search_backpressure.mode: "monitor-only"

  - match: {error.root_cause.0.type: "illegal_argument_exception"}
  - match: { error.root_cause.0.reason: "Invalid SearchBackpressureMode: monitor-only" }
  - match: { status: 400 }
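
These two tests pin down the behavior added by [#7541](https://github.com/opensearch-project/OpenSearch/pull/7541): the canonical lowercase name `monitor_only` is accepted, while any other spelling (such as `monitor-only`) is rejected with an `illegal_argument_exception` and a 400 status. The sketch below is a rough, hypothetical rendering of the parsing those tests imply, not the actual code from the PR, whose structure and constant set may differ.

// Hypothetical sketch only; the real SearchBackpressureMode/SearchBackpressureSettings
// implementation may differ. Only MONITOR_ONLY and the error message are taken from the
// tests above; the other constants are assumed for illustration.
public enum SearchBackpressureModeSketch {
    DISABLED("disabled"),
    MONITOR_ONLY("monitor_only"),
    ENFORCED("enforced");

    private final String name;

    SearchBackpressureModeSketch(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Called when the cluster setting is parsed; anything other than the canonical
    // names above (such as "monitor-only") raises IllegalArgumentException, which the
    // REST layer surfaces as the 400 illegal_argument_exception the tests expect.
    public static SearchBackpressureModeSketch fromName(String name) {
        for (SearchBackpressureModeSketch mode : values()) {
            if (mode.getName().equals(name)) {
                return mode;
            }
        }
        throw new IllegalArgumentException("Invalid SearchBackpressureMode: " + name);
    }
}
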
@@ -55,7 +55,12 @@ public void testClearIndicesCacheWithBlocks() {
NumShards numShards = getNumShards("test");

// Request is not blocked
for (String blockSetting : Arrays.asList(SETTING_BLOCKS_READ, SETTING_BLOCKS_WRITE)) {
for (String blockSetting : Arrays.asList(
SETTING_BLOCKS_READ,
SETTING_BLOCKS_WRITE,
SETTING_READ_ONLY,
SETTING_READ_ONLY_ALLOW_DELETE
)) {
try {
enableIndexBlock("test", blockSetting);
ClearIndicesCacheResponse clearIndicesCacheResponse = client().admin()
Expand All @@ -73,7 +78,7 @@ public void testClearIndicesCacheWithBlocks() {
}
}
// Request is blocked
for (String blockSetting : Arrays.asList(SETTING_READ_ONLY, SETTING_BLOCKS_METADATA, SETTING_READ_ONLY_ALLOW_DELETE)) {
for (String blockSetting : Arrays.asList(SETTING_BLOCKS_METADATA)) {
try {
enableIndexBlock("test", blockSetting);
assertBlocked(
Expand Up @@ -142,33 +142,27 @@ public void testSimplePreference() {
client().prepareIndex("test").setSource("field1", "value1").get();
refresh();

SearchResponse searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_local").execute().actionGet();
SearchResponse searchResponse = client().prepareSearch().setQuery(matchAllQuery()).get();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_local").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_primary").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));
searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_primary").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica").execute().actionGet();
searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_primary_first").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica_first").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));
searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica_first").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("1234").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));
searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("1234").execute().actionGet();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));

searchResponse = client().prepareSearch().setQuery(matchAllQuery()).get();
assertThat(searchResponse.getHits().getTotalHits().value, equalTo(1L));
}

public void testThatSpecifyingNonExistingNodesReturnsUsefulError() {