Bump version to 7.7 (elastic#1414)
polyfractal committed Jan 15, 2020
1 parent 6b1a0c2 commit c784dc1
Showing 2 changed files with 17 additions and 17 deletions.
28 changes: 14 additions & 14 deletions README.md
@@ -19,14 +19,14 @@ ES-Hadoop 2.0.x and 2.1.x are compatible with Elasticsearch __1.X__ *only*

## Installation

-### Stable Release (currently `7.6.0`)
+### Stable Release (currently `7.7.0`)
Available through any Maven-compatible tool:

```xml
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-hadoop</artifactId>
-<version>7.6.0</version>
+<version>7.7.0</version>
</dependency>
```
or as a stand-alone [ZIP](http://www.elastic.co/downloads/hadoop).
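For Gradle builds, the same artifact can be declared with the coordinates from the Maven snippet above (a sketch; the configuration name, here `implementation`, depends on your Gradle version and setup):

```groovy
dependencies {
    // same group/artifact/version as the Maven dependency above
    implementation 'org.elasticsearch:elasticsearch-hadoop:7.7.0'
}
```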
@@ -38,7 +38,7 @@ Grab the latest nightly build from the [repository](http://oss.sonatype.org/cont
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-hadoop</artifactId>
-<version>7.6.0-SNAPSHOT</version>
+<version>7.7.0-SNAPSHOT</version>
</dependency>
```

@@ -52,7 +52,7 @@ Grab the latest nightly build from the [repository](http://oss.sonatype.org/cont
</repositories>
```

or [build](#building-the-source) the project yourself.

We do build and test the code on _each_ commit.

@@ -75,7 +75,7 @@ The latest reference documentation is available online on the project [home page
### Configuration Properties

All configuration properties start with the `es` prefix. Note that the `es.internal` namespace is reserved for the library's internal use and should _not_ be used by the user at any point.
The properties are read mainly from the Hadoop configuration, but the user can specify (some of) them directly, depending on the library used.

### Required
```
@@ -105,7 +105,7 @@ To read data from ES, configure the `EsInputFormat` on your job configuration al
```java
JobConf conf = new JobConf();
conf.setInputFormat(EsInputFormat.class);
conf.set("es.resource", "radio/artists");
conf.set("es.query", "?q=me*"); // replace this with the relevant query
...
JobClient.runJob(conf);
@@ -124,7 +124,7 @@ JobClient.runJob(conf);
### Reading
```java
Configuration conf = new Configuration();
conf.set("es.resource", "radio/artists");
conf.set("es.query", "?q=me*"); // replace this with the relevant query
Job job = new Job(conf);
job.setInputFormatClass(EsInputFormat.class);
@@ -178,7 +178,7 @@ TBLPROPERTIES('es.resource' = 'radio/artists');

Any data passed to the table is then passed down to Elasticsearch; for example, given a table `s` mapped to a TSV/CSV file, one can index its contents into Elasticsearch like this:
```SQL
INSERT OVERWRITE TABLE artists
SELECT NULL, s.name, named_struct('url', s.url, 'picture', s.picture) FROM source s;
```
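The `INSERT` above assumes an Elasticsearch-backed `artists` table created with the `EsStorageHandler`, whose `TBLPROPERTIES` appear in the hunk context above. A minimal sketch of such a definition (column names and types here are assumptions; adjust them to your schema):

```SQL
-- hypothetical target table backed by Elasticsearch
CREATE EXTERNAL TABLE artists (
    id      BIGINT,
    name    STRING,
    links   STRUCT<url:STRING, picture:STRING>)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists');
```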

@@ -272,7 +272,7 @@ To read data from ES, create a dedicated `RDD` and specify the query as an argum

```java
import org.apache.spark.api.java.JavaSparkContext;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

SparkConf conf = ...
JavaSparkContext jsc = new JavaSparkContext(conf);
@@ -292,15 +292,15 @@ DataFrame playlist = df.filter(df.col("category").equalTo("pikes").and(df.col("y

Use `JavaEsSpark` to index any `RDD` to Elasticsearch:
```java
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

SparkConf conf = ...
JavaSparkContext jsc = new JavaSparkContext(conf);

Map<String, ?> numbers = ImmutableMap.of("one", 1, "two", 2);
Map<String, ?> airports = ImmutableMap.of("OTP", "Otopeni", "SFO", "San Fran");

JavaRDD<Map<String, ?>> javaRDD = jsc.parallelize(ImmutableList.of(numbers, airports));
JavaEsSpark.saveToEs(javaRDD, "spark/docs");
```

@@ -319,7 +319,7 @@ ES-Hadoop provides native integration with Storm: for reading a dedicated `Spout
### Reading
To read data from ES, use `EsSpout`:
```java
import org.elasticsearch.storm.EsSpout;

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("es-spout", new EsSpout("storm/docs", "?q=me*"), 5);
@@ -330,7 +330,7 @@ builder.setBolt("bolt", new PrinterBolt()).shuffleGrouping("es-spout");
To index data to ES, use `EsBolt`:

```java
import org.elasticsearch.storm.EsBolt;

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new RandomSentenceSpout(), 10);
6 changes: 3 additions & 3 deletions buildSrc/esh-version.properties
@@ -1,3 +1,3 @@
-eshadoop = 7.6.0
-elasticsearch = 7.6.0
-build-tools = 7.6.0
+eshadoop = 7.7.0
+elasticsearch = 7.7.0
+build-tools = 7.7.0
