
Branch 1.6 #7

Merged
merged 800 commits into from Mar 18, 2016
Conversation

rekhajoshm
Owner

What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)

(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

bllchmbrs and others added 30 commits December 11, 2015 12:56
Adding in Pipeline Import and Export Documentation.

Author: anabranch <wac.chambers@gmail.com>
Author: Bill Chambers <wchambers@ischool.berkeley.edu>

Closes #10179 from anabranch/master.

(cherry picked from commit aa305dc)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
…rasure Issue

As noted in PR #9441, implementing `tallSkinnyQR` uncovered a bug with our PySpark `RowMatrix` constructor.  As discussed on the dev list [here](http://apache-spark-developers-list.1001551.n3.nabble.com/K-Means-And-Class-Tags-td10038.html), there appears to be an issue with type erasure with RDDs coming from Java, and by extension from PySpark.  Although we are attempting to construct a `RowMatrix` from an `RDD[Vector]` in [PythonMLlibAPI](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala#L1115), the `Vector` type is erased, resulting in an `RDD[Object]`.  Thus, when calling Scala's `tallSkinnyQR` from PySpark, we get a Java `ClassCastException` in which an `Object` cannot be cast to a Spark `Vector`.  As noted in the aforementioned dev list thread, this issue was also encountered with `DecisionTrees`, and the fix involved an explicit `retag` of the RDD with a `Vector` type.  `IndexedRowMatrix` and `CoordinateMatrix` do not appear to have this issue likely due to their related helper functions in `PythonMLlibAPI` creating the RDDs explicitly from DataFrames with pattern matching, thus preserving the types.

This PR currently contains that retagging fix applied to the `createRowMatrix` helper function in `PythonMLlibAPI`.  This PR blocks #9441, so once this is merged, the other can be rebased.
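As an illustration, a minimal sketch of the retag approach (a hedged sketch, not the exact patch; note that `RDD.retag` is a `private[spark]` API, so this assumes code living inside the Spark source tree):

```
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.rdd.RDD

def createRowMatrix(rows: RDD[Vector], numRows: Long, numCols: Int): RowMatrix = {
  // The RDD arrives from Java/Python with its element type erased to Object;
  // retag restores the Vector class tag so Scala-side casts succeed.
  new RowMatrix(rows.retag(classOf[Vector]), numRows, numCols)
}
```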

cc holdenk

Author: Mike Dusenberry <mwdusenb@us.ibm.com>

Closes #9458 from dusenberrymw/SPARK-11497_PySpark_RowMatrix_Constructor_Has_Type_Erasure_Issue.

(cherry picked from commit 1b82203)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
Added a paragraph regarding StringIndexer#setHandleInvalid to the ml-features documentation.

I wonder if I should also add a snippet to the code example, input welcome.
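For what it's worth, a snippet along these lines could work (a hedged sketch; `trainingDF` and `testDF` are assumed DataFrames, and the column names are made up):

```
import org.apache.spark.ml.feature.StringIndexer

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
  .setHandleInvalid("skip") // drop rows with unseen labels instead of erroring

val model = indexer.fit(trainingDF)
val indexed = model.transform(testDF) // rows with labels unseen at fit time are skipped
```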

Author: BenFradet <benjamin.fradet@gmail.com>

Closes #10257 from BenFradet/SPARK-12217.

(cherry picked from commit aea676c)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
…o dataframe_example.py

Since ```Dataset``` has a new meaning in Spark 1.6, we should rename it to avoid confusion.
#9873 finished the work of Scala example, here we focus on the Python one.
Move dataset_example.py to ```examples/ml``` and rename to ```dataframe_example.py```.
BTW, fix minor missing issues of #9873.
cc mengxr

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #9957 from yanboliang/SPARK-11978.

(cherry picked from commit a0ff6d1)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
Modifies the String overload to call the Column overload and ensures this is called in a test.

Author: Ankur Dave <ankurdave@gmail.com>

Closes #10271 from ankurdave/SPARK-12298.

(cherry picked from commit 1e799d6)
Signed-off-by: Yin Huai <yhuai@databricks.com>
…est cases

The existing sample functions are missing the parameter `seed`; however, the corresponding function interface in `generics` has such a parameter. Thus, although a caller can pass a `seed`, the value is never used.

This could cause SparkR unit tests to fail. For example, I hit it in another PR:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47213/consoleFull

Author: gatorsmile <gatorsmile@gmail.com>

Closes #10160 from gatorsmile/sampleR.

(cherry picked from commit 1e3526c)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
…rait in order to avoid ClassCastException due to KryoSerializer in KinesisReceiver

Author: Jean-Baptiste Onofré <jbonofre@apache.org>

Closes #10203 from jbonofre/SPARK-11193.

(cherry picked from commit 03138b6)
Signed-off-by: Sean Owen <sowen@cloudera.com>
https://issues.apache.org/jira/browse/SPARK-12199

Follow-up PR of SPARK-11551. Fix some errors in ml-features.md

mengxr

Author: Xusen Yin <yinxusen@gmail.com>

Closes #10193 from yinxusen/SPARK-12199.

(cherry picked from commit 98b212d)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
…ct disconnection message

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #10261 from zsxwing/SPARK-12267.

(cherry picked from commit 8af2f8c)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
… in the shutdown hook

1. Make sure workers and masters exit so that no worker or master will still be running when triggering the shutdown hook.
2. Set ExecutorState to FAILED if it's still RUNNING when executing the shutdown hook.

This should fix the potential exceptions when exiting a local cluster
```
java.lang.AssertionError: assertion failed: executor 4 state transfer from RUNNING to RUNNING is illegal
	at scala.Predef$.assert(Predef.scala:179)
	at org.apache.spark.deploy.master.Master$$anonfun$receive$1.applyOrElse(Master.scala:260)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
	at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
	at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
	at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
	at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:73)
	at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:474)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
```

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #10269 from zsxwing/executor-state.

(cherry picked from commit 2aecda2)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
When SparkStrategies.BasicOperators's "case BroadcastHint(child) => apply(child)" is hit, it only recursively invokes BasicOperators.apply with this "child". That gives the other strategies no chance to process the plan, which probably leads to the "No plan" issue, so we use planLater to go through all strategies.
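A hedged sketch of the change, as an abbreviated strategy fragment (the surrounding strategy machinery is omitted):

```
// Before (abbreviated): recursing into this one strategy can yield "No plan"
//   case BroadcastHint(child) => apply(child)

// After: planLater defers planning of the child to the full strategy list
case BroadcastHint(child) => planLater(child) :: Nil
```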

https://issues.apache.org/jira/browse/SPARK-12275

Author: yucai <yucai.yu@intel.com>

Closes #10265 from yucai/broadcast_hint.

(cherry picked from commit ed87f6d)
Signed-off-by: Yin Huai <yhuai@databricks.com>
Follow-up of [SPARK-12199](https://issues.apache.org/jira/browse/SPARK-12199) and #10193 where a broken link has been left as is.

Author: BenFradet <benjamin.fradet@gmail.com>

Closes #10282 from BenFradet/SPARK-12199.

(cherry picked from commit e25f1fe)
Signed-off-by: Sean Owen <sowen@cloudera.com>
cc yhuai felixcheung shaneknapp

Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>

Closes #10300 from shivaram/comment-lintr-disable.

(cherry picked from commit fb3778d)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
cc tdas zsxwing, please review. Thanks a lot.

Author: jerryshao <sshao@hortonworks.com>

Closes #10305 from jerryshao/fix-typo-state-impl.

(cherry picked from commit bc1ff9f)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
Author: Michael Armbrust <michael@databricks.com>

Closes #10317 from marmbrus/versions.
…ling setConf

This is a continuation of SPARK-12056, where the change is applied to SqlNewHadoopRDD.scala.

andrewor14
FYI

Author: tedyu <yuzhihong@gmail.com>

Closes #10164 from tedyu/master.

(cherry picked from commit f725b2e)
Signed-off-by: Andrew Or <andrew@databricks.com>
…sos cluster mode.

Adding more documentation about submitting jobs with mesos cluster mode.

Author: Timothy Chen <tnachen@gmail.com>

Closes #10086 from tnachen/mesos_supervise_docs.

(cherry picked from commit c2de99a)
Signed-off-by: Andrew Or <andrew@databricks.com>
ExternalBlockStore.scala

Author: Naveen <naveenminchu@gmail.com>

Closes #10313 from naveenminchu/branch-fix-SPARK-9886.

(cherry picked from commit 8a215d2)
Signed-off-by: Andrew Or <andrew@databricks.com>
… completes

This change builds the event history of completed apps asynchronously, so the RPC thread will not be blocked and new workers can register/remove even if the event log history is very large and takes a long time to rebuild.

Author: Bryan Cutler <bjcutler@us.ibm.com>

Closes #10284 from BryanCutler/async-MasterUI-SPARK-12062.

(cherry picked from commit c5b6b39)
Signed-off-by: Andrew Or <andrew@databricks.com>
…lity

Author: Wenchen Fan <cloud0fan@outlook.com>

Closes #8645 from cloud-fan/test.

(cherry picked from commit a89e8b6)
Signed-off-by: Andrew Or <andrew@databricks.com>
This fixes the sidebar, using a pure CSS mechanism to hide it when the browser's viewport is too narrow.
Credit goes to the original author Titan-C (mentioned in the NOTICE).

Note that I am not a CSS expert, so I can only address comments to some extent.

Default view:
<img width="936" alt="screen shot 2015-12-14 at 12 46 39 pm" src="https://cloud.githubusercontent.com/assets/7594753/11793597/6d1d6eda-a261-11e5-836b-6eb2054e9054.png">

When collapsed manually by the user:
<img width="1004" alt="screen shot 2015-12-14 at 12 54 02 pm" src="https://cloud.githubusercontent.com/assets/7594753/11793669/c991989e-a261-11e5-8bf6-aecf3bdb6319.png">

Disappears when column is too narrow:
<img width="697" alt="screen shot 2015-12-14 at 12 47 22 pm" src="https://cloud.githubusercontent.com/assets/7594753/11793607/7754dbcc-a261-11e5-8b15-e0d074b0e47c.png">

Can still be opened by the user if necessary:
<img width="651" alt="screen shot 2015-12-14 at 12 51 15 pm" src="https://cloud.githubusercontent.com/assets/7594753/11793612/7bf82968-a261-11e5-9cc3-e827a7a6b2b0.png">

Author: Timothy Hunter <timhunter@databricks.com>

Closes #10297 from thunterdb/12324.

(cherry picked from commit a6325fc)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
Add ```write.json``` and ```write.parquet``` for SparkR, and deprecate ```saveAsParquetFile```.

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #10281 from yanboliang/spark-12310.

(cherry picked from commit 22f6cd8)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
cc jkbradley

Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>

Closes #10244 from yu-iskw/SPARK-12215.

(cherry picked from commit 26d70bd)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
shivaram  Please help review.

Author: Jeff Zhang <zjffdu@apache.org>

Closes #10290 from zjffdu/SPARK-12318.

(cherry picked from commit 2eb5af5)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
…h Mesos cluster mode.

SPARK_HOME is now causing problems with Mesos cluster mode, since the spark-submit script was recently changed to take precedence when running spark-class scripts and to look in SPARK_HOME if it's defined.

We should skip passing SPARK_HOME from the Spark client in cluster mode with Mesos, since Mesos shouldn't use this configuration but should use spark.executor.home instead.

Author: Timothy Chen <tnachen@gmail.com>

Closes #10332 from tnachen/scheduler_ui.

(cherry picked from commit ad8c1f0)
Signed-off-by: Andrew Or <andrew@databricks.com>
… bisecting k-means

This PR includes only an example code in order to finish it quickly.
I'll send another PR for the docs soon.

Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>

Closes #9952 from yu-iskw/SPARK-6518.

(cherry picked from commit 7b6dc29)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
davies and others added 29 commits March 3, 2016 10:09
In order to tell the OutputStream whether the task has failed, we should call the failure callbacks BEFORE calling writer.close().
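A minimal sketch of the ordering this describes, with a generic writer and an `onTaskFailure` callback standing in for Spark's internals (names are illustrative):

```
def writeAndClose(writer: java.io.Writer, rows: Iterator[String],
                  onTaskFailure: Throwable => Unit): Unit = {
  try {
    rows.foreach(r => writer.write(r))
  } catch {
    case t: Throwable =>
      onTaskFailure(t) // fires BEFORE close() in the finally block below
      throw t
  } finally {
    writer.close()
  }
}
```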

Added new unit tests.

Author: Davies Liu <davies@databricks.com>

Closes #11450 from davies/callback.
Fix race conditions when cleaning up files.

Existing tests.

Author: Davies Liu <davies@databricks.com>

Closes #11507 from davies/flaky.

(cherry picked from commit d062587)
Signed-off-by: Davies Liu <davies.liu@gmail.com>

Conflicts:
	sql/hive/src/test/scala/org/apache/spark/sql/sources/CommitFailureTestRelationSuite.scala
…cled

## What changes were proposed in this pull request?

`sendRpcSync` should copy the response content because the underlying buffer will be recycled and reused.
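A hedged sketch of the defensive copy this describes, using plain NIO rather than Spark's actual transport types:

```
import java.nio.ByteBuffer

def copyResponse(response: ByteBuffer): ByteBuffer = {
  // The transport may recycle `response` once this callback returns,
  // so hand the caller a private copy instead.
  val copy = ByteBuffer.allocate(response.remaining())
  copy.put(response)
  copy.flip()
  copy
}
```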

## How was this patch tested?

Jenkins unit tests.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #11499 from zsxwing/SPARK-13652.

(cherry picked from commit 465c665)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
cc jkbradley

Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>

Closes #9535 from yu-iskw/SPARK-11515.

(cherry picked from commit 574571c)
Signed-off-by: Sean Owen <sowen@cloudera.com>
… string datatypes to Oracle VARCHAR datatype mapping

A test suite was added for the bug fix (SPARK-12941), covering the mapping of StringType to the corresponding Oracle VARCHAR datatype.

Manual tests were done.

Author: thomastechs <thomas.sebastian@tcs.com>
Author: THOMAS SEBASTIAN <thomas.sebastian@tcs.com>

Closes #11489 from thomastechs/thomastechs-12941-master-new.

(cherry picked from commit f6ac7c3)
Signed-off-by: Yin Huai <yhuai@databricks.com>

Conflicts:
	sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
…DataFrames

## What changes were proposed in this pull request?

Change line 113 of QuantileDiscretizer.scala to

`val requiredSamples = math.max(numBins * numBins, 10000.0)`

so that `requiredSamples` is a `Double`. This will fix the division in line 114, which currently results in zero if `requiredSamples < dataset.count`.
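A minimal standalone sketch of the arithmetic, with illustrative values:

```
val numBins = 10
val datasetCount = 1000000L // stand-in for dataset.count

// Before: Int-typed requiredSamples makes this integer division,
// which truncates to 0 whenever requiredSamples < dataset.count.
val intSamples = math.max(numBins * numBins, 10000)        // Int
val brokenFraction = intSamples / datasetCount             // 0

// After: a Double forces floating-point division.
val requiredSamples = math.max(numBins * numBins, 10000.0) // Double
val fraction = requiredSamples / datasetCount              // 0.01
```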

## How was this patch tested?
Manual tests. I was having problems using QuantileDiscretizer with a dataset, and after making this change QuantileDiscretizer behaves as expected.

Author: Oliver Pierson <ocp@gatech.edu>
Author: Oliver Pierson <opierson@umd.edu>

Closes #11319 from oliverpierson/SPARK-13444.
…ionSerializer.loads

## What changes were proposed in this pull request?

Set the function's module name to `__main__` if it's missing in `TransformFunctionSerializer.loads`.

## How was this patch tested?

Manually tested in the shell.

Before this patch:
```
>>> from pyspark.streaming import StreamingContext
>>> from pyspark.streaming.util import TransformFunction
>>> ssc = StreamingContext(sc, 1)
>>> func = TransformFunction(sc, lambda x: x, sc.serializer)
>>> func.rdd_wrapper(lambda x: x)
TransformFunction(<function <lambda> at 0x106ac8b18>)
>>> bytes = bytearray(ssc._transformerSerializer.serializer.dumps((func.func, func.rdd_wrap_func, func.deserializers)))
>>> func2 = ssc._transformerSerializer.loads(bytes)
>>> print(func2.func.__module__)
None
>>> print(func2.rdd_wrap_func.__module__)
None
>>>
```
After this patch:
```
>>> from pyspark.streaming import StreamingContext
>>> from pyspark.streaming.util import TransformFunction
>>> ssc = StreamingContext(sc, 1)
>>> func = TransformFunction(sc, lambda x: x, sc.serializer)
>>> func.rdd_wrapper(lambda x: x)
TransformFunction(<function <lambda> at 0x108bf1b90>)
>>> bytes = bytearray(ssc._transformerSerializer.serializer.dumps((func.func, func.rdd_wrap_func, func.deserializers)))
>>> func2 = ssc._transformerSerializer.loads(bytes)
>>> print(func2.func.__module__)
__main__
>>> print(func2.rdd_wrap_func.__module__)
__main__
>>>
```

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #11535 from zsxwing/loads-module.

(cherry picked from commit ee913e6)
Signed-off-by: Davies Liu <davies.liu@gmail.com>
…tly refers to StatefulNetworkWordCount

## What changes were proposed in this pull request?
The reference to StatefulNetworkWordCount.scala in the updateStatesByKey documentation should be removed until there is an example for updateStatesByKey.

## How was this patch tested?
Have tested the new documentation with a jekyll build.

Author: rmishra <rmishra@pivotal.io>

Closes #11545 from rishitesh/SPARK-13705.

(cherry picked from commit 4b13896)
Signed-off-by: Sean Owen <sowen@cloudera.com>
…-hive and spark-hiveserver (branch 1.6)

## What changes were proposed in this pull request?

This is just the patch of #11449 cherry picked to branch-1.6; the enforcer and dep/ diffs are cut

Modifies the dependency declarations of the all the hive artifacts, to explicitly exclude the groovy-all JAR.

This stops the groovy classes *and everything else in that uber-JAR* from getting into spark-assembly JAR.

## How was this patch tested?

1. Pre-patch build was made: `mvn clean install -Pyarn,hive,hive-thriftserver`
2. spark-assembly expanded, observed to have the org.codehaus.groovy packages and JARs
3. A maven dependency tree was created: `mvn dependency:tree -Pyarn,hive,hive-thriftserver -Dverbose > target/dependencies.txt`
4. This text file examined to confirm that groovy was being imported as a dependency of `org.spark-project.hive`
5. Patch applied
6. Repeated step 1: clean build of project with `-Pyarn,hive,hive-thriftserver` set
7. Examined created spark-assembly, verified no org.codehaus packages
8. Verified that the maven dependency tree no longer references groovy

The `master` version updates the dependency files and an enforcer rule to keep groovy out; this patch strips it out.

Author: Steve Loughran <stevel@hortonworks.com>

Closes #11473 from steveloughran/fixes/SPARK-13599-groovy+branch-1.6.
The description of "spark.memory.offHeap.size" in the current document does not clearly state that memory is counted with bytes....

This PR contains a small fix for this tiny issue
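For example, a hedged snippet setting the value in bytes (here 2 GB):

```
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", (2L * 1024 * 1024 * 1024).toString) // bytes, i.e. 2 GB
```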

document fix

Author: CodingCat <zhunansjtu@gmail.com>

Closes #11561 from CodingCat/master.

(cherry picked from commit a3ec50a)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
## What changes were proposed in this pull request?

Adding the hive-cli classes to the classloader

## How was this patch tested?

The Hive VersionsSuite tests were run.

This is my original work and I license the work to the project under the project's open source license.

Author: Tim Preece <tim.preece.in.oz@gmail.com>

Closes #11495 from preecet/master.

(cherry picked from commit 46f25c2)
Signed-off-by: Michael Armbrust <michael@databricks.com>
…ient as it's in driver

## What changes were proposed in this pull request?

AppClient runs on the driver side. It should not call `Utils.tryOrExit`, as that sends the exception to SparkUncaughtExceptionHandler and calls `System.exit`. This PR just removes `Utils.tryOrExit`.

## How was this patch tested?

manual tests.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #11566 from zsxwing/SPARK-13711.
When generating Graphviz DOT files in the SQL query visualization we need to escape double-quotes inside node labels. This is a followup to #11309, which fixed a similar graph in Spark Core's DAG visualization.
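A minimal sketch of the kind of escaping involved (`escapeDotLabel` is an illustrative helper, not the exact function in the patch):

```
// Escape embedded double-quotes so they survive inside DOT's quoted labels:
// digraph { 0 [label="filter: a = \"x\""]; }
def escapeDotLabel(label: String): String =
  label.replace("\"", "\\\"")

val raw = "filter: a = \"x\""
val node = s"""0 [label="${escapeDotLabel(raw)}"];"""
```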

Author: Josh Rosen <joshrosen@databricks.com>

Closes #11587 from JoshRosen/graphviz-escaping.

(cherry picked from commit 81f54ac)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
## What changes were proposed in this pull request?

If a job is being scheduled in one thread which has a dependency on an
RDD currently executing a shuffle in another thread, Spark would throw a
NullPointerException. This patch synchronizes access to `mapStatuses` and
skips null status entries (which are in-progress shuffle tasks).
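A hedged sketch of the pattern described; the real field lives in Spark's map-output tracking code, so the names here are illustrative:

```
import scala.collection.mutable

// Illustrative stand-in for the shared shuffle-status map.
val mapStatuses = mutable.Map[Int, Array[AnyRef]]()

def finishedStatuses(shuffleId: Int): Seq[AnyRef] =
  mapStatuses.synchronized {
    // null entries correspond to shuffle tasks still in flight; skip them
    mapStatuses.get(shuffleId).toSeq.flatten.filter(_ != null)
  }
```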

## How was this patch tested?

Our client code unit test suite, which was reliably reproducing the race
condition with 10 threads, shows that this fixes it. I have not found a minimal
test case to add to Spark, but I will attempt to do so if desired.

The same test case was tripping up on SPARK-4454, which was fixed by
making other DAGScheduler code thread-safe.

shivaram srowen

Author: Andy Sloane <asloane@tetrationanalytics.com>

Closes #11505 from a1k0n/SPARK-13631.

(cherry picked from commit cbff280)
Signed-off-by: Sean Owen <sowen@cloudera.com>
## What changes were proposed in this pull request?

If there are many branches in a CaseWhen expression, the generated code could go above the 64K limit for a single Java method and fail to compile. This PR changes it to fall back to interpreted mode if there are more than 20 branches.
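A minimal sketch of the guard, assuming the 20-branch threshold from the description (`shouldFallBackToInterpreted` is an illustrative name):

```
// Above this many branches, the generated Java method for a CaseWhen
// could exceed the JVM's 64K bytecode limit, so evaluate interpreted.
val MaxCodegenBranches = 20

def shouldFallBackToInterpreted(branchCount: Int): Boolean =
  branchCount > MaxCodegenBranches
```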

## How was this patch tested?

Add tests

Author: Davies Liu <davies@databricks.com>

Closes #11606 from davies/fix_when_16.
## What changes were proposed in this pull request?

A very minor change for using `BigDecimal.decimal(f: Float)` instead of `BigDecimal(f: float)`. The latter is deprecated and can result in inconsistencies due to an implicit conversion to `Double`.
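A small illustration of the difference (expected values shown in comments):

```
// Goes through Float -> Double widening and picks up the binary
// representation error of 0.1f:
val viaDouble  = BigDecimal(0.1f)         // 0.10000000149011612
// Uses the Float's decimal rendering instead:
val viaDecimal = BigDecimal.decimal(0.1f) // 0.1
```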

## How was this patch tested?

N/A

cc yhuai

Author: Sameer Agarwal <sameer@databricks.com>

Closes #11597 from sameeragarwal/bigdecimal.

(cherry picked from commit 926e9c4)
Signed-off-by: Yin Huai <yhuai@databricks.com>
Update snappy to 1.1.2.1 to pull in a single fix -- the OOM fix we already worked around.
Supersedes #11524

Jenkins tests.

Author: Sean Owen <sowen@cloudera.com>

Closes #11631 from srowen/SPARK-13663.

(cherry picked from commit 927e22e)
Signed-off-by: Sean Owen <sowen@cloudera.com>
## What changes were proposed in this pull request?

Today, Spark 1.6.1 and the updated docs were released. Unfortunately, there is obsolete Hive version information in the docs: [Building Spark](http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support). This PR fixes the following two lines.
```
-By default Spark will build with Hive 0.13.1 bindings.
+By default Spark will build with Hive 1.2.1 bindings.
-# Apache Hadoop 2.4.X with Hive 13 support
+# Apache Hadoop 2.4.X with Hive 1.2.1 support
```
The `sql/README.md` file also describes this.

## How was this patch tested?

Manual.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #11639 from dongjoon-hyun/fix_doc_hive_version.

(cherry picked from commit 88fa866)
Signed-off-by: Reynold Xin <rxin@databricks.com>
Author: Oscar D. Lara Yejas <odlaraye@oscars-mbp.attlocal.net>
Author: Oscar D. Lara Yejas <odlaraye@oscars-mbp.usca.ibm.com>

Closes #11220 from olarayej/SPARK-13312-3.

(cherry picked from commit 416e71a)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
…ions

## What changes were proposed in this pull request?
Currently, when a java.net.BindException is thrown, it displays the following message:

java.net.BindException: Address already in use: Service '$serviceName' failed after 16 retries!

This change adds port configuration suggestions to the BindException; for example, for the UI it now displays:

java.net.BindException: Address already in use: Service 'SparkUI' failed after 16 retries! Consider explicitly setting the appropriate port for 'SparkUI' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
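A hedged sketch of augmenting the exception with such a hint (`serviceName` and `maxRetries` are illustrative parameters, not Spark's exact fields):

```
import java.net.BindException

def withPortHint(e: BindException, serviceName: String, maxRetries: Int): BindException = {
  val hint = s"Consider explicitly setting the appropriate port for '$serviceName' " +
    "to an available port or increasing spark.port.maxRetries."
  val wrapped = new BindException(
    s"${e.getMessage}: Service '$serviceName' failed after $maxRetries retries! $hint")
  wrapped.setStackTrace(e.getStackTrace) // keep the original failure location
  wrapped
}
```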

## How was this patch tested?
Manual tests

Author: Bjorn Jonsson <bjornjon@gmail.com>

Closes #11644 from bjornjon/master.

(cherry picked from commit 515e4af)
Signed-off-by: Sean Owen <sowen@cloudera.com>
## What changes were proposed in this pull request?
fix typo in DataSourceRegister

## How was this patch tested?

found when going through latest code

Author: Jacky Li <jacky.likun@huawei.com>

Closes #11686 from jackylk/patch-12.

(cherry picked from commit f3daa09)
Signed-off-by: Reynold Xin <rxin@databricks.com>
## What changes were proposed in this pull request?

When studying Spark, many users just copy examples from the documentation and paste them into their terminals; because of the missing backslashes, they run into shell errors.

The added backslashes avoid that problem for Spark users with that behavior.

## How was this patch tested?

I generated the documentation locally using jekyll and checked the generated pages

Author: Daniel Santana <mestresan@gmail.com>

Closes #11699 from danielsan/master.

(cherry picked from commit 9f13f0f)
Signed-off-by: Andrew Or <andrew@databricks.com>
## What changes were proposed in this pull request?

JavaUtils.java has methods to convert time and byte strings for internal use; this change renames a variable used in byteStringAs() from timeError to byteError.

Author: Bjorn Jonsson <bjornjon@gmail.com>

Closes #11695 from bjornjon/master.

(cherry picked from commit e06493c)
Signed-off-by: Andrew Or <andrew@databricks.com>
…CCESS files.

If a _SUCCESS file appears in the inner partitioning dir, partition discovery will treat it as a data file. Then, partition discovery will fail because it finds that the dir structure is not valid. We should ignore those `_SUCCESS` files.

In future, it would be better to ignore all files/dirs starting with `_` or `.`. This PR does not make that change; I am keeping this change simple, so we can consider getting it into branch 1.6.

To ignore all files/dirs starting with `_` or `.`, the main change would be to let ParquetRelation have another way to get metadata files. Right now, it relies on FileStatusCache's cachedLeafStatuses, which returns file statuses of both metadata files (e.g. metadata files used by parquet) and data files, and that requires more changes.
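A minimal sketch of the filter, assuming a Hadoop `Path` (`isDataPath` is an illustrative helper name):

```
import org.apache.hadoop.fs.Path

def isDataPath(path: Path): Boolean = {
  val name = path.getName
  // This PR only special-cases _SUCCESS; the broader future rule would be
  // !name.startsWith("_") && !name.startsWith(".")
  name != "_SUCCESS"
}
```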

https://issues.apache.org/jira/browse/SPARK-13207

Author: Yin Huai <yhuai@databricks.com>

Closes #11697 from yhuai/SPARK13207_branch16.
## What changes were proposed in this pull request?

This patch contains the functionality to balance the load of cluster-mode drivers among workers.

This patch restores the changes of #1106, which were erased due to the merging of #731.

## How was this patch tested?

test with existing test cases

Author: CodingCat <zhunansjtu@gmail.com>

Closes #11702 from CodingCat/SPARK-13803.

(cherry picked from commit bd5365b)
Signed-off-by: Sean Owen <sowen@cloudera.com>
… next locality level

JIRA issue: https://issues.apache.org/jira/browse/SPARK-13901
In the getAllowedLocalityLevel method of TaskSetManager, we get wrong logDebug information when jumping to the next locality level, so we should fix it.

Author: trueyao <501663994@qq.com>

Closes #11719 from trueyao/logDebug-localityWait.

(cherry picked from commit ea9ca6f)
Signed-off-by: Sean Owen <sowen@cloudera.com>
## What changes were proposed in this pull request?

This change fixes the executor OOM which was recently introduced in PR #11095.

## How was this patch tested?
Tested by running a spark job on the cluster.

… Sorter

Author: Sital Kedia <skedia@fb.com>

Closes #11794 from sitalkedia/SPARK-13958.

(cherry picked from commit 2e0c528)
Signed-off-by: Davies Liu <davies.liu@gmail.com>
@rekhajoshm rekhajoshm merged commit 022e06d into rekhajoshm:branch-1.6 Mar 18, 2016