Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page and project wiki. This README file only contains basic setup instructions.

Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

build/mvn -DskipTests clean package

(You do not need to do this if you downloaded a pre-built package.)

You can build Spark using more than one thread by using the -T option with Maven; see "Parallel builds in Maven 3". More detailed documentation is available from the project site, at "Building Spark". For developing Spark using an IDE, see the Eclipse and IntelliJ guides.
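For example, a parallel build might look like the following (the thread count here is only an illustration; pick a value that suits your machine):

build/mvn -T 4 -DskipTests clean package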

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
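You can chain further transformations on the same RDD in the shell. As an illustrative snippet (not from the original README), the following doubles each element and sums the results, returning 1001000:

scala> sc.parallelize(1 to 1000).map(_ * 2).reduce(_ + _)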

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()
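As a further illustrative example, you can filter before counting; this should return 500:

>>> sc.parallelize(range(1000)).filter(lambda x: x % 2 == 0).count()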

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.
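For instance, to run the Pi example locally with four threads and an explicit argument (the argument, the number of partitions, is illustrative):

MASTER=local[4] ./bin/run-example SparkPi 100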

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./dev/run-tests

Please see the guidance on how to run tests for a module, or individual tests.
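As a sketch of a module-level run using Maven (the module path and flags here are illustrative; consult the testing guidance for the exact invocation):

build/mvn -pl core -am test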

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.
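A hedged sketch of such a build, assuming the hadoop-2.7 profile and a 2.7.x cluster (verify the profile names and Hadoop version against the build documentation):

build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package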

Configuration

Please refer to the Configuration Guide in the online documentation for an overview on how to configure Spark.
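As a minimal illustration (the property names are standard Spark settings, but the values are placeholders), options can be passed on the command line or set in conf/spark-defaults.conf:

./bin/spark-shell --master local[2] --conf spark.executor.memory=2g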