
Merge upstream #22968

Closed
wants to merge 1,423 commits into from
Conversation

justinuang

What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

Please review http://spark.apache.org/contributing.html before opening a pull request.

dansanduleac and others added 30 commits March 23, 2018 19:41
…ce it needs to be common between different jobs
…emote dependencies

Removes the init-container previously used for downloading remote dependencies. Built on vanzin's work refactoring driver/executor configuration, as elaborated in [this](https://issues.apache.org/jira/browse/SPARK-22839) ticket.

This patch was tested with unit and integration tests.

Author: Ilan Filonenko <if56@cornell.edu>

Closes apache#20669 from ifilonenko/remove-init-container.
Rebase to upstream's version of Kubernetes support.
mccheah and others added 27 commits October 17, 2018 12:59
Move Spark docker image generator publish before Spark publish
Better would be to avoid putting any non-ignored generated output in the repository, but this is a short term fix.
Ensure the bintray upload happens before the repository is no longer clean.
…urce V2 write path

## What changes were proposed in this pull request?

This PR proposes to avoid creating a read support and reading the schema on the write path when saving in modes other than Append.

apache@5fef6e3 inadvertently created a read support on the write path, which ended up reading the schema from that read support during writes.

This breaks `spark.range(1).format("source").write.save("non-existent-path")` case since there's no way to read the schema from "non-existent-path".

See also apache#22009 (comment)
See also apache#22697
See also http://apache-spark-developers-list.1001551.n3.nabble.com/Possible-bug-in-DatasourceV2-td25343.html
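The gist of the fix described above can be illustrated with a minimal, self-contained Scala sketch. Note this is not Spark's actual API: the `SaveMode` variants mirror Spark's save modes, but `needsReadSchema` is a hypothetical helper introduced here only to show the decision the write path should make.

```scala
// Hypothetical sketch of the SPARK-25700 idea: only consult the source's
// read support (and hence its schema) when the save mode is Append.
// Before the fix, the write path read the schema in every mode, which
// fails when the target path does not exist yet.
object SaveModeCheck {
  sealed trait SaveMode
  case object Append extends SaveMode
  case object Overwrite extends SaveMode
  case object ErrorIfExists extends SaveMode
  case object Ignore extends SaveMode

  def needsReadSchema(mode: SaveMode): Boolean = mode match {
    case Append => true   // appending must match the existing schema
    case _      => false  // other modes can write without reading first
  }
}
```

Under this sketch, `spark.range(1).format("source").write.save("non-existent-path")` (default mode ErrorIfExists) would never attempt to read a schema from the non-existent path.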

## How was this patch tested?

Unit tests and manual tests.

Closes apache#22688 from HyukjinKwon/append-revert-2.

Authored-by: hyukjinkwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
[SPARK-25700][SQL] Creates ReadSupport in only Append Mode in Data So…
Because we use the mainline version now (and have been for a while), not the fork's.
@justinuang justinuang closed this Nov 7, 2018
@justinuang justinuang deleted the juang/merge-easy-upstream branch November 7, 2018 23:19
@justinuang justinuang restored the juang/merge-easy-upstream branch November 7, 2018 23:21