
[SPARK-21121][SQL] Support changing storage level via the spark.sql.inMemoryColumnarStorage.level variable #18328

Closed (wants to merge 1 commit)

Conversation

@dosoft (Contributor) commented Jun 16, 2017

What changes were proposed in this pull request?

As described in the title: introduce a `spark.sql.inMemoryColumnarStorage.level` configuration variable so that the storage level used when caching a Dataset can be changed, instead of being fixed to the default.
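A minimal sketch of the intended usage, assuming the patch behaves as the title describes (the config key comes from the PR title; the chosen level `MEMORY_ONLY_SER` is just an example name accepted by `StorageLevel.fromString`):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("cache-level-sketch")
  .getOrCreate()

// With this patch, the storage level used by Dataset.cache() would be read
// from spark.sql.inMemoryColumnarStorage.level instead of being hard-coded.
spark.conf.set("spark.sql.inMemoryColumnarStorage.level", "MEMORY_ONLY_SER")

val df = spark.range(1000000L).toDF("id")
df.cache()  // cached using the configured level rather than the built-in default
df.count()  // materializes the cache

spark.stop()
```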

How was this patch tested?

```scala
    query: Dataset[_],
    tableName: Option[String] = None
  ): Unit = writeLock {
    cacheQuery(query, tableName, StorageLevel.fromString(query.sparkSession.sessionState.conf.cacheStorageLevel))
```
Review comment (Member): This line exceeds 100 characters.

```scala
  def cacheQuery(
      query: Dataset[_],
      tableName: Option[String] = None
    ): Unit = writeLock {
```
Review comment (Member): Combine these two lines into one.

@gatorsmile (Member) commented Jun 16, 2017

Could you add a test case in org.apache.spark.sql.CachedTableSuite?

@gatorsmile (Member): ok to test

@SparkQA commented Jun 16, 2017

Test build #78187 has finished for PR 18328 at commit f6dfaab.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Jun 17, 2017

Test build #78201 has started for PR 18328 at commit 26cfaaa.

@AmplabJenkins commented:

Test FAILed. Refer to this link for build results (access rights to the CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78201/

```scala
@@ -106,6 +105,11 @@ class CacheManager extends Logging {
      }
    }

  def cacheQuery(query: Dataset[_], tableName: Option[String] = None): Unit = writeLock {
    cacheQuery(query, tableName, StorageLevel.fromString(
      query.sparkSession.sessionState.conf.cacheStorageLevel))
```
Review comment (Member): By the way, it is a bit odd to round-trip between a string and a StorageLevel here...
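For context on the round-trip concern: the patch stores the level as a string in SQLConf and parses it back with `StorageLevel.fromString`, which accepts only the predefined constant names. A sketch of that parsing step (standalone Spark code, not part of the patch):

```scala
import org.apache.spark.storage.StorageLevel

// fromString maps a constant name back to the corresponding StorageLevel,
// so the config value is validated only when it is parsed.
val level = StorageLevel.fromString("MEMORY_ONLY_SER")
assert(level == StorageLevel.MEMORY_ONLY_SER)

// An unknown name throws IllegalArgumentException, meaning an invalid config
// value would fail at cache time rather than when the config is set:
// StorageLevel.fromString("MEMORY_ONLY_TYPO")  // throws IllegalArgumentException
```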

@dosoft (Contributor, Author): Could you explain your question in a bit more detail?

@HyukjinKwon (Member): @dosoft Is this PR still active? If so, would you mind replying to the review comment above?

Labels: none. Projects: none. 6 participants.