
Commit

[SPARK-4916]Update SQL programming guide
luogankun committed Dec 22, 2014
1 parent 6ee6aa7 commit 99b2336
Showing 1 changed file with 1 addition and 2 deletions.
3 changes: 1 addition & 2 deletions docs/sql-programming-guide.md
@@ -835,8 +835,7 @@ Spark SQL can cache tables using an in-memory columnar format by calling `sqlCon
Then Spark SQL will scan only required columns and will automatically tune compression to minimize
memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.

-Note that if you call `schemaRDD.cache()` rather than `sqlContext.cacheTable(...)`, tables will _not_ be cached using
-the in-memory columnar format, and therefore `sqlContext.cacheTable(...)` is strongly recommended for this use case.
+Note that as of the 1.2 release of Spark, calling `schemaRDD.cache()`, like `sqlContext.cacheTable(...)`, will cache tables using the in-memory columnar format.

Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
`SET key=value` commands using SQL.
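The calls described above can be sketched in Scala. This is a minimal, illustrative sketch assuming a Spark 1.2 `SQLContext` and a table already registered under the hypothetical name `people`; the configuration key shown is the standard in-memory columnar compression setting.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("caching-example"))
val sqlContext = new SQLContext(sc)

// Tune in-memory columnar caching via setConf (equivalent to a
// `SET key=value` statement in SQL).
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

// Cache the table in the in-memory columnar format; "people" is an
// assumed, previously registered table name.
sqlContext.cacheTable("people")

// Queries now scan only the required columns of the cached data.
val teenagers = sqlContext.sql(
  "SELECT name FROM people WHERE age >= 13 AND age <= 19")

// Remove the table from memory when it is no longer needed.
sqlContext.uncacheTable("people")
```

Calling `sqlContext.cacheTable(...)` before the first query ensures the columnar cache is populated on the initial scan.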
