[SPARK-7490] [CORE] [Minor] MapOutputTracker.deserializeMapStatuses: close input streams

GZIPInputStream allocates native memory that is not freed until close() or
when the finalizer runs. It is best to close() these streams explicitly.
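
As a hedged illustration of the point above (a minimal sketch, not Spark code; the readCompressed name and signature are invented for this example): wrapping the read in try/finally guarantees the stream is closed even if deserialization throws, so the GZIP stream's native zlib buffers are released immediately instead of whenever the finalizer happens to run.

import java.io.{ByteArrayInputStream, ObjectInputStream}
import java.util.zip.GZIPInputStream

// Deserialize a gzip-compressed, Java-serialized value and always close the
// stream; closing the ObjectInputStream also closes the wrapped GZIPInputStream,
// which frees its native (zlib) memory without waiting for finalization.
def readCompressed[T](bytes: Array[Byte]): T = {
  val objIn = new ObjectInputStream(new GZIPInputStream(new ByteArrayInputStream(bytes)))
  try {
    objIn.readObject().asInstanceOf[T]
  } finally {
    objIn.close()
  }
}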

stephenh made the same change for serializeMapStatuses in commit b0d884f; this commit applies the same fix to deserializeMapStatuses.

(I ran the unit test suite and it appears to have passed. I did not file a JIRA since this change seems trivial, and the guidelines suggest one is not required for trivial changes.)

Author: Evan Jones <ejones@twitter.com>

Closes apache#5982 from evanj/master and squashes the following commits:

0d76e85 [Evan Jones] [CORE] MapOutputTracker.deserializeMapStatuses: close input streams
Evan Jones authored and jeanlyn committed Jun 12, 2015
1 parent 5e81a43 commit 894fc36
Showing 1 changed file with 5 additions and 1 deletion.
core/src/main/scala/org/apache/spark/MapOutputTracker.scala (5 additions, 1 deletion)
@@ -367,7 +367,11 @@ private[spark] object MapOutputTracker extends Logging {
   // Opposite of serializeMapStatuses.
   def deserializeMapStatuses(bytes: Array[Byte]): Array[MapStatus] = {
     val objIn = new ObjectInputStream(new GZIPInputStream(new ByteArrayInputStream(bytes)))
-    objIn.readObject().asInstanceOf[Array[MapStatus]]
+    Utils.tryWithSafeFinally {
+      objIn.readObject().asInstanceOf[Array[MapStatus]]
+    } {
+      objIn.close()
+    }
   }

   // Convert an array of MapStatuses to locations and sizes for a given reduce ID. If
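
For readers unfamiliar with the helper used in the diff, below is a hedged sketch of the general shape of a tryWithSafeFinally-style combinator, written for illustration rather than taken from Spark's Utils. The idea is that the cleanup block always runs, and if both the body and the cleanup throw, the cleanup failure is attached as a suppressed exception so it does not mask the original error.

// Illustrative sketch only; Spark's Utils.tryWithSafeFinally may differ in detail.
def tryWithSafeFinally[T](block: => T)(finallyBlock: => Unit): T = {
  var cause: Throwable = null
  try {
    block
  } catch {
    case t: Throwable =>
      cause = t
      throw t
  } finally {
    try {
      finallyBlock
    } catch {
      // If the body already failed, do not let a failure in cleanup hide it.
      case t: Throwable if cause != null => cause.addSuppressed(t)
    }
  }
}

Used as in the change above, the readObject call is the body and objIn.close() is the cleanup, so a failed close() cannot shadow a deserialization error.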
