
Construct iterators directly if possible to allow spilling to disk #18529

@kennknowles

Description


When you construct a collection first and then convert it to an iterator, you force Spark to evaluate the entire input partition before it can emit the first output element. This defeats some of the spilling to disk that Spark can otherwise do. Instead, chain operations on iterators directly.

This is only possible in the Java API for Spark 2 and above (and that's my fault, from my earlier work on the Spark project).
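To illustrate the difference, here is a minimal self-contained sketch (not Beam or Spark code; the class and method names are hypothetical) contrasting the two styles. The eager version builds a full collection before returning, so the caller cannot see the first element until the whole input has been processed; the lazy version wraps the input iterator, so each output element is computed on demand and the framework is free to spill what it has already consumed.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class IteratorChaining {

    // Eager style: materializes the whole input before the first
    // output element is available. This is the pattern the issue
    // recommends avoiding.
    static <T, R> Iterator<R> mapEager(Iterator<T> in, Function<T, R> fn) {
        List<R> out = new ArrayList<>();
        while (in.hasNext()) {
            out.add(fn.apply(in.next()));
        }
        return out.iterator();
    }

    // Lazy style: chains directly on the input iterator, so each
    // output element is computed only when next() is called.
    static <T, R> Iterator<R> mapLazy(Iterator<T> in, Function<T, R> fn) {
        return new Iterator<R>() {
            @Override
            public boolean hasNext() {
                return in.hasNext();
            }

            @Override
            public R next() {
                return fn.apply(in.next());
            }
        };
    }

    public static void main(String[] args) {
        Iterator<Integer> lazy =
            mapLazy(Arrays.asList(1, 2, 3).iterator(), x -> x * 10);
        // The first element is available without touching the rest
        // of the input.
        System.out.println(lazy.next()); // prints 10
    }
}
```

In a Spark `mapPartitions`-style call, returning the lazy iterator means the partition is streamed element by element rather than buffered in memory, which is what preserves Spark's ability to spill to disk.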

Imported from Jira BEAM-3290. Original Jira may contain additional context.
Reported by: holden.
