Spark first time longer #28
To clarify @MichaelChirico's concerns: the first run on Spark does not include the start-up time of the cluster (in this case a single-node cluster). The cluster is already started and the data have been read into it and cached in memory. I already had a short discussion about this with @st-pasha, and the problem is not trivial to resolve. The non-trivial parts are:
For now I don't see a good enough reason, or a fair enough strategy, to include warming up solutions for the "groupby" task.
The dataset is being loaded from file, I think. Could it be that Spark is very fast at loading the file but isn't materializing the data, so that when the first group-by comes along, that's when it actually does the load from file? (Adding load times to the report was on the todo list regardless.) If lazy data ingest doesn't explain it, could an issue be raised under the Spark SO tag or on a code-review site to see if they know?
@mattdowle I'm not sure whether this is what's going on, but yes, operations are generally lazy in Spark. This code will be almost instant:
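The original snippet isn't preserved here; the following is a minimal sketch of the same idea, assuming a sparklyr-based setup with placeholder file and table names (not the benchmark's actual code). With `memory = FALSE`, `spark_read_csv()` only registers the source and builds a plan, so it returns almost immediately:

```r
# Minimal sketch: lazy read, nothing is scanned or materialized yet.
library(sparklyr)

sc <- spark_connect(master = "local")
x  <- spark_read_csv(sc, name = "x", path = "data.csv", memory = FALSE)  # near-instant
```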
Even adding some filtering & other basic operations will do nothing. Lazy evaluation can be forced by doing something inexpensive like:
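Again, the exact snippet isn't preserved; as an illustration, continuing the sketch above (`x` is the lazily-read Spark table), a cheap action such as counting rows forces Spark to actually scan the file:

```r
# Forcing evaluation with an inexpensive action.
sdf_nrow(x)                      # triggers a Spark job and materializes the read
# equivalently, via dplyr:
# x %>% dplyr::tally() %>% dplyr::collect()
```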
Open to debate whether something like
Solved in a26b8af:
* nicely close connection to avoid warnings
* missing DBI prefix and con var
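For context, a sketch of what that kind of cleanup might look like; the object names below are assumptions, not taken from the repository:

```r
# Hypothetical sketch: use an explicit DBI:: prefix so the call resolves
# without attaching DBI, and disconnect at the end so Spark does not warn
# about an unclosed connection when the script exits.
library(sparklyr)

con <- spark_connect(master = "local")
ans <- DBI::dbGetQuery(con, "SELECT 1 AS ok")  # explicit DBI:: prefix
spark_disconnect(con)                          # nicely close the connection
```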
Comment from Michael on Twitter here:
https://twitter.com/michael_chirico/status/1039356873760112641
It seems proportional to the data size, though. What's happening there, and is there a way to isolate it and report it separately, perhaps?
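One possible way to isolate that cost, sketched with sparklyr under the same assumptions as above (file path and column names are placeholders): time the lazy read, the forced materialization, and the first aggregation separately, so the data-load cost could be reported on its own line.

```r
# Hypothetical sketch: separate timings for plan, load, and group-by.
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
t_read <- system.time(
  x <- spark_read_csv(sc, name = "x", path = "data.csv", memory = FALSE)
)                                                  # near-instant: plan only
t_load <- system.time(sdf_nrow(x))                 # forces the actual file scan
t_gby  <- system.time(
  x %>% group_by(id1) %>% summarise(v1 = sum(v1)) %>% collect()
)                                                  # group-by time, data already loaded
spark_disconnect(sc)
```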