
read_csv: Implement reading of number of rows #1656

Merged
merged 1 commit into from Jul 16, 2020

Conversation

tomspur
Contributor

@tomspur tomspur commented Jul 15, 2020

Implement reading of a number of rows (nrows) in read_csv by using Spark's limit.

On the first read this does not seem to help much with reading speed, because inferSchema is True and Spark appears to scan the full data anyway (see e.g. here). Still, it is useful to support nrows with the same semantics as in pandas' read_csv, to make the API more compatible.
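A minimal sketch of the nrows semantics this change targets, using only the standard library: reading at most the first N data rows of a CSV, analogous to what Spark's DataFrame.limit does under the hood (the CSV content here is illustrative):

```python
import csv
import io
from itertools import islice

# Sample CSV with a header row and three data rows (illustrative data).
data = io.StringIO("a,b\n1,2\n3,4\n5,6\n")

# Take only the first 2 data rows, mirroring read_csv(..., nrows=2);
# islice plays the role of Spark's df.limit(2) in this sketch.
reader = csv.DictReader(data)
rows = list(islice(reader, 2))

print(len(rows))        # 2
print(rows[0]["a"])     # '1'
```

Note that in pandas, nrows counts data rows only, not the header line; a Spark-backed implementation needs to preserve that behavior.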

Implement reading of number of rows (nrows) in read_csv.
Collaborator

@ueshin ueshin left a comment

LGTM.

@ueshin
Collaborator

ueshin commented Jul 16, 2020

Thanks! merging.

@ueshin ueshin merged commit 7bdf141 into databricks:master Jul 16, 2020
@itholic
Contributor

itholic commented Jul 16, 2020

Nice work. Thanks! :D

@tomspur
Contributor Author

tomspur commented Jul 16, 2020

Thank you for the quick response and merge! :)
