
Caching GenomicRDD in pyspark #1883

akmorrow13 opened this Issue Jan 23, 2018 · 1 comment


akmorrow13 commented Jan 23, 2018

Can we cache a GenomicRDD in python?


akmorrow13 (Contributor, Author) commented Jan 24, 2018

I tried:

reads = ac.loadAlignments(readsPath)
x = reads.toDF().rdd.cache() 
cachedReads = reads._replaceRdd(x)

which fails with:

ValueError: Some of types cannot be determined by the first 100 rows, please try again with sampling

@fnothaft fnothaft closed this in 6051321 Feb 14, 2018

@heuermh heuermh added this to the 0.24.0 milestone Feb 14, 2018

@heuermh heuermh added this to Completed in Release 0.24.0 Feb 14, 2018
