Reading multiple large ROOT files (~1 GB each), one file at a time, into a pandas DataFrame for a groupby operation #925
sbdrchauhan asked this question in Q&A
I know that for opening large files it is best to use uproot.iterate() with the step_size option, but the right step_size is different for each file. I want to run an analysis for each eventID, so I want to build a pandas DataFrame and work on each groupby group per eventID. If I use some fixed step_size, won't it chop the DataFrame in the middle of an event, so that the groupby analysis is incorrect, since the chunks don't end exactly at eventID boundaries? Something like this:
Here the calculateUVW() method should receive the DataFrame for a single eventID, but step_size might cut a chunk in the middle of an eventID and my analysis might be incorrect.
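For illustration, a minimal sketch of one way to keep each eventID group intact when iterating in fixed-size chunks: carry the last, possibly incomplete eventID group of each chunk into the next chunk before doing the groupby. The file, tree, and branch names below are placeholders, and calculateUVW() stands in for the per-event analysis mentioned above; this assumes rows of one event are contiguous in the files.

```python
import uproot
import pandas as pd

def calculateUVW(event_df):
    # placeholder for the per-event analysis
    ...

files = ["file1.root:tree", "file2.root:tree"]   # hypothetical file/tree names
branches = ["eventID", "x", "y", "z"]            # hypothetical branch names

leftover = None
for chunk in uproot.iterate(files, branches, step_size="100 MB", library="pd"):
    # prepend the possibly incomplete last event from the previous chunk
    if leftover is not None:
        chunk = pd.concat([leftover, chunk], ignore_index=True)

    last_id = chunk["eventID"].iloc[-1]
    complete = chunk[chunk["eventID"] != last_id]   # events fully contained in this chunk
    leftover = chunk[chunk["eventID"] == last_id]   # may continue in the next chunk

    for event_id, event_df in complete.groupby("eventID"):
        calculateUVW(event_df)

# after the last chunk, the held-back group is complete
if leftover is not None and len(leftover):
    for event_id, event_df in leftover.groupby("eventID"):
        calculateUVW(event_df)
```

With this pattern the exact step_size no longer matters for correctness, only for memory use, since no event is ever split across two groupby calls.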
Thank you