Modify parsing of ff headers to avoid numpy.fromfile. #3791
Conversation
@owena11 Awesome, thanks for submitting this PR 🤟
Just a couple of minor review suggestions and outstanding questions to address, thanks.
```python
_buffer = file_like.read(count * dtype.itemsize)

# Let numpy do the heavy lifting once we've sorted the file reading.
array = np.frombuffer(_buffer, dtype=dtype, count=-1)
```
@owena11 You're effectively reading the data twice here, right? Once to read the bytes from file, and once to parse the bytes into a numpy array using `frombuffer`.
I don't have access to a laptop at the moment, but I'm assuming that a file pointer to a stream doesn't support the buffer protocol? I guess I'm just wondering whether there is a way to do this by only streaming through the data once?
Perhaps that's not possible... I don't immediately know 🤔
Also, don't we need to care about byte order here? i.e., endianness.
Not sure if I fully understand the first question here, but as an attempt at an answer anyway...
We probably don't do two passes over the data with this implementation. It's a bit of an educated guess (I can go and look up the implementation details of numpy if needed), but `np.frombuffer` has all of the information needed to create the header/object section of an `np.ndarray`, so the implementation of `frombuffer` can probably be as simple as creating the array object and pointing its internal data reference at the `_buffer` object we supply; since `_buffer` supports the buffer protocol, the data can be shared.
This guess is supported by the flags on the array created:
```
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : False
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False
```
The array we create isn't writeable because the `bytes` we passed are immutable. Numpy can then handle the copying if we ever need to modify the data (I don't think we ever do modify the header data in this module).
Happily, I can answer more confidently for byte order! Byte order will be encoded within the `np.dtype` object, defaulting to the machine default. Throughout the calls in this module, the dtype description passed specifies endianness, which will be preserved when we convert to the `np.dtype` object.
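To make that concrete, here's a minimal standalone sketch (assuming only numpy; not code from this PR) showing both behaviours: `frombuffer` producing a read-only view that shares the `bytes` buffer, and endianness travelling with the dtype string:

```python
import numpy as np

# Big-endian 32-bit ints, regardless of the machine's native byte order.
raw = np.arange(4, dtype=">i4").tobytes()

arr = np.frombuffer(raw, dtype=">i4")
print(arr)                     # [0 1 2 3] -- byte order taken from the dtype
print(arr.flags["OWNDATA"])    # False -- the array is a view onto `raw`
print(arr.flags["WRITEABLE"])  # False -- `bytes` is immutable

# Any modification needs an explicit copy, which numpy leaves to the caller.
writeable = arr.copy()
writeable[0] = 42
```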
Ah, after thinking about this more, the arrays generated definitely aren't being modified prior to a copy; otherwise numpy would throw a `ValueError`. However, there is no performance hit in my timings for using a mutable `bytearray` rather than a simple `read`, so I've pushed that up as a change.
It would probably prevent annoying gremlins for anyone touching this code in the future, with more expected behaviour.
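For illustration, a small sketch (plain numpy, not iris code) of the behavioural difference that motivates the change:

```python
import numpy as np

# frombuffer over an immutable `bytes` object yields a read-only array...
ro = np.frombuffer(b"\x00" * 8, dtype="<i8")
print(ro.flags["WRITEABLE"])   # False -- writing to it would raise ValueError

# ...whereas frombuffer over a mutable bytearray yields a writeable one.
buf = bytearray(np.arange(3, dtype="<f8").tobytes())
rw = np.frombuffer(buf, dtype="<f8")
print(rw.flags["WRITEABLE"])   # True
rw[0] = 99.0                   # no ValueError; writes through to `buf`
```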
@owena11 Awesome. Okay, let's roll with this... Looks good to me 👍
@owena11 Nice one, thanks for taking the time to fix this, much appreciated 👍
@bjlittle Thanks for the review 👍
Recently noticed that fieldsfile loading was causing a bottleneck in our iris usage under Python 3; profiling seemed to show that half of our runtime was being spent parsing fieldsfile headers, with `numpy.fromfile` being the main sink of time within the loading path (this seems to be a more widely noted issue for Python 3 versions of numpy: numpy/numpy#13319). Created a rough timing script to show some basic timing of loading a specific field from a file 1000 times:
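The script itself isn't reproduced here, but a rough sketch of what such a timing loop might look like (the file path and STASH constraint below are purely hypothetical):

```python
import timeit

import iris

FF_PATH = "/path/to/some.ff"  # hypothetical fieldsfile path

def load_one_field():
    # Load a single field; fieldsfile header parsing dominates this cost.
    iris.load_cube(FF_PATH, iris.AttributeConstraint(STASH="m01s00i004"))

print(timeit.timeit(load_one_field, number=1000))
```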
With output when running on master vs this branch (the Branch and Master timing output is not reproduced here).
In this scenario we're seeing roughly a 15% improvement with this change to the load pipeline, but it seems highly variable depending on the file system being read from. I'd expect the improvement to be much higher on Lustre-based file systems, where the initial issue was spotted and where we saw an overall reduction of around 45% in total runtime.
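To isolate the `numpy.fromfile` cost from everything else iris does, a self-contained micro-benchmark along these lines can be used (a hypothetical sketch, assuming big-endian 64-bit header words; not the PR's timing script):

```python
import tempfile
import timeit

import numpy as np

dtype = np.dtype(">i8")  # assumed big-endian 64-bit header words
count = 256

# Write a small file of `count` header-sized records to time against.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(np.arange(count, dtype=dtype).tobytes())
    path = f.name

def via_fromfile():
    with open(path, "rb") as fh:
        np.fromfile(fh, dtype=dtype, count=count)

def via_frombuffer():
    with open(path, "rb") as fh:
        np.frombuffer(fh.read(count * dtype.itemsize), dtype=dtype)

print("fromfile:  ", timeit.timeit(via_fromfile, number=10_000))
print("frombuffer:", timeit.timeit(via_frombuffer, number=10_000))
```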