
Modify parsing of ff headers to avoid numpy.fromfile. #3791

Merged 1 commit into SciTools:master from owena11:remove_from_file on Aug 20, 2020

Conversation

@owena11 (Contributor) commented Aug 19, 2020

We recently noticed that fieldsfile loading was a bottleneck in our iris usage under Python 3: profiling showed that roughly half of our runtime was being spent parsing fieldsfile headers, with numpy.fromfile being the main sink of time within the loading path (this seems to be a more widely noted issue for Python 3 versions of numpy, see numpy/numpy#13319).
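For context, the change replaces the direct np.fromfile calls in the header parsing with an explicit read followed by np.frombuffer. A minimal sketch of the two patterns (simplified and illustrative, not the exact _ff.py code):

import numpy as np

def parse_header_old(file_like, dtype_str, count):
    # Old pattern: numpy pulls the values straight from the file handle,
    # which is the slow path under Python 3.
    return np.fromfile(file_like, dtype=np.dtype(dtype_str), count=count)

def parse_header_new(file_like, dtype_str, count):
    # New pattern: do one plain read of the right number of bytes,
    # then let numpy interpret the in-memory buffer.
    dtype = np.dtype(dtype_str)
    buffer = file_like.read(count * dtype.itemsize)
    return np.frombuffer(buffer, dtype=dtype, count=-1)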

I created a rough timing script that loads a specific field from a file 1000 times:

# A quick timing script to show the effect of removing fromfile.
import iris
import time
import os

f_size = os.path.getsize('big_fields_file')
print(f"File 'big_fields_file size': {f_size/1024**3:.2f}GB")

total_time = 0
count = 1000

for i in range(count):
    if i % 100 == 0:
        print(f'Run: {i}')
    s_time = time.time()
    # Load soil temp, just happened to be the data to hand!
    cube = iris.load('big_fields_file', iris.AttributeConstraint(STASH='m01s08i223'))
    r_time = time.time() - s_time
    total_time += r_time

print('==========================================================')
print(f'Total time: {total_time}, Average time: {total_time/count}')
print('==========================================================')

Output when running on master vs. this branch:

Branch

(iris-dev) [aowen@vld588:/data/users/aowen/iris_timings]$ python -m cProfile -o branch.pstat  timing_script.py 
File 'big_fields_file size': 5.04GB
Run: 0
Run: 100
Run: 200
Run: 300
Run: 400
Run: 500
Run: 600
Run: 700
Run: 800
Run: 900
==========================================================
Total time: 443.8917579650879, Average time: 0.4438917579650879
==========================================================

Master

(iris-dev) [aowen@vld588:/data/users/aowen/iris_timings]$ python -m cProfile -o master.pstat  timing_script.py 
File 'big_fields_file size': 5.04GB
Run: 0
Run: 100
Run: 200
Run: 300
Run: 400
Run: 500
Run: 600
Run: 700
Run: 800
Run: 900
==========================================================
Total time: 520.458238363266, Average time: 0.520458238363266
==========================================================
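The .pstat files written by cProfile above can then be compared with the standard-library pstats module, e.g. something like:

import pstats

# Print the top ten entries by cumulative time from each profile,
# to confirm where the loading time is going.
for name in ('master.pstat', 'branch.pstat'):
    print(f'--- {name} ---')
    pstats.Stats(name).sort_stats('cumulative').print_stats(10)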

In this scenario we're seeing roughly a 15% improvement from this change to the load pipeline, but the gain seems highly variable depending on the file system being read from. I'd expect the improvement to be much higher on Lustre-based file systems, where the initial issue was spotted and where we saw an overall reduction of around 45% in total runtime.

@bjlittle (Member) left a comment

@owena11 Awesome, thanks for submitting this PR 🤟

Just a couple of minor review suggestions and outstanding questions to address, thanks.

_buffer = file_like.read(count * dtype.itemsize)

# Let numpy do the heavy lifting once we've sorted the file reading.
array = np.frombuffer(_buffer, dtype=dtype, count=-1)
@bjlittle (Member) commented Aug 19, 2020
@owena11 You're effectively reading the data twice here, right? Once to read the bytes from the file, and once to parse the bytes into a numpy array using frombuffer.

I don't have access to a laptop at the moment, but I'm assuming that a file pointer to a stream doesn't support the buffer protocol? I guess I'm just wondering whether there is a way to do this by streaming through the data only once?

Perhaps that's not possible... I don't immediately know 🤔

Also, don't we need to care about byte order here? i.e., endianness.

@owena11 (Contributor, Author) commented Aug 20, 2020

I'm not sure I fully understand the first question here, but as an attempt at an answer anyway...

We probably don't make two passes over the data with this implementation. This is a bit of a guess (I can go and look up the numpy implementation details if needed), but np.frombuffer has all the information it needs to create the header/object section of an np.array, so its implementation can probably be as simple as creating the array object and pointing its internal data reference at the _buffer object we supply; since _buffer supports the buffer protocol, the data can be shared rather than copied.

This guess is supported by the flags on the created array:

  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : False
  ALIGNED : True
  WRITEBACKIFCOPY : False
  UPDATEIFCOPY : False

The array we create isn't writable because the bytes we passed are immutable. Numpy can then handle the copying if we ever need to modify the data (I don't think we ever do modify the header data in this module).

Happily, I can answer more confidently on byte order! Byte order is encoded within the np.dtype object, defaulting to the machine's native order. Throughout the calls in this module, the dtype description passed in specifies endianness, which is preserved when we convert it to an np.dtype object.
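A quick illustration of both points — buffer sharing and explicit endianness — as a standalone sketch (not code from this PR):

import numpy as np

# Four big-endian 64-bit integers, each encoding the value 1024.
raw = (1024).to_bytes(8, byteorder='big') * 4

# The '>' prefix in the dtype string explicitly requests big-endian,
# and survives conversion to an np.dtype object.
array = np.frombuffer(raw, dtype='>i8')

print(array)                     # [1024 1024 1024 1024]
print(array.dtype.byteorder)     # '>' (on a little-endian machine)
print(array.flags['OWNDATA'])    # False: the array borrows the bytes' memory
print(array.flags['WRITEABLE'])  # False: the underlying bytes are immutable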

@owena11 (Contributor, Author) commented Aug 20, 2020

Ah, after thinking about this more: the generated arrays definitely aren't being modified prior to a copy, otherwise numpy would throw a ValueError. However, my timings show no performance hit from using a mutable bytearray rather than a simple read, so I've pushed that up as a change.

It should prevent annoying gremlins for anyone touching this code in the future, with more expected behavior.
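For context, the difference the bytearray makes (an illustrative sketch, not the exact change):

import io

import numpy as np

f = io.BytesIO((1024).to_bytes(8, byteorder='big') * 4)

# Wrapping the raw read in a mutable bytearray (rather than keeping the
# immutable bytes object) makes the resulting array writeable:
buffer = bytearray(f.read(4 * 8))
array = np.frombuffer(buffer, dtype='>i8', count=-1)

print(array.flags['WRITEABLE'])  # True
array[0] = 2048                  # fine; would raise ValueError over plain bytes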

@bjlittle (Member) commented

@owena11 Awesome. Okay, let's roll with this... Looks good to me 👍

@owena11 force-pushed the remove_from_file branch 2 times, most recently from 027e0b8 to 5427db1, on August 20, 2020 11:06
@bjlittle (Member) commented
@owena11 Nice one, thanks for taking the time to fix this, much appreciated 👍

@bjlittle merged commit 1a2e61c into SciTools:master on Aug 20, 2020
@owena11 (Contributor, Author) commented Aug 20, 2020

@bjlittle Thanks for the review 👍

@owena11 deleted the remove_from_file branch on August 20, 2020 18:28
tkknight added a commit to tkknight/iris that referenced this pull request Aug 29, 2020
* master:
  Support for climatological in Coord.from_coord() and DimCoord.from_regular(). (SciTools#3802)
  clarify unit handling (SciTools#3803)
  Add quality flags to whatsnew (SciTools#3801)
  add args/kwargs to DimCoord.__init__ docstring (SciTools#3681)
  Merge launch_ancils feature branch.
  Modify parsing of ff headers to avoid numpy.fromfile. (SciTools#3791)
bjlittle added a commit to bjlittle/iris that referenced this pull request Oct 1, 2020
stephenworsley pushed a commit that referenced this pull request Oct 1, 2020
tkknight added a commit to tkknight/iris that referenced this pull request Oct 8, 2020
* upstream/master:
  add SciTools#3791 whatsnew entry (SciTools#3897)
  bump whatsnew latest and version to 3.1.dev0 (SciTools#3896)
  hide the further topics toc (SciTools#3894)
  Deprecate iris.util.as_compatible_shape (SciTools#3892)
  whatsnew additions (SciTools#3891)
  linkcheck ignore http://cfconventions.org (SciTools#3889)
  Cube arithmetic docs to master (SciTools#3890)
  added whats new for pr SciTools#3884 (SciTools#3887)