Benchmarking script for Pillow #3
Conversation
Hi @hugovk, looks interesting. I'll update the benchmarks. Thanks!
Looks like there's no performance change between PIL and Pillow on this benchmark on this laptop. I'll update the speed and memory use page anyway.
@jcupitt Thanks for running it. Please could you retest with the latest Pillow 2.7.0 (released 2015-01-01) rather than 2.3.0 (2014-01-01)? It contains a number of performance improvements.
I see a modest improvement: down from 2.5s to 2.3s. Reading the docs on the performance improvements (which look very nice), I see that downsizing now always uses convolution. The other systems in this benchmark are using simple affine + bilinear, so this might be hurting Pillow's place. Will Pillow use a convolution for BILINEAR with just a 10% shrink factor? Or does it fall back to simple affine plus interpolator for small shrinks? I'll try with NEAREST and see how much difference it makes.
Ah, 1.72s with NEAREST. I've updated the page and added some notes on this. Is it possible to call affine directly with a bilinear interpolator?
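For anyone reproducing this, a minimal sketch of the filter comparison being discussed (the image size here is an arbitrary assumption; the benchmark's actual input image is not shown in this thread):

```python
from PIL import Image

# Hypothetical test image; the real benchmark uses its own input file
im = Image.new("RGB", (1000, 1000))
shrink = 0.9
size = (int(im.size[0] * shrink), int(im.size[1] * shrink))

# BILINEAR goes through Pillow's convolution-based resampling
bilinear = im.resize(size, Image.BILINEAR)

# NEAREST skips the convolution entirely, which is why it benchmarks faster
nearest = im.resize(size, Image.NEAREST)

print(bilinear.size, nearest.size)  # both (900, 900)
```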
Let's ask @homm, he implemented much of the recent Pillow improvements. |
Indeed, Pillow always uses convolutions regardless of the scale factor. Affine transformations are still possible through Image.transform, if you're sure the scale factor is always between 1 and 0.5.
I tried Image.transform, but it's a lot slower:

```python
im = im.transform((int(im.size[0] * 0.9), int(im.size[1] * 0.9)),
                  Image.AFFINE,
                  (0.9, 0, 0, 0, 0.9, 0),
                  resample=Image.BILINEAR)
```

Back to 2.5s and 260MB peak RES. I'll leave it as is.
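A self-contained sketch of the comparison above, on a synthetic image (the 1000×1000 size is an assumption; note that Image.transform's AFFINE tuple maps output pixel coordinates back to input coordinates, so 1/factor covers the full source image when shrinking):

```python
import time
from PIL import Image

# Synthetic input; the thread's benchmark image is not included here
im = Image.new("RGB", (1000, 1000))
factor = 0.9
new_size = (int(im.size[0] * factor), int(im.size[1] * factor))

# resize() routes BILINEAR through convolution-based resampling
t0 = time.perf_counter()
resized = im.resize(new_size, Image.BILINEAR)
t_resize = time.perf_counter() - t0

# transform() samples via a plain affine map plus interpolator;
# the matrix maps output (x, y) to input (x/factor, y/factor)
t0 = time.perf_counter()
transformed = im.transform(new_size, Image.AFFINE,
                           (1 / factor, 0, 0, 0, 1 / factor, 0),
                           resample=Image.BILINEAR)
t_transform = time.perf_counter() - t0

print(resized.size, transformed.size)
```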
PIL hasn't had any releases in five years. Pillow is a maintained fork.
The only difference to the PIL test is the imports. To run, make sure to uninstall PIL first, then `pip install pillow`. More details here: http://pillow.readthedocs.org/installation.html

It'd be interesting to see Pillow added to the results as well as PIL here: http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use