
Benchmarking script for Pillow #3

Merged: 2 commits merged into libvips:master on Feb 11, 2015
Conversation

@hugovk (Contributor) commented Feb 11, 2015

PIL hasn't had any releases in five years. Pillow is a maintained fork.

The only difference from the PIL test is the imports. To run it, make sure to uninstall PIL first, then pip install pillow. More details here: http://pillow.readthedocs.org/installation.html
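
For reference, the import change is roughly this (a minimal sketch; the benchmark script's exact layout may differ):

    # Classic PIL exposed top-level modules:
    #   import Image
    # Pillow moved them into the PIL package, so the script uses:
    from PIL import Image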

It'd be interesting to see Pillow added to the results as well as PIL here: http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use

jcupitt added a commit referencing this pull request on Feb 11, 2015: Benchmarking script for Pillow

@jcupitt merged commit f5e32ff into libvips:master on Feb 11, 2015
@jcupitt (Member) commented Feb 11, 2015

Hi @hugovk, looks interesting, I'll update the benchmarks. Thanks!

@jcupitt (Member) commented Feb 11, 2015

Looks like there's no performance change between pil and pillow on this benchmark on this laptop:

program, time (s), peak memory (MB)
pillow, 2.49, 205.70703125
pil, 2.50, 205.62890625

I'll update the speed and memuse page anyway.

@hugovk (Contributor, Author) commented Feb 12, 2015

@jcupitt Thanks for running it.

Please could you retest with the latest Pillow 2.7.0 (released 2015-01-01) rather than 2.3.0 (2014-01-01)? It contains a number of performance improvements.

@jcupitt (Member) commented Feb 12, 2015

I see a modest improvement: down from 2.5s to 2.3s.

Reading the docs on the performance improvements (which look very nice), I see that downsizing now always uses convolution. The other systems in this benchmark use simple affine + bilinear, so this might be hurting Pillow's ranking.

Will Pillow use a convolution for BILINEAR with just a 10% shrink factor? Or does it fall back to simple affine plus interpolator for small shrinks?

I'll try with NEAREST and see how much difference it makes.
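
For reference, the two calls I'm comparing look roughly like this (a sketch, assuming an already-loaded image im):

    from PIL import Image

    # The shrink used throughout the benchmark: 10% off each dimension.
    size = (int(im.size[0] * 0.9), int(im.size[1] * 0.9))

    # BILINEAR, which may now go through Pillow 2.7's convolution path:
    out = im.resize(size, Image.BILINEAR)

    # NEAREST, plain point sampling with no filtering, for comparison:
    out = im.resize(size, Image.NEAREST)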

jcupitt added a commit referencing this pull request on Feb 12, 2015
@jcupitt (Member) commented Feb 12, 2015

Ah, 1.72s with NEAREST. I've updated the page and added some notes on this.

Is it possible to call affine directly with a bilinear interpolator?

@hugovk (Contributor, Author) commented Feb 12, 2015

Let's ask @homm; he implemented many of the recent Pillow improvements.

@homm commented Feb 12, 2015

Indeed, Pillow always uses convolutions regardless of the scale factor.

Affine transformations are still possible through Image.transform, if you're sure the scale factor is always between 1 and 0.5.

@jcupitt (Member) commented Feb 12, 2015

I tried Image.transform, but it's a lot slower:

# The AFFINE data (a, b, c, d, e, f) maps each output pixel (x, y)
# to input (a*x + b*y + c, d*x + e*y + f):
im = im.transform((int(im.size[0] * 0.9), int(im.size[1] * 0.9)),
                  Image.AFFINE,
                  (0.9, 0, 0, 0, 0.9, 0),
                  resample=Image.BILINEAR)

Back to 2.5s and 260 MB peak RES. I'll leave it as image.resize(NEAREST).
