
Use vips to improve speed? #1109

Closed
guettli opened this issue Feb 11, 2015 · 15 comments

Comments

guettli commented Feb 11, 2015

VIPS is a library for image processing.

According to the docs, it is fast and lightweight:

http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use

What about using vips to improve the speed of Pillow?

homm (Member) commented Feb 11, 2015

What speed exactly do you want to improve?

guettli (Author) commented Feb 11, 2015

I have no specific use case. This is just a general request. According to the spreadsheet in the link above, vips is very fast.

homm (Member) commented Feb 11, 2015

Vips uses a different approach and therefore consumes much less memory, but it can't be significantly faster because of that alone. The real reason is that vips does its processing in parallel. There is also an experimental branch of Pillow with OpenMP if you need it.
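
To illustrate what the parallelism point above buys in plain Pillow, here is a minimal sketch (assuming Python 3) that fans a batch of resizes out across CPU cores with the standard multiprocessing module; the file names and the 50% shrink are placeholders, not anything from this thread.

```python
# Minimal sketch: batch-resize images in parallel with plain Pillow.
# File names and the 50% shrink factor are placeholders for illustration.
from multiprocessing import Pool
from PIL import Image

def shrink(path):
    im = Image.open(path)
    w, h = im.size
    im.resize((w // 2, h // 2)).save(path.replace(".jpg", "_small.jpg"))

if __name__ == "__main__":
    paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
    with Pool() as pool:   # one worker process per CPU core by default
        pool.map(shrink, paths)
```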

wiredfool (Member) commented

First, there are Python bindings for vips, so if anyone wants to use it, they can. Second, they're testing PIL 1.1.7 with bilinear scaling, which has recently been replaced with faster and better code. Third, I'm not sure I really buy their memory methodology.
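
For readers who want to compare the two routes, here is an illustrative sketch of the same 50% shrink done with Pillow (with an explicit resample filter, since the old benchmark used bilinear) and with the pyvips bindings. pyvips postdates this thread (the bindings available at the time went through gobject-introspection), so treat the vips half purely as an assumption about the modern API.

```python
# Illustrative only: the same 50% shrink via Pillow and via pyvips.
from PIL import Image
import pyvips

# Pillow: choose the resample filter explicitly; BILINEAR is what the old
# benchmark exercised, and newer Pillow releases ship faster resampling code.
im = Image.open("input.jpg")
w, h = im.size
im.resize((w // 2, h // 2), Image.BILINEAR).save("out_pillow.jpg")

# vips: demand-driven, so the image is processed in strips rather than
# being decoded whole into memory first.
vim = pyvips.Image.new_from_file("input.jpg")
vim.resize(0.5).write_to_file("out_vips.jpg")
```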

guettli (Author) commented Feb 11, 2015

@wiredfool what do you mean by "I'm not sure I really buy their memory methodology"?

wiredfool (Member) commented

Counting memory usage by summing RSS in multithreaded programs runs the risk of double-counting memory that's actually shared. It could be interpreted as an upper bound. (Though less changes behind ps's back there than when calculating processor usage percentages.)
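
As a small aside on measuring this yourself, here is a rough sketch that reports a single process's peak RSS via the standard resource module (assuming Linux or macOS); the caveat above still applies, since summing such numbers across threads or processes can count shared pages more than once.

```python
# Rough sketch: report this process's peak RSS (Linux/macOS only).
# Summing RSS across processes can double-count shared pages, so any
# such sum is at best an upper bound on real memory use.
import resource
import sys

peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
if sys.platform == "darwin":   # ru_maxrss is bytes on macOS, KB on Linux
    peak //= 1024
print("peak RSS: %d MB" % (peak // 1024))
```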

guettli (Author) commented Feb 13, 2015

Thank you for your reply. I am not a memory expert. I guess you are right.

I don't want to start a childish A vs B debate.

I am a user of the PIL API. A lot of people use PIL, but vips seems not to be that widespread.

I just want to know why the Pillow developers don't use vips. What are the reasons?

wiredfool (Member) commented

It's a separate project, with different goals. Pillow's goal isn't really to be the fastest, rather to be correct and increasingly complete across a range of Pythons and platforms.

I think it would be far easier to write a layer on top of VIPS exposing that part of the Pillow API that it implements than to rehost Pillow on VIPS.
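
To make the "layer on top of VIPS" idea concrete, here is a hypothetical sketch of such an adapter: a tiny Image-like class backed by pyvips that exposes a Pillow-flavoured open/size/resize/save surface. The class name, the chosen methods, and the use of pyvips (which did not exist when this was written) are all assumptions for illustration.

```python
# Hypothetical sketch: a small Pillow-flavoured wrapper backed by pyvips.
import pyvips

class VipsImage:
    """Exposes a tiny subset of the Pillow Image API (open/size/resize/save)."""

    def __init__(self, vim):
        self._vim = vim

    @classmethod
    def open(cls, path):
        return cls(pyvips.Image.new_from_file(path))

    @property
    def size(self):
        return (self._vim.width, self._vim.height)

    def resize(self, size):
        w, h = size
        return VipsImage(self._vim.resize(w / self._vim.width,
                                          vscale=h / self._vim.height))

    def save(self, path):
        self._vim.write_to_file(path)

# Usage mirrors Pillow:
#   im = VipsImage.open("input.jpg")
#   im.resize((800, 600)).save("out.jpg")
```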

guettli (Author) commented Feb 24, 2015

If I were to write a wrapper around vips that exposes the Pillow API, would you use it?

If not, why would you prefer Pillow over a wrapped vips?

aclark4life (Member) commented

Please ask on Stack Overflow if you don't get an answer here.

guettli (Author) commented Mar 26, 2015

@aclark4life I don't have a specific problem. I see that there are several image processing libraries for Python. To me this looks like reinventing the wheel. I am curious why the Pillow developers don't use a library which, according to its docs, is very fast.

On Stack Overflow, a question like this would get down-voted and closed in seconds.

aclark4life (Member) commented

@guettli Ah, indeed. OK, in that case the answer is "yes". If you were to integrate VIPS with Pillow in a PR, then we'd definitely consider accepting it. I suspect you are getting a lukewarm reception because the core team is mostly concerned with fixing bugs and getting releases out the door. Enhancements are a luxury for us, and we mostly rely on community contributions to implement them.

hugovk (Member) commented Mar 26, 2015

By the way, I asked the VIPS author to re-run with Pillow 2.7.0, which shows some improvement:

Software        Run time (secs real)    Memory (peak RSS MB)    Times slower
PIL 1.1.7       2.48                    211                     6.7
Pillow 2.7.0    1.72                    211                     4.9

aclark4life (Member) commented

@hugovk Wow, interesting right?

guettli (Author) commented Mar 26, 2015

The speed improvement looks good. Thank you for sharing.
