Progress percentage looping per page instead of global percentage #2450
Hi again, this is a bug (or misfeature?) in the tiff saver. For example:

It's looping once for each page, rather than once for the whole TIFF.
We were looping over pages, cropping each one out, and saving. Now there's a single loop over the whole of the image, so things like percent reporting work in the obvious way. See #2450
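To illustrate the shape of the change, here's a hedged sketch in C; it is not the actual tiffsave code, and `write_page()`, `n_pages`, and `page_height` are hypothetical stand-ins:

```c
#include <vips/vips.h>

/* Hypothetical per-page writer, standing in for the real TIFF code. */
extern int write_page(VipsImage *page);

/* Old shape: one pipeline per page. Each write_page() call runs its own
 * evaluation, so --vips-progress restarts at 0% for every page.
 */
static int
save_pages_old(VipsImage *in, int page_height)
{
	int n_pages = in->Ysize / page_height;

	for (int p = 0; p < n_pages; p++) {
		VipsImage *page;

		if (vips_crop(in, &page,
			0, p * page_height, in->Xsize, page_height, NULL))
			return -1;
		if (write_page(page)) {
			g_object_unref(page);
			return -1;
		}
		g_object_unref(page);
	}

	return 0;
}
```

The fix turns this inside out: a single evaluation runs over the whole image and pages are cut out of that one stream, so percent reporting covers the entire file in one 0 to 100% pass.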
OK, should be fixed in master. Thank you for pointing out this dumb thing, @LionelArn2. This will be in 8.12. I've credited you in the changelog, I hope that's OK.
That's great, thanks for your responsiveness :)
@jcupitt I see a similar issue with the new cgif saver, built from commit b2527da.

```
$ vips copy x.jpg x.gif --vips-progress
vips temp-10: 1 x 1 pixels, 6 threads, 1 x 1 tiles, 128 lines in buffer
vips temp-10: done in 0,00452s
vips temp-11: 460 x 460 pixels, 6 threads, 460 x 16 tiles, 128 lines in buffer
vips temp-11: done in 0,0391s
```

Let me know if you want to track this in a separate issue.
Hi Kleis, I think that's a different problem. When the saver starts up, it renders the background colour into a 1x1 pixel image, ready to memcpy() into the output; that's the first loop there. The second loop is the save operation itself, and I think there should only ever be one of those. For example:

That's a 139-page GIF, and it's doing all the pages in a single loop. Running a whole pipeline just to render a 1x1 pixel image is very wasteful, but it saved some development effort, and the cost is small.
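A minimal sketch of why that first tiny loop shows up, assuming the saver pushes the 1x1 background image through the ordinary pipeline machinery; `vips_black()` here is just a stand-in for however the background colour is really built:

```c
#include <vips/vips.h>

/* The saver wants the background colour as raw pels it can memcpy().
 * Evaluating even a 1x1 image to memory runs a complete (tiny) pipeline,
 * which emits its own preeval/eval/posteval sequence; that is the short
 * extra loop visible with --vips-progress.
 */
static VipsPel *
render_background(VipsImage **mem_out)
{
	VipsImage *bg;
	VipsImage *mem;

	if (vips_black(&bg, 1, 1, NULL))
		return NULL;
	if (!(mem = vips_image_copy_memory(bg))) {
		g_object_unref(bg);
		return NULL;
	}
	g_object_unref(bg);

	/* Caller unrefs *mem_out once the pels have been copied out. */
	*mem_out = mem;
	return VIPS_IMAGE_ADDR(mem, 0, 0);
}
```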
TIL, thanks for the info! It looks like it's happening here: libvips/libvips/foreign/cgifsave.c, lines 415 to 421, at b2527da.
I was also able to reproduce this with:

```
$ vips bandjoin_const x.jpg x.png 255 --vips-progress
vips temp-8: 1 x 1 pixels, 6 threads, 1 x 1 tiles, 128 lines in buffer
vips temp-8: done in 0,00331s
vips temp-12: 460 x 460 pixels, 6 threads, 460 x 16 tiles, 128 lines in buffer
vips temp-12: done in 0,0213s
```

Or with:

```
$ vips getpoint x.jpg 1 1 --vips-progress
vips temp-1: 1 x 1 pixels, 6 threads, 1 x 16 tiles, 128 lines in buffer
vips x.jpg: 460 x 460 pixels, 6 threads, 460 x 16 tiles, 128 lines in buffer
vips x.jpg: done in 0,00579s
vips temp-1: done in 0,0146s
vips temp-13: 1 x 1 pixels, 6 threads, 1 x 1 tiles, 128 lines in buffer
vips temp-13: done in 0,00219s
244 244 244
```

Perhaps we need to do this:

```c
vips_image_set_int( in, "hide-progress", 1 );
```

after libvips/libvips/conversion/insert.c line 276 (at db22eb4), in vips__vector_to_pels (untested)?
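For reference, here is a sketch of how such a flag might be honoured on the signal-emitting side; this is an assumption about the eventual implementation, untested, and the function name is hypothetical:

```c
#include <vips/vips.h>

/* Skip progress signals for images tagged "hide-progress", so tiny
 * internal pipelines (like the 1x1 background render) stay silent.
 */
static void
preeval_sketch(VipsImage *image)
{
	if (vips_image_get_typeof(image, "hide-progress"))
		return;

	/* ... emit the "preeval" signal as usual ... */
}
```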
The extra loops have never bothered me, but you're right, I guess it's a bit neater. Let's do it!
Great, thanks! It didn't bother me either, but it's a bit neater for programs with one progress bar. PR #2488 hides these extra loops for …
Hello :)

I'm having frequent issues with the values reported by the progress handler attached to a NetVips.Image when an image with multiple pages is being written (e.g. when using tiffsave with pyramid=true).

For example, if I have an image with height=25000 and pageHeight=5000, the reported percentage (an int value) will go from 0 to 100% five times. This means my code has to keep track of which page is currently being exported in order to calculate a global progress percentage, as sketched below. This is not really ideal, because it is sometimes impossible to keep track of which page is currently being exported (I'm not sure why, but I do not necessarily receive the page progress values in chronological order).

Would it be possible to have the libvips algorithm that calculates export progress take the number of pages into account, so that it reports a global progress value? This would be much more robust than leaving it to the user.
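For concreteness, here's a caller-side sketch of that workaround in plain C (NetVips forwards the same libvips signals). The `n_pages` bookkeeping and the wrap-around heuristic are my assumptions, and they inherit exactly the ordering problem described above:

```c
#include <stdio.h>
#include <vips/vips.h>

typedef struct {
	int n_pages;		/* e.g. height / page_height */
	int pages_done;		/* pages whose 0-100% run has finished */
	int last_percent;
} GlobalProgress;

/* "eval" handler: fold the per-page 0-100% runs into one global figure.
 * A drop in percent is read as "a new page has started", which is the
 * heuristic that breaks when events arrive out of order.
 */
static void
eval_cb(VipsImage *image, VipsProgress *progress, GlobalProgress *state)
{
	if (progress->percent < state->last_percent)
		state->pages_done += 1;
	state->last_percent = progress->percent;

	printf("global: %d%%\r", (state->pages_done * 100 +
		progress->percent) / state->n_pages);
}
```

Hooked up with `vips_image_set_progress(image, TRUE)` and `g_signal_connect(image, "eval", G_CALLBACK(eval_cb), &state)`; a global calculation inside libvips itself would make all of this bookkeeping unnecessary.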