Fast way to write pyvips image to FFmpeg stdin (and others; suggestions; big read) #198

Closed
Tremeschin opened this issue Aug 2, 2020 · 25 comments

@Tremeschin

Hey @jcupitt, you look very active helping people here, so I thought it wouldn't hurt to ask something. This will be a bit long since I'll give the full context, and I'd appreciate some suggestions if possible.

I'm working on a project making a music visualizer, and unsurprisingly I'm dealing with lots of image processing: gaussian blur, vignetting, resizing, alpha compositing, etc.

While numpy + PIL work, they aren't very fast compared to a proper image library or a GPU-accelerated canvas (the latter wouldn't quite work because I have to move a lot of images back and forth, and textures are somewhat expensive, so I'd have to do the processing on the GPU itself; I'm not that deep into it yet).

For an 8.5 second audio file, the non-multiprocessed code takes about 3 minutes to make a 720p60 video. I thought: I don't have only one thread on my CPU, so multiprocessing should work better, right? No! The IPC and mutex shenanigans I wasn't aware of didn't scale up much, though it did cut the processing time in half, down to 1m30s.

I tried things like NodeJS, and considered using C++ with Python to alpha composite and process the images quickly; the first didn't quite work out and I haven't tried the second yet.

Then I stumbled across pyvips, and with little effort I could alpha composite 100 particle images (at random coordinates) onto a 4K image at 84 fps!! It didn't even use much RAM and only half the CPU.

However, when piping the images to FFmpeg, we have to convert them into a format that can be written to stdin and that FFmpeg can read given its input arguments.

Here comes my question after this short context: when I use something like

  • image.write_to_buffer('.JPEG', Q=100)

  • image = np.ndarray(buffer=image.write_to_memory(), dtype=format_to_dtype[image.format], shape=[image.height, image.width, image.bands])

Piping the images takes about 0.0764 seconds in total per cycle (render + convert to bytes + write to stdin), but those two example lines alone take about 0.0617 seconds to run (numbers averaged over 510 piped frames). That's nearly all the time spent in the loop.

I'm not sure how best to ask this, but am I doing something wrong, or is there a better way of getting the full-fat raw image out of a pyvips Image object and sending it to FFmpeg's stdin?

Again, this is my main bottleneck, so any advice on quickly getting the images into a video is what I need.
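
For reference, the format_to_dtype map used in the second line above is the usual pyvips format to numpy dtype table from the pyvips docs; a minimal version:

import numpy as np

# pyvips format string -> numpy dtype, as in the pyvips documentation examples
format_to_dtype = {
    'uchar': np.uint8, 'char': np.int8,
    'ushort': np.uint16, 'short': np.int16,
    'uint': np.uint32, 'int': np.int32,
    'float': np.float32, 'double': np.float64,
    'complex': np.complex64, 'dpcomplex': np.complex128,
}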

I use fairly standard piping arguments (with the ffmpeg-python package):

self.pipe_subprocess = (
    ffmpeg
    .input('pipe:', format='image2pipe', pix_fmt='rgba', r=self.context.fps, s='{}x{}'.format(self.context.width, self.context.height))
    .output(output, pix_fmt='yuv420p', vcodec='libx264', r=self.context.fps, crf=18, loglevel="quiet")
    .global_args('-i', self.context.input_file)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

I only replace image2pipe with rawvideo when piping the numpy array's raw data.
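
For reference, a minimal sketch of that rawvideo variant (same settings as above, just swapping the demuxer; the per-frame write is elided as a comment):

self.pipe_subprocess = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgba', r=self.context.fps, s='{}x{}'.format(self.context.width, self.context.height))
    .output(output, pix_fmt='yuv420p', vcodec='libx264', r=self.context.fps, crf=18, loglevel="quiet")
    .global_args('-i', self.context.input_file)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

# each raw frame then goes straight to stdin:
# self.pipe_subprocess.stdin.write(frame_bytes)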

I've seen and read a few places before asking this, most notably:

And I've looked through the docs on the pyvips.Image class.

I'm looking forward to using this library for images from now on; it works really well and is VERY simple to use.

I almost cried when I saw the Image.composite method, because I had manually implemented something equivalent by hand here (spoiler: it took a while to crop and composite only the needed parts).

And it looks like pyvips handles big images like they are nothing!!

Thanks for this project, it makes using libvips from Python easy.

@Tremeschin
Author

Just found Most efficient region reading from tiled multi-resolution tiff images #100, so it looks like I'm creating many pipelines in libvips. I'll rewrite the setup I'm testing to blit into a base canvas instead of copying a base image.

@Tremeschin
Author

Yes, the last linked issue is very promising, though I'm getting black images from the region; still investigating, but speeds are drastically improved now, about as fast as skipping the jpg conversion before sending to FFmpeg.

@Tremeschin
Author

Tremeschin commented Aug 2, 2020

OK, so while I can now pipe images to FFmpeg at 40 fps using the fetch method of a pyvips.Region, I'm a bit confused about how to make it work for this specific case. I'll explain what worked and what didn't, as well as what I think would solve this.

I looked at the code in vregion.py: the __init__ method wraps a pointer object (super(Region, self).__init__(pointer)), and it is called by Region.new when you give it a valid image.

My code will sequentially alpha composite and resize multiple images, so for example I'd do something like

  • canvas = canvas.composite(background, 'over', x=0, y=0)

And just for testing I'm putting some particles

Welp talk is cheap show me the code!!

from cmn_video import FFmpegWrapper
from PIL import Image
import numpy as np
import threading
import random
import pyvips

# pyvips format -> numpy dtype map, as in the pyvips docs ('uchar' is all we need here)
format_to_dtype = {'uchar': np.uint8}

# Video variables
width = 1920
height = 1080
fps = 60

# 8.5 seconds audio file
nframes = int(8.5*fps)

# Number of particles for stressing the code a bit
nparticles = 100

# Images to work with
background = pyvips.Image.new_from_file("walp1080.jpg").copy(interpretation="srgb")
particle = pyvips.Image.new_from_file("particle.png").copy(interpretation="srgb")

# Start with a zeros canvas (width and height must be defined before this)
canvas = pyvips.Image.black(width, height, bands=3).copy(interpretation="srgb")
canvas_region = pyvips.Region.new(canvas)  # Region for fetching pixels

# My FFmpeg wrapper class, only need to pay attention to write_to_pipe and pipe_one_time methods
# You can get this class on a blob linked below

# Needs Context and Controller to operate: Context holds "non-changing" vars, Controller holds dynamic vars

class Context:
    def __init__(self):
        self.fps = fps
        self.width = width
        self.height = height
        self.input_file = "banjo.ogg"
        self.watch_processing_video_realtime = False
    
class Controller:
    def __init__(self):
        self.total_steps = nframes
        self.core_waiting = False

# Create the FFmpegWrapper instance
ctx = Context()
con = Controller()
ff = FFmpegWrapper(ctx, con)

# Start the pipe
ff.pipe_one_time("out.mkv")

# Thread to write images to FFmpeg, 8.5 seconds audio (don't need to be exact, just for stats)
threading.Thread(target=ff.pipe_writer_loop, args=(8.5,)).start()

# Loop through each frame
for index in range(nframes):

    # Add the background
    canvas = canvas.composite(background, 'over', x=0, y=0)

    # Composite nparticles on random parts of the image
    canvas = canvas.composite(
        [particle]*nparticles, 'over',
        x=[random.randint(0, width) for _ in range(nparticles)],
        y=[random.randint(0, height) for _ in range(nparticles)]
    ).gaussblur(3)

    # Get the pixels from canvas
    patch = canvas_region.fetch(0, 0, width, height)

    # Convert buffer region.fetch to PIL Image
    image = np.ndarray(buffer=patch, dtype=format_to_dtype[canvas.format], shape=[canvas.height, canvas.width, 3])
    image = Image.fromarray(image)

    # Write this image at that index on final video
    ff.write_to_pipe(index, image.tobytes())

ff.close_pipe()

While this code is mostly a demo of what I did, it won't run as-is because it's missing the FFmpegWrapper() class.

I made a gist that removes the MMV-specific parts of the code if you want to test this snippet.

Welp, here comes the issue I haven't been able to find a fix for.

pyvips.Region.new(canvas) is kind of a "pointer" to that pyvips Image, and when we do canvas = canvas.composite(...), according to the documentation it returns a new Image. For example, on the background line:

print(" Before", id(canvas))

# Add the background
canvas = canvas.composite(background, 'over', x=0, y=0)

print(" After", id(canvas))

The ids change; as far as I understand, the canvas variable now points at a different object.

So when I keep the Region attached to the original canvas variable, it does pipe the images correctly, but it only ever sends the original background we assigned to the canvas.

If I try to recreate the Region from the new canvas inside the main loop, it usually hangs and leaks a lot of memory.

I wonder where I'm right or wrong here, and whether there's a way to swap the two images or keep the Region's reference valid for fetching after a .composite and similar functions.

Writing this in detail just in case somebody stumbles across it in the future and finds a gold mine :)

@jcupitt
Member

jcupitt commented Aug 2, 2020

Hello @Tremeschin,

This sounds very interesting.

Piping the images takes about 0.0764 seconds in total per cycle (render + convert to bytes + write to stdin), but those two example lines alone take about 0.0617 seconds to run (numbers averaged over 510 piped frames). That's nearly all the time spent in the loop.

libvips is a lazy image processing library, so yes, all the processing happens on the final write.

When you run a series of operations on an image, they don't execute; instead, they append nodes to a large graph structure that libvips maintains behind your back. When you connect to the final output (a memory area here), the whole graph executes at once using all of your CPUs (hopefully).

When you run region fetch, it will pull just a single small patch from a pipeline using just one thread. It skips all the multiprocessing and buffering setup and teardown, so it can be faster if you are working with small patches (eg. 64 x 64 pixels). It'll be slower than the usual .write_to_memory() or whatever for large images, since it can't use more than one thread.
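
For reference, a minimal sketch of the fetch path (image here is a placeholder for any pyvips image):

region = pyvips.Region.new(image)
# pulls raw bytes for just this small patch, single-threaded, no pipeline setup/teardown
patch = region.fetch(0, 0, 64, 64)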

You need to make and then render a new pipeline for each frame, something like this:

background = pyvips.Image.new_from_file("walp1080.jpg")
particle = pyvips.Image.new_from_file("particle.png")
for index in range(frames):
    canvas = background

    canvas = canvas.composite(
        [particle] * nparticles, 'over',
        x=[random.randint(0, width) for _ in range(nparticles)],
        y=[random.randint(0, height) for _ in range(nparticles)]
    )
    canvas = canvas.gaussblur(3)
    
    ff.write_to_pipe(index, canvas.write_to_memory())

You can send write_to_memory directly to ffmpeg, I think. I guess width x height is 1920 x 1080, so it'll be much faster than fetch.

@jcupitt
Member

jcupitt commented Aug 2, 2020

... I've done things a bit like this using webgl; I expect you've had a look at that already.

@Tremeschin
Author

Tremeschin commented Aug 2, 2020

@jcupitt Yes, I have tried a lot of things (maybe not webgl directly) but had really bad luck with most; pyvips and libvips are currently the most promising. I tried:

It all failed because of either the inter-process communication between Python and the other languages, or because they didn't have all the features I needed (or I lacked the knowledge to use them) :(

Edit: Sorry, I didn't notice the comment before your last one; I'll try write_to_memory right now!

@Tremeschin
Author

libvips is a lazy image processing library, so yes, all the processing happens on the final write.

Thanks for this insight, this cleared a lot of stuff in my head.

As for your suggestion to use ff.write_to_pipe(index, canvas.write_to_memory()) directly rather than a Region and fetch: it did work after switching FFmpeg's pix_fmt to "rgba", with no issues in the final video.

Performance-wise it took 48 seconds to finish a 1080p60, 8.5 second video, a really awesome result, not gonna lie; it would take about 1m30s with 4 Python multiprocessing Process workers on the old code, though I'm not converting SVGs or resizing a lot of images here.

@Tremeschin
Author

Tremeschin commented Aug 2, 2020

I have two other questions (maybe not completely related to the original issue; tell me if you'd rather I open new issues for them):

  • Can I apply a vignetting effect, i.e. darkening the borders, given a center_x, center_y and a deviation (sigma?) in the x and y directions? Edit: I can do this through cv2 if it's not possible with pyvips / libvips, but that would add some overhead converting images between the two.

  • If I resize an image and it bleeds quite a bit over the edges because its resolution becomes higher, can I tell pyvips to blit into a negative coordinate and only get a crop of the region I want?

In other words, get a big enough "canvas" to work with.

I saw something similar in the links I referred to in the first comment, though I was a bit confused about how that would work (I haven't tried it yet, so I'm just asking whether it's possible).

Btw, I saw you made some changes to the code; that was quick throwaway R&D code, so stuff isn't implemented the way I'd normally do it :)

@jcupitt
Member

jcupitt commented Aug 2, 2020

It sounds like you've looked around quite a bit. I think processing.js is the main webgl wrapper people use for this kind of thing, eg.:

https://therewasaguy.github.io/p5-music-viz/

Vignetting: sure, make a mask and multiply your final image by that before the write. xyz will make an image where pixels have the value of their coordinates -- eg.:

x = pyvips.Image.xyz(width, height)
# move origin to the centre
x -= [width / 2, height / 2]
# distance from origin
d = (x[0] ** 2 + x[1] ** 2) ** 0.5
# some function of d in 0-1
vignette = (d.cos() + 1) / 2
# render to memory ready for reuse each frame (you don't want to render this image every frame)
vignette = vignette.copy_memory()

Then to apply the mask:

    ff.write_to_pipe(index, (canvas * vignette).cast("uchar").write_to_memory())

Bleeding: yes, have a look at embed, it'll add a border to an image, copying or reflecting the edges. Do that before the resize, then crop the result.
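
A minimal sketch of that embed-then-crop idea (border, factor and the target width/height are just placeholder values here):

# pad the image before resizing so the resize can "bleed" into the border
padded = image.embed(border, border,
                     image.width + 2 * border, image.height + 2 * border,
                     extend="copy")
resized = padded.resize(factor)
# crop back to the target frame, centred
result = resized.crop((resized.width - width) // 2,
                      (resized.height - height) // 2,
                      width, height)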

@Tremeschin
Author

Thanks, I'll try these tomorrow, as well as getting SVG loading working from svgwrite's raw SVG string.

(I didn't test SVG "native" data like points and lines, only rendering an image from disk; actually that was the first thing I tried with pyvips)

It sounds like you've looked around quite a bit.

Aha!! I've been implementing (and try-harding) whatever comes to mind plus inspiration from other visualizers in that project, so I've been reading a lot lately and got a decent intuition for things like alpha compositing and parallel computing (though I still lack some proper knowledge).

render to memory ready for reuse each frame (you don't want to render this image every frame)

RIP. The modular music visualizer code updates the vignetting each frame, interpolating from the last value, along with other things like gaussian blur, resizing the logo, radial FFT visualization bars around the logo for each audio channel, etc.

There's a demo video in the repository readme if you want to see the trouble I've put myself into porting this to pyvips 😆

At least performance seems way better than my current implementation (mainly at higher resolutions); I only need to sort out this over-resize method and loading an SVG from a string before finally starting to port the project to pyvips.

Will let you know how it goes, thanks again for your insights and references, pyvips is now even more promising!!

@jcupitt
Member

jcupitt commented Aug 2, 2020

pyvips has a very fast SVG loader, did you see?

x = pyvips.Image.svgload_buffer(b"""
<svg viewBox="0 0 200 200">
  <circle r="100" cx="100" cy="100" fill="#900"/>
</svg>
""")

@Tremeschin
Author

Yes, I was a bit dumb and tried with images on disk rather than simple circles at first. I can easily get this SVG string from the svgwrite module; I'll test things properly tomorrow, it's a bit late for me now :)
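
For reference, a minimal sketch of that svgwrite-to-pyvips route (dwg here is just a throwaway svgwrite.Drawing):

import svgwrite
import pyvips

dwg = svgwrite.Drawing(size=(200, 200))
dwg.add(dwg.circle(center=(100, 100), r=100, fill='#900'))
# svgwrite gives a str, svgload_buffer wants bytes
image = pyvips.Image.svgload_buffer(dwg.tostring().encode())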

@Tremeschin
Author

Thanks for your help. I got some interesting results testing these today, though speeds weren't 4x faster but merely 2x (the same gain I'd get from some alternative multiprocessing in Python, at the cost of huge memory usage).

My best guess is that I'm limited by raw CPU processing power, because a HUGE number of pixels is computed for each frame of the final video (resize, vignetting, alpha composite).

I'm not throwing pyvips away at all, as it's very memory efficient and a 100% Python solution. I'll close this issue since the main question was addressed; if I need any more help I'll comment back here!!

I'll be looking into the webgl you mentioned in the coming days, as GPUs are better at these rendering methods if the code is implemented the right way (though the pitfalls are easy to fall into); seems fun and fast :)

@jcupitt
Member

jcupitt commented Aug 3, 2020

I think you're probably spending most of your time in composite. It's a complicated operation and I'm sure it could be optimized a bit more. We'd need to make a benchmark representing your use-case and profile it. For example, for each output region, it needs to compute the subset of the input images which overlap that rect, and at the moment this is a simple linear search. This is fine for perhaps 10 input images, but once you have a few hundred, it could start to become significant. A simple 2D map would remove that.

I mentioned webgl because I wrote a silly game to teach myself:

https://github.com/jcupitt/argh-steroids-webgl

If you try the game, there's a big explosion when your ship finally dies. It animates 10,000 large alpha-blended particles at 60fps on a basic smartphone. It'll easily do 1M on a desktop; change this line for max particles:

https://github.com/jcupitt/argh-steroids-webgl/blob/gh-pages/particles.js#L271

And this line to set the size of the big explosion:

https://github.com/jcupitt/argh-steroids-webgl/blob/gh-pages/particles.js#L498

@Tremeschin
Author

I've profiled the code in the past (should have mentioned it here but forgot): most of the time is spent on resizing and gaussian blur. Those are pretty expensive operations for the CPU to keep up with on a high-res image, and they require quite a bit of memory under PIL or cv2.

I'll take a look at those links you sent as well to get some inspiration; here we go again breaking things.

@jcupitt
Member

jcupitt commented Aug 3, 2020

Are you using orc, by the way? libvips uses it as a runtime compiler, and it makes things like gaussblur a lot quicker. It helps resize a bit too.

16:45 $ time vips gaussblur nina.jpg x.jpg 20
real	0m2.221s
user	0m7.990s
sys	0m0.078s
✔ ~/pics 
16:45 $ time vips gaussblur nina.jpg x.jpg 20 --vips-novector
real	0m3.353s
user	0m11.942s
sys	0m0.090s

@Tremeschin
Author

Hmmm, not sure; when I get to the computer I'll check and let you know.

@Tremeschin
Author

I took a day off from coding and yup, looks like I'm using orc.

I'll properly implement pyvips on a branch and see how it goes; embed was exactly what I needed and is an even cleaner solution that will simplify lots of lines in the code base.

webgl + Python seems tricky, don't really want to code from scratch or deal with IPC anymore honestly :(

By the way, I found a typo in the documentation here: it says "vertcial" in the direction parameter description :)

@Tremeschin
Author

I have another question: can I set an anchor point on the image for alpha compositing, rotating (haven't tried yet), or resizing? Maybe I only need it for alpha compositing, though.

I can easily calculate this by hand by taking the pixel difference from the resize, dividing by two and adding an offset, but having a center point for everything would make the code much simpler.

Or, for the alpha composite itself, just calculate the top-left point from the image scale, width and height (I'll do this for now, as it's the last bit of processing I need and I don't care about the other transformations).
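
A minimal sketch of that top-left calculation (center_x / center_y are wherever I want the overlay anchored):

# place the overlay so its centre lands on (center_x, center_y)
x = int(center_x - overlay.width / 2)
y = int(center_y - overlay.height / 2)
canvas = canvas.composite(overlay, 'over', x=x, y=y)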

@Tremeschin
Author

Tremeschin commented Aug 4, 2020

What about forcing a resize to a certain resolution (width, height)? .resize only scales it by a factor and it isn't very clear in my head how that'd work.

Guess it's based on vscale (float) – "Vertical scale image by this factor" – or perhaps I could even use the thumbnail functions?

Edit: ah yes, it's done with the thumbnail method; I thought it was only for files, but there's thumbnail_image.

jcupitt added a commit to libvips/libvips that referenced this issue Aug 5, 2020
@jcupitt
Member

jcupitt commented Aug 5, 2020

Oh heh I fixed the typo, thanks!

No, there are no anchor points for composite.

You can use thumbnail to very quickly load + resize to pixel dimensions in one operation, or thumbnail_image as you say.

You can calculate the scales for resize as eg.:

y = x.resize(target_width / x.width, vscale=target_height / x.height)
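
And for the thumbnail route, something like this should force exact pixel dimensions (a sketch, assuming the "force" size option, which ignores the aspect ratio):

# resize to exactly target_width x target_height, ignoring aspect ratio
y = x.thumbnail_image(target_width, height=target_height, size="force")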

@Tremeschin
Author

How can I .composite with a transparency / opacity? In other words, how can I multiply the alpha channel of an image by a constant?

I found this code snippet on the wiki:

def brighten(filein, fileout):
    im = pyvips.Image.new_from_file(filein, access="sequential")
    im *= [1.1, 1.2, 1.3]
    # if it is a jpg force a high quality
    im.write_to_file(fileout, Q=97)

But it looks like it only scales the RGB bands and not the alpha channel of the image, as in im *= [r, g, b, a]; maybe I'm missing some casting or band joining here?

The premultiply method in the Image documentation wasn't very clear about how to use it, or whether it even applies this "opacity".

@jcupitt
Member

jcupitt commented Aug 5, 2020

You can just scale the alpha before passing the image to composite, eg.:

image *= [1, 1, 1, 0.5]
y = x.composite(image, "over")
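
If the overlay happens to be plain RGB, it needs an alpha band before the multiply; a minimal sketch (addalpha is the pyvips helper that appends a fully opaque alpha band):

if image.bands == 3:
    image = image.addalpha()  # add an opaque alpha band
image *= [1, 1, 1, 0.5]       # halve the opacity
y = x.composite(image, "over")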

@Tremeschin
Author

Wait, wut, that yielded very strange images and black became white. It could be an issue with my FFmpeg pixel format settings; I'll investigate, since you said it works the way I was using it!!

@jcupitt
Member

jcupitt commented Aug 5, 2020

You'd need to post a specific example.

alon-ne added a commit to wix-playground/libvips that referenced this issue Dec 21, 2020