Implement painting for MSE videos using -[AVSampleBufferDisplayLayer copyDisplayedPixelBuffer]
https://bugs.webkit.org/show_bug.cgi?id=241788
rdar://94325004

Reviewed by Eric Carlson.

In r292811, we enabled MSE inline painting on iOS 16 and macOS Ventura. This was intentionally limited to those versions, since CoreMedia refined them to prune the `AVSampleBufferVideoOutput` queue more frequently, in order to avoid a large increase in memory use while playing MSE videos due to accumulated excess video output frame data. However, this more frequent pruning has significantly increased power use when playing MSE video, due to the extra work done every time the pruning timer fires.

To ensure that Live Text in MSE video and MSE-to-canvas painting still work on iOS 16 and macOS Ventura, we instead adopt new AVFoundation SPI that lets us ask `AVSampleBufferDisplayLayer` directly for the currently displayed pixel buffer. Unlike the `AVSampleBufferVideoOutput`-based approach, this only kicks in when MSE inline painting is actually requested (either by the page, or from within the engine in the case of Live Text), which avoids both the increased memory use and the increased power use.

On versions of macOS and iOS that lack the new SPI, we simply fall back to the `AVSampleBufferVideoOutput`-based snapshotting approach that we currently use. We also fall back to the video output if the display layer is empty, in which case the backing `CAImageQueue` won't contain _any_ displayed surfaces (which means `-copyDisplayedPixelBuffer` would always return null). By factoring the logic that creates and sets `m_videoOutput` out into a helper method (`updateVideoOutput`) that's invoked after we've finished setting up the sample buffer display layer, we can transition as needed between setting and unsetting the video output, based on whether the display layer is actually displaying any content.

There should be no change in behavior, apart from reduced memory and power use due to not spinning up the `AVSampleBufferVideoOutput` queue whenever we play MSE videos. See below for more details.

* Source/WTF/Scripts/Preferences/WebPreferencesExperimental.yaml:

Gate MSE inline painting on `HAVE(AVSAMPLEBUFFERDISPLAYLAYER_COPYDISPLAYEDPIXELBUFFER)` instead of `HAVE(LOW_AV_SAMPLE_BUFFER_PRUNING_INTERVAL)`.

* Source/WTF/wtf/PlatformHave.h:

Add a new feature flag to guard the availability of the new AVFoundation SPI, `-[AVSampleBufferDisplayLayer copyDisplayedPixelBuffer]`.

* Source/WebCore/PAL/pal/spi/cocoa/AVFoundationSPI.h:

Add a staging declaration for `-copyDisplayedPixelBuffer`, so that we can maintain source compatibility when building against older versions of the iOS 16 or macOS Ventura SDKs (see the sketch below).

* Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp:
(WebCore::WebGLRenderingContextBase::texImageSourceHelper):
* Source/WebCore/platform/graphics/MediaPlayer.cpp:
* Source/WebCore/platform/graphics/MediaPlayer.h:
* Source/WebCore/platform/graphics/MediaPlayerPrivate.h:

Replace more uses of `HAVE(LOW_AV_SAMPLE_BUFFER_PRUNING_INTERVAL)` with the new flag, `HAVE(AVSAMPLEBUFFERDISPLAYLAYER_COPYDISPLAYEDPIXELBUFFER)`, which now guards the availability of MSE inline painting. The purpose of guarding this logic behind `!HAVE(LOW_AV_SAMPLE_BUFFER_PRUNING_INTERVAL)` in the first place seems to have been to limit the `willBeAskedToPaintGL()` codepaths to versions of macOS and iOS where we can't enable MSE inline painting due to lack of system support. Since "system support" now depends on the availability of `-copyDisplayedPixelBuffer`, we should use that flag rather than one about pruning-interval frequency. This also allows us to remove the `HAVE(LOW_AV_SAMPLE_BUFFER_PRUNING_INTERVAL)` flag altogether, now that no code needs to be guarded by it.
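A minimal sketch of the feature flag and staging declaration described above. The version thresholds, guard structure, and category name are illustrative assumptions, not the literal WebKit code:

    // Sketch of a PlatformHave.h-style guard; assumes the SPI is present on
    // iOS 16 and macOS 13 (Ventura), per the description above.
    #if (PLATFORM(IOS_FAMILY) && __IPHONE_OS_VERSION_MIN_REQUIRED >= 160000) \
        || (PLATFORM(MAC) && __MAC_OS_X_VERSION_MIN_REQUIRED >= 130000)
    #define HAVE_AVSAMPLEBUFFERDISPLAYLAYER_COPYDISPLAYEDPIXELBUFFER 1
    #endif

    #import <AVFoundation/AVFoundation.h>
    #import <CoreVideo/CoreVideo.h>

    // Staging declaration: lets callers compile against SDKs that don't yet
    // declare the method. The category name below is hypothetical.
    @interface AVSampleBufferDisplayLayer (Staging_94325004)
    // Returns a retained (+1) pixel buffer for the currently displayed frame,
    // or NULL if the layer isn't displaying anything.
    - (CVPixelBufferRef)copyDisplayedPixelBuffer;
    @end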
Since "system support" now depends on the availability of `-copyDisplayedPixelBuffer`, we should change to use that flag instead of one about pruning interval frequency. This also allows us to remove the `HAVE(LOW_AV_SAMPLE_BUFFER_PRUNING_INTERVAL)` flag altogether, now that there isn't any code that needs to be guarded by it. * Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaSourceAVFObjC.h: * Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaSourceAVFObjC.mm: (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::updateLastPixelBuffer): Adjust this logic to ask `m_sampleBufferDisplayLayer` for a copy of the last displayed pixel buffer, instead of grabbing it from the video output, if possible. (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::shouldEnsureLayer const): (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::acceleratedRenderingStateChanged): (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::updateVideoOutput): Factor out logic for creating or destroying the video output into a separate helper method, that's invoked after updating the display layer. (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::ensureLayer): (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::readbackMethod const): Replace `isVideoOutputAvailable()` with another helper method, that returns a strongly typed enum indicating which readback method to use. `None` indicates that readback isn't supported, `CopyPixelBufferFromDisplayLayer` indicates that we'll use the new AVFoundation SPI method, and `UseVideoOutput` indicates that we'll fall back to `AVSampleBufferVideoOutput`. (WebCore::MediaPlayerPrivateMediaSourceAVFObjC::isVideoOutputAvailable const): Deleted. * Source/WebKit/GPUProcess/media/RemoteMediaPlayerProxy.h: * Source/WebKit/GPUProcess/media/RemoteMediaPlayerProxy.messages.in: * Source/WebKit/GPUProcess/media/cocoa/RemoteMediaPlayerProxyCocoa.mm: * Source/WebKit/WebProcess/GPU/media/MediaPlayerPrivateRemote.cpp: * Source/WebKit/WebProcess/GPU/media/MediaPlayerPrivateRemote.h: Canonical link: https://commits.webkit.org/251761@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@295756 268f45cc-cd09-0410-ab3c-d52691b4dbfc
Canonical link: https://commits.webkit.org/251761@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@295756 268f45cc-cd09-0410-ab3c-d52691b4dbfc