CARingBuffer uses CAAudioStreamDescription in a redundant way
https://bugs.webkit.org/show_bug.cgi?id=247430
rdar://problem/101914553

Reviewed by Eric Carlson.

The CARingBuffer structure is defined by channelCount, bytesPerFrame and frameCount. However,
it also stores a CAAudioStreamDescription to define the contents of each frame,
in case the client wants to mix (the contents can be floats or ints of various sizes).
CARingBuffer might also be used to transfer frames of undefined format (possibly non-PCM data)
via the copy operation.

Using CAAudioStreamDescription is problematic: the information is redundant, and it smuggles
the definition of non-PCM data through a platform-specific descriptor, which is inconsistent.
To stop using CAAudioStreamDescription in future patches, remove it from CARingBuffer.
The fetch() caller already knows what kind of data is expected, so this information is
redundant. This also fixes the problem that the caller's expectation and the actual contents
of the CARingBuffer could diverge.
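The shape of that change can be sketched in standalone code (all names below are simplified stand-ins for CARingBuffer::FetchMode and fetchModeForMixing(), not the actual WebKit API): the caller selects the mix flavor explicitly, instead of the ring buffer consulting a stored stream description:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical simplification: the mix "flavor" travels in the fetch mode the
// caller passes, not in a format descriptor stored inside the ring buffer.
enum class FetchMode { Copy, MixInt16, MixFloat32 };

// The caller already knows its own sample format, so it picks the mode.
inline FetchMode fetchModeForMixing(bool isFloat)
{
    return isFloat ? FetchMode::MixFloat32 : FetchMode::MixInt16;
}

// Fetch samples into dst, either overwriting (Copy) or accumulating (mix).
// A full version would switch over every Mix* flavor, as FetchABL does.
inline void fetch(const std::vector<float>& ring, std::vector<float>& dst, FetchMode mode)
{
    for (size_t i = 0; i < dst.size(); ++i) {
        if (mode == FetchMode::Copy)
            dst[i] = ring[i];
        else
            dst[i] += ring[i]; // MixFloat32: mix into the destination
    }
}
```
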

Leave the use of CAAudioStreamDescription in the constructors, but remove it from the
internals of the classes.

This patch affects the base-class constructors and functions. Take the opportunity
to change the CARingBuffer allocation logic: previously, one would construct an empty
instance and at some later point call allocate(). If allocate() failed, the
instance stayed "empty", with largely undefined behavior. Instead, make all CARingBuffer
classes behave like normal memory buffer classes: creation allocates the buffer and
may fail; deletion deallocates.
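As a sketch of the resulting ownership model (hypothetical, self-contained code, not the WebKit classes themselves): a private constructor plus a static allocate() that reports failure by returning nullptr, so a half-constructed "empty" buffer can never be observed:

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <memory>
#include <utility>
#include <vector>

// Overflow-checked multiply, standing in for WTF's CheckedArithmetic.
inline bool checkedMul(size_t a, size_t b, size_t& out)
{
    if (b && a > std::numeric_limits<size_t>::max() / b)
        return false;
    out = a * b;
    return true;
}

class RingBuffer {
public:
    static std::unique_ptr<RingBuffer> allocate(size_t bytesPerFrame, size_t frameCount, uint32_t channelCount)
    {
        size_t capacityBytes = 0;
        size_t totalBytes = 0;
        if (!checkedMul(bytesPerFrame, frameCount, capacityBytes)
            || !checkedMul(capacityBytes, channelCount, totalBytes))
            return nullptr; // overflow: fail the creation, never hand out an "empty" object
        std::vector<uint8_t> storage(totalBytes);
        return std::unique_ptr<RingBuffer>(new RingBuffer(std::move(storage)));
    }

    size_t sizeBytes() const { return m_storage.size(); }

private:
    explicit RingBuffer(std::vector<uint8_t>&& storage)
        : m_storage(std::move(storage)) { }
    std::vector<uint8_t> m_storage; // freed automatically when the instance is deleted
};
```
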

Previously, the ProducerSharedCARingBuffer allocation sequence was convoluted:
- The WP proxy class would construct an empty instance with a resize callback.
- When allocate and deallocate succeeded, the resize callback would be called.
- The callers would construct with a resize callback that would then
  send the "audio storage changed" message to the other process, typically the GPUP.
The sequence was linear (a resize callback invocation always followed an
allocate), yet it was written in a non-linear fashion.

Instead, just use linear logic:
- The WP proxy class constructs the shared buffer and receives the buffer and a
  client handle as the result.
- The WP proxy class sends the client handle to the other process.
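A minimal sketch of this linear flow, using a copyable std::shared_ptr as a stand-in for the real SharedMemory::Handle (all types here are hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

// Copyable handle to shared storage, standing in for SharedMemory::Handle.
using Handle = std::shared_ptr<std::vector<uint8_t>>;

struct ProducerRingBuffer {
    Handle storage;
};

// Linear allocation: one call yields both the producer-side buffer and the
// handle the proxy then sends to the other process; no resize callback needed.
inline std::pair<std::unique_ptr<ProducerRingBuffer>, Handle> allocateProducer(size_t bytes)
{
    auto storage = std::make_shared<std::vector<uint8_t>>(bytes);
    std::unique_ptr<ProducerRingBuffer> producer(new ProducerRingBuffer { storage });
    return { std::move(producer), std::move(storage) };
}
```
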

In order to return ConsumerSharedCARingBuffer::Handle (an alias for
SharedMemory::Handle) as part of a struct, make it copyable.
Recent changes to SharedMemory::Handle enable this and were made
for this purpose.

* Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.h:
* Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.h:
* Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.mm:
(WebCore::AudioSampleDataSource::setOutputFormat):
(WebCore::AudioSampleDataSource::pushSamplesInternal):
(WebCore::AudioSampleDataSource::pullSamples):
(WebCore::AudioSampleDataSource::pullSamplesInternal):
(WebCore::AudioSampleDataSource::pullAvailableSampleChunk):
(WebCore::AudioSampleDataSource::pullAvailableSamplesAsChunks):
* Source/WebCore/platform/audio/cocoa/CARingBuffer.cpp:
(WebCore::CARingBuffer::CARingBuffer):
(WebCore::CARingBuffer::computeCapacityBytes):
(WebCore::CARingBuffer::computeSizeForBuffers):
(WebCore::CARingBuffer::initialize):
(WebCore::FetchABL):
(WebCore::CARingBuffer::fetchInternal):
(WebCore::InProcessCARingBuffer::allocate):
(WebCore::InProcessCARingBuffer::InProcessCARingBuffer):
(WebCore::CARingBuffer::initializeAfterAllocation): Deleted.
(WebCore::CARingBuffer::allocate): Deleted.
(WebCore::CARingBuffer::deallocate): Deleted.
(WebCore::InProcessCARingBuffer::allocateBuffers): Deleted.
* Source/WebCore/platform/audio/cocoa/CARingBuffer.h:
(WebCore::CARingBuffer::fetchModeForMixing):
* Source/WebCore/platform/graphics/avfoundation/AudioSourceProviderAVFObjC.h:
* Source/WebCore/platform/graphics/avfoundation/AudioSourceProviderAVFObjC.mm:
(WebCore::AudioSourceProviderAVFObjC::AudioSourceProviderAVFObjC):
(WebCore::AudioSourceProviderAVFObjC::prepare):
(WebCore::AudioSourceProviderAVFObjC::process):
(WebCore::AudioSourceProviderAVFObjC::setConfigureAudioStorageCallback):
(WebCore::AudioSourceProviderAVFObjC::setRingBufferCreationCallback): Deleted.
* Source/WebKit/GPUProcess/media/RemoteAudioDestinationManager.cpp:
* Source/WebKit/GPUProcess/media/RemoteAudioSourceProviderProxy.cpp:
(WebKit::RemoteAudioSourceProviderProxy::create):
(WebKit::RemoteAudioSourceProviderProxy::configureAudioStorage):
(WebKit::RemoteAudioSourceProviderProxy::createRingBuffer): Deleted.
(WebKit::RemoteAudioSourceProviderProxy::storageChanged): Deleted.
* Source/WebKit/GPUProcess/media/RemoteAudioSourceProviderProxy.h:
* Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.cpp:
(WebKit::RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::start):
* Source/WebKit/GPUProcess/webrtc/RemoteMediaRecorder.cpp:
(WebKit::RemoteMediaRecorder::audioSamplesStorageChanged):
* Source/WebKit/Platform/SharedMemory.h:
* Source/WebKit/Platform/cocoa/SharedMemoryCocoa.cpp:
* Source/WebKit/Platform/unix/SharedMemoryUnix.cpp:
(WebKit::SharedMemory::Handle::Handle): Deleted.
(WebKit::SharedMemory::Handle::~Handle): Deleted.
* Source/WebKit/Platform/win/SharedMemoryWin.cpp:
* Source/WebKit/Shared/Cocoa/SharedCARingBuffer.cpp:
(WebKit::SharedCARingBufferBase::SharedCARingBufferBase):
(WebKit::SharedCARingBufferBase::data):
(WebKit::SharedCARingBufferBase::sharedFrameBounds const):
(WebKit::SharedCARingBufferBase::updateFrameBounds):
(WebKit::SharedCARingBufferBase::size const):
(WebKit::ConsumerSharedCARingBuffer::map):
(WebKit::ProducerSharedCARingBuffer::allocate):
(WebKit::ProducerSharedCARingBuffer::setCurrentFrameBounds):
(WebKit::ConsumerSharedCARingBuffer::ConsumerSharedCARingBuffer): Deleted.
(WebKit::ConsumerSharedCARingBuffer::allocateBuffers): Deleted.
(WebKit::ProducerSharedCARingBuffer::setStorage): Deleted.
(WebKit::ProducerSharedCARingBuffer::allocateBuffers): Deleted.
(WebKit::ProducerSharedCARingBuffer::deallocateBuffers): Deleted.
* Source/WebKit/Shared/Cocoa/SharedCARingBuffer.h:
* Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
(WebKit::UserMediaCaptureManagerProxy::SourceProxy::~SourceProxy):
(WebKit::UserMediaCaptureManagerProxy::SourceProxy::storageChanged): Deleted.
* Source/WebKit/UIProcess/SpeechRecognitionRemoteRealtimeMediaSource.cpp:
(WebKit::SpeechRecognitionRemoteRealtimeMediaSource::setStorage):
* Source/WebKit/WebProcess/GPU/media/RemoteAudioDestinationProxy.cpp:
(WebKit::RemoteAudioDestinationProxy::RemoteAudioDestinationProxy):
(WebKit::RemoteAudioDestinationProxy::connection):
(WebKit::RemoteAudioDestinationProxy::storageChanged): Deleted.
* Source/WebKit/WebProcess/GPU/media/RemoteAudioDestinationProxy.h:
* Source/WebKit/WebProcess/GPU/media/RemoteAudioSourceProviderManager.cpp:
(WebKit::RemoteAudioSourceProviderManager::RemoteAudio::setStorage):
* Source/WebKit/WebProcess/GPU/webrtc/AudioMediaStreamTrackRendererInternalUnitManager.cpp:
(WebKit::AudioMediaStreamTrackRendererInternalUnitManager::Proxy::start):
(WebKit::AudioMediaStreamTrackRendererInternalUnitManager::Proxy::storageChanged): Deleted.
* Source/WebKit/WebProcess/GPU/webrtc/MediaRecorderPrivate.cpp:
(WebKit::MediaRecorderPrivate::startRecording):
(WebKit::MediaRecorderPrivate::audioSamplesAvailable):
(WebKit::MediaRecorderPrivate::storageChanged): Deleted.
* Source/WebKit/WebProcess/GPU/webrtc/MediaRecorderPrivate.h:
* Source/WebKit/WebProcess/Speech/SpeechRecognitionRealtimeMediaSourceManager.cpp:
(WebKit::SpeechRecognitionRealtimeMediaSourceManager::Source::Source):
(WebKit::SpeechRecognitionRealtimeMediaSourceManager::Source::~Source):
(WebKit::SpeechRecognitionRealtimeMediaSourceManager::Source::storageChanged): Deleted.
* Source/WebKit/WebProcess/cocoa/RemoteCaptureSampleManager.cpp:
(WebKit::RemoteCaptureSampleManager::RemoteAudio::setStorage):
* Tools/TestWebKitAPI/Tests/WebCore/CARingBuffer.cpp:
(TestWebKitAPI::CARingBufferTest::SetUp):
(TestWebKitAPI::CARingBufferTest::setup):
(TestWebKitAPI::MixingTest::run):

Canonical link: https://commits.webkit.org/256392@main
kkinnunen-apple committed Nov 7, 2022
1 parent 154fb73 commit 678b80c
Showing 29 changed files with 239 additions and 364 deletions.
@@ -25,6 +25,7 @@

#pragma once

#include "CAAudioStreamDescription.h"
#include "CARingBuffer.h"
#include "WebAudioBufferList.h"
#include <CoreAudio/CoreAudioTypes.h>
3 changes: 2 additions & 1 deletion Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.h
@@ -26,6 +26,7 @@
#pragma once

#include "AudioSampleDataConverter.h"
#include "CAAudioStreamDescription.h"
#include "CARingBuffer.h"
#include <CoreAudio/CoreAudioTypes.h>
#include <wtf/LoggerHelper.h>
@@ -114,7 +115,7 @@ class AudioSampleDataSource : public ThreadSafeRefCounted<AudioSampleDataSource,

RefPtr<AudioSampleBufferList> m_scratchBuffer;

InProcessCARingBuffer m_ringBuffer;
std::unique_ptr<InProcessCARingBuffer> m_ringBuffer;
size_t m_maximumSampleCount { 0 };

float m_volume { 1.0 };
29 changes: 15 additions & 14 deletions Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.mm
@@ -97,7 +97,8 @@
// Heap allocations are forbidden on the audio thread for performance reasons so we need to
// explicitly allow the following allocation(s).
DisableMallocRestrictionsForCurrentThreadScope disableMallocRestrictions;
m_ringBuffer.allocate(format, static_cast<size_t>(m_maximumSampleCount));
m_ringBuffer = InProcessCARingBuffer::allocate(format, static_cast<size_t>(m_maximumSampleCount));
RELEASE_ASSERT(m_ringBuffer);
m_scratchBuffer = AudioSampleBufferList::create(m_outputDescription->streamDescription(), m_maximumSampleCount);
m_converterInputOffset = 0;
}
@@ -170,7 +171,7 @@
});
}

m_ringBuffer.store(sampleBufferList, sampleCount, ringBufferIndexToWrite);
m_ringBuffer->store(sampleBufferList, sampleCount, ringBufferIndexToWrite);

m_converterInputOffset += offset;
m_lastPushedSampleCount = sampleCount;
@@ -194,8 +195,8 @@
{
size_t byteCount = sampleCount * m_outputDescription->bytesPerFrame();

ASSERT(buffer.mNumberBuffers == m_ringBuffer.channelCount());
if (buffer.mNumberBuffers != m_ringBuffer.channelCount()) {
ASSERT(buffer.mNumberBuffers == m_ringBuffer->channelCount());
if (buffer.mNumberBuffers != m_ringBuffer->channelCount()) {
if (mode != AudioSampleDataSource::Mix)
AudioSampleBufferList::zeroABL(buffer, byteCount);
return false;
@@ -209,7 +210,7 @@

uint64_t startFrame = 0;
uint64_t endFrame = 0;
m_ringBuffer.getCurrentFrameBounds(startFrame, endFrame);
m_ringBuffer->getCurrentFrameBounds(startFrame, endFrame);

ASSERT(m_waitToStartForPushCount);

@@ -249,18 +250,18 @@
bool AudioSampleDataSource::pullSamplesInternal(AudioBufferList& buffer, size_t sampleCount, uint64_t timeStamp, PullMode mode)
{
if (mode == Copy) {
m_ringBuffer.fetch(&buffer, sampleCount, timeStamp, CARingBuffer::Copy);
m_ringBuffer->fetch(&buffer, sampleCount, timeStamp, CARingBuffer::Copy);
if (m_volume < EquivalentToMaxVolume)
AudioSampleBufferList::applyGain(buffer, m_volume, m_outputDescription->format());
return true;
}

if (m_volume >= EquivalentToMaxVolume) {
m_ringBuffer.fetch(&buffer, sampleCount, timeStamp, CARingBuffer::Mix);
m_ringBuffer->fetch(&buffer, sampleCount, timeStamp, CARingBuffer::fetchModeForMixing(m_outputDescription->format()));
return true;
}

if (m_scratchBuffer->copyFrom(m_ringBuffer, sampleCount, timeStamp, CARingBuffer::Copy))
if (m_scratchBuffer->copyFrom(*m_ringBuffer, sampleCount, timeStamp, CARingBuffer::Copy))
return false;

m_scratchBuffer->applyGain(m_volume);
@@ -273,8 +274,8 @@

bool AudioSampleDataSource::pullAvailableSampleChunk(AudioBufferList& buffer, size_t sampleCount, uint64_t timeStamp, PullMode mode)
{
ASSERT(buffer.mNumberBuffers == m_ringBuffer.channelCount());
if (buffer.mNumberBuffers != m_ringBuffer.channelCount())
ASSERT(buffer.mNumberBuffers == m_ringBuffer->channelCount());
if (buffer.mNumberBuffers != m_ringBuffer->channelCount())
return false;

if (m_muted || !m_inputSampleOffset)
@@ -292,13 +293,13 @@

bool AudioSampleDataSource::pullAvailableSamplesAsChunks(AudioBufferList& buffer, size_t sampleCountPerChunk, uint64_t timeStamp, Function<void()>&& consumeFilledBuffer)
{
ASSERT(buffer.mNumberBuffers == m_ringBuffer.channelCount());
if (buffer.mNumberBuffers != m_ringBuffer.channelCount())
ASSERT(buffer.mNumberBuffers == m_ringBuffer->channelCount());
if (buffer.mNumberBuffers != m_ringBuffer->channelCount())
return false;

uint64_t startFrame = 0;
uint64_t endFrame = 0;
m_ringBuffer.getCurrentFrameBounds(startFrame, endFrame);
m_ringBuffer->getCurrentFrameBounds(startFrame, endFrame);
if (m_shouldComputeOutputSampleOffset) {
m_outputSampleOffset = timeStamp + (endFrame - sampleCountPerChunk);
m_shouldComputeOutputSampleOffset = false;
@@ -325,7 +326,7 @@
}

while (endFrame - startFrame >= sampleCountPerChunk) {
m_ringBuffer.fetch(&buffer, sampleCountPerChunk, startFrame, CARingBuffer::Copy);
m_ringBuffer->fetch(&buffer, sampleCountPerChunk, startFrame, CARingBuffer::Copy);
consumeFilledBuffer();
startFrame += sampleCountPerChunk;
}
164 changes: 71 additions & 93 deletions Source/WebCore/platform/audio/cocoa/CARingBuffer.cpp
@@ -32,80 +32,46 @@
#include "Logging.h"
#include <Accelerate/Accelerate.h>
#include <CoreAudio/CoreAudioTypes.h>
#include <wtf/CheckedArithmetic.h>
#include <wtf/MathExtras.h>
#include <wtf/StdLibExtras.h>

const uint32_t kGeneralRingTimeBoundsQueueSize = 32;
const uint32_t kGeneralRingTimeBoundsQueueMask = kGeneralRingTimeBoundsQueueSize - 1;

namespace WebCore {

CARingBuffer::CARingBuffer() = default;
CARingBuffer::CARingBuffer(size_t bytesPerFrame, size_t frameCount, uint32_t numChannelStreams)
: m_pointers(numChannelStreams)
, m_channelCount(numChannelStreams)
, m_bytesPerFrame(bytesPerFrame)
, m_frameCount(frameCount)
, m_capacityBytes(computeCapacityBytes(bytesPerFrame, frameCount))
{
ASSERT(WTF::isPowerOfTwo(frameCount));
}

CARingBuffer::~CARingBuffer() = default;

CheckedSize CARingBuffer::computeCapacityBytes(const CAAudioStreamDescription& format, size_t frameCount)
CheckedSize CARingBuffer::computeCapacityBytes(size_t bytesPerFrame, size_t frameCount)
{
CheckedSize capacityBytes = format.bytesPerFrame();
capacityBytes *= frameCount;
return capacityBytes;
return CheckedSize { bytesPerFrame } * frameCount;
}

CheckedSize CARingBuffer::computeSizeForBuffers(const CAAudioStreamDescription& format, size_t frameCount)
CheckedSize CARingBuffer::computeSizeForBuffers(size_t bytesPerFrame, size_t frameCount, uint32_t numChannelStreams)
{
auto sizeForBuffers = computeCapacityBytes(format, frameCount);
sizeForBuffers *= format.numberOfChannelStreams();
return sizeForBuffers;
return computeCapacityBytes(bytesPerFrame, frameCount) * numChannelStreams;
}

void CARingBuffer::initializeAfterAllocation(const CAAudioStreamDescription& format, size_t frameCount)
void CARingBuffer::initialize()
{
m_description = format;
m_channelCount = format.numberOfChannelStreams();
m_bytesPerFrame = format.bytesPerFrame();
m_frameCount = frameCount;
m_capacityBytes = computeCapacityBytes(format, frameCount);

m_pointers.resize(m_channelCount);
Byte* channelData = static_cast<Byte*>(data());

for (auto& pointer : m_pointers) {
pointer = channelData;
channelData += m_capacityBytes;
}

flush();
}

bool CARingBuffer::allocate(const CAAudioStreamDescription& format, size_t frameCount)
{
deallocate();
frameCount = WTF::roundUpToPowerOfTwo(frameCount);

auto sizeForBuffers = computeSizeForBuffers(format, frameCount);
if (sizeForBuffers.hasOverflowed()) {
RELEASE_LOG_FAULT(Media, "CARingBuffer::allocate: Overflowed when trying to compute the storage size");
return false;
}

if (UNLIKELY(!allocateBuffers(sizeForBuffers, format, frameCount))) {
RELEASE_LOG_FAULT(Media, "CARingBuffer::allocate: Failed to allocate buffer of the requested size: %lu", sizeForBuffers.value());
return false;
}

initializeAfterAllocation(format, frameCount);
return true;
}

void CARingBuffer::deallocate()
{
deallocateBuffers();
m_pointers.clear();
m_channelCount = 0;
m_capacityBytes = 0;
m_frameCount = 0;
}

static void ZeroRange(Vector<Byte*>& pointers, size_t offset, size_t nbytes)
{
for (auto& pointer : pointers)
@@ -124,7 +90,7 @@ static void StoreABL(Vector<Byte*>& pointers, size_t destOffset, const AudioBuff
}
}

static void FetchABL(AudioBufferList* list, size_t destOffset, Vector<Byte*>& pointers, size_t srcOffset, size_t nbytes, AudioStreamDescription::PCMFormat format, CARingBuffer::FetchMode mode)
static void FetchABL(AudioBufferList* list, size_t destOffset, Vector<Byte*>& pointers, size_t srcOffset, size_t nbytes, CARingBuffer::FetchMode mode)
{
ASSERT(list->mNumberBuffers == pointers.size());
auto bufferCount = std::min<size_t>(list->mNumberBuffers, pointers.size());
@@ -138,36 +104,32 @@ static void FetchABL(AudioBufferList* list, size_t destOffset, Vector<Byte*>& po
auto* destinationData = static_cast<Byte*>(dest.mData) + destOffset;
auto* sourceData = pointer + srcOffset;
nbytes = std::min<size_t>(nbytes, dest.mDataByteSize - destOffset);
if (mode == CARingBuffer::Copy)
switch (mode) {
case CARingBuffer::Copy:
memcpy(destinationData, sourceData, nbytes);
else {
switch (format) {
case AudioStreamDescription::Int16: {
auto* destination = reinterpret_cast<int16_t*>(destinationData);
auto* source = reinterpret_cast<int16_t*>(sourceData);
for (size_t i = 0; i < nbytes / sizeof(int16_t); i++)
destination[i] += source[i];
break;
}
case AudioStreamDescription::Int32: {
auto* destination = reinterpret_cast<int32_t*>(destinationData);
vDSP_vaddi(destination, 1, reinterpret_cast<int32_t*>(sourceData), 1, destination, 1, nbytes / sizeof(int32_t));
break;
}
case AudioStreamDescription::Float32: {
auto* destination = reinterpret_cast<float*>(destinationData);
vDSP_vadd(destination, 1, reinterpret_cast<float*>(sourceData), 1, destination, 1, nbytes / sizeof(float));
break;
}
case AudioStreamDescription::Float64: {
auto* destination = reinterpret_cast<double*>(destinationData);
vDSP_vaddD(destination, 1, reinterpret_cast<double*>(sourceData), 1, destination, 1, nbytes / sizeof(double));
break;
}
case AudioStreamDescription::None:
ASSERT_NOT_REACHED();
break;
}
break;
case CARingBuffer::MixInt16: {
auto* destination = reinterpret_cast<int16_t*>(destinationData);
auto* source = reinterpret_cast<int16_t*>(sourceData);
for (size_t i = 0; i < nbytes / sizeof(int16_t); i++)
destination[i] += source[i];
break;
}
case CARingBuffer::MixInt32: {
auto* destination = reinterpret_cast<int32_t*>(destinationData);
vDSP_vaddi(destination, 1, reinterpret_cast<int32_t*>(sourceData), 1, destination, 1, nbytes / sizeof(int32_t));
break;
}
case CARingBuffer::MixFloat32: {
auto* destination = reinterpret_cast<float*>(destinationData);
vDSP_vadd(destination, 1, reinterpret_cast<float*>(sourceData), 1, destination, 1, nbytes / sizeof(float));
break;
}
case CARingBuffer::MixFloat64: {
auto* destination = reinterpret_cast<double*>(destinationData);
vDSP_vaddD(destination, 1, reinterpret_cast<double*>(sourceData), 1, destination, 1, nbytes / sizeof(double));
break;
}
}
}
}
@@ -320,12 +282,12 @@ void CARingBuffer::fetchInternal(AudioBufferList* list, size_t nFrames, uint64_t

if (offset0 < offset1) {
nbytes = offset1 - offset0;
FetchABL(list, destStartByteOffset, m_pointers, offset0, nbytes, m_description.format(), mode);
FetchABL(list, destStartByteOffset, m_pointers, offset0, nbytes, mode);
} else {
nbytes = m_capacityBytes - offset0;
FetchABL(list, destStartByteOffset, m_pointers, offset0, nbytes, m_description.format(), mode);
FetchABL(list, destStartByteOffset, m_pointers, offset0, nbytes, mode);
if (offset1)
FetchABL(list, destStartByteOffset + nbytes, m_pointers, 0, offset1, m_description.format(), mode);
FetchABL(list, destStartByteOffset + nbytes, m_pointers, 0, offset1, mode);
nbytes += offset1;
}

@@ -337,8 +299,33 @@ void CARingBuffer::fetchInternal(AudioBufferList* list, size_t nFrames, uint64_t
}
}

InProcessCARingBuffer::InProcessCARingBuffer()
: m_timeBoundsQueue(kGeneralRingTimeBoundsQueueSize)
std::unique_ptr<InProcessCARingBuffer> InProcessCARingBuffer::allocate(const WebCore::CAAudioStreamDescription& format, size_t frameCount)
{
frameCount = WTF::roundUpToPowerOfTwo(frameCount);
auto bytesPerFrame = format.bytesPerFrame();
auto numChannelStreams = format.numberOfChannelStreams();

auto checkedSizeForBuffers = computeSizeForBuffers(bytesPerFrame, frameCount, numChannelStreams);
if (checkedSizeForBuffers.hasOverflowed()) {
RELEASE_LOG_FAULT(Media, "InProcessCARingBuffer::allocate: Overflowed when trying to compute the storage size");
return nullptr;
}
auto sizeForBuffers = checkedSizeForBuffers.value();
Vector<uint8_t> buffer;
if (!buffer.tryReserveCapacity(sizeForBuffers)) {
RELEASE_LOG_FAULT(Media, "InProcessCARingBuffer::allocate: Failed to allocate buffer of the requested size: %lu", sizeForBuffers);
return nullptr;
}
buffer.grow(sizeForBuffers);
std::unique_ptr<InProcessCARingBuffer> result { new InProcessCARingBuffer { bytesPerFrame, frameCount, numChannelStreams, WTFMove(buffer) } };
result->initialize();
return result;
}

InProcessCARingBuffer::InProcessCARingBuffer(size_t bytesPerFrame, size_t frameCount, uint32_t numChannelStreams, Vector<uint8_t>&& buffer)
: CARingBuffer(bytesPerFrame, frameCount, numChannelStreams)
, m_buffer(WTFMove(buffer))
, m_timeBoundsQueue(kGeneralRingTimeBoundsQueueSize)
{
}

@@ -355,15 +342,6 @@ void InProcessCARingBuffer::flush()
m_timeBoundsQueuePtr = 0;
}

bool InProcessCARingBuffer::allocateBuffers(size_t byteCount, const CAAudioStreamDescription&, size_t)
{
if (!m_buffer.tryReserveCapacity(byteCount))
return false;

m_buffer.grow(byteCount);
return true;
}

void InProcessCARingBuffer::setCurrentFrameBounds(uint64_t startTime, uint64_t endTime)
{
Locker locker { m_currentFrameBoundsLock };
