Reduce DataChannel.readLoop allocations #1516

Closed
bshimc opened this issue Nov 11, 2020 · 2 comments
bshimc commented Nov 11, 2020

Summary

I'm evaluating webrtc for a use case that stresses DataChannels. While profiling, I tracked some pathological allocation behavior down to readLoop. A simple patch that uses sync.Pool to minimize allocations (see patch below) produced what appear to be significant GC improvements. That said, the patch changes the semantics of the DataChannel OnMessage handler: the contents of the message buffer are only valid for the lifetime of the handler function. For my use case this was acceptable, but it's unlikely to be acceptable in general (great if it is).

Motivation

Performance.

Describe alternatives you've considered

To guarantee compatibility we'd need to allocate a new buffer sized to n and copy the contents into it before releasing the larger dataChannelBufferSize buffer back to the pool. I haven't tried this. For our purposes it would be nice to have a configuration option that toggles between the no-copy and legacy-safe behaviors.

Additional context

diff --git a/vendor/github.com/pion/webrtc/v3/datachannel.go b/vendor/github.com/pion/webrtc/v3/datachannel.go
index ba1269d..7a32c70 100644
--- a/vendor/github.com/pion/webrtc/v3/datachannel.go
+++ b/vendor/github.com/pion/webrtc/v3/datachannel.go
@@ -311,11 +311,16 @@ func (d *DataChannel) onError(err error) {
        }
 }

+var rlBufPool = sync.Pool{New: func() interface{} {
+       return make([]byte, dataChannelBufferSize)
+}}
+
 func (d *DataChannel) readLoop() {
        for {
-               buffer := make([]byte, dataChannelBufferSize)
+               buffer := rlBufPool.Get().([]byte)
                n, isString, err := d.dataChannel.ReadDataChannel(buffer)
                if err != nil {
+                       rlBufPool.Put(buffer)
                        d.setReadyState(DataChannelStateClosed)
                        if err != io.EOF {
                                d.onError(err)
@@ -325,6 +330,7 @@ func (d *DataChannel) readLoop() {
                }

                d.onMessage(DataChannelMessage{Data: buffer[:n], IsString: isString})
+               rlBufPool.Put(buffer)
        }
 }

Memory allocation profiles: top is before the patch, bottom is after. Note that the second set of profiles was captured while pushing considerably more QPS through the established data channels.

[allocation profile screenshot]

Sean-Der commented Nov 11, 2020

This is amazing! Really love to see stuff like this.

To get this merged today I think we can make it a SettingEngine option. This is a pretty scary breaking change; users will upgrade and not understand what hit them :(

We can get this merged quickly (with a test) and then revisit making it a default later.

bshimc added a commit to bshimc/webrtc that referenced this issue Nov 12, 2020
See pion#1516

This patch preserves the semantics of the OnMessage handler and is safer
but less efficient than the patch first described in pion#1516.

As expected, when run with GOMAXPROCS=1 there is opportunity for re-use
and sync.Pool is pure overhead: allocs/op increases due to sync.Pool
bookkeeping, but B/op and ns/op improve drastically.

    $ git co datachannel.go && \
      go test -bench=DataChannelSend -run=XXX \
      -benchmem -cpu=1,2,4,8 -count=10 > original.txt
    $ git co datachannel.go && git apply pool.patch && \
      go test -bench=DataChannelSend -run=XXX \
    	-benchmem -cpu=1,2,4,8 -count=10 > patched.txt

    $ benchstat original.txt patched.txt
    name               old time/op    new time/op    delta
    DataChannelSend      1.45µs ±35%    1.24µs ±40%      ~     (p=0.211 n=9+10)
    DataChannelSend-2     885ns ±14%    2221ns ±44%  +150.87%  (p=0.000 n=10+9)
    DataChannelSend-4    5.19µs ±15%    3.27µs ±16%   -37.13%  (p=0.000 n=9+10)
    DataChannelSend-8    8.07µs ±22%    3.97µs ± 4%   -50.78%  (p=0.000 n=10+8)

    name               old alloc/op   new alloc/op   delta
    DataChannelSend      1.67kB ±72%    0.31kB ±12%   -81.55%  (p=0.000 n=9+10)
    DataChannelSend-2    1.80kB ±21%    0.34kB ±14%   -81.14%  (p=0.000 n=10+8)
    DataChannelSend-4    1.47kB ±14%    1.40kB ± 0%    -4.39%  (p=0.031 n=10+8)
    DataChannelSend-8    2.18kB ±14%    1.45kB ± 0%   -33.57%  (p=0.000 n=9+8)

    name               old allocs/op  new allocs/op  delta
    DataChannelSend        6.00 ± 0%      6.80 ±18%   +13.33%  (p=0.008 n=8+10)
    DataChannelSend-2      6.00 ± 0%      8.11 ±36%   +35.19%  (p=0.000 n=10+9)
    DataChannelSend-4      7.00 ± 0%     37.00 ± 0%  +428.57%  (p=0.000 n=10+10)
    DataChannelSend-8      7.70 ± 9%     39.00 ± 0%  +406.49%  (p=0.000 n=10+7)
bshimc added a commit to bshimc/webrtc that referenced this issue Nov 13, 2020
See pion#1516

This patch preserves the semantics of the OnMessage handler and is
safer but less efficient than the patch first described in pion#1516.

$ git checkout origin/master datachannel.go && \
  go test -bench=. -run=XXX -benchmem -count=10 > original.txt
$ git checkout datachannel.go && git apply pool.patch && \
  go test -bench=. -run=XXX -benchmem -count=10 > option1.txt

$ benchstat original.txt option1.txt
name                 old time/op    new time/op    delta
DSend2-8     20.3µs ±51%     3.7µs ± 6%   -81.74%  (p=0.000 n=10+10)
DSend4-8     23.5µs ±34%     3.6µs ± 8%   -84.80%  (p=0.000 n=10+8)
DSend8-8     18.9µs ±35%     5.8µs ±68%   -69.45%  (p=0.000 n=9+10)
DSend16-8    16.8µs ±30%    10.0µs ±24%   -40.77%  (p=0.000 n=10+10)
DSend32-8    710ms ±100%       0ms ±81%  -100.00%  (p=0.035 n=10+9)

name                 old alloc/op   new alloc/op   delta
DSend2-8     15.3kB ±89%     1.4kB ± 0%   -90.59%  (p=0.000 n=9+10)
DSend4-8     41.7kB ±63%     1.4kB ± 1%   -96.58%  (p=0.000 n=10+10)
DSend8-8     45.0kB ±33%     1.4kB ± 2%   -96.83%  (p=0.000 n=9+10)
DSend16-8    34.0kB ±69%     1.4kB ± 1%   -95.77%  (p=0.000 n=10+10)
DSend32-8   37.4MB ±388%     0.0MB ± 4%  -100.00%  (p=0.000 n=10+7)

name                 old allocs/op  new allocs/op  delta
DSend2-8       15.8 ±46%      38.6 ± 2%  +144.30%  (p=0.000 n=10+10)
DSend4-8       27.1 ±48%      38.0 ± 0%   +40.22%  (p=0.000 n=10+9)
DSend8-8       29.3 ±16%      38.0 ± 0%   +29.55%  (p=0.000 n=9+8)
DSend16-8      23.6 ±41%      37.0 ± 0%   +56.78%  (p=0.000 n=10+9)
DSend32-8    19.3k ±100%      0.0k ± 0%      ~     (p=0.178 n=10+7)
Sean-Der pushed a commit that referenced this issue Nov 14, 2020
See #1516

This patch preserves the semantics of the OnMessage handler and is
safer but less efficient than the patch first described in #1516.

@Sean-Der (Member)

Fixed by 159ba5a
