
rpc_util: Reuse memory buffer for receiving message #5862

Merged (42 commits) on Jun 27, 2023

Conversation

@hueypark (Contributor) commented Dec 13, 2022

This PR aims to reduce memory allocation in the receive message process. Using this with a stream-heavy workload can improve performance significantly.
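For illustration only, a minimal sketch of how an application might opt in to the shared receive buffer pool. The names grpc.NewSharedBufferPool, grpc.RecvBufferPool, and grpc.WithRecvBufferPool are taken from the released gRPC-Go API that accompanies this change; treat their exact spelling as an assumption rather than something stated in this description.

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Server side: ask gRPC to reuse receive buffers via a shared pool.
	// (Option name assumed from the API introduced alongside this PR.)
	srv := grpc.NewServer(grpc.RecvBufferPool(grpc.NewSharedBufferPool()))

	lis, err := net.Listen("tcp", "localhost:50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	go srv.Serve(lis)

	// Client side: the dial-option counterpart.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithRecvBufferPool(grpc.NewSharedBufferPool()),
	)
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()
}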

RELEASE NOTES:

  • rpc_util: reduce memory allocation for the message parsing process in streaming RPC

@linux-foundation-easycla (bot) commented Dec 13, 2022

CLA Signed

The committers listed above are authorized under a signed CLA.

@hueypark force-pushed the master branch 5 times, most recently from d198e0a to 9f84355 on December 13, 2022 10:59
@easwars (Contributor) commented Dec 15, 2022

Could you please explain what problem you are trying to solve by making this change? Maybe we can have a discussion and see if the current approach is the way to go or if a different approach would serve better.

Using this with a stream-heavy workload can improve performance significantly.

Do you have any data to show that this change improves performance? If so, could you please share it with us? Also, we do have a benchmark suite that you can run your changes against. Thanks.

@hueypark (Contributor, Author):

I implemented a stream-count feature in benchmain and ran my workloads.
The following is the benchmark result.

unconstrained-networkMode_Local-bufConn_false-keepalive_false-benchTime_10s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1048576B-respSize_1048576B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps            0            0      NaN%
             SendOps         9251        10583    14.40%
             RecvOps         8150        11626    42.65%
            Bytes/op   5323921.33   3288925.84   -38.22%
           Allocs/op       621.94       610.97    -1.77%
             ReqT/op 7760301260.80 8877663846.40    14.40%
            RespT/op 6836715520.00 9752595660.80    42.65%
            50th-Lat           0s           0s      NaN%
            90th-Lat           0s           0s      NaN%
            99th-Lat           0s           0s      NaN%
             Avg-Lat           0s           0s      NaN%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

And I have attached all of the benchmark results. cpuProfAfter.zip

@easwars (Contributor) commented Dec 20, 2022

I implemented a stream-count feature in benchmain and ran my workloads.

What does this feature do?

And I have attached all of the benchmark results. cpuProfAfter.zip

This zip file contains a single file and I don't know how to view the results in there.

The benchmark results you have posted here are for a very specific case. They use the following settings, among others:

  • workload type: unconstrained
  • benchmark run time: 10s
  • concurrent calls: 1
  • req size: 1048576B
  • resp size: 1048576B

We would really like to see more comprehensive benchmark runs for a code change which is as fundamental as yours. Also, a benchmark run time of 10s is simply not good enough.

@easwars (Contributor) commented Dec 20, 2022

Also, could you please explain why you want to make the change that you are making? Are you seeing some performance bottlenecks when running specific workloads?

@hueypark (Contributor, Author):

What does this feature do?

This feature allows the streaming benchmark to send over multiple streams.
In real-world scenarios, API users may not send only a single message over a stream.

This zip file contains a single file and I don't know how to view the results in there.

I apologize. It would probably be better to reproduce the results locally, so I will include the necessary commands for reproduction.

We would really like to see more comprehensive benchmark runs for a code change which is as fundamental as yours. Also, a benchmark run time of 10s is simply not good enough.

Agreed. I re-ran the benchmarks with a run time of 1m.

result:

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_10240-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps        31405        31349    -0.18%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     18072.12     18104.19     0.18%
           Allocs/op       216.46       215.70    -0.46%
             ReqT/op   4187333.33   4179866.67    -0.18%
            RespT/op   4187333.33   4179866.67    -0.18%
            50th-Lat   1.903201ms    1.90694ms     0.20%
            90th-Lat   1.958722ms   1.960585ms     0.10%
            99th-Lat   2.007894ms   2.015896ms     0.40%
             Avg-Lat   1.910174ms   1.913565ms     0.18%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

streaming-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_10240-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps         3934         3920    -0.36%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     68941.53     52429.05   -23.95%
           Allocs/op       468.97       449.64    -4.05%
             ReqT/op    524533.33    522666.67    -0.36%
            RespT/op    524533.33    522666.67    -0.36%
            50th-Lat  15.209591ms  15.232099ms     0.15%
            90th-Lat  15.333759ms  15.356509ms     0.15%
            99th-Lat  16.770753ms  17.077881ms     1.83%
             Avg-Lat  15.252835ms  15.309135ms     0.37%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unconstrained-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_10240-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps            0            0      NaN%
             SendOps        54388        51778    -4.80%
             RecvOps        77008        77023     0.02%
            Bytes/op      6123.87      4086.67   -33.26%
           Allocs/op        21.29        19.23    -9.40%
             ReqT/op   7251733.33   6903733.33    -4.80%
            RespT/op  10267733.33  10269733.33     0.02%
            50th-Lat           0s           0s      NaN%
            90th-Lat           0s           0s      NaN%
            99th-Lat           0s           0s      NaN%
             Avg-Lat           0s           0s      NaN%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unconstrained-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_40ms-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps            0            0      NaN%
             SendOps      2602356      2842256     9.22%
             RecvOps      2664021      2037056   -23.53%
            Bytes/op      6315.69      3869.00   -38.74%
           Allocs/op        20.39        17.48   -14.71%
             ReqT/op 346980800.00 378967466.67     9.22%
            RespT/op 355202800.00 271607466.67   -23.53%
            50th-Lat           0s           0s      NaN%
            90th-Lat           0s           0s      NaN%
            99th-Lat           0s           0s      NaN%
             Avg-Lat           0s           0s      NaN%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

streaming-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_40ms-kbps_10240-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps           90           90     0.00%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     70131.82     53823.20   -23.25%
           Allocs/op       551.42       533.84    -3.26%
             ReqT/op     12000.00     12000.00     0.00%
            RespT/op     12000.00     12000.00     0.00%
            50th-Lat 668.543496ms 667.398531ms    -0.17%
            90th-Lat 674.226219ms 671.845385ms    -0.35%
            99th-Lat 677.662648ms 675.277176ms    -0.35%
             Avg-Lat 669.060189ms 668.227508ms    -0.12%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unconstrained-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_40ms-kbps_10240-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps            0            0      NaN%
             SendOps        76866        76860    -0.01%
             RecvOps        76950        77004     0.07%
            Bytes/op      5909.29      3845.26   -34.93%
           Allocs/op        19.95        17.88   -10.03%
             ReqT/op  10248800.00  10248000.00    -0.01%
            RespT/op  10260000.00  10267200.00     0.07%
            50th-Lat           0s           0s      NaN%
            90th-Lat           0s           0s      NaN%
            99th-Lat           0s           0s      NaN%
             Avg-Lat           0s           0s      NaN%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps       607492       605531    -0.32%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     18057.20     18106.91     0.27%
           Allocs/op       215.88       215.90     0.00%
             ReqT/op  80998933.33  80737466.67    -0.32%
            RespT/op  80998933.33  80737466.67    -0.32%
            50th-Lat     95.001µs     95.301µs     0.32%
            90th-Lat     108.75µs    109.127µs     0.35%
            99th-Lat    223.706µs    225.913µs     0.99%
             Avg-Lat     98.429µs     98.753µs     0.33%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unconstrained-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps            0            0      NaN%
             SendOps     17801533     23479365    31.90%
             RecvOps     14600691     18649868    27.73%
            Bytes/op      5866.95      3813.91   -34.99%
           Allocs/op        20.04        17.69   -14.97%
             ReqT/op 2373537733.33 3130582000.00    31.90%
            RespT/op 1946758800.00 2486649066.67    27.73%
            50th-Lat           0s           0s      NaN%
            90th-Lat           0s           0s      NaN%
            99th-Lat           0s           0s      NaN%
             Avg-Lat           0s           0s      NaN%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

streaming-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_40ms-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps           92           91    -1.09%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     70343.83     53902.51   -23.37%
           Allocs/op       566.60       540.46    -4.59%
             ReqT/op     12266.67     12133.33    -1.08%
            RespT/op     12266.67     12133.33    -1.08%
            50th-Lat 655.037524ms 666.202283ms     1.70%
            90th-Lat 656.737879ms  667.99468ms     1.71%
            99th-Lat 657.299189ms 669.693627ms     1.89%
             Avg-Lat 654.943331ms 665.739009ms     1.65%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_40ms-kbps_10240-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps          731          722    -1.23%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     18071.45     18069.62    -0.01%
           Allocs/op       217.97       215.36    -0.92%
             ReqT/op     97466.67     96266.67    -1.23%
            RespT/op     97466.67     96266.67    -1.23%
            50th-Lat  82.127316ms  83.219196ms     1.33%
            90th-Lat  82.400964ms  84.170818ms     2.15%
            99th-Lat  82.576049ms   84.53085ms     2.37%
             Avg-Lat  82.161793ms  83.147415ms     1.20%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

streaming-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps       117155       120640     2.97%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     69970.61     53383.67   -23.71%
           Allocs/op       522.46       499.58    -4.40%
             ReqT/op  15620666.67  16085333.33     2.97%
            RespT/op  15620666.67  16085333.33     2.97%
            50th-Lat    494.255µs    482.374µs    -2.40%
            90th-Lat    541.998µs     522.98µs    -3.51%
            99th-Lat    917.151µs    888.174µs    -3.16%
             Avg-Lat      511.8µs    497.027µs    -2.89%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_40ms-kbps_0-MTU_0-maxConcurrentCalls_1-reqSize_1000B-respSize_1000B-compressor_off-channelz_false-preloader_true-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-
               Title       Before        After Percentage
            TotalOps          736          722    -1.90%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op     18025.93     18032.04     0.04%
           Allocs/op       214.81       213.43    -0.47%
             ReqT/op     98133.33     96266.67    -1.90%
            RespT/op     98133.33     96266.67    -1.90%
            50th-Lat  81.676483ms   83.26441ms     1.94%
            90th-Lat  82.441415ms  84.380606ms     2.35%
            99th-Lat  82.518705ms  84.539625ms     2.45%
             Avg-Lat  81.612729ms  83.167132ms     1.90%
           GoVersion     go1.19.4     go1.19.4
         GrpcVersion   1.53.0-dev   1.53.0-dev

commands:

go run benchmark/benchmain/main.go -benchtime=1m \
          -workloads=all -preloader=on \
          -maxConcurrentCalls=1 \
          -reqSizeBytes=1000 -respSizeBytes=1000 -streamCounts=8 \
          -resultFile=beforeResult

-- after checking out the buffer-reuse commit

go run benchmark/benchmain/main.go -benchtime=1m \
          -workloads=all -preloader=on \
          -maxConcurrentCalls=1 \
          -reqSizeBytes=1000 -respSizeBytes=1000 -streamCounts=8 \
          -resultFile=afterResult

go run benchmark/benchresult/main.go beforeResult afterResult

@hueypark (Contributor, Author):

Also, could you please explain why you want to make the change that you are making? Are you seeing some performance bottlenecks when running specific workloads?

I am developing an OLAP database, and our map-reduce framework relies on gRPC to send and receive large amounts of data. Recently, we discovered that gRPC allocates unnecessary memory in the stream API, which slows our system down. To improve performance, we want to reduce that memory allocation.

Actual code link:

@easwars added the Type: Performance label (Performance improvements: CPU, network, memory, etc) and removed the Status: Requires Reporter Clarification label on Dec 27, 2022
@easwars assigned easwars and unassigned hueypark on Dec 27, 2022
@easwars (Contributor) commented Dec 27, 2022

Could you please send out your changes to the benchmarking code as a separate PR? Thanks.

@hueypark (Contributor, Author):

Done with #5898.
I greatly appreciate your review.

@dfawley added this to the 1.57 Release milestone on Jun 2, 2023
@pstibrany (Contributor) commented Jun 8, 2023

Thank you for this PR! I'm testing this patch in our dev environment. The description mentions "streaming RPC", but the code that gets a buffer from the pool is also hit for unary methods, and my experiment confirms that. It wasn't clear to me whether this would work for unary methods too; perhaps the description could clarify that?

Update: Summary of my findings:

  • On the server side, pooling of buffers is supported for streaming methods. For unary methods, received messages are put into a buffer from the pool, but the buffer is never returned to the pool.
  • On the client side, pooling seems to work for messages received by both streaming and unary methods.

Comment on lines +31 to +39:

type SharedBufferPool interface {
	// Get returns a buffer with specified length from the pool.
	//
	// The returned byte slice may be not zero initialized.
	Get(length int) []byte

	// Put returns a buffer to the pool.
	Put(*[]byte)
}
Review comment:

I find it strange and inconvenient that this interface gives you a []byte but requires a *[]byte to put it back. This forces the caller to allocate a pointer to return elements to the pool, which slightly defeats the purpose of having a pool.

I think that it should either:

  • return a *[]byte from Get(int) *[]byte, which might be inconvenient because the caller may not have a place to store that pointer and would be forced to allocate a new one to put the slice back; or
  • accept a []byte in Put([]byte) and let the specific SharedBufferPool implementation deal with the situation (example).

There's no need to use pointers to the slice, except that the specific implementation below (just like all of them as of today) uses sync.Pool, which can only hold interface{} values. However, we can expect a generic version of sync.Pool in a future Go release, which would let us solve this implementation detail, so I wouldn't build the exported contract on it.

So, I'd vote for consistency on Get(int) []byte and Put([]byte).
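To make the trade-off concrete, here is a hedged sketch (illustrative code and names only, not the PR's implementation, which is tiered by buffer length) of a single-size pool with the proposed Put([]byte) signature. The comment marks where the pointer allocation that the linter would otherwise flag ends up hiding inside the implementation:

package bufferpool

import "sync"

// slicePool is a toy, single-size pool; all names here are illustrative.
type slicePool struct {
	p sync.Pool
}

// Get returns a buffer of the requested length, reusing a pooled one when it
// is large enough.
func (s *slicePool) Get(length int) []byte {
	if v := s.p.Get(); v != nil {
		if buf := *(v.(*[]byte)); cap(buf) >= length {
			return buf[:length]
		}
	}
	return make([]byte, length)
}

// Put takes a plain []byte, as proposed above. sync.Pool still needs a
// pointer-like value, so &buf escapes to the heap here: the slice-header
// allocation is hidden inside the pool rather than removed.
func (s *slicePool) Put(buf []byte) {
	s.p.Put(&buf)
}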

Contributor:

@hueypark: What are your thoughts on this?

Author (@hueypark):

I made commit 1f4bc35 for this.

accept a []byte in Put([]byte) and let the specific SharedBufferPool deal with the situation (prometheus/prometheus#12189).

Author (@hueypark):

I've reverted commit 1f4bc35 because of the lint finding "the argument should be pointer-like to avoid allocations" (SA6002).
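For readers unfamiliar with the rule: staticcheck's SA6002 fires because sync.Pool.Put takes an interface{}, so passing a bare slice boxes its header on every call. A tiny illustration (not code from this PR):

package main

import "sync"

func main() {
	var pool sync.Pool
	buf := make([]byte, 4096)

	// Passing the slice value boxes its header (24 bytes on 64-bit platforms)
	// into an interface{}, allocating on every call; this is what SA6002 flags.
	pool.Put(buf)

	// Passing a pointer lets the interface hold the pointer directly. Taking
	// &buf still makes the header escape once, which is the residual
	// allocation pointed out further down in this thread.
	pool.Put(&buf)
}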

Author (@hueypark):

Currently, I don't see any practical benefit, so it seems most sensible to keep the current approach.
In the future, should something like a generic sync.Pool for slices become available, Put([]byte) could be beneficial.
Perhaps we can revisit this after golang/go#47657 (comment).
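As a hedged aside on the generics point: a type-parameterized wrapper over today's sync.Pool (sketched below with illustrative names) would give a Put([]byte)-shaped API, but it still boxes the value through the underlying Put(any), so the allocation only disappears if the runtime pool itself becomes generic, which is what golang/go#47657 is about.

package bufferpool

import "sync"

// Pool is an illustrative generic wrapper, not a proposal for gRPC's API.
type Pool[T any] struct {
	p   sync.Pool
	New func() T
}

func (g *Pool[T]) Get() T {
	if v := g.p.Get(); v != nil {
		return v.(T)
	}
	if g.New != nil {
		return g.New()
	}
	var zero T
	return zero
}

// Put still goes through sync.Pool.Put(any) underneath, so a non-pointer T
// such as []byte is boxed here and its header is heap-allocated on every
// call; the wrapper changes the signature, not the cost.
func (g *Pool[T]) Put(v T) {
	g.p.Put(v)
}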

Contributor:

Fair point. Thanks for the discussion @colega and @hueypark.

Review comment:

Note that by reverting that, you're fixing the linter's complaint, but not the underlying issue: you just hid the issue behind an interface call so the linter can't see it, but it still causes an extra allocation there.

Author (@hueypark):

Thank you for pointing that out. I acknowledge it's not a true fix; we'll need to revisit this for a more effective solution later.

}

func (nopBufferPool) Put(*[]byte) {

Contributor:

Nit: nix this newline.

Author (@hueypark):

Done. 63a360e


@easwars (Contributor) commented Jun 27, 2023

Also, it looks like there are some merge conflicts in the benchmark files. I would be OK with moving them to a separate PR as well, if that makes life easier. Thanks.

@hueypark (Contributor, Author):

Also, it looks like there are some merge conflicts in the benchmark files. I would be OK with moving them to a separate PR as well, if that makes life easier. Thanks.

I have made a commit to resolve the conflicts. 86d999f


@easwars merged commit 1634254 into grpc:master on Jun 27, 2023
11 checks passed
@easwars (Contributor) commented Jun 27, 2023

Thank you very much for your contribution @hueypark and apologies for the really drawn out review process on this PR.

@bboreham (Contributor) commented Aug 23, 2023

This is in v1.57, right? I was surprised not to see it in the release notes.

[I'm quite confused because the release shows as June 26th and this PR as merged on June 27, but maybe timezones are causing that]

@easwars (Contributor) commented Aug 23, 2023

The commit 1634254 definitely made it into v1.57.0, but I'm not sure how we missed adding it to the release notes.

@dfawley (Member) commented Aug 23, 2023

[I'm quite confused because the release shows as June 26th and this PR as merged on June 27, but maybe timezones are causing that]

The release was JULY 26, not June 26.

@bboreham (Contributor):

Aha! Thanks, I told you I was confused 😄

@efremovmi commented Aug 24, 2023

Good afternoon! I ran tests and noticed that buffers are not returned to the pool (processUnaryRPC in server.go). Can you double-check? @pstibrany has already written about this above.

@hueypark (Contributor, Author) commented Sep 6, 2023

@easwars
Through this change, we achieved a memory allocation reduction of over 25% in specific scenarios.
Once again, thank you.

@github-actions bot locked as resolved and limited conversation to collaborators on Mar 24, 2024