This repository has been archived by the owner on Nov 30, 2023. It is now read-only.

Helios 2.0 Performance Goals & Measurement discussion #36

Closed
Aaronontheweb opened this issue Apr 21, 2015 · 3 comments

Comments

@Aaronontheweb
Member

This issue is for discussing the performance goals and measurements related to all of the API layers in #35.

@JeffCyr

JeffCyr commented Aug 19, 2015

Hi Aaron,

You talked about using SocketAsyncEventArgs in Helios 2.0. From experience, you need to pool them to get better results than BeginReceive/BeginSend. I'm not very familiar with the Helios 1.X codebase, but it doesn't seem to pool its send buffers. Have you thought about a strategy for send buffer pooling in Helios 2.0?

Jeff
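
(A minimal sketch of the kind of SAEA pooling suggested here, not Helios code; the pool class and per-instance buffer size are illustrative assumptions.)

```csharp
using System.Collections.Concurrent;
using System.Net.Sockets;

// Illustrative sketch only: a simple thread-safe pool of SocketAsyncEventArgs,
// so completed operations can return their args instead of allocating new ones.
public sealed class SocketAsyncEventArgsPool
{
    private readonly ConcurrentStack<SocketAsyncEventArgs> _pool =
        new ConcurrentStack<SocketAsyncEventArgs>();
    private readonly int _bufferSize;

    public SocketAsyncEventArgsPool(int bufferSize)
    {
        _bufferSize = bufferSize;
    }

    public SocketAsyncEventArgs Rent()
    {
        SocketAsyncEventArgs args;
        if (!_pool.TryPop(out args))
        {
            // Lazily create a new instance with its own buffer when the pool is empty.
            args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[_bufferSize], 0, _bufferSize);
        }
        return args;
    }

    public void Return(SocketAsyncEventArgs args)
    {
        // Clear per-operation state before reuse.
        args.AcceptSocket = null;
        args.UserToken = null;
        _pool.Push(args);
    }
}
```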

@Maximusya

SocketAsyncEventArgs pooling is questionable:

  • for long-lived connections, even a non-pooled SAEA is already an advantage over BeginReceive/BeginSend (which is the point SAEA was introduced for in the first place);
  • for short-lived connections, pooling SAEA might make sense.

In any case, pooling logic complicates the code.

Now I tend to acquire/release buffers from the pool on every new/completed network operation, and create/dispose two SAEAs on accept/close of a connection.
Though it all may be an oversight on my part regarding socket-server architecture ;)
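
(A rough sketch of the per-connection arrangement described above, showing only the receive path; the IBufferPool abstraction and the Connection class are hypothetical placeholders, not Helios code.)

```csharp
using System;
using System.Net.Sockets;

// Hypothetical buffer pool interface, standing in for whatever pool the server uses.
public interface IBufferPool
{
    ArraySegment<byte> Acquire();
    void Release(ArraySegment<byte> segment);
}

// Sketch: two SAEAs live for the lifetime of the connection,
// while buffers are acquired per operation and released on completion.
public sealed class Connection : IDisposable
{
    private readonly Socket _socket;
    private readonly IBufferPool _buffers;
    private readonly SocketAsyncEventArgs _receiveArgs = new SocketAsyncEventArgs();
    private readonly SocketAsyncEventArgs _sendArgs = new SocketAsyncEventArgs();

    public Connection(Socket socket, IBufferPool buffers)
    {
        _socket = socket;
        _buffers = buffers;
        _receiveArgs.Completed += OnReceiveCompleted;
    }

    public void BeginReceive()
    {
        var segment = _buffers.Acquire(); // per-operation buffer
        _receiveArgs.SetBuffer(segment.Array, segment.Offset, segment.Count);
        if (!_socket.ReceiveAsync(_receiveArgs))
            OnReceiveCompleted(_socket, _receiveArgs); // completed synchronously
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
    {
        // ... process e.Buffer[e.Offset .. e.Offset + e.BytesTransferred] here ...

        // Return the buffer to the pool as soon as the operation completes.
        _buffers.Release(new ArraySegment<byte>(e.Buffer, e.Offset, e.Count));

        if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
            BeginReceive();
    }

    public void Dispose()
    {
        _receiveArgs.Dispose();
        _sendArgs.Dispose();
        _socket.Dispose();
    }
}
```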

@JeffCyr

JeffCyr commented Aug 20, 2015

@Maximusya I think your design is good, but assigning only one SAEA for write operations on the socket forces you to copy the buffer you want to send into the SAEA's buffer.

If you replace the traditional buffer pool with a SAEA pool, you can write directly into the SAEA's buffer and hand that to the socket, so that's one less buffer copy per write.

Another advantage is that the socket can release the SAEA back to the pool when it is done writing, so a connection consumes less memory when it isn't writing data.

To optimize things further, your pool can allocate large buffers so they end up in the Large Object Heap. Multiple SAEAs can then point into the same buffer at different offset/count ranges. This results in less fragmentation and less GC pressure, because the GC won't need to compact that memory.
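
(A sketch of the large-buffer slicing idea, in the spirit of the well-known SocketAsyncEventArgs buffer-manager pattern; the class name, sizes, and the single-threaded free list are illustrative assumptions.)

```csharp
using System.Collections.Generic;
using System.Net.Sockets;

// Sketch: one large backing array (85,000+ bytes lands on the Large Object Heap)
// is sliced into fixed-size segments, and each SocketAsyncEventArgs is assigned
// its own offset/count into that shared array.
// Note: the free list here is not thread-safe; a real pool would synchronize access.
public sealed class SliceBufferManager
{
    private readonly byte[] _buffer;      // single large allocation shared by all SAEAs
    private readonly int _segmentSize;
    private readonly Stack<int> _freeOffsets = new Stack<int>();

    public SliceBufferManager(int segmentSize, int segmentCount)
    {
        _segmentSize = segmentSize;
        _buffer = new byte[segmentSize * segmentCount];
        for (int i = 0; i < segmentCount; i++)
            _freeOffsets.Push(i * segmentSize);
    }

    // Point the args at an unused slice of the shared buffer.
    public bool AssignBuffer(SocketAsyncEventArgs args)
    {
        if (_freeOffsets.Count == 0)
            return false;
        args.SetBuffer(_buffer, _freeOffsets.Pop(), _segmentSize);
        return true;
    }

    // Release the slice when the args goes back to the pool.
    public void FreeBuffer(SocketAsyncEventArgs args)
    {
        _freeOffsets.Push(args.Offset);
        args.SetBuffer(null, 0, 0);
    }
}
```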
