
Pass # of times called to TCPConnectionNotify.received #1777

Merged 1 commit into master on Apr 5, 2017
Conversation

SeanTAllen
Member

On non-Windows platforms, TCPConnection will read data off of a socket
until:

  • there's no more data to read
  • a max size is hit
  • `TCPConnectionNotify.received` returns false

The last option was introduced via RFC #19 to give the programmer more
control of when to yield the scheduler. This was a noble goal but is
weakly implemented. In order to exercise better control, the programmer
needs an additional bit of information: the number of times during *this
scheduler run* that `received` has been called.

As we began to use RFC #19 at Sendence, it became clear that it wasn't
doing what we wanted. What we hoped to be able to do was read up to X
number of messages off the socket, inject them into our application and
then give up the scheduler.

Our initial implementation was to keep a counter of messages received in
our `TCPConnectionNotify` instances and, when it hit a number such as 25
or 50, return false to give up the scheduler. This, however, didn't
accomplish what we wanted. The following scenario was possible:

Scheduler run results in 24 calls to `received`. When the next scheduler
run would occur, we'd get 1 more `received` call and return false. What
we really wanted was to *read no more than 25 messages per scheduler
run*.
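The failure mode above can be sketched in Python (rather than Pony, and with hypothetical names) to show why a counter that survives across scheduler runs misbehaves:

```python
# Hypothetical sketch of the flawed approach: the notifier keeps a
# cumulative count of received() calls, so a run that ends early
# (the socket runs dry after 24 messages) leaves the counter near its
# limit, and the next run yields after handling only 1 message.

LIMIT = 25

class CountingNotify:
    def __init__(self):
        self.count = 0  # survives across scheduler runs -- the bug

    def received(self, data):
        self.count += 1
        if self.count >= LIMIT:
            self.count = 0
            return False  # give up the scheduler
        return True

def scheduler_run(notify, available):
    """Read until the data runs out or received() returns False."""
    handled = 0
    for _ in range(available):
        handled += 1
        if not notify.received(b"msg"):
            break
    return handled

n = CountingNotify()
first = scheduler_run(n, 24)    # socket runs dry after 24 messages
second = scheduler_run(n, 100)  # plenty of data, but counter is at 24
print(first, second)            # 24 1
```

The second run handles a single message before yielding, even though plenty of data was waiting.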

In order to accomplish this, we added an additional parameter to
`TCPConnectionNotify.received`: the number of times during this
scheduler run that `received` has been called (inclusive of the existing
call). This gives much more fine-grained control over when to
"prematurely" give up the scheduler and play nice with other sockets in
the system.
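The behavior the new parameter enables can be sketched the same way (again Python with hypothetical names, standing in for Pony's `TCPConnectionNotify.received`): because the count is per-run, the notifier needs no state between runs and every run gets its full quota.

```python
# Hypothetical sketch of the new approach: received() is passed the
# number of times it has been called during *this* scheduler run
# (starting at 1), so the notifier can cap every run at 25 messages
# regardless of where the previous run stopped.

LIMIT = 25

class CappedNotify:
    def received(self, data, times):
        # times is per-run; no counter has to survive between runs
        return times < LIMIT

def scheduler_run(notify, available):
    """Read until the data runs out or received() returns False."""
    handled = 0
    while handled < available:
        handled += 1
        if not notify.received(b"msg", handled):
            break
    return handled

print(scheduler_run(CappedNotify(), 24))   # 24 (socket ran dry)
print(scheduler_run(CappedNotify(), 100))  # 25 (full quota every run)
```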

You might think, "why not lower the max read size?" And this certainly
is something you could do, but lowering the max read size lowers how
large a chunk we read from the socket during a given system call. In a
high-throughput system, that will greatly increase the number of system
calls, thereby lowering performance.

Resolves #1773

@SeanTAllen SeanTAllen added the changelog - changed Automatically add "Changed" CHANGELOG entry on merge label Mar 29, 2017
@SeanTAllen
Member Author

Forgot to update the examples. Pushed a fix for that.

@SeanTAllen SeanTAllen merged commit a450934 into master Apr 5, 2017
ponylang-main added a commit that referenced this pull request Apr 5, 2017
@SeanTAllen SeanTAllen deleted the rfc-41 branch April 5, 2017 20:23