Optimize: set max write len to NET_MAX_WRITES_PER_EVENT when writing to client. #5263

Status: Open — wants to merge 1 commit into base: unstable
Conversation

@0xtonyxia (Contributor)

In case that we have a very long string value to send to client
in one write().

@0xtonyxia 0xtonyxia changed the title Set max write len to NET_MAX_WRITES_PER_EVENT when writing to client. Optimize: set max write len to NET_MAX_WRITES_PER_EVENT when writing to client. Aug 20, 2018
@antirez (Contributor) commented Sep 7, 2018

Thanks @0xtonyxia, pinging @oranagra and @soloestoy on this as well to get another POV. Did you observe this in practice? Because AFAIK this should not be needed; in any case, a write() against a non-blocking socket is limited to filling the kernel socket buffer, so it is very unlikely to cause any delay. Maybe there are other motivations for this PR?

@oranagra (Member) commented Sep 9, 2018

I agree with what Antirez wrote, @0xtonyxia please let us know if there's something we're missing.

@0xtonyxia (Contributor, Author)

Hi @oranagra @antirez, I think the limit differs from machine to machine. I ran a test, and the maximum write length was about 4MB, which is much bigger than NET_MAX_WRITES_PER_EVENT, and the corresponding time cost was about 11ms.

@oranagra (Member)

@0xtonyxia I don't follow.
The PR is trying to reduce the size of the write to 64KB, and you're now saying we can increase it?
Are you insinuating that there are cases where such a write would block on the network?
I think it would just copy whatever it can to the OS buffers and return immediately.
What do you mean by "corresponding cost"? Is this the time write() took for 4MB? (Seems too long.)

@0xtonyxia (Contributor, Author) commented Sep 20, 2018

@oranagra Sorry, I didn't explain it clearly. The test was run before my modification, i.e. on the vanilla version. In the vanilla version, say we have a 10MB value to send to the client: on my test machine the first write() call will try to write about 4MB of data (subject to some limits) to the socket, and that call cost about 11ms.

Of course, after this first write() call, Redis stops writing to the socket because the total data written exceeds NET_MAX_WRITES_PER_EVENT. But the latency caused by the first write has already happened. So my thought is simple: limit the total data written to a single client to exactly NET_MAX_WRITES_PER_EVENT within one event loop iteration :-).

@yoav-steinberg (Contributor)

The bottom line of this fix is that it'll break up large writes made with a single system call into multiple system calls. I think for the normal use case we want to avoid this.
I also agree that performing one large write isn't a big deal, because it'll either fill the kernel's internal buffers or perform a short write. In any case it won't block the client or cause too much of a delay.
@0xtonyxia I find it surprising that a 4MB write() call took 11ms, are you sure about this? How long does the 64KB write() call take on your machine?

Since this is old and I'm not too sure about it, I'm marking it to be closed. Waiting for a response...

@yoav-steinberg yoav-steinberg added the state:to-be-closed requesting the core team to close the issue label Jan 4, 2022
@CLAassistant
CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
Labels: networking, state:to-be-closed (requesting the core team to close the issue)
5 participants