Zero-copy file serving #143

Open · vizanto opened this issue Dec 7, 2012 · 9 comments
vizanto commented Dec 7, 2012

After browsing through the source: fileserver.d -> server.d -> core/net.d -> drivers/libev.d -> stream.d

I noticed files are written using writeDefault(), which always buffers in 64K chunks. That's so 1990s ;-)

When gzip isn't required (for example, when the data is already gzipped on disk), the OS kernel should just do the network transfer itself.
Basically, I'd like fileserver.d to do something like this: http://wiki.nginx.org/HttpGzipStaticModule, but without userspace buffering.
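
For reference, a minimal sketch of what kernel-side zero-copy looks like on Linux with sendfile(2). The helper name and file descriptors here are placeholders for illustration; this is generic C, not vibe.d's actual code path:

```c
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: stream a whole file to a connected socket without
 * copying it through a userspace buffer. Returns 0 on success, -1 on error. */
static int serve_file_zero_copy(int sock_fd, const char *path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel moves the bytes straight from the page cache to the
         * socket; no 64K userspace buffer is involved. */
        ssize_t sent = sendfile(sock_fd, file_fd, &offset, st.st_size - offset);
        if (sent <= 0) {
            close(file_fd);
            return -1;
        }
    }

    close(file_fd);
    return 0;
}
```

libevent exposes a comparable mechanism (evbuffer_add_file()), which is presumably what the driver-level part of this issue concerns.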

s-ludwig (Member) commented Jan 7, 2013

Left to do:

  • Add an option to treat a file xyz.ext.gz located next to a xyz.ext file as the source for a gzip-encoded transfer (see the sketch after this list)
  • Fix up the libevent implementation
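
As a rough illustration of the first point (hypothetical helper, not the vibe.d API): check whether the client accepts gzip and whether a precompressed sibling exists on disk, and if so serve that file verbatim with Content-Encoding: gzip:

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Hypothetical helper: choose between "xyz.ext" and a precompressed
 * "xyz.ext.gz" sibling, in the spirit of nginx's gzip_static module. */
static const char *pick_static_file(const char *path,
                                    const char *accept_encoding,
                                    char *gz_path, size_t gz_path_len,
                                    int *use_gzip)
{
    *use_gzip = 0;
    if (accept_encoding && strstr(accept_encoding, "gzip")) {
        struct stat st;
        snprintf(gz_path, gz_path_len, "%s.gz", path);
        if (stat(gz_path, &st) == 0 && S_ISREG(st.st_mode)) {
            /* Serve the .gz file verbatim and set Content-Encoding: gzip;
             * no on-the-fly compression pass is needed. */
            *use_gzip = 1;
            return gz_path;
        }
    }
    return path;    /* no usable precompressed sibling; serve the original */
}
```

The selected file can then be handed to whatever transfer path is in use, ideally the zero-copy one, since no userspace compression step remains.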

s-ludwig added a commit that referenced this issue Jul 26, 2013
Add reading encoded file from disk if present. See #143.
jkm (Contributor) commented Jul 27, 2013

I'm looking into the zero-copy issue.
The comment says it doesn't work on Windows. Currently I'm trying to reproduce that; I'm on Linux only.

s-ludwig (Member) commented:

If I remember correctly, Linux had different symptoms, but ultimately it didn't work either (when I made that comment, I had only tested on Windows).

jkm (Contributor) commented Jul 27, 2013

So I should just run httperf and see whether it reports any errors? I just need a starting point.
Note that httperf also reports errors for the unchanged HEAD.

jkm (Contributor) commented Jul 27, 2013

It seems the errors are from httperf. I need to raise the number of open file descriptors. I'm looking into it.

s-ludwig (Member) commented:

Sorry, I don't remember exactly, but I think I didn't even get a single file to be delivered using that code. On Windows it simply crashed, and on Linux I'm not sure of the symptoms, but it didn't work either. If it does work now, maybe something has changed in the latest libevent version?

I can try again on Windows in the coming days but won't have a Linux box available for the next two weeks.

jkm (Contributor) commented Jul 27, 2013

No problem. It looks like the code works here. I'm running libevent 2.0.21 on Debian. I tested with httperf and ab; both give errors when I execute more than 1500 requests per second, but this happens independently of zero-copy, i.e. it is unrelated. Is this to be expected? At 1000 requests I get no errors and some speedup.

s-ludwig (Member) commented:

1500 requests per second, or 1500 concurrent requests (i.e. "ab -c 1500 ...")? In the latter case, getting errors would be normal, at least on Ubuntu, where the default "ulimit" on open file descriptors is somewhere around 1000.

I'll retry on Windows with 2.0.21.
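
As an aside, the "ulimit" in question is the per-process limit on open file descriptors, which caps how many concurrent connections a server or benchmark tool can hold open. A minimal, generic POSIX sketch for inspecting it from code (not part of vibe.d):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rlim_cur is the soft limit that "ulimit -n" reports; every
     * concurrent connection consumes at least one descriptor. */
    printf("fd limit: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}
```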

jkm (Contributor) commented Jul 28, 2013

Sorry. I meant concurrent requests (-c 1000).
But I configured my system to allow more file descriptors per process.

$ ulimit -n
200000

s-ludwig modified the milestones: 0.8.1, 0.8.0 Jul 5, 2017