Enable HTTP/2 support

The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).

For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.

This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start a download and get back a std::future for the result.
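
A rough caller-side sketch of the new asynchronous interface (the DownloadResult type, the exact enqueueDownload() signature, and the helpers narInfoUrls/processNarInfo are illustrative assumptions, not verbatim from this commit):

    // Enqueue several lookups on the shared downloader; the single worker
    // thread multiplexes them over one connection (HTTP/2 where available).
    auto downloader = getDownloader();

    std::vector<std::future<DownloadResult>> futures;  // DownloadResult is assumed
    for (auto & url : narInfoUrls) {                   // hypothetical list of .narinfo URLs
        DownloadRequest request(url);
        futures.push_back(downloader->enqueueDownload(request));
    }

    for (auto & f : futures) {
        auto result = f.get();             // blocks until that particular download finishes
        if (result.data)                   // the diff below suggests the result carries a .data field
            processNarInfo(*result.data);  // hypothetical consumer
    }
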
edolstra committed Sep 14, 2016
1 parent a75d11a commit 90ad02bf626b885a5dd8967894e2eafc953bdf92
@@ -55,7 +55,7 @@ bool parseSearchPathArg(Strings::iterator & i,
 Path lookupFileArg(EvalState & state, string s)
 {
     if (isUri(s))
-        return makeDownloader()->downloadCached(state.store, s, true);
+        return getDownloader()->downloadCached(state.store, s, true);
     else if (s.size() > 2 && s.at(0) == '<' && s.at(s.size() - 1) == '>') {
         Path p = s.substr(1, s.size() - 2);
         return state.findFile(p);
@@ -662,7 +662,7 @@ std::pair<bool, std::string> EvalState::resolveSearchPathElem(const SearchPathEl
                 // FIXME: support specifying revision/branch
                 res = { true, exportGit(store, elem.second, "master") };
             else
-                res = { true, makeDownloader()->downloadCached(store, elem.second, true) };
+                res = { true, getDownloader()->downloadCached(store, elem.second, true) };
         } catch (DownloadError & e) {
             printMsg(lvlError, format("warning: Nix search path entry ‘%1%’ cannot be downloaded, ignoring") % elem.second);
             res = { false, "" };
@@ -1769,7 +1769,7 @@ void fetch(EvalState & state, const Pos & pos, Value * * args, Value & v,
     if (state.restricted && !expectedHash)
         throw Error(format("‘%1%’ is not allowed in restricted mode") % who);

-    Path res = makeDownloader()->downloadCached(state.store, url, unpack, name, expectedHash);
+    Path res = getDownloader()->downloadCached(state.store, url, unpack, name, expectedHash);
     mkString(v, res, PathSet({res}));
 }

@@ -17,13 +17,15 @@ void builtinFetchurl(const BasicDerivation & drv)
     auto fetch = [&](const string & url) {
         /* No need to do TLS verification, because we check the hash of
            the result anyway. */
-        DownloadOptions options;
-        options.verifyTLS = false;
+        DownloadRequest request(url);
+        request.verifyTLS = false;

         /* Show a progress indicator, even though stderr is not a tty. */
-        options.showProgress = DownloadOptions::yes;
+        request.showProgress = DownloadRequest::yes;

-        auto data = makeDownloader()->download(url, options);
+        /* Note: have to use a fresh downloader here because we're in
+           a forked process. */
+        auto data = makeDownloader()->download(request);
         assert(data.data);

         return data.data;
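
Taken together, the hunks show two patterns: ordinary evaluator code switches from makeDownloader() to the shared getDownloader() instance, so all requests funnel through the one worker thread, while builtinFetchurl keeps makeDownloader() because it runs in a forked child process where the parent's worker thread no longer exists. A minimal sketch of the two call sites (the calls mirror the hunks above; the surrounding context is illustrative):

    // In the evaluator (single process): use the shared, worker-thread-backed
    // downloader so lookups can be multiplexed over one connection.
    Path path = getDownloader()->downloadCached(state.store, url, true);

    // In builtinFetchurl (runs after fork()): the parent's worker thread is
    // not available in the child, so construct a fresh downloader instead.
    DownloadRequest request(url);
    request.verifyTLS = false;   // the hash of the result is checked anyway
    auto data = makeDownloader()->download(request);
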

4 comments on commit 90ad02b

@bjornfor (Contributor) replied Sep 14, 2016

Nice!

@copumpkin (Member) replied Sep 14, 2016

Nice!!

@domenkozar (Member) replied Sep 14, 2016

Can we get rid of the negative lookup cache now, with HTTP/2?

@nixos-discourse replied Aug 7, 2019

This commit has been mentioned on Nix community. There might be relevant details there:

https://discourse.nixos.org/t/improvements-to-cache-nixos-org-help-test-the-new-config/3620/7
