
Allow plain HTTP access? (+ apt-cacher-ng HTTPS how-to, + make HTTP mirrors) #71

Closed
mk-pmb opened this issue Mar 8, 2015 · 18 comments

@mk-pmb

mk-pmb commented Mar 8, 2015

When I use the original HTTPS URLs in my deb and deb-src lines, aptitude tries to build a CONNECT tunnel through my apt-cacher-ng, which of course is denied because its purpose is to cache the downloaded packages, not just let them pass.
When I remove the s from https://, apt-cacher-ng fails to download the packages, unfortunately without any hint in its error log.

Is there a way to still get the packages through apt-cacher-ng?
Are there plain HTTP mirrors?
They should still be safe sources verified by the GnuPG signatures, right?
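For concreteness, my setup looks roughly like this (the proxy host is a placeholder, and the repo path is just one example):

# /etc/apt/apt.conf.d/01proxy  (the proxy host stands in for my real one)
Acquire::http::Proxy "http://my-acng-host:3142";

# /etc/apt/sources.list.d/nodesource.list
# with https:// apt issues CONNECT, which apt-cacher-ng denies;
# with http:// apt-cacher-ng accepts the request but fails to fetch the files
deb https://deb.nodesource.com/iojs_2.x trusty main
deb-src https://deb.nodesource.com/iojs_2.x trusty main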


Overview as of 2016-05-29:
- workarounds: abstain from caching, or set up a mirror
- solutions: for apt-cacher-ng
- apt repo attack vectors: replay stale metadata, identify missing security updates

@rvagg
Contributor

rvagg commented Jul 4, 2015

FYI, http support is under consideration

@mk-pmb
Author

mk-pmb commented Jul 4, 2015

Thanks! 👍

@mgcrea

mgcrea commented Jul 16, 2015

I'm encountering this as well with docker-apt-cacher-ng. It is very problematic for our production deployments, as it totally breaks our Ansible playbooks: any apt-get update that goes through the same proxy fails!

Err https://deb.nodesource.com/iojs_2.x/ trusty/main iojs amd64 2.3.4-1nodesource1~trusty1
  Received HTTP code 403 from proxy after CONNECT

@mk-pmb
Author

mk-pmb commented Jul 18, 2015

@rvagg, could you share some insight into the obstacles and considerations on the way towards plain HTTP access? Or is it a trivial change, just buried under tons of other trivial to-do items?

In case part of it is to encourage visitors to use HTTPS, it might be enough to send unencrypted directory indexes with

<meta http-equiv="refresh" content="0;URL=https://…">

instead of the 301 Moved Permanently redirect.

@mgcrea

mgcrea commented Jul 18, 2015

@mk-pmb by the way, the workaround I found is to add a DIRECT rule to bypass the proxy for that host.

echo -e "Acquire::http { Proxy \"http://proxy.local:3142\"; };\nAcquire::http::Proxy::deb.nodesource.com \"DIRECT\";" > /etc/apt/apt.conf.d/01proxy

@mk-pmb
Author

mk-pmb commented Jul 18, 2015

If I understand that correctly, it will bypass the intended caching, resulting in a lot of duplicate downloads. At least it's an easy way to get (uncached) node packages alongside the other, cached packages.

@martinhbramwell

@mgcrea and anyone else who finds they need a 'pass through' for nodesource.com:

When instantiating virtual machines I add this to my scripts:

sudo tee /etc/apt/apt.conf.d/02proxy > /dev/null <<APTPRXY
Acquire::http::Proxy "http://MY.PROXY.SERVER:3142";
Acquire::http::Proxy::deb.nodesource.com "DIRECT";
APTPRXY

@ice799

ice799 commented Aug 25, 2015

We recently blogged about how to use apt-cacher-ng with packagecloud.io Debian repositories (all of which are served over TLS). The configuration settings explained there should solve this issue for the people having trouble in this thread.

I'd recommend that you do not support plain HTTP access to your repositories. FWIW, we offer no plain-HTTP access at packagecloud and haven't had trouble so far :)
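In rough strokes, the configuration described there looks something like the sketch below; the hostname, the Remap name, and the directive values are illustrative rather than copied from the post, and fetching HTTPS backends needs an apt-cacher-ng build with TLS support.

# /etc/apt-cacher-ng/acng.conf
# Option A: merely tunnel HTTPS (CONNECT is allowed, nothing is cached):
PassThroughPattern: ^deb\.nodesource\.com:443$
# Option B: cache the repo by remapping a plain-http frontend to an HTTPS backend:
Remap-nodesource: deb.nodesource.com ; https://deb.nodesource.com

With option B, clients keep Acquire::http::Proxy pointed at the cacher and use a plain http:// deb line for the repository.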

@mk-pmb
Author

mk-pmb commented Aug 25, 2015

Thanks a lot for introducing us to that PassThroughPattern: config option and the remapping trick.
That should solve the caching problem for that one piece of software.

I'd recommend that you do not support plain HTTP access to your repositories.

Would you mind sharing your insights? What are the pros/cons that made you prefer to exclude other (SSL-unaware) caching proxies? (It still puts a burden on ACNG users.)

@mc0e

mc0e commented Oct 10, 2015

TBH, I'm not sure what the case is for using HTTPS at all. The signing mechanisms built into apt cover the data integrity issues well, so MITM attacks are not an issue. Is the concern about someone snooping on what packages are being downloaded?

@ice799

ice799 commented Oct 24, 2015

One MITM attack: stale metadata can be replayed to a user, forcing them to install known-vulnerable or buggy versions of packages.

Here's an academic paper describing that attack (and a few others): https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.pdf

It's worth noting that APT keeps a single global keyring of imported GPG public keys, which opens the GPG system up to a few other interesting MITM attacks (this differs from YUM, where each repo has its own GPG keyring). I would strongly encourage you to use HTTPS: the cost of adding HTTPS is pretty low and the benefits are quite high, IMO.
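(To make the keyring point concrete: newer apt versions can restrict a key to a single repository via the signed-by option in the sources entry instead of adding it to the global keyring; the keyring path below is just an example.)

deb [signed-by=/usr/share/keyrings/nodesource-archive-keyring.gpg] https://deb.nodesource.com/iojs_2.x trusty main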

@mk-pmb
Author

mk-pmb commented Oct 24, 2015

Thank you for providing a reasonable argument to use HTTPS at all, at long last. With this, I think the discussion is reduced to whether we trust users to decide their own security trade-offs.

@mc0e

mc0e commented Oct 24, 2015

From the first para of the paper referenced by @ice799:

This work identifies three rules of package management security: don’t trust the repository, the trusted entity with the most information should be the one who signs, and don’t install untrusted packages.

So relying on HTTPS seems to break rule number one.

Some years after that paper, use of HTTPS for APT remains a rarity. What response has there been to the security issues raised? What issues are outstanding?

http://lwn.net/Articles/327847/
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=499897

@mk-pmb
Author

mk-pmb commented Oct 25, 2015

Thanks for pointing out the apt debate. HTTPS had been suggested in
200901172024.55733.thijs@debian.org:

Why are we trying to invent something new here, […]
That problem has already been solved: use https.

I can't find the part of the discussion where they decide which measures are best, but the facts indicate that Debian prefers its Valid-Until: solution over HTTPS.

I'd expect that if NodeSource were concerned about replay attacks, they'd at least offer the Debian defense, no matter whether HTTPS is used as an additional defense layer.
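For reference, that defense is the Valid-Until field in the signed Release file plus apt's corresponding expiry checks; the dates and values below are only an example.

A Release / InRelease excerpt (covered by the repository signature):

Date: Sun, 25 Oct 2015 12:00:00 UTC
Valid-Until: Sun, 01 Nov 2015 12:00:00 UTC

And the client-side knobs, e.g. in /etc/apt/apt.conf.d/10validity:

Acquire::Check-Valid-Until "true";
// treat metadata older than 7 days as expired, even if the repo allows more:
Acquire::Max-ValidTime "604800";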

@mk-pmb
Author

mk-pmb commented Mar 7, 2016

In case someone wants to start another mirror, nodesource-mirror-bash-wget might help.

@mk-pmb
Author

mk-pmb commented May 28, 2016

Yawnbox proposes a threat model that can justify voluntary use of HTTPS for downloading packages: to hide from network observers which security updates have already reached a certain host. (They also warn about old, weak keys for package signing, but that's a different problem.)

I consider that argument valid for setups without caching proxies. With a caching proxy, you get more obscurity against attackers even if they control an SSL CA: once any of your systems has downloaded a patch, they can't see how many* of your systems it has been relayed to. (* If each host makes its own download, quirks of the TCP stacks could reveal even more info about which ones have the patch, even without breaking SSL.)
If your caching proxy uses Tor, attackers can't even assume that the absence of an observed download means you don't have that patch yet. Neither can they use your downloads to guess which software you have running.

Basically, you get more security while still having lower traffic costs and less crypto workload on the machines behind your cache. You also get more resilience against connection problems, including downtime of the original repos: a machine that is late in your update schedule can still get the patch even if the repo vanishes from your currently reachable subset of the internet.
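If someone wants to try the Tor variant: apt-cacher-ng can be chained to an upstream HTTP proxy via its Proxy: setting in acng.conf, so you would point it at a local HTTP-to-Tor bridge such as Privoxy; the address below is just the usual Privoxy default, adjust to your setup.

# /etc/apt-cacher-ng/acng.conf
Proxy: http://127.0.0.1:8118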

mk-pmb changed the title from "HTTPS vs. apt-cacher-ng; plain HTTP mirrors?" to "Allow plain HTTP access? (+ apt-cacher-ng HTTPS how-to, + make HTTP mirrors)" on May 29, 2016
@rbravo-avantrip

+1 http

@mk-pmb
Author

mk-pmb commented Jul 5, 2020

While my specialized mirror script above probably still works, if you want a solution that can also help mirror repos from other projects, have a look at debmirror-easy.
