Allow plain HTTP access? (+ apt-cacher-ng HTTPS how-to, + make HTTP mirrors) #71
FYI, http support is under consideration
Thanks! 👍
I'm encountering this as well with docker-apt-cacher-ng. Very problematic for our production deployments, as it totally breaks our ansible playbooks as any
@rvagg, could you share some insight into the obstacles and required considerations on the way towards plain HTTP access? Or is it a trivial change, just buried under tons of other trivial todo items? In case part of the goal is to encourage visitors to use HTTPS, it might be enough to serve non-encrypted directory indexes with `<meta http-equiv="refresh" content="0;URL=https://…">` instead of the
@mk-pmb by the way, the workaround I found is to add a
If I understand that correctly, it will bypass the intended caching, resulting in a lot of duplicate downloads. At least it's an easy way to get (uncached) node packages together with cached other packages.
@mgcrea and anyone else who finds they need a 'pass through' for nodesource.com: when instantiating virtual machines, I add this to my scripts:
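The original snippet did not survive extraction. The usual way to let HTTPS-only repositories tunnel through apt-cacher-ng is its `PassThroughPattern` directive; a minimal sketch (the host pattern and config path are assumptions for a stock Debian install — tunnelled traffic is not cached, it merely passes through):

```
# /etc/apt-cacher-ng/acng.conf (path assumed)
# Allow CONNECT tunnelling to the HTTPS-only nodesource hosts.
PassThroughPattern: (deb|rpm)\.nodesource\.com:443
```

Restart apt-cacher-ng after editing the file so the directive takes effect.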
We recently blogged about how to use apt-cacher-ng with packagecloud.io debian repositories (all of which are served over TLS). The configuration settings explained there should solve this issue for the people having trouble in this thread. I'd recommend that you do not support plain HTTP access to your repositories. FWIW, we offer no plain HTTP access at packagecloud and haven't had trouble so far :)
Thanks a lot for introducing us to that
Would you mind sharing your insights? What are the pros and cons that made you prefer to exclude other (SSL-unaware) caching proxies? (Still putting a burden on ACNG users.)
TBH, I'm not sure what the case is for using HTTPS at all. The signing mechanisms built into apt cover the data integrity issues well, so MITM attacks are not an issue. Is the concern about someone snooping on what packages are being downloaded?
One MITM attack: stale metadata can be replayed to a user, forcing the user to install known-vulnerable or buggy versions of packages. Here's an academic paper describing that attack (and a few others): https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.pdf. It's worth noting that APT keeps a single global keyring of imported GPG public keys, which opens the GPG system up to a few other interesting MITM attacks (this differs from YUM, where each repo has its own GPG keyring). I would strongly encourage you to use HTTPS: the cost of adding HTTPS is pretty low and the benefits are quite high, IMO.
Thank you for providing a reasonable argument to use HTTPS at all, at long last. With this, I think the discussion is reduced to whether we trust users to decide their own security trade-offs.
From the first para of the paper referenced by @ice799:
So relying on HTTPS seems to break rule number one. Some years after that paper, use of HTTPS for APT remains a rarity. What response has there been to the security issues raised? What issues are outstanding? http://lwn.net/Articles/327847/ |
Thanks for pointing out the apt debate. HTTPS had been suggested in
I can't find the part of the discussion where they decided which measures are best, but the facts indicate that Debian seems to prefer its
I'd expect that if Nodesource were concerned about replay attacks, they'd at least offer the Debian defense, no matter whether HTTPS is used as an additional defense layer.
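For context, APT's own mitigation against metadata replay is the `Valid-Until` field in the signed `Release`/`InRelease` file: once that timestamp passes, replayed metadata is rejected as expired. A minimal sketch of that check (the field names are real APT Release fields; the sample text and function names are made up for illustration):

```python
from datetime import datetime, timezone

# Sample of the fields APT reads from a signed Release file.
RELEASE_SNIPPET = """\
Origin: Debian
Date: Sat, 28 May 2016 10:00:00 UTC
Valid-Until: Sat, 04 Jun 2016 10:00:00 UTC
"""

FMT = "%a, %d %b %Y %H:%M:%S %Z"

def valid_until(release_text):
    """Return the Valid-Until timestamp, or None if the repo sets no limit."""
    for line in release_text.splitlines():
        if line.startswith("Valid-Until:"):
            ts = line.split(":", 1)[1].strip()
            return datetime.strptime(ts, FMT).replace(tzinfo=timezone.utc)
    return None

def is_stale(release_text, now):
    """Stale metadata (e.g. replayed by a MITM) fails this freshness check."""
    cutoff = valid_until(release_text)
    return cutoff is not None and now > cutoff

print(is_stale(RELEASE_SNIPPET, datetime(2016, 6, 5, tzinfo=timezone.utc)))  # True
```

A repository that omits `Valid-Until` leaves its metadata replayable indefinitely, which is exactly the gap HTTPS (or this field) is meant to close.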
In case someone wants to start another mirror, nodesource-mirror-bash-wget might help. |
Yawnbox proposes a threat model that can justify voluntary use of HTTPS to download packages: to hide from network observers which security updates have reached a certain host yet. (They also warn about old weak keys for package signing, but that's a different problem.)

I consider that argument valid for setups without caching proxies. With caching proxies, you get more obscurity against attackers even if they control an SSL CA, since they can't see how many* of your systems the patch has been relayed to once any of your systems has downloaded it. (* If each host makes its own download, quirks of the TCP stacks could reveal even more info about which ones have the patch, even without breaking SSL.)

Basically, you get more security while still having less traffic cost and less crypto workload on the machines behind your cache. You also get more reliability against connection problems, including downtime of the original repos: a machine that is late in your update schedule can still get the patch even if the repo vanishes from the currently reachable subset of the internet.
+1 http |
While my specialized mirror script above probably still works, if you want a solution that can also help mirror repos from other projects, have a look at debmirror-easy.
When I use the original HTTPS URLs in my `deb` and `deb-src` lines, aptitude tries to build a CONNECT tunnel through my apt-cacher-ng, which of course is denied because its purpose is to cache the downloaded packages, not just let them pass. When I remove the `s` from `https://`, apt-cacher-ng fails to download the packages, unfortunately without any hint in its error log.

Is there a way to still get the packages through apt-cacher-ng?
Are there plain HTTP mirrors?
They should still be safe sources verified by the GnuPG signatures, right?
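For reference, the setup described above looks roughly like this (the repository URL, suite, and proxy address are assumptions; 3142 is apt-cacher-ng's default port):

```
# /etc/apt/apt.conf.d/02proxy -- route apt through apt-cacher-ng
Acquire::http::Proxy  "http://127.0.0.1:3142";
Acquire::https::Proxy "http://127.0.0.1:3142";

# /etc/apt/sources.list.d/nodesource.list
# https:// makes apt issue CONNECT through the proxy (no caching);
# dropping the "s" would let apt-cacher-ng cache, if a plain-HTTP
# mirror existed on the other end.
deb https://deb.nodesource.com/node_4.x jessie main
deb-src https://deb.nodesource.com/node_4.x jessie main
```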
Overview as of 2016-05-29:
- workarounds: abstain from caching, or set up a mirror
- solutions: for `apt-cacher-ng`
- apt repo attack vectors: replay stale metadata, identify missing security updates