
suggestion: please install ca-certificates by default #15

Open
stapelberg opened this issue Nov 26, 2017 · 40 comments

Comments

@stapelberg

Currently, the ca-certificates package is not included in the debian Docker image. Nowadays, this essentially means not being able to make outbound TLS connections to the internet by default. Given TLS’s pervasiveness, could we install ca-certificates by default?

@tianon
Contributor

tianon commented Nov 27, 2017

Hmm, interesting thought. We don't include curl or wget by default either, but I can see an argument for ca-certificates, especially with @paultag's continual push to have sources.list use https by default (which would essentially pull ca-certificates into the minbase set naturally).

Also, we now have a separate slim variant where we could leave this out (as we do with ping and iproute2).

@paultag what're your thoughts on including ca-certificates in the standard tags of the debian base image by default?

@paultag

paultag commented Nov 27, 2017

Hurm, good question.

Is there anything in the default image that's capable of a TLS request? Looking through the image's binaries and which ones are ldd'd against gnutls (since I don't see OpenSSL), none of them look too exciting in terms of making outbound TLS requests.

This could be a "gotcha" if someone does something like drop a Go binary onto the platform and find no CA bundle, since they never installed anything with a transitive dependency on it, but I'm not sure if we should optimize for that just yet.

I can see both sides of this argument. I'm not sure we ought to make this call as Docker image maintainers, but by the same token, I'm not convinced big-D Debian will ever bring in the CA bundle by default (until TLS is a hard requirement for apt, because something something SPARC something something).

The Docker image use-case is basically assured to involve networking and a server, which does change the tradeoffs a bit.

@stapelberg can we get a bit more information on how you discovered this, and what steps you needed to take to debug the lack of the bundle in your image?

@stapelberg
Author

This could be a "gotcha" if someone does something like drop a Go binary onto the platform and find no CA bundle, since they never installed anything with a transitive dependency on it, but I'm not sure if we should optimize for that just yet.

This is exactly what @Merovius recently did and how we discovered this.

I ran into the same issue myself previously, e.g. when setting up my git-mirror Dockerfile.

My take on this is: if we want to optimize for people using Debian itself, this isn’t necessary. But if we want people to build useful things on top of Debian, we should go for it. I’m thinking of Go binaries, bundler deployments, npm deployments, etc. etc.

@paultag

paultag commented Nov 27, 2017

So, I'm really torn on this.

Installing openjdk-*-jdk{,-headless}, ruby, or node pulls in the CA bundle (interestingly, also ca-certificates-java; 10 bucks that's a JKS blob, I didn't look 😄), but python doesn't. urllib apparently doesn't validate certificates anyway (or at least doesn't blow up at https://self-signed.badssl.com), and IIRC Requests vendors the Mozilla bundle, and everyone should use Requests anyway because doing certificate validation with plain Python is the worst. I'd be interested in other languages too.

I'm not yet convinced this is something that anyone except users deploying a static binary will hit, which is a fucking pain, since we never think about programs and operating systems in terms of promising an interface or resource to each other.

This strikes me as the same class of issues as trying to run a binary that requires a newer kernel version for a core feature, libraries shelling out to weirdo programs that aren't always obvious, or reading files all over the filesystem that aren't always in place.

The real trick here is the cost-benefit. How hard is this to debug (will it ever fail quietly?), what are the implications of it failing, is it worth shipping in every image, and do we want to start shipping every bit and bob to make sure folks can plop a static binary into the image and run it? Should users be expected to install the packages or resources needed to run their static binary?

Or we can take the coward's way out and use TLS everywhere and punt forever.

@paultag

paultag commented Nov 27, 2017

Yeah, I'll sleep on this. I'm not sure what the right thing to do here is.

@ycprog

ycprog commented Dec 15, 2017

This is especially essential as I'm facing a problem where we're behind a corporate proxy that requires trusting a root self-signed certificate authority in order to access the internet. There's no way to apt-get install ca-certificates without first running update-ca-certificates, so we have a chicken-and-egg problem.

@tianon
Contributor

tianon commented Dec 15, 2017

@ycprog if you're able to download images from the Docker Hub, it should be really trivial to set up an automated build (with a repository link, so it's auto-rebuilt any time debian is updated) which simply adds ca-certificates, and which you then use as your base instead of debian itself.
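
A minimal sketch of such a derived image (the tag and names are illustrative, not an official recommendation):

FROM debian
# Derived base image that only adds ca-certificates on top of debian
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*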

@w33tmaricich

w33tmaricich commented Sep 16, 2019

I ran into a similar situation with a corporate proxy.
The solution I went with was to install ca-certificates with the --allow-unauthenticated option. Once that's done, you can add your cert and then proceed as normal.
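
A rough sketch of that workaround, assuming a corporate CA file named internal-root-ca.crt (the filename and tag are placeholders, and it assumes apt can still reach a mirror through the proxy):

FROM debian:buster
# Skip authentication just long enough to get ca-certificates installed behind the proxy
RUN apt-get update && apt-get install -y --allow-unauthenticated ca-certificates
# Add the corporate root CA and regenerate the trusted bundle
COPY internal-root-ca.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates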

@vhakulinen

I just hit this issue. Was a bit surprised that ca-certificates wasn't installed by default.

@tianon
Contributor

tianon commented Feb 25, 2020

Something I just realized that's relevant to this thread but isn't mentioned is that the curl variants of https://hub.docker.com/_/buildpack-deps are only slightly more than the ask here -- buildpack-deps:buster-curl is debian:buster plus ca-certificates, curl, netbase, wget, and gnupg (with dirmngr): https://github.com/docker-library/buildpack-deps/blob/b0fc01aa5e3aed6820d8fed6f3301e0542fbeb36/buster/curl/Dockerfile
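
For illustration, a minimal sketch of building on that variant instead of debian directly (the curl call is just an example):

FROM buildpack-deps:buster-curl
# ca-certificates, curl, wget, netbase, and gnupg are already present,
# so HTTPS endpoints work without any extra install step
RUN curl -fsSL https://deb.debian.org/ -o /dev/null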

@kunKun-tx

My Go program running in the Debian container fails any HTTPS request to sites using Let's Encrypt certs with an error message like x509: certificate signed by unknown authority.

Simply installing ca-certificates and running update-ca-certificates fixes the issue.

RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates

RUN update-ca-certificates

Judging from Docker's build log, I guess this does the magic by adding the missing root CAs:

Updating certificates in /etc/ssl/certs...
151 added, 0 removed; done.

@bluehack42

I have the same problem in my company. We have an internal Artifactory, accessed over https with an official cert, but I first have to install ca-certificates. I can build an image with the package included and upload it to our Artifactory, but I think that's the wrong way. I don't want to have to use the proxy first to install the package before I can use the internal system. Thanks @tianon, I'm now using the images from buildpack-deps.

GiedriusS added a commit to GiedriusS/pint that referenced this issue Jun 16, 2021
Add common CA certs to the Dockerfile. The Debian image doesn't include
them by default:
debuerreotype/docker-debian-artifacts#15. Most
likely a user will want to use the reporter functionality hence it
should trust some root CAs such as DigiCert and so on.
@Perdjesk

Perdjesk commented Aug 28, 2021

Summary of this post: an argument in favor of adding ca-certificates to the Debian image to solve the chicken-and-egg issue for https package repositories.
The explanation below also tries to demonstrate that this case is far more pressing than the case of "deploying a static binary doing https requests".

This describes the same case as previously pointed out in the following comments: #15 (comment), #15 (comment)

A few remarks regarding past answers

Is there anything in the default image that's capable of a TLS request? Looking through the image's binaries and which ones are ldd'd against gnutls (since I don't see OpenSSL), none of them look too exciting in terms of making outbound TLS requests.

Yes apt-get.

I'm not yet convinced this is something that anyone except users deploying a static binary will hit

Users using apt-get who wish to use a package repository that provides or requires TLS will hit it.

https support has been moved into the apt package in 1.5
https://packages.debian.org/bullseye/apt-transport-https

if you're able to download images from the Docker Hub, it should be really trivial to set up an automated build (with a repository link, so it's auto-rebuilt any time debian is updated) which simply adds ca-certificates that you then use as your base instead of debian itself

A simplified version of this that only builds one image for one architecture might be easy, but doing so for every tag and every architecture provided on docker.io for debian is another scope. Moreover, such "intelligent derivation of images" would likely be a fragile, half-baked solution. See below for more details.

The case for resolving the chicken-egg issue for https package repositories

First of all, let's put aside the question of whether one should consume Debian packages over https or not. Today, fetching packages from https repositories is a feature apt-get provides to users. Even the official Debian repositories are able to serve packages over https: https://deb.debian.org/debian/, https://security.debian.org/

However, the fact that ca-certificates is not in the default image de facto prevents users from benefiting from this feature. This is true for a repository using a certificate signed by a CA in the Mozilla bundle that ca-certificates provides, and it is equally true for users whose repository uses a certificate signed by a CA not in that bundle, because the update-ca-certificates command (provided by the ca-certificates package) is required as part of the process of adding a CA from a local file.

The following Dockerfile example demonstrates the problem:

FROM docker.io/debian

RUN sed -i "s#http://deb.debian.org#https://deb.debian.org#g" /etc/apt/sources.list
RUN sed -i "s#http://security.debian.org#https://security.debian.org#g" /etc/apt/sources.list

# This will fail with "No system certificates available. Try installing ca-certificates."
RUN apt-get update && apt-get --assume-yes install curl

The solutions for users in the above situations are weird workarounds:

  • As said before, use http instead of https to install ca-certificates and then switch to https (sketched below)
  • The fragile solution of statically adding the package to the image, diverging from the official repositories.
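
A sketch of the first workaround (fetch ca-certificates over plain http, then switch the sources to https):

FROM docker.io/debian

# Install ca-certificates while the sources still point at http
RUN apt-get update && apt-get install --assume-yes --no-install-recommends ca-certificates

# Now switch the sources to https for everything that follows
RUN sed -i "s#http://deb.debian.org#https://deb.debian.org#g" /etc/apt/sources.list
RUN sed -i "s#http://security.debian.org#https://security.debian.org#g" /etc/apt/sources.list

# This now succeeds because the Mozilla CA bundle is in place
RUN apt-get update && apt-get --assume-yes install curl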

In the previous example, one might say that falling back to http for one single package is a good enough compromise; however, as said before, the argument is not about what the user's choices are. In the previous example the user wants to consume all packages over https. This example is a simplification of real-world user experiences. For context, here are valid practical situations in which users might find themselves:

  • Plain http traffic is blocked
  • A firewalled zone without direct internet access, with an internally provided mirror that is only available over https by policy of the internal network zone.

Now to the real impediment, which compounds this problem and makes the previously proposed naive solutions (#15 (comment)) inapplicable: container images are meant to be reused and derived from.
That means all images based on debian are affected by the very same issue. Any user of any image derived from debian who needs or wishes to consume packages over https might be affected*
*(might be, because with some luck the derived image already includes ca-certificates)

For the reasons explained here, and especially because the last point leaves users implementing fragile solutions, the cost-benefit balance seems to lean towards adding ca-certificates to the debian base image.

It is also worth pointing out that the official Alpine and CentOS container images do not suffer from this issue.

Pure opinion
Nowadays, container images that will never need a CA bundle must be a very small minority. The main use of containers is to make requests over the network, so providing a default CA bundle seems both logical and wanted. The argument for keeping the base image minimal is correct, but a default CA bundle is part of today's minimal requirements for software to run.
This joins the point made by @stapelberg:

But if we want people to build useful things on top of Debian, we should go for it.

Rant
https everywhere won, and a magical CA bundle is required for it; it is time to come to terms with it. If some users still want to use http, let them do so, but let's not create unnecessary and blind misery for everyone else.

@tianon
Contributor

tianon commented Sep 1, 2021

While interesting, adding ca-certificates now would be the opposite of PRs like debuerreotype/debuerreotype#101, which move us toward including nothing more than debootstrap --variant=minbase does, so I think this argument is going to be stronger if it's directed towards getting ca-certificates into Debian's standard minbase set.

A potential workaround for users wanting fully TLS-using images (especially since, if you can pull this image, your host necessarily has a reasonable set of certificates) would be something like the following:

FROM debian:bullseye-slim
RUN sed -i -e 's/http:/https:/g' /etc/apt/sources.list
COPY ca-certificates.crt /etc/ssl/certs/
RUN apt-get update && apt-get install -y ca-certificates

then, build with /etc/ssl/certs from your host as the build context:

$ docker build -f Dockerfile /etc/ssl/certs
Sending build context to Docker daemon  339.5kB
Step 1/4 : FROM debian:bullseye-slim
 ---> 1e40bc10bc1f
Step 2/4 : RUN sed -i -e 's/http:/https:/g' /etc/apt/sources.list
 ---> Running in dd90fe0b4154
Removing intermediate container dd90fe0b4154
 ---> dc69264372bb
Step 3/4 : COPY ca-certificates.crt /etc/ssl/certs/
 ---> 57d6b8a9dbd5
Step 4/4 : RUN apt-get update && apt-get install -y ca-certificates
 ---> Running in c172bb123a1f
Get:1 https://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
Get:2 https://deb.debian.org/debian bullseye InRelease [113 kB]
Get:3 https://deb.debian.org/debian bullseye-updates InRelease [36.8 kB]
Get:4 https://deb.debian.org/debian bullseye/main amd64 Packages [8178 kB]
Get:5 https://security.debian.org/debian-security bullseye-security/main amd64 Packages [29.4 kB]
Fetched 8401 kB in 1s (6277 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  openssl
The following NEW packages will be installed:
  ca-certificates openssl
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 1009 kB of archives.
After this operation, 1891 kB of additional disk space will be used.
Get:1 https://deb.debian.org/debian bullseye/main amd64 ca-certificates all 20210119 [158 kB]
Get:2 https://security.debian.org/debian-security bullseye-security/main amd64 openssl amd64 1.1.1k-1+deb11u1 [851 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 1009 kB in 0s (8372 kB/s)
Selecting previously unselected package openssl.
(Reading database ... 6653 files and directories currently installed.)
Preparing to unpack .../openssl_1.1.1k-1+deb11u1_amd64.deb ...
Unpacking openssl (1.1.1k-1+deb11u1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20210119_all.deb ...
Unpacking ca-certificates (20210119) ...
Setting up openssl (1.1.1k-1+deb11u1) ...
Setting up ca-certificates (20210119) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/x86_64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Updating certificates in /etc/ssl/certs...
129 added, 0 removed; done.
Processing triggers for ca-certificates (20210119) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container c172bb123a1f
 ---> 2844a19fda72
Successfully built 2844a19fda72

(That bundle also gets reasonably updated once the "real" ca-certificates package is installed.)

@stappersg

stappersg commented Sep 1, 2021 via email

@Perdjesk

Perdjesk commented Sep 2, 2021

Nah, introduce the variant 2000sfam for all the stubborn delusionals and liberate minbase already.

@stefanbethke

I'd like to add one workaround that works well in my situation: we have systems without Internet access, but we do mirror Debian on our Artifactory server. When building images based on the official Debian images, this creates a bootstrap problem: we need to add packages from our mirror, but our Artifactory is only available over HTTPS.

The workaround is to disable certificate checking for installing ca-certificates, like so:

FROM debian:buster
RUN echo 'Acquire::https::Verify-Peer "false";' >/etc/apt/apt.conf.d/80-ignore-tls
COPY sources.list /etc/apt/sources.list
RUN apt-get update && apt-get install -y ca-certificates && rm -f /etc/apt/apt.conf.d/80-ignore-tls
...


@eighthave

I would love to see these images default to HTTPS for apt sources, and include ca-certificates. I'm not sure this needs to be tied to minbase because minbase also covers use cases where networking is not needed, while these Docker images are basically always used in situations where networking is in use. Also, Acquire::https::Verify-Peer "false"; or sed -i s/https:/http:/ /etc/apt/sources.list can be used for situations where the required certificates might not be present.

There is discussion about making HTTPS be used by default on new installs:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=992692

And the Debian Vagrant images now default to HTTPS apt sources, but those already included ca-certificates:
https://salsa.debian.org/cloud-team/debian-vagrant-images/-/merge_requests/15

@GyrosGeier

The counterargument to defaulting to HTTPS is that the only security benefit is that it requires eavesdroppers to analyze the traffic patterns to find out the size of files downloaded and guess the list of packages that way instead of seeing them directly.

Repository security is provided by the signatures on the package list, not the certificates on the servers. Requiring HTTPS for Debian mirrors creates a vendor lock-in effect: we can no longer use donated server capacity, since we don't have a way to generate an arbitrary number of valid certificates for a given host name, so the project would become beholden to big content delivery networks.

@stapelberg
Author

My original suggestion to add ca-certificates by default was not for the use-case of installing packages.

Instead, my observation is that nothing I want to do with a Debian container works out of the box: I can’t clone a git repository to compile my software in a CI pipeline. I can’t have my programs query kernel.org for the current Linux version. I can’t push CI artifacts to a cloud service provider.

My use-cases either don’t need a network at all, or they need ca-certificates.

I think having networking work out of the box is not too much to ask for :)

@tianon
Contributor

tianon commented Apr 4, 2022

Well, we're definitely not going to have curl, wget, git, etc installed by default either. 😅

What I'd suggest is trying either buildpack-deps:bullseye-curl or buildpack-deps:bullseye-scm, which are going to be pretty close to exactly what you're looking for -- this basic image plus a few extra packages.

@eighthave

eighthave commented Apr 4, 2022 via email

@GyrosGeier

The counterargument to defaulting to HTTPS is that the only security benefit is that it requires eavesdroppers to analyze the traffic patterns to find out the size of files downloaded and guess the list of packages that way instead of seeing them directly.

This is unfortunately not true. If you think the GPG signatures alone are enough, consider these CVEs:

If we wanted to use SSL to guard against these, we'd also have to pin the certificates to a trustworthy set of CAs as well, and also provide a mechanism that allows users to un-trust CAs without potentially breaking updates. It's not a simple change, but it attaches APT to the SSL trust mechanisms and enforces policy on them, specifically "do not disable any of the CAs that CDNs use for proxy certificates."

Proxy certificates specifically are problematic because they are installed on thousands of machines that either share private keys or have a mechanism to quickly generate thousands of valid certificates.

I'm not convinced that this will give any significant amount of extra protection, but it will cause reliability issues, and the solution proposed in the thread to simply disable certificate validation to avoid those issues would degrade security to an even worse point than before.

Luckily, there are many mirrors that also provide HTTPS. It is not just the major CDNs.

That is not useful though, because these will have to be manually configured, as they do not get certificates for the deb.debian.org name, so I cannot simply redirect deb.debian.org to the nearest mirror with a DNS view as I can with HTTP.

@eighthave

eighthave commented Apr 5, 2022 via email

@stefanbethke

The argument for having HTTPS support available in the base image is quite simple, I think: you have to jump through enormous hoops to get there if it's not already in the base image, as has been documented plentifully here and in many other places. The cost of having it in the image is minimal. Why is this even a discussion?

@eighthave

@stefanbethke to answer why this is a discussion: there was a time when the code for supporting HTTPS was a plugin to apt, e.g. apt-transport-https. That meant that apt had a much simpler code path when using HTTP sources (e.g. no TLS library and related code). Since Debian/buster, the HTTPS support has been built into apt, so that argument for using HTTP by default no longer applies.

Also, the original apt threat model did not include privacy concerns, so defending against metadata leaks was not part of the picture. It is now clear that we also need to consider metadata leaks in apt's security model. For example, the most effective exploits are 0days, and 0days are only valuable as long as they are not known by the software maintainers. Someone looking to exploit an 0day will want to target specific machines to avoid making the vuln known to the world. Metadata leaks are essential for targeting. HTTPS limits the scope of metadata leaks by a large factor.

@roshanshariff

If an attacker is able to MITM the connection to security.debian.org, can't they "freeze" the repository and prevent package updates from becoming available? If I understand correctly, APT doesn't check that the Release.gpg file (which is the root of trust) is recent. If a new zero-day attack is fixed in the security repository, an attacker could prevent the fix from being downloaded while they exploit it. They might only have to do this for a few days, so key expiry won't stop them.

HTTPS would at least guarantee that you're talking to an actual Debian mirror rather than an impersonator. I'm not very knowledgeable about Debian's repositories, so please excuse me if I'm mistaken and this attack is somehow mitigated. But otherwise, using HTTPS is not just about privacy.

@tianon
Contributor

tianon commented Apr 5, 2022

Yes, that's mitigated; see https://wiki.debian.org/DebianRepository/Format#Date.2C_Valid-Until

@GyrosGeier

@stefanbethke it's a discussion because it pulls in extra infrastructure that is not required otherwise, and it breaks a semi-common deployment scenario, where deb.debian.org is redirected to a local mirror with a simple DNS change.

In well-connected places where Internet flat rates are available, that's only a handful of large installations, mostly cloud hosters; in places where Internet is expensive, that's a common setup.

@stefanbethke

@GyrosGeier I don't understand why enabling HTTPS support in the image breaks redirecting http://deb.debian.org. Btw, that DNS change requires breaking DNSSEC, which in itself is problematic. Quite the opposite: enabling HTTPS enables specifying a mirror that is only available over HTTPS, a very common setup in medium and large enterprises.

@aleksandrs-ledovskis

aleksandrs-ledovskis commented Apr 5, 2022

Recently this discussion has deviated into the pros & cons of using HTTPS for deb.debian.org, whereas the original request was much simpler: just include ca-certificates and let users decide what they will use it for.

@stappersg

Recently this discussion has deviated into the pros & cons of using HTTPS for deb.debian.org, whereas the original request was much simpler: just include ca-certificates and let users decide what they will use it for.

Which could be achieved by providing an extra image.

So, the "default" image, which matches debian-essential.
The "extra image" being debian-essential plus certificate stuff.

See #15 (comment) for the descriptions of the image names buildpack-deps:bullseye-curl and buildpack-deps:bullseye-scm.

Yes, as far as I understand it, the original request has already been granted.

I think this issue remains open to document that "what you want is available under a different image name".

@eighthave

Something like buildpack-deps:bullseye-scm seems like a strange mix; it's not something I'd use because the vast majority of the time only Git would be used, and the rest would be dead weight. I would consider something like buildpack-deps:bullseye-git if it existed.

@stefanbethke it's a discussion because it pulls in extra infrastructure that is not required otherwise, and it breaks a semi-common deployment scenario, where deb.debian.org is redirected to a local mirror with a simple DNS change.

In well-connected places where Internet flat rates are available, that's only a handful of large installations, mostly cloud hosters; in places where Internet is expensive, that's a common setup.

On top of what @stefanbethke said about breaking DNSSEC, the days of overriding things at the network level are over. The right place for that kind of configuration is in the end points, i.e. the clients. For example, sed -i 's,deb\.debian\.org,mymirror.local,' /etc/apt/sources.list. This idea is even standardized by the IETF.

@stappersg

I would consider something like buildpack-deps:bullseye-git if it existed.

"Patches welcome"

anstadnik added a commit to anstadnik/FirstAidBot that referenced this issue Jun 19, 2022
anstadnik added a commit to anstadnik/FirstAidBot that referenced this issue Sep 6, 2022
eighthave added a commit to f-droid/fdroidserver that referenced this issue Sep 8, 2022
Debian Docker images will soon default to HTTPS for apt sources, so force
it now:
debuerreotype/docker-debian-artifacts#15
@Raniz85

Raniz85 commented Nov 8, 2022

I just ran into this with an internal Nexus proxy with a certificate signed by an internal CA behind a corporate firewall that blocks most outgoing connections.

My solution was something along the lines of this:

FROM docker.io/library/debian:bullseye

# Install internal CA certificate where update-ca-certificates can pick it up
RUN mkdir -p /usr/local/share/ca-certificates
COPY internal-ca.crt /usr/local/share/ca-certificates

# Temporarily make the internal CA the only trusted CA
RUN mkdir -p /etc/ssl/certs
RUN cp /usr/local/share/ca-certificates/internal-ca.crt /etc/ssl/certs/ca-certificates.crt

# Set apt repository location
RUN printf -- '\
deb https://nexus.lan/repository/debian-proxy bullseye main\n\
deb https://nexus.lan/repository/debian-proxy bullseye-updates main\n\
deb https://nexus.lan/repository/debian-security-proxy bullseye-security main\n' > /etc/apt/sources.list

# Install ca-certificates; this will run update-ca-certificates, which will add our internal CA to the trust store
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y ca-certificates && apt-get clean

# Rest of your Dockerfile goes from here

@justinmchase

My company's firewall blocks access to the Debian package mirrors by default, which means that to build a Docker image I have to download the CA certs, update them, and replace all the sources. I can do it all manually, similar to how @Raniz85 is doing it, but I'm not going to lie: if ca-certificates and curl or wget were just available in the base image already, it would simplify things quite a bit. They seem like pretty reasonable, fundamental packages.

@Perdjesk

Well, we're definitely not going to have curl, wget, git, etc installed by default either. 😅
15#issuecomment-1087985251

As I understand it, the original point is not about tools but about whether the image can use https out of the box, this being a common use case for users. It feels like the examples given for context were unnecessarily turned into a strawman.

What I'd suggest is trying either buildpack-deps:bullseye-curl or buildpack-deps:bullseye-scm, which are going to be pretty close to exactly what you're looking for -- this basic image plus a few extra packages:
15#issuecomment-1087985251

Which could be achieved by providing an extra image.
So, the "default" image, which matches debian-essential.
The "extra image" being debian-essential plus certificate stuff.
15#issuecomment-1089087808

Those ideas all circulate around the same assumption that users consume the base image directly, whereas base container images are meant to be derived from and reused. Adding variants of the base image does not address the issue, as introducing flavors only addresses the case of direct consumption of the base image and not derived images. The goal of a base image is to cover the common use cases in one base image, not several flavors. See previous comment 15#issuecomment-907653538

What the answers are saying is that using TLS connections is not a sufficiently common use case to make it into this base image.
I, among others, disagree with this point of view: https://github.com/search?q=apt+ca-certificates+language%3ADockerfile&type=code

Moreover, the collection of workarounds for resolving the chicken-and-egg problem in the case described here (15#issuecomment-907653538, which as stated by others is not at all related to a discussion about http vs https for packages) is questionable compared to migrating to a saner base image that can use the dominant layer-7 network protocol (https) out of the box.
