allow ports other than 443. #33

Closed
dnozay opened this Issue Nov 27, 2014 · 51 comments

Comments


dnozay commented Nov 27, 2014

see also issue #19.

practical examples:


dnozay commented Nov 27, 2014

The argument brought forward was the need to prove administrative control over a machine.

  • if an attacker gains root access, they can listen in on the connection, acquire the certs/keys, etc.
  • there are legitimate use cases for ssl endpoints that do not terminate at apache / nginx / name-your-webserver but at the app level.
  • there are legitimate use cases where services are running on the same machine, and sharing certs is not as secure as having a different cert for each service.
  • in the case where the ssl endpoint is apache/nginx but the traffic is processed by something else (php, wsgi, cgi) down the line, an attacker can listen in on the unencrypted traffic between the user-facing webserver and the layer that actually processes the requests.
Contributor

kuba commented Nov 28, 2014

Port 8080 is available to any user of the system. Say you give me a shell account on your machine. Would you like me to be able to register certificates for your domains? Definitely not.

The ACME protocol is HTTP-server (or app) agnostic, so I don't see any point in discussing uwsgi, Apache, etc. here.

Therefore, I believe that this bug report does not bring any value and should be closed as duplicate of #19.


dnozay commented Nov 28, 2014

@kuba, I am not going to give you a shell account, nor am I going to disable selinux, nor am I going to run without iptables. You are making assumptions about who the set of users of the system can be.


Contributor

kuba commented Nov 28, 2014

On the other hand, you are making assumptions about what the system has: selinux (non-standard) or iptables (also non-standard, i.e. appropriate firewall rules). Also, mind that there is a lot of software running as non-root that might bite you if exploited; you don't have to give me shell access.

Port < 1024 is a fairly good assumption, though not perfect.


dnozay commented Nov 28, 2014

dude, if I can't even make assumptions about my own systems...

Contributor

kuba commented Nov 28, 2014

The question is how you communicate to the ACME server that it is safe to validate something on non-privileged ports. Not everyone has the same super-secure setup as you.

dnozay commented Nov 28, 2014

conversely: you cannot build on the premise that root is using port 443; an attacker could be running an ACME client and already control port 443. Therefore port 443 is irrelevant to the equation.

If a sysadmin wants to use port 443, then it's the sysadmin's job to help secure that port. If a sysadmin wants to use port N, then it's the sysadmin's job to secure port N.

e.g.

# good luck bob.
sudo sysctl -w net.ipv4.ip_local_port_range="65000 65000"
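(For reference: this sysctl governs the kernel's local ephemeral port range, i.e. the source ports handed out for outgoing connections. A minimal sketch of reading the setting, assuming the standard Linux /proc layout; the helper name is mine.)

```python
def parse_port_range(raw: str) -> tuple:
    """Parse the two integers in net.ipv4.ip_local_port_range.

    `raw` is the text of /proc/sys/net/ipv4/ip_local_port_range,
    e.g. "32768\t60999" on a default Linux install.
    """
    lo, hi = map(int, raw.split())
    if not (0 < lo <= hi <= 65535):
        raise ValueError("implausible port range: %r" % raw)
    return lo, hi

# Typical usage (Linux only):
# with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
#     lo, hi = parse_port_range(f.read())
```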

ravisorg commented Nov 28, 2014

@dnozay You might want to take a moment and meditate on exactly what you're asking here, it's really not a good idea.


Contributor

kuba commented Nov 28, 2014

@dnozay Okay, while you're meditating, here is a little scenario for you to play with. Imagine you are an ACME server and you get a request to provide a certificate for example.com. What do you do? You may assume that none of the ports is controlled by root.

dnozay commented Nov 29, 2014

> @dnozay You might want to take a moment and meditate on exactly what you're asking here, it's really not a good idea.

@kuba, @ravisorg, please keep it courteous. If you want to hurt people's feelings by calling them ignorant (my interpretation) on social media, at least back it up with some facts. I'm just trying to help here.

An administrator controls which ports are open for regular users to consume - this includes ephemeral ports / ports in the local port range. As I showed previously, you can control that with sysctl. Effectively, root can control all ports if they want to. On your machine the local port range may start at 1024; on my machine it may start at wherever I decide.

Here is something stupid (stupid only because I am ignorant of a good use case for it) that a sysadmin could do:

sudo sysctl -w net.ipv4.ip_local_port_range="1 1024"

With the above regular users can control port 443 without administrative control of the machine.

> @dnozay Okay, while you're meditating, here is a little scenario for you to play with. Imagine you are an ACME server and you get a request to provide a certificate for example.com. What do you do? You can assume that none of the ports is assumed to be controlled by root.

Same here, I would assume none of the ports to be controlled by root.

Contributor

kuba commented Nov 29, 2014

@dnozay Okay, let's assume none of the ports is controlled by root, and you are the ACME server. How do you validate me?

dnozay commented Nov 29, 2014

@kuba I don't know, I am not writing the specs.

Contributor

kuba commented Nov 29, 2014

Despite all the madness above, I have had an idea for some time now...

The current ACME protocol does not state this explicitly, but all defined validations require the ACME server to perform domain resolution to an IP address before connecting to the client. I claim that, implicitly, the protocol relies on the security of the DNS system.

On this assumption, without weakening security, we could extend the current protocol to look up a predefined TXT record, say acme.port, and use it to contact the ACME client instead of the default 443. This would allow not only any privileged port < 1024 (#19) but any valid TCP/UDP port number. The latter has the useful property that it would not require clients to run as privileged users, closing off the possibility of various OS exploits due to bugs in the client software... Of course, care should be taken, and the administrator will have to remember to secure the port before setting the record. I feel that requiring a low TTL on such a record would be beneficial from the security point of view.

The TXT acme.port record should be optional, and the ACME server would fall back to the standard 443. This way we give more flexibility to more tech-savvy users, while still maintaining the goal of the protocol, i.e. making it easier to acquire certificates.

Note that another benefit of the TXT acme.port solution is that administrators can run the protocol and create certificates without taking down existing services running on port 443.

What do you think?

CC: @jdkasten
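(To make the proposed fallback behavior concrete, here is a rough sketch of the server-side port selection. The record name acme.port comes from this thread, not from any spec, and the helper name is mine.)

```python
def port_from_txt(records, default=443):
    """Pick the validation port from acme.port TXT record strings.

    `records` is a list of TXT record payloads as returned by a DNS
    lookup (e.g. ["8443"]). Falls back to `default` when the record
    is absent or is not a valid port number, per the proposal above.
    """
    for txt in records:
        txt = txt.strip()
        if txt.isdigit() and 1 <= int(txt) <= 65535:
            return int(txt)
    return default
```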


ravisorg commented Nov 29, 2014

@dnozay My apologies, I wasn't meaning to insult you. I was trying to say you might want to step back and view your request more globally, instead of concentrating on your server and your setup. For example, take into account shared servers (like GoDaddy, but also little mom-and-pop shops that don't have full-time sysadmins) where malicious users with access to the system may be able to do unexpected things.

You comment that your system is secure. It might be. It might not be as secure as you think. But regardless, I would suggest that most servers in the wild are not well secured, and assuming they are (or saying "well, that's root's fault for not securing it then") makes us all less secure by potentially allowing unauthorized people to obtain trusted certificates. Certificates that your browser will now trust, because the verification system was made weaker.

Verification is the weak point in the CA system - it needs to be made as strong as possible, even if that makes your specific case slightly less convenient.

Specifically in this case:

  • Ports < 1024 are by default owned by root. Yes, root could control any port, but by default higher ports are open to anyone (and any software) on the system. Anything other than the default will be untrusted, because in the real world very few servers change those defaults.
  • In the same way, port 443 can PROBABLY be trusted because, by default, it is controlled by root. Could a really bad sysadmin reconfigure the server in a way that allows non-root users to listen on 443? Sure. But because of the defaults (and the effort the admin would have to go through to change them accidentally), it's a much safer bet to assume 443 is root and > 1024 is not root.
  • You use an example where sudo can be used to gain access to root-owned ports. Well, yes - if you can execute things as root then you can do things that root can do. But if you have sudo root access on a server you basically own it anyway, and probably should be allowed to authenticate. Shared hosts will generally not provide sudo access to untrusted users, and they certainly don't by default.

Let me know if I'm wildly misunderstanding your argument or if I've missed anything. I haven't had any caffeine today and I may be a little off ;)
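(The privileged-port distinction in the first two bullets can be demonstrated directly: on a default Linux setup, an unprivileged process can bind high ports, or port 0 meaning "any free ephemeral port", but gets a permission error on ports below 1024. A small illustrative check; the helper name is mine.)

```python
import socket

def can_bind(port):
    """Return True if this process may bind a TCP socket to `port`.

    Only the permission case is handled: an already-in-use port will
    raise OSError rather than return False. Port 0 asks the kernel
    for any free ephemeral port, so it should always succeed.
    """
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        return False
```

Running as a regular user, can_bind(443) is False unless the admin has changed the defaults (e.g. granted CAP_NET_BIND_SERVICE), which is exactly the point being argued above.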

Contributor

jdkasten commented Nov 30, 2014

It is difficult to separate the eventual Let's Encrypt policy from what other CAs might want to do in the future. Mind you, any deviation from the "standards" poses additional risk to the CA system as a whole. The initial challenges were proposed to be strictly stronger validations of existing CA practices.

I wrote up a bit of background on current validation methods and some CA background in an earlier question if you are interested. validation methods

In regard to @kuba's proposal: it seems reasonable, but I would have a few concerns.

It seems a bit heavy to me. Since there is already an existing DNS challenge, the only time someone would use it is if they were preparing to get a certificate sometime in the future and wanted to set up the initial phase now. The process requires two steps, whereas if you already have ready access to DNS, you can always just perform the DNS challenge.

This technique may also lead to some prolonged attacks that may not be as easy for the user to detect. There is no sense of when the modification to the TXT record actually occurred, and everything else seems to be fine. (This would still require some unprivileged box access, but this is the threat model we are trying to stop.)

Finally, the CA will likely want to control which ports it thinks are valid for the current challenge.

Remember, as a client, you have to satisfy the CA and the CA is on the hook for any mistakes in verification. I know the Let's Encrypt CA will take additional policy measures to improve security and avoid misissuance.

What is more likely is that the challenges will allow the CA to specify a port to perform the challenge on... especially when ACME CAs begin issuing for different services.

normanr commented May 2, 2015

btw, instead of TXT acme.port you may want to use SRV _acme._tcp as that is what SRV records are for :-)
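(For illustration, such a record in zone-file syntax might look like the following; example.com, the 300-second TTL, and port 8443 are placeholders, and the _acme._tcp label is the one suggested in this comment, not a registered service name.)

```
_acme._tcp.example.com. 300 IN SRV 0 0 8443 example.com.
```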

coderanger commented Jun 5, 2015

Would love to see this in the context of infrastructure automation. It would be much easier for Chef/Puppet/etc to spin up a little web server on a new port (but still <1024) rather than having to sort out what is listening on 443 already and interact with it to complete the verification. This could allow a more generic approach from automation/tooling.

dol commented Jun 25, 2015

@kuba The idea of an optional DNS record sounds like a good solution. As @normanr mentioned, prefer SRV over TXT.

This would be a very useful feature.
In this case, we could offload the DVSNI traffic to a different port.

coolaj86 commented Jun 30, 2015

The idea of SRV _acme._tcp sounds great to me, but I worry it will be too complicated for the average joe, which is kinda who we're targeting with letsencrypt, isn't it? I mean, we want to lower the barrier to get more people on the encryption bus, right?

Why not just allow some other privileged port < 1024?

Here's what I see:

  • letsencrypt is to help average joe get certificates
  • average joe doesn't want his website to be taken offline to get a certificate
  • average joe can't figure out haproxy tcp forwarding (and neither can I for that matter)
  • He can probably figure out how to sudo letsencrypt

Also, I definitely hear the argument for not allowing a user on a system to request certs on behalf of that system... but I kinda disagree. Shared hosts generally don't allow their users to listen on random ports. Despite the history of linux being "multi-user", in practice I don't see it.

My motto is kinda "if you trust me enough to give me your credit card to have access to my system, you can trust that if you screw it up I'm gonna charge that lovely card of yours". If you're letting random untrusted users on your box, you've got... two problems... y'know?


coolaj86 commented Jun 30, 2015

It looks like port 4 is ripe for an RFC
https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml

And 6,8,10,12,14,16,26,28,30,32,34,36,40,60.

In all seriousness, I'd say port 81 is quite a catch. 442 or 444 would have been nicer, but hey, 81 ain't bad.

I'm quite surprised how sparse it gets as you go further out, but there are definitely enough ports available, and if silly one-off Microsoft technologies that no one will ever use again can get through IANA, I'm sure something as monumental and web-worthy as LetsEncrypt can.

Yvi71 commented Aug 3, 2015

I would also love to see an option to use an alternative port, just for the reason not to take down a service that already works on 443.

ac000 commented Aug 26, 2015

To me, the argument that "it's running on a root-initiated port, so it's good" seems rather weak.

I see a lot of chat here about insecure servers; well, if that's the case, assuming root is OK seems somewhat flawed. So I don't really see a big difference between using a port < 1024 and not.

Simply sending an email to an address at the domain seems just as good/bad to me, and a lot less faffing about.


kelunik commented Sep 13, 2015

Allowing other ports to remove the downtime for (re)validation is a good idea, especially if domain validation will take place on every renewal to make sure it's always the same process and not another process once a year.

I also think that restricting it to ports < 1024 is a must, at least by default, as that keeps things secure by default.


DavidLutton commented Sep 13, 2015

Mumble seems to support CA-provided TLS certs.
Some Let's Encrypt community discussion and a recent Mumble release on cipher suites seem to indicate compatible suites. I'm not sure whether Mumble plays well with sharing ports, SSL endpoints, et al.; also, given that it is live group voice chat, going through an endpoint service would reduce quality of service and increase total latency.

Mumble can be set to any port > 1024 and runs as a reduced user on Ubuntu.
It cannot claim low ports: "Server: TCP Listen on [::]:2 failed: The address is protected"

Contributor

jdkasten commented Sep 22, 2015

I am in favor of allowing the server to specify a set of ports to be used.

Of note, the CABForum Validation working group is currently attempting to enumerate a set of acceptable ports to perform DV.


kelunik Sep 22, 2015

Would have been nice if just the ownership had moved, so all issues and all history would have been retained.

@jsha Note that it's not just providing a file; we need a special Content-Type header, too. It would be nice to have the LE client in a truly standalone mode; the current standalone mode is only suited for the first issuance. That way we wouldn't need ACME support in every web server but could always use it as standalone software for certificate management.
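A truly standalone responder along these lines is small. Here is a sketch using only the standard library; the path, token, and Content-Type values below are illustrative assumptions, not the actual challenge format:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical values -- the real path/token/content type would come
# from the ACME server during the challenge exchange.
CHALLENGE_PATH = "/.well-known/acme-challenge/example-token"
CHALLENGE_BODY = b"example-key-authorization"
CONTENT_TYPE = "application/jose+json"  # the special header discussed above

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == CHALLENGE_PATH:
            self.send_response(200)
            self.send_header("Content-Type", CONTENT_TYPE)
            self.send_header("Content-Length", str(len(CHALLENGE_BODY)))
            self.end_headers()
            self.wfile.write(CHALLENGE_BODY)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def serve_once(port=0):
    """Start the responder (port 0 = any free port); caller shuts it down."""
    server = HTTPServer(("127.0.0.1", port), ChallengeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

After validation completes, `server.shutdown()` stops the responder; nothing about the production web server's configuration is touched.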


mvdkleijn Sep 22, 2015

@ravisorg I would prefer to keep things simple. Let's just register a port with IANA for the standalone client. Allowing a choice from a set of ports is possible, of course, but I would prefer a clearly defined and registered port.

@jsha I was under the impression that an automated update of the certificates would require downtime for validation. I'll reread.

I'm not a big fan of the DVSNI option personally. Seems overly complex to me.

The file option would be nice and possible (and at least simple) if not for the special header, which seems unnecessary. A special header makes setup more complex for users who might not know how to configure it.

I was unaware the spec had moved, especially since there appears to be no discussion in that repo (0 issues).

I agree with @kelunik that the client needs a true standalone option for revalidation which would allow it to be tech neutral-ish.

For me personally, my ideal setup would have a completely standalone client manage the certificates for me and let me do the rest. I just don't like the idea of a third party app changing the configuration of my webserver. Seems just too fragile.


jdkasten Sep 22, 2015

Contributor

@mvdkleijn No, updates for the validation do not require downtime. If you are required to change the configuration, most webservers allow you to simply reload ('graceful restart').

@mvdkleijn @kelunik Given that validation is currently required to happen on port 80 or 443, you are going to need to interact with the existing webserver. An option that might be of interest to you is currently being worked on: certbot/certbot#757

Finishing this off or additional PRs are always welcome, though this discussion should be taking place at https://github.com/letsencrypt/letsencrypt

FYI, updates to certificates in the linked client modify a symlink so that configuration files don't have to change after installation.

Finally, if you do choose to modify config files, all configuration files are backed up before they are written to. If installation fails, the client reverts the configuration changes (or you can manually tell it to roll back to the original configuration).
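The symlink update can be done atomically, which is why the configs never need to change afterwards. A sketch of the idea (paths here are hypothetical):

```python
import os

def point_symlink_at(link_path, new_target):
    """Atomically repoint link_path at new_target via rename over a temp link."""
    tmp = link_path + ".tmp"
    # Create the new link first, then rename it over the old one; rename is
    # atomic on POSIX, so readers always see either the old or the new target.
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_target, tmp)
    os.replace(tmp, link_path)
```

A web server configured to read, say, `live.pem` through such a link never needs a config edit when a new certificate version lands next to the old one.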


coolaj86 Sep 22, 2015

@jdkasten If Apache is currently using port 443, no number of graceful restarts in all the world will help the letsencrypt client, which also needs to run on port 443.

Or are you saying that updates don't run on 443?

Also, since it's in Python instead of Go (or even Node), it's painfully slow. It takes whole seconds on a decent machine and on the order of 30 seconds on a Raspberry Pi or similar.


kelunik Sep 22, 2015

@coolaj86 There's another mode which just updates and reloads the Apache config to provide the validation payload IIRC, so there's no need for a restart.


jdkasten Sep 22, 2015

Contributor

@coolaj86 I think you misunderstood me. Yes, as I mentioned, as it stands you are going to have to interact in some way with the running webserver. Reload only avoids terminating existing connections and should not result in any downtime while serving the validation challenge.

@coolaj86 Yes, more clients should be written. There was never meant to only be a single client. There are many different use cases which will require varied clients.


coderanger Sep 22, 2015

Interacting with the running Apache means you need to make assumptions about the Apache (or Nginx, or whatever) config structure. Using a secondary port would mean you could do this with no assumptions at all about the state of the system. Drop in a responder client, run the validation, shut it down, done. For stuff like Chef and Puppet integration this is hugely important.


dol Sep 22, 2015

One idea to solve this problem without customizing/reloading the web server is to use iptables/PF to redirect traffic from the Let's Encrypt Validation Authority to a different internal port. On this port the ACME client listens for challenge requests and performs the challenge.

iptables --table nat --append PREROUTING --protocol tcp \
               --source <Validation Authority IP address> --dport 443 \
               --jump REDIRECT --to-ports <ACME client port>

For this to work the Validation Authority IP addresses should be known.


kelunik Sep 22, 2015

@coderanger Exactly, we could drop all Nginx and Apache related code / support then.

It's

  • easier to maintain the LE client code
  • easier to actually use the client (fewer options are almost always a good sign)
  • doesn't need any assumptions about the config nor any changes there
  • doesn't need to reload running production systems

@dol IP addresses of LE might change and be dynamic to prevent certain attacks, so that doesn't work.


coderanger Sep 22, 2015

Even if the iptables trick did work, I would be very unhappy doing stuff like that in production :-P


mvdkleijn Sep 22, 2015

What @coderanger described so eloquently and @kelunik nicely lists is exactly what I had in mind.

A letsencrypt client that messes with Apache's (or whatever) configuration is fine for simple use cases but for people with more complex setups or requirements, such a client would quickly break.

The iptables trick is great from a theoretical point of view, but it makes my skin crawl for production. Also, it'd be a definite no-go for less knowledgeable users.


mvdkleijn Sep 23, 2015

@jdkasten So basically what you're saying is someone should write an open source ACME client that is a true standalone client doing what we specified earlier?

Wouldn't that potentially impact (user or service) security and hence usefulness of letsencrypt? One scenario that pops into my mind would be how you would explain to your users that a client written by a third party has a security issue in it. Assuming you're even aware of said client.


kelunik Sep 23, 2015

@mvdkleijn ACME isn't limited to LE, there may be other CAs supporting it. Generally, other clients would handle security issues like any other software.


coderanger Sep 23, 2015

@mvdkleijn Yes, having multiple client implementations is part of the idea. I'll be writing one in Ruby regardless, but it would be easier to not have to redo all the Apache/Nginx integration if possible.


analogic Nov 4, 2015

@kelunik +1

I am curious why the LE staff chose the more problematic way of cooperating with client systems instead of JUST getting the cert without web server dependencies. Isn't it easier to make one simple portable utility which handles all communication through a dedicated port than to mess with various systems with various webservers in various environments? When I first heard about LE I thought I would just add "/usr/sbin/le -d example.com -o /etc/ssl/example.com && killall -HUP httpd" to cron, open one port on the firewall and forget about it... Well, this is far from it.
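For reference, the cron-based workflow described above would look something like this. The `le` client name and flags are hypothetical, taken from the comment itself, not from any real tool:

```shell
# hypothetical crontab entry: renew at 03:00 on the 1st of each month,
# then signal Apache to reload so it picks up the new certificate
0 3 1 * * /usr/sbin/le -d example.com -o /etc/ssl/example.com && killall -HUP httpd
```

The appeal is that the renewal step has no dependency on the web server's configuration format; only the reload signal at the end touches it.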


My1 Nov 9, 2015

Talking about records, why not just allow the entire validation to be done via a DNS record, like TXT or SRV or whatever, where you store some value that the LE client wants you to store?
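This is essentially how a DNS-based challenge works: the CA hands the client a token and then looks for it in a record under the domain. A hypothetical zone-file entry (the name, TTL, and token value here are illustrative, not the actual challenge format):

```
; the CA would look up this record to verify control of example.com
_acme-challenge.example.com. 300 IN TXT "hypothetical-validation-token"
```

The advantage over port-based validation is that no listener on the host is needed at all; the drawback, as noted below, is that automating DNS updates is harder than serving a file.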


kelunik Nov 9, 2015

@My1 That's another challenge that's available, but it's not that easy to automate.


My1 Nov 9, 2015

If we keep the challenge the same and just check whether or not it's still there, it would be no problem. Similar to the port setting, a direct DNS challenge could be used optionally if you don't want LE to "mess" with your webserver's config, similar to webroot.


hardie Nov 12, 2015

Just a reminder that discussion of the protocol is at ietf@acme.org, since change control moved to the IETF.

regards,

Ted



kelunik Nov 12, 2015

Unfortunately, there's nobody active there.


bifurcation Nov 13, 2015

Contributor

What @hardie meant to say was acme@ietf.org, where there is a bunch of active discussion right now.


bviktor Nov 16, 2015

Guys, we really don't care if the port needs to be <1024. Then make it 444, 456, 865, 1023, whatever. Just don't require us to stop the webserver for this.


My1 Nov 16, 2015

I see it the same way, since an alternate port can also be used for a different port forwarding when using LE inside a home network.

Maybe LE can even try to open the port via UPnP.


@letsencrypt letsencrypt locked and limited conversation to collaborators Nov 16, 2015
