
Add SSL by default using self-cert, support both 80 and 443 without redirect #232

Closed
rfay opened this issue May 17, 2017 · 23 comments

@rfay
Member

rfay commented May 17, 2017

What happened (or feature request):

Most browsers now complain loudly about HTTP sites, especially Chrome, which marks every HTTP site as "insecure". I think people understand this and it's not a big issue, but there's no real reason we can't ship our containers with an SSL cert and a config that uses it.

What you expected to happen:

SSL support is becoming standard, even though there are no known security implications for a local dev situation like this.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Related source links or issues:

I looked at the actual configuration in #175 (comment), and it's pretty easy to accomplish. We would need a wildcard cert (preferably with a very long expiration; I don't think you can get more than 3 years) and we'd want to keep it updated.

@rickmanelius rickmanelius added this to the v0.7 milestone May 22, 2017
@rfay
Member Author

rfay commented May 26, 2017

Non-self-signed certs are going to require us to:

  • Move to using a real domain name, something like *.ddev.io, instead of using .local
  • Get a wildcard cert for the domain.
  • Expose the private key associated with the cert to everybody we know, because it's bundled into the nginx container. I haven't figured out whether there is real risk associated with this. It does mean that anybody anywhere could put up a site with our cert at junker99.ddev.io, for example, and it would test as valid. But our control of DNS for the domain would have to be compromised for this to be a real problem.

Yes, we could use a self-signed certificate, but modern browsers make that really hard. You have to do magical things to click through scary warnings to exactly the right place.

@rickmanelius
Contributor

Looks like we can't register a "drud.local" domain because ".local" is reserved (https://tools.ietf.org/html/rfc6762). However, I'm all for having a *.drud.TLD and distributing the wildcard cert. I'm imagining there are attack vectors that are annoying, but owning the domain does allow us to revoke. As for distribution, we could probably push the cert with each new binary and run a periodic check to ensure it's up to date.

Perhaps we spend another 2 hours thinking through any security implications and, if we feel comfortable that this is an acceptable risk, proceed!

@rickmanelius
Contributor

We had a spirited debate on this within the team Slack channel. It boils down to two competing needs:

  • Chrome will now mark every non-HTTPS connection as insecure, causing unnecessary stress and confusion for the end user, as well as inconsistencies in http/https parity between local and production. A self-signed cert partially solves this but requires the end user to jump through hoops to accept/trust that cert.
  • Distributing a valid wildcard cert removes the warnings and work, but opens up potential security vulnerabilities.

So we're left with 3 options:

  1. Do nothing (default to HTTP traffic locally, noting this choice and its implications in the docs).
  2. Generate a self-signed cert (noting this choice and its implications in the docs).
  3. Distribute a wildcard cert (accepting the potential security exposure and potentially putting end users at risk).

FWIW, we're not the only ones trying to figure out this issue. A Google search for "ssl certificate local development" turned up the following relevant links:

Those last two links show a workaround that purports to give an on-the-fly SSL with an ability to remove the scary warnings / complications surrounding accepting a self-signed cert.

@rickmanelius
Contributor

FWIW, I'd be OK with an 8-hour timebox to test devcert and determine whether it's a viable approach without introducing additional issues/complexity. After that, my preference would be to spend an additional 4-hour timebox evaluating the security attack vectors of the distributed-cert approach. If that is also a dead end, then I think self-signed is better than plain HTTP for environment parity.

@rickmanelius
Contributor

I did get a few bites/responses on Twitter. Leaving this as another trail to evaluate should there be any additional follow-ups of value.

@rickmanelius
Contributor

We've decided to wait for end-user feedback. Short term we will not address this. Alternatively, we can scope this to just providing a mechanism to support a self-signed cert.

@rickmanelius
Contributor

Hey team. In a conversation with @Schnitzel this morning, he also mentioned that his organizations are pushing towards HTTPS traffic by default. It would seem that the self-signed cert, while featuring an annoying warning, would be the easiest way to at least get us this functionality while we later re-evaluate using the devcert solution. It was also noted in the conversation that other projects have attempted to distribute a wildcard cert and had the cert revoked because public distribution essentially violated the terms of service of the CA.

My vote is we go the self-signed route and enable HTTPS by default. We can then file a follow-up issue if/when it makes sense to evaluate a more elegant local solution.

@rfay
Member Author

rfay commented Jul 18, 2017

Enabling HTTPS by default probably is not the best idea, IMO. It really should match what they're using in production. And the number of support issues will be painful. I think we have to provide a way to make this explicit via configuration.

I'm not really that happy with providing a dev website that is broken out of the box for ordinary people.

@Schnitzel

Yeah, making it the default is probably only possible if we either have a way to run a valid certificate, or find a way to have a CA that the host trusts and sign certificates with it.

But we'll probably have HTTPS by default on amazeeio, so I'm definitely interested in finding a solution for HTTPS support on local dev, plus investing time to make it a valid cert :)

@rickmanelius rickmanelius added this to the v1.1.0 milestone Sep 14, 2017
@rickmanelius
Contributor

I have another response to this in flight, but I got distracted and it's saved as a draft on my laptop. That said, here is another relevant link Chrome to force .dev domains to HTTPS via preloaded HSTS

@rickmanelius
Contributor

And here was the previous draft...

Thinking through this further and summarizing:

  • My vote is to use a self-signed cert (browser warnings be damned).
  • We enable both port 80 and 443 by default.
  • We don't force (at the proxy/webserver layer) HTTPS.

As @rfay mentioned, there are going to be issues with this approach. However, we can file a subsequent issue to handle this in a more elegant fashion (devcert would be, IMHO, the more robust implementation). Still, there are some additional details that we'll need to address when implementing. Examples:

  • We probably need a configuration flag that would pass to the nginx webserver to force HTTPS, HTTP, or allow both.
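A minimal sketch of what such a flag might control on the nginx side. All names and paths here are illustrative assumptions, not ddev's actual configuration:

```nginx
# Hypothetical server block serving both HTTP and HTTPS without redirect.
# Cert paths and server_name are placeholders for illustration only.
server {
    listen 80;
    listen 443 ssl;

    server_name mysite.ddev.local;

    ssl_certificate     /etc/nginx/certs/ddev.local.crt;
    ssl_certificate_key /etc/nginx/certs/ddev.local.key;

    # Deliberately no "return 301 https://..." rule: both schemes stay
    # usable, matching the "don't force HTTPS" decision above. A config
    # flag could toggle such a redirect on for users who want it.
}
```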

@rfay
Member Author

rfay commented Sep 18, 2017

Interesting prequel on .dev: https://iyware.com/dont-use-dev-for-development/ - note that we're violating the basic rules with .local as well, and it has other problems: it's delegated for mDNS (see Wikipedia), which could conceivably be the answer to losing our hosts-file manipulation. The most likely problem with using ".local" is that it will conflict with Bonjour's name resolution, and we'll end up with a bizarre ddev name-resolution failure that we could never debug because it's completely unique to the user's network.

This seems to have been fixed over the years, but back in the day I was actually forced to move from .local to .l and later .dev because of Bonjour taking over .local on macOS: https://blog.scottlowe.org/2006/01/04/mac-os-x-and-local-domains/ - it was probably 2011-2012, and I didn't even remember the whole saga until now. But the fact that we're routinely getting away with it means it must be solved in recent macOS.

@rfay rfay changed the title Add SSL support for local sites, consider making it default Add SSL by default using self-cert, support both 80 and 443 without redirect Sep 18, 2017
@rickmanelius
Contributor

It looks like the article linked above (thanks, @rfay!) recommends the following:

  • .localhost (but not really)
  • .invalid
  • .test
  • .example

A key statement is made at the end of the article: "Once .dev domains start becoming prevalent I imagine that there will be a few people kicking themselves for mapping it to their local machine, but hey, that's part of learning!" The important part is that this may never happen. We could also use .ddev, because that would be a reminder of what program is being used and it's not a known/published TLD. I frankly dislike .invalid (sounds broken), .example (sounds like a mock prototype), and .test (it could be confused with prod/test/dev/QA environment terminology). I'd be fine with .localhost, but there seem to be issues with that as well.

Perhaps we split off a separate issue and address the TLD issues once and for all.

@rfay
Member Author

rfay commented Sep 18, 2017

Yes, although we've discussed our choice of .local a few times before, I can't find a ticket about it. However, related issues are "Use wildcard FQDN" and "Subdomain, local DNS support".

@rickmanelius
Contributor

Yes, I do recall this coming up a few times. Here are the related issues that I could find:

Note that both are closed, but it seems like the first one might be the best to re-open.

@rickmanelius
Contributor

To summarize, this is now actionable with the following:

  • Each site will get a self-signed cert provisioned.
  • We enable both port 80 and 443 by default.
  • We don't force (at the proxy/webserver layer) HTTPS.

We can split additional functionality into its own ticket, including the domain name (already in #54), a config option to force HTTPS, changing domain names, and using a custom cert.

@Schnitzel

Mhh, actually I would vote against having the sites (aka the nginx containers) create their own certs. In production Docker environments it is best practice to do the SSL offloading on the load balancer (haproxy, or nginx in the case of Kubernetes/OpenShift). That way you don't need to worry about systems that can't handle HTTPS (like Varnish), and you don't need to teach your nginx containers about Let's Encrypt; the reverse proxy handles that. Things like removing old ciphers and SNI handling are then the responsibility of the reverse proxy, not the nginx container.

So I would strongly suggest doing the SSL offloading on the haproxy and not in the nginx containers.
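For illustration, terminating TLS at the edge as proposed might look like the following haproxy fragment. The PEM path (haproxy expects cert and key concatenated into one file) and backend name are hypothetical, not an actual ddev or amazeeio config:

```
# Hypothetical haproxy frontend terminating TLS for all sites behind it.
frontend web
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/ddev.local.pem
    default_backend nginx_sites

backend nginx_sites
    # Backends speak plain HTTP; TLS is already stripped at the frontend.
    server router 127.0.0.1:8080
```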

@rickmanelius
Contributor

Hi @Schnitzel. Just responding ahead of a conversation. Is it possible we're talking about two different scenarios? I agree that SSL termination at the haproxy makes sense. However, for local dev (using ddev) would an autogenerated self-cert not suffice for the majority of use cases?

@Schnitzel

Schnitzel commented Oct 4, 2017

@rickmanelius

However, for local dev (using ddev) would an autogenerated self-cert not suffice for the majority of use cases?

I don't understand what the type of certificate has to do with the location it is terminated?

The very best scenario would be:

  1. HTTPS termination on the haproxy (for me a must)
  2a. Using a certificate that is trusted by browsers, with a system similar to devcert

If 2a is not possible, we could do:

  2b. Using a self-signed certificate that throws a warning

@rfay
Member Author

rfay commented Oct 4, 2017

We have an nginx router that front-ends the various sites and listens on localhost:80/443. Our consensus and current plan for this is to put a static self-signed wildcard cert for *.ddev.local on the front-end ddev router. The key problem is that it's not trusted by the browser, so requires people to do the magic trust-it dance, but that is normally not too bad after they've done it once.
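A static self-signed wildcard cert like the one described can be generated in one step with openssl. The file names and three-year lifetime below are illustrative assumptions, not ddev's actual cert tooling:

```shell
# Create a self-signed wildcard cert for *.ddev.local (illustrative names).
# -nodes leaves the private key unencrypted so nginx can read it unattended.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout ddev-router.key -out ddev-router.crt \
  -days 1095 -subj "/CN=*.ddev.local"

# Inspect the subject to confirm the wildcard CN.
openssl x509 -in ddev-router.crt -noout -subject
```

Because the cert is self-signed rather than chained to a trusted CA, browsers will still show the warning until the user trusts it once.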

@rickmanelius
Contributor

Given the need to balance priorities this week, adding @tannerjfco as a possible alternative given @rfay is looking to better support @andrew-c-tran on UI items.

@rickmanelius
Contributor

Related and probably most efficient to work together #416.

@tannerjfco
Contributor

This is now resolved with #522

@rickmanelius rickmanelius modified the milestones: v1.1.0, v0.10.0 Nov 17, 2017