Various improvements to CRL handling #97
Conversation
Patch has passed first testings and has been assigned to be reviewed
Some comments from the review: The change at 262, 267-277 should not be needed, I think, as this fallback is done for self-signed certificates and no other correct cert should need it. According to RFC 3280, section 4.2.1.1, Authority Key Identifier: [...] The keyIdentifier field of the authorityKeyIdentifier extension MUST be included in all certificates generated by conforming CAs to facilitate certification path construction. There is one exception; where a CA distributes its public key in the form of a "self-signed" certificate, the authority key identifier MAY be omitted. Maybe it could be argued for if it seems to be some sort of de facto standard to break the RFC! The change to pubkey_crl seems to change the "encoding level" — check whether this code is covered by OTP tests, and ask the contributor to provide a test case that covers it if not. Other changes look good.
Yes, I realized that the AKI handling was incorrect last week and I reverted that on Basho's OTP fork but didn't have time to push the patch up to this branch as well. I have now done so. There are definitely certificates/CAs out there that do not provide an AKI, though, but the issuer_fun can deal with that via the no_issuer argument it is passed. I'm not sure about the "encoding level" bit, can you be more specific?
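As a hypothetical sketch of the situation the no_issuer argument signals (this is not the actual patch; the module, function name, and the assumption that pubkey_cert:select_extension/2 returns undefined for a missing extension are illustrative), one could detect whether a certificate carries an authorityKeyIdentifier extension at all:

```erlang
%% Illustrative sketch: detect whether a certificate carries an
%% authorityKeyIdentifier extension. Certificates without one are the
%% case the issuer_fun's no_issuer argument covers, letting the
%% callback locate the issuer by name instead.
-module(aki_check).
-include_lib("public_key/include/public_key.hrl").
-export([has_aki/1]).

has_aki(#'OTPCertificate'{tbsCertificate = TBS}) ->
    Exts = TBS#'OTPTBSCertificate'.extensions,
    case pubkey_cert:select_extension(?'id-ce-authorityKeyIdentifier', Exts) of
        undefined -> false;
        #'Extension'{} -> true
    end.
```

A fallback path would then match the CRL's issuer name against the subjects of the trusted CAs rather than relying on the keyIdentifier.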
Patch has passed first testings and has been assigned to be reviewed
The patch removes the use of the function subject_alt_names, which performs more decoding than the pubkey_cert:select_extension replacement will. See:

```erlang
subject_alt_names(Extensions) ->
    Enc = extension_value(?'id-ce-subjectAltName', 'GeneralNames', Extensions),
    case Enc of
        undefined -> [];
        _ -> pubkey_cert_records:transform(Enc, decode)
    end.

extension_value(Extension, ExtType, Extensions) ->
    case pubkey_cert:select_extension(Extension, Extensions) of
        #'Extension'{extnValue = Value} ->
            public_key:der_decode(ExtType, list_to_binary(Value));
        _ -> undefined
    end.
```
Hi, we would also like an explanation as to why it was removed. Thank you.
If I revert the altnames stuff, I start getting errors like this:
As far as I can tell, it tries to decode fields in an already decoded OTPCertificate, and of course that doesn't go over so well.
I also, with my patch, get back wildly different AltName values: twitter.com:
basho.com
Also, with the certificates I generate myself, using a patched make_certs from OTP, which is included in the erl_crl_example above:
If you run check.sh without the patch to OTP and with the patch, the patch makes it pass. |
Also note that if you run my little tool against godaddy.com:443, without the change to match_name, it will try to compare "crl.godaddy.com" to "http://crl.godaddy.com/gds3-61.crl", which obviously will never match.
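To illustrate the fix (this is a sketch, not the actual match_name change; it uses the modern uri_string module for clarity, where http_uri:parse/1 was the equivalent at the time of this thread): extracting the host from the distribution-point URL before comparing makes a bare hostname match a full URL.

```erlang
%% Sketch: compare a bare hostname against the host component of a
%% distribution-point URL, instead of against the whole URL string.
-module(dp_host).
-export([host_matches/2]).

host_matches(Host, Url) ->
    case uri_string:parse(Url) of
        #{host := H} -> string:equal(Host, H, true);  %% true = ignore case
        _ -> false
    end.
```

With this, dp_host:host_matches("crl.godaddy.com", "http://crl.godaddy.com/gds3-61.crl") succeeds, whereas a direct string comparison of the two never can.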
Would you mind adding that example as a testcase? Our tests pass but use the other way into CRL validation, and might be slightly adapted to make sure it passes.
Can you indicate what you want me to test, and how it should be tested? The public_key tests are pretty difficult for me to follow and I don't know where a new test would go, or what it would look like.
Hi! The pkits_SUITE is a little special as it builds on a standard test suite for the PKIX standard. But I would rather you add a test case to public_key_SUITE that resembles the way you call the code in your real system, but with "fake" (specially generated) certs/keys for the test case. I try to always generate the certs in the test case so that the timestamps will always be valid; this is however not true for the pkits_SUITE, as we have to accept
I'm really struggling with how to add a test case while generating certificates (and not just using ones shipped as test data). The certificate generator I'm using is the one from ssl, not public_key, with a bunch of patches to it to give it more of an API and to generate CRLs and the like. I'm also not even sure how some of those crazy real-world certificates were generated, which makes it a little challenging to generate certificates that trigger the same behaviour. I will see what I can do, but I'm concerned all my bumbling will make this patch miss R17.
Would you be OK if I tested this from the ssl application test suite? It'd be a lot easier to adapt my test to do something like that than pull a bunch of stuff into the public_key test suite.
Hi Andrew! I know certificate handling is not something that is easily grasped. Regards, Ingela
I think I will move ahead with updating make_certs.erl in the SSL application to be more flexible (like the version I've been using for testing all of this) and I'll add tests to the SSL application. I know it would be better to add 'pure' tests to public_key, but I don't feel I can accomplish that and still make the R17 deadline. Hopefully I can generate certificates and CRLs with all the real-world 'exceptions' that I've seen during testing.
OK, a new test suite has been added to the SSL application that will check CRLs when connecting to a server. Six tests are included, and they check valid/revoked certificates in 3 conditions: a normal v2 CRL, a v1 CRL (no x509 extensions), and a v2 CRL with an issuing distribution point (IDP). The make_certs tool has also been greatly extended to allow generation of these certificates, but it is backwards compatible with the old all/2 API that was previously used. Three of the tests fail if the code changes are reverted (the other 3 expect the connection to fail because the certificate is revoked, so they will 'pass' in that case).
Oh, and I found what I believe to be a bug in the ssl_test_lib module: when a client fails to connect, start_client hangs forever waiting for a '{connected, Socket}' message that never arrives.
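An illustrative sketch of the hang (the function and message names here are made up, this is not the actual ssl_test_lib code): a bare receive waiting only for the success message blocks forever when the client never connects, whereas adding an 'after' timeout, or also matching a failure message, avoids it.

```erlang
%% Sketch: waiting for a client to connect without hanging forever.
%% A receive with no 'after' clause and only the success pattern would
%% block indefinitely on connection failure.
wait_for_client(ClientPid, Timeout) ->
    receive
        {connected, Socket} ->
            {ok, Socket};
        {ClientPid, {error, Reason}} ->
            {error, Reason}
    after Timeout ->
        {error, timeout}
    end.
```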
It looks like you forgot to do git add on ssl_crl_SUITE.erl!
* Handle v1 CRLs, with no extensions.
* Compare the IDP on a CRL correctly, if present.
* Don't try to double-decode altnames.

Tests are also included, and the make_certs testing tool in the SSL application has been greatly extended.
Added missing file and squashed it into the main commit.
Hello! I have some (bad) news for you. pkits_SUITE:invalid_uri_name_constraints and pkits_SUITE:valid_uri_name_constraints fail on all platforms!

=== Started at 2014-02-07 05:27:09
pkits_SUITE.erl(688): ERROR "4.13.34" "Valid URI nameConstraints Test34 EE"
=== Ended at 2014-02-07 05:27:09

snipsnip

=== Started at 2014-02-07 03:56:13
pkits_SUITE.erl(691): ERROR "4.13.37" "Invalid URI nameConstraints Test37 EE"
=== Ended at 2014-02-07 03:56:13
Yes, I think my patch may be wrong, but OTP is also wrong (as it cannot handle the IDP in the godaddy CRL, which I'm pretty sure is valid). In the absence of a nameConstraints extension (which godaddy does not use), it is possible that public_key does the wrong thing? I'm having a hell of a time trying to find where in ANY spec it talks about validating the IDP.
When I implemented the CRL handling I looked at RFC 3280, now obsoleted by RFC 5280. It is a little hard to get your brain to fully understand it, but it is what I had to go on. I also used the PKI test suite at http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html. Yes, public_key might be doing the wrong thing in your scenario, but to fix your scenario should not break another scenario. I hope that can help; I wish I could help you more, but it was quite a while ago that I wrote that code and I do not remember all the details.
Reading RFC 5280, I'm not sure why public_key is validating the Issuing Distribution Point Name at all, beyond what the RFC says:
So beyond confirming that the DistributionPoint we used is present in the list of IssuingDistributionPoints, it seems to me that no other validation is needed. There seems to be no reason to use pubkey_cert:match_name. I'm going to rework the IDP checking to simply check that the DP name used appears in the IDP extension. That should fix the pkits suite as well as my CRL test.
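The simplified check described above could be sketched like this (a sketch only, not the actual commit; the assumption is that both the DistributionPoint and the issuingDistributionPoint extension have already been decoded into comparable lists of GeneralName terms):

```erlang
%% Sketch: the DP name used to fetch the CRL must appear among the
%% names in the CRL's issuingDistributionPoint extension. No deeper
%% name matching (pubkey_cert:match_name) is performed.
dp_in_idp(DPNames, IDPNames) ->
    lists:any(fun(Name) -> lists:member(Name, IDPNames) end, DPNames).
```

This relies on the RFC's requirement that the identical encoding be used in both places, so plain term equality suffices.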
OK, I have pushed a commit that reworks how IDPs are validated. It makes both the pkits test and my CRL test pass. I may not be reading the RFC correctly, but it seems to be right to me.
I will review your changes on Monday and read the RFC again. It was quite a while ago that I implemented the CRL validation, but I think it is only now that it is getting used for real. And if I had got it 100% correct on the first try, I would actually be surprised. There are too many subtle cases and possibilities for "de facto standards", etc.
Is it the part "The identical encoding MUST be used in the distributionPoint fields of the certificate and the CRL." that makes you think that this simpler test is adequate?
Yes. I also don't know that the code as it is in OTP now would work in the case of something like a file:// URI.
What concerns me is section 7 in the RFC. I think it implies that your test is too simple, and may
Can you give a more concrete example of when it might fail? With an internationalized domain name? I feel like you're giving me a really hard time with this issue. It isn't like, without this patch, you can validate CRLs on the public internet at all. They ALL (that I have tested) will fail for the various reasons illustrated in this pull request. Yes, there are probably more failures, but surely working in the common case is better than continuing to ship a completely broken CRL validation path. It passes all your tests, as well as the new tests I've added (which cover cases clearly not tested before) and, as I say, it allows CRLs from the internet to be valid. Time is running out for R17 (and I have other things to work on). If you can't point to a specific problem or a specific missing test case that shows this patch is WORSE than what is in OTP right now, I'd encourage you to merge this and we can work on further improving things post R17.
For example, the RFC talks about "perform white space compression". If there are extra white spaces, for example "basho.com" and " basho.com" will not be considered equal. I do not know if it is common for this to happen, but as far as I can tell from the RFC it could, and they should be considered equal. Sorry for giving you a hard time, but I have zero time right now to put into this, and we want to avoid fixing one problem and creating a new one. I do see your point that maybe your patch can be considered an improvement, but I am not convinced I can convince my product owner it is the right thing to do! I can promise you that something will come out of your hard work and be included in OTP, but I just cannot promise it for R17.
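The white-space concern above could be handled with a small preparation step before comparing (an illustration only, not the full LDAP string-preparation profile that RFC 5280 section 7 references): trim the ends, compress internal runs of white space to a single space, and fold case.

```erlang
%% Sketch of RFC 5280 section 7 style string preparation: with this,
%% "basho.com" and " basho.com" compare equal.
-module(name_prep).
-export([names_equal/2]).

normalize(Name) ->
    Trimmed = string:trim(Name),
    Single = re:replace(Trimmed, "\\s+", " ", [global, {return, list}]),
    string:lowercase(Single).

names_equal(A, B) ->
    normalize(A) =:= normalize(B).
```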
If IDNs for distribution points don't work in OTP today, we are not making it any worse, and they do NOT work, as my testcase illustrates (no IDP validation will work with the code as it is now in OTP). If you can point to a single regression this patch causes, then fine, I will fix it. Moving the goal posts by bringing IDNs and other vague potential problems into the discussion is not very helpful. Your own test suite shows that this doesn't break anything you consider 'working'. To be honest, I don't even care about the IDP, I only fixed it because I was trying to make all the CRLs I could find work. Could we get something in without that part of the patch?
Hi, we have discussed the issue and we decided that we want your patch for R17, and someone will look into how we want to handle the IDP validation. Please do not think that we do not appreciate all your hard work; we think you made a very valuable contribution.
We made some alterations to your patch and the whole solution has been merged to master and will be part of 17.0
Thank you. Are there any details on what was changed?
I've made one minor change to the testing. In the certificates it is erroneously assumed that sha256 (for default_md) is available on every platform. It's not. This change is not merged to master but is in our
Look at the commit messages and diffs on master. Egil's changes are not
If the certificate has no AKI, fall back on the Issuer instead.
Don't compare a bare hostname to a whole URL.
Some sample code to test this can be found here:
https://github.com/Vagabond/erl_crl_example
First, run 'make check' to run some simple tests against some generated CAs. Then you can do some CRL checking against the real world by doing
It will, somewhat noisily, report on the CRL status of the client certificate, and any intermediate certificate CRLs. Sometimes, like for twitter.com port 443 and myspace.com port 443, Erlang will not pass the intermediate certificates to the SSL verify_fun, so the path validation fails. I think this is another bug in OTP I have not found yet, as openssl s_client shows the server providing the intermediate certificates.
Without these patches, you can't verify even the simple generated CRLs; with them, you can validate most of the ones I've tried from the internet. The only exceptions I've found are because of the aforementioned bug.