OpenSSL Security Concerns #286

Closed
sempervictus opened this issue May 25, 2017 · 11 comments

@sempervictus

Nessus does not appear to be a fan of the OpenSSL version in use; relevant CVE references from the plugin output:

CVE:  CVE-2016-2177, CVE-2016-2178, CVE-2016-2179, CVE-2016-2180, CVE-2016-2181, CVE-2016-2182, CVE-2016-2183, CVE-2016-6302, CVE-2016-6303, CVE-2016-6304, CVE-2016-6306
...
CVE:  CVE-2016-7055, CVE-2017-3731, CVE-2017-3732
...
CVE-2016-2105, CVE-2016-2106, CVE-2016-2107, CVE-2016-2109, CVE-2016-2176

Are vulnerability assessments part of the release QA process? If not, it might be worthwhile to add them. We could even perform scans on a public-facing instance and push the reports back, but I figure you'll want authenticated checks against full versions of the build image before it becomes a snap as well...

@kyrofa
Member

kyrofa commented May 25, 2017

Thanks for the scan, @sempervictus. Indeed, I actually scan daily (although please feel free to log an issue whenever you notice one).

Stable versions of Ubuntu can't update the SSL API used (for fear of breaking clients), nor can they update the SSL version (for the same reason). However, the Ubuntu security team actively backports all SSL security patches to the version of SSL that is available in the archive. As such, while Nessus is right that the upstream version of SSL has those vulnerabilities, none of those CVEs actually apply to the SSL in Ubuntu (Xenial, specifically).

I was told by the security team that some scanners actually tie into the USN database to filter out such results.
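
For what it's worth, one rough way to cross-check a specific CVE yourself (assuming a Xenial host; the package name, path, and CVE below are only examples) is to look at the Ubuntu package changelog, since security uploads list the CVEs they address even though the upstream version string stays at 1.0.2g:

    # Rough sketch: confirm the installed package, then search its Debian
    # changelog for a CVE addressed by an Ubuntu security upload.
    dpkg -l libssl1.0.0
    zgrep CVE-2016-2183 /usr/share/doc/libssl1.0.0/changelog.Debian.gz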

@sempervictus
Author

sempervictus commented May 25, 2017

This was a remote scan without local auth; locally authenticated scans get confused by snapd, and in our case Nessus won't support the OS distribution since it's Arch (we needed something thin and binary that still uses glibc). While I'm aware of Canonical's efforts to backport, I would caution that it's not always done successfully, and given the nature of the project a single serious security gaffe would (and will, when it happens) hurt user confidence. SSLScan does look a lot better:

sslscan <a nextcloud snap cloud URL> | grep -i accepted
    Accepted  TLSv1  256 bits  ECDHE-RSA-AES256-SHA
    Accepted  TLSv1  256 bits  DHE-RSA-AES256-SHA
    Accepted  TLSv1  256 bits  AES256-SHA
    Accepted  TLSv1  128 bits  ECDHE-RSA-AES128-SHA
    Accepted  TLSv1  128 bits  DHE-RSA-AES128-SHA
    Accepted  TLSv1  128 bits  AES128-SHA
    Accepted  TLS11  256 bits  ECDHE-RSA-AES256-SHA
    Accepted  TLS11  256 bits  DHE-RSA-AES256-SHA
    Accepted  TLS11  256 bits  AES256-SHA
    Accepted  TLS11  128 bits  ECDHE-RSA-AES128-SHA
    Accepted  TLS11  128 bits  DHE-RSA-AES128-SHA
    Accepted  TLS11  128 bits  AES128-SHA
    Accepted  TLS12  256 bits  ECDHE-RSA-AES256-GCM-SHA384
    Accepted  TLS12  256 bits  ECDHE-RSA-AES256-SHA384
    Accepted  TLS12  256 bits  ECDHE-RSA-AES256-SHA
    Accepted  TLS12  256 bits  DHE-RSA-AES256-GCM-SHA384
    Accepted  TLS12  256 bits  DHE-RSA-AES256-SHA256
    Accepted  TLS12  256 bits  DHE-RSA-AES256-SHA
    Accepted  TLS12  256 bits  AES256-GCM-SHA384
    Accepted  TLS12  256 bits  AES256-SHA256
    Accepted  TLS12  256 bits  AES256-SHA
    Accepted  TLS12  128 bits  ECDHE-RSA-AES128-GCM-SHA256
    Accepted  TLS12  128 bits  ECDHE-RSA-AES128-SHA256
    Accepted  TLS12  128 bits  ECDHE-RSA-AES128-SHA
    Accepted  TLS12  128 bits  DHE-RSA-AES128-GCM-SHA256
    Accepted  TLS12  128 bits  DHE-RSA-AES128-SHA256
    Accepted  TLS12  128 bits  DHE-RSA-AES128-SHA
    Accepted  TLS12  128 bits  AES128-GCM-SHA256
    Accepted  TLS12  128 bits  AES128-SHA256
    Accepted  TLS12  128 bits  AES128-SHA
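
As a follow-up spot-check (the hostname below is just a placeholder), individual protocol versions can also be probed directly with openssl s_client, e.g. to confirm whether plain TLSv1 is still negotiated after any configuration change:

    # Placeholder host; -tls1 forces a TLSv1-only handshake, so a handshake
    # failure here means the server no longer accepts that protocol version.
    openssl s_client -connect cloud.example.org:443 -tls1 < /dev/null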

@kyrofa
Member

kyrofa commented May 25, 2017

That's fair, although such a security gaffe would impact Ubuntu itself as well, which would be a big deal. I tend to trust the folks who are paid to keep Ubuntu safe, but my opinion isn't the only one that matters. How do you feel about this? Would an ideal solution be to track upstream openssl? I'm a little nervous about the API issues that may introduce with the other components of the snap, but I have not experienced them firsthand.

@sempervictus
Author

This is the standard pitfall of crypto: the implementation details are above the heads of almost all consumers, and changes to how those details are applied in new or backported methods can make a difference. The thing is, the newest versions of things tend to have the most mistakes; compile-time and functional tests will catch errors only as well as their coverage permits. If something is capable of referencing a freed object but never does so unless a number of stars line up, automated testing is likely to miss it without proper test cases, and nobody knows what those are until a bug is caught which necessitates the creation of a new one. It's the eternal cat-and-mouse game in infosec; think how long it took to actually get proper coverage on Heartbleed across the distributions, or even Shellshock for that matter (or anything RHEL does).

If you have binary dependencies against the Canonical version of OpenSSL, I'd say stay with it, and try to track with daily scans against a full-fledged OS deployed using the components packaged in the snap. This should allow you to get findings against supporting components and determine how/if they apply to a snap running in production - escalation of privs to be able to write into the FS won't help much in an RO mount, but something affecting transport integrity would be a concern.
If there's a way to unpack snaps into a full runtime container, that could work too - maybe AUFS/unionfs mounts preceding an LXC invocation, which would provide full networking too, allowing the scanners to SSH in, have write access, etc. It would probably be nice to have a public history of the resulting reports from all scanners so users don't have to set up their own, but could just go to a website, type in their version, and see what's marked against it that day (maybe even as a nextcloud app for admins).
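
For illustration only (an untested sketch; the file, image, and container names are made up), unpacking the snap's squashfs and pushing it into a plain LXD container would give a scanner a writable, SSH-able target:

    # Unpack the snap's squashfs into an ordinary directory tree.
    unsquashfs -d nextcloud-rootfs nextcloud_*.snap
    # Start a vanilla container, then copy the unpacked tree into it so the
    # scanner gets full network access and write permissions.
    lxc launch ubuntu:16.04 scan-target
    lxc file push -r nextcloud-rootfs scan-target/root/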

Separately, I'm having Arachni go over this as well, but both it and Nessus suggest implementing https://www.owasp.org/index.php/SecureFlag.
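
The Secure flag itself is easy to spot-check from the outside; for example (placeholder URL):

    # The Set-Cookie headers should carry "secure" (and ideally "httponly")
    # once the flag is enabled; the URL is just an example.
    curl -sI https://cloud.example.org/login | grep -i '^set-cookie'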

@kyrofa
Member

kyrofa commented May 25, 2017

> [T]ry to track with daily scans against a full-fledged OS deployed using the components packaged in the snap. This should allow you to get findings against supporting components and determine how/if they apply to a snap running in production - escalation of privs to be able to write into the FS won't help much in an RO mount, but something affecting transport integrity would be a concern.

I'm not completely following you, here. By "full-fledged OS deployed using the components packaged in the snap," do you mean everything contained in the snap, but not actually running it in the snap: installing everything to standard locations? If so, I'm not clear on the benefit of this.

> If there's a way to unpack snaps into a full runtime container, that could work too - maybe AUFS/unionfs mounts preceding an LXC invocation, which would provide full networking too, allowing the scanners to SSH in, have write access, etc.

Again, not quite clear on what you're saying. My scans run against the snap installed in LXC: one instance with HTTPS enabled, and one without. Are you saying the scanners need write access to the snap, though?

> Separately, I'm having Arachni go over this as well, but both it and Nessus suggest implementing https://www.owasp.org/index.php/SecureFlag.

Ah, that'll need to be part of the HTTPS toggle. Mind logging a new issue about that, please?

> It would probably be nice to have a public history of the resulting reports from all scanners so users don't have to set up their own, but could just go to a website, type in their version, and see what's marked against it that day (maybe even as a nextcloud app for admins).

Yeah, that would be pretty sweet, but I'm afraid it's a step beyond the manpower I have available.

@sempervictus
Author

I'm suggesting the scans be run against an Ubuntu core installation with the snap's contents installed in a normal filesystem, with all the attack surface provided by a complete OS. This approach should be the most compatible with scanning tools, and provide a "worst case" approximation of exposure for the actual runtime, with some of the concerns being mitigated by the container technology in use, kernel configurations/builds, and other "real-world" factors (which may also lead to additional exposure, but you can't run scans against every possible layout).
There are a number of ways to skin that cat, from actually deploying the files to a writable FS inside LXC (I'm not sure if that is what you're doing, or if you're running snapd inside the LXC namespace), to using writable overlays, or other inventive approaches.
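
As one concrete (hypothetical, untested) example of the overlay variant, mounting the snap's squashfs read-only with a writable upper layer would look roughly like this:

    # Paths and the snap filename are made up for the example; the merged
    # mount is what the container or scanner would actually see and write to.
    mkdir -p /mnt/ro /mnt/upper /mnt/work /mnt/merged
    mount -t squashfs -o loop,ro nextcloud_current.snap /mnt/ro
    mount -t overlay overlay \
        -o lowerdir=/mnt/ro,upperdir=/mnt/upper,workdir=/mnt/work /mnt/merged
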
I am thrilled, by the way, to hear that you're already doing this much validation - we often have to fight tooth and nail in corporate environments just to implement basic situational awareness, much less have discussions on mechanisms of integrating vulnerability awareness into QA. Kudos.

@kyrofa
Member

kyrofa commented May 25, 2017

> There are a number of ways to skin that cat, from actually deploying the files to a writable FS inside LXC (I'm not sure if that is what you're doing, or if you're running snapd inside the LXC namespace), to using writable overlays, or other inventive approaches.

Well, let me outline exactly what I'm doing today so we're on the same page; then we can both get a better picture of what to improve. Note that each snap instance runs in its own LXC container (indeed, snapd is running inside the LXC namespace). This setup is identical to my own production instance.

  • Web Application Tests (configured to scan for all web vulnerabilities):
    • Run daily against:
      • Snap from candidate channel with HTTP
      • Snap from candidate channel with HTTPS
  • Basic Network Scan (all ports, default config):
    • Run daily against:
      • Snap from candidate channel with HTTP
      • Snap from candidate channel with HTTPS
      • My production instance

All scans run during the night and then email me, so my day starts with a review of the results.

No local authentication, although I've been meaning to investigate that (not for production, of course). You say that snapd confuses it... can you elaborate on the issues you encountered?

> I am thrilled, by the way, to hear that you're already doing this much validation - we often have to fight tooth and nail in corporate environments just to implement basic situational awareness, much less have discussions on mechanisms of integrating vulnerability awareness into QA. Kudos.

Thank you; credit goes to @SkyWheel for getting us started. We're always looking to improve, though, so I appreciate this discussion 😃. The snap update mechanism means we can get automatic fixes to all users in a matter of hours, which is a tremendous amount of power. We take that very seriously.
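
For anyone following along, checking which revision a client machine is on and pulling the latest from its tracked channel is just the standard snap CLI:

    # Show the installed revision and the available channels.
    snap info nextcloud
    # Fetch the newest revision from the channel the install is tracking.
    snap refresh nextcloud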

@SkyWheel

Hi @kyrofa,
I have not been around for a while (I was very busy). However, I performed some investigation of this issue and came to the same conclusion: openssl version 1.0.2g-1ubuntu4.6 should be safe. My next step is to perform credentialed scanning on my box (with elevated privileges). After that we can be reasonably sure that the Nextcloud snap is safe and ready for production.

@sempervictus
Author

As far as how scanners get confused: they enumerate the package manager of the userspace to which they connect, so the package-version-based checks actually look at the wrong list, possibly the wrong OS entirely. Moreover, the scanners have conditional logic to execute plugins based on environmental constraints ("run aptitude package checks on systems detected as Debian, skipping Yum ... checks"). That can turn into a bloody mess, because version numbers start to not match up even when they're detected, services are checked against the wrong things, and gaps form in the surface. It's a subtle sort of hell sorting out the details of these discrepancies, but today's vulnerability analysis engines are rather behind the times: they don't address namespacing, mount constraints, and a bunch of other operating logic which drives the bare metal (or virtualized, really) containerized ecosystems.
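
To make that concrete (the host and paths below are hypothetical), a credentialed scan enumerates packages on the host it logs into, while the libraries that actually matter live inside the snap's squashfs:

    # What the scanner sees: the host's package database (dpkg, pacman, ...).
    dpkg -l | grep -i ssl
    # What it does not see: the libraries bundled inside the snap itself.
    ls /snap/nextcloud/current/usr/lib/x86_64-linux-gnu/ | grep -i ssl
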
So the suggestion I give client development teams and security reviewers is to create the simplest form of the final product available: a flat OS with simple writable mounts, no namespace constraints, using as much upstream packaging as possible (instead of PPAs, pip, gem, etc.), and deploy their services with as much of the same configuration as the final system will have. The idea is to make the attack surface as wide as possible for the tests; you're looking to fail things so as to catch them now, even if they're not a direct/full-fledged vector in the final production container. We can't predict how clever our attackers will be, what they can chain together to achieve their goals, or what they have in their arsenal that we don't know about. We can, however, maximize our understanding of potential exposure and address every tier where we find even the hint of a concern - the "defense in depth" approach. :)

@SkyWheel

OK, since this issue was raised I've seen at least two openssl updates. As discussed, the Ubuntu team manages its own openssl version and backports all patches into it.
I reckon the ticket could be closed.

@kyrofa
Member

kyrofa commented Oct 4, 2017

Thanks @SkyWheel, will do.
