
new security policy #44918

Open
FiloSottile opened this issue Mar 10, 2021 · 18 comments

FiloSottile (Contributor) commented Mar 10, 2021

Background

The current Go security policy, golang.org/security, dictates that whenever a valid security vulnerability is reported, it will be kept confidential and fixed in a dedicated release.

The security release process is handled by the Security and Release teams in coordination, and deviates from the general release process in that for example it doesn't use the public Builders or TryBots. This led to issues going undetected in security releases in the past.

There are no tiers, and the distinction is binary: either something is a security fix, or it’s not.

Security releases are pre-announced on golang-announce three days before the release.

We’ve issued six security releases in the past eight months, on top of the eight regularly scheduled point releases.

Proposal

We propose introducing three separate tracks for security fixes.

  • Issues in the PUBLIC track affect niche configurations, have very limited impact, or are already widely known. Recent examples include #44916, #44913, #43786, #40928, #40618, and #36834.
  • Issues in the PRIVATE track are violations of committed security properties. Recent examples include “go mod download” code execution issues, #42552, #34902, #39360, #34960, #34540, and #29098.
  • Issues in the URGENT track are a threat to the Go ecosystem’s integrity, or are being actively exploited in the wild leading to severe damage. There are no recent examples, but they would include remote code execution in net/http, or practical key recovery in crypto/tls.

The Security team reserves the right to choose the track of specific issues in exceptional circumstances based on our case-by-case assessment.

We also propose the following handling procedures for each track.

  • PUBLIC track issues are fixed in public, and get backported to the next scheduled minor releases (which occur ~monthly). The release announcement includes details of these issues, but there is no pre-announcement.
  • PRIVATE track issues are fixed in the next scheduled minor releases, and are kept private until then. Three to seven days before the release, a pre-announcement is sent to golang-announce, announcing the presence of a security fix in the upcoming release, and whether the issue affects the standard library, the toolchain, or both (but not disclosing any more details).
  • URGENT track issues are fixed in private, and trigger an immediate dedicated security release, possibly with no pre-announcement.
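
The track-to-handling mapping above could be sketched as a small Go table. This is purely illustrative; the type and field names are hypothetical and not part of the proposal:

```go
package main

import "fmt"

// Track is one of the proposed security-fix tracks.
type Track string

const (
	Public  Track = "PUBLIC"
	Private Track = "PRIVATE"
	Urgent  Track = "URGENT"
)

// Handling summarizes the proposed procedure for a track.
type Handling struct {
	FixedInPublic    bool // is the fix developed in public?
	PreAnnounced     bool // is a pre-announcement sent to golang-announce?
	DedicatedRelease bool // does it trigger an immediate dedicated release?
}

// handling encodes the proposal's procedures; URGENT is marked as not
// pre-announced since the proposal says "possibly with no pre-announcement".
var handling = map[Track]Handling{
	Public:  {FixedInPublic: true, PreAnnounced: false, DedicatedRelease: false},
	Private: {FixedInPublic: false, PreAnnounced: true, DedicatedRelease: false},
	Urgent:  {FixedInPublic: false, PreAnnounced: false, DedicatedRelease: true},
}

func main() {
	for _, t := range []Track{Public, Private, Urgent} {
		h := handling[t]
		fmt.Printf("%s: public fix=%v, pre-announced=%v, dedicated release=%v\n",
			t, h.FixedInPublic, h.PreAnnounced, h.DedicatedRelease)
	}
}
```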

All security issues are issued CVE numbers.

Motivation

Fundamentally, this proposal is about making the security policy scale.

Every package can be used in many different ways, some of them security-critical depending on context. So almost anything not behaving as documented can be argued to be a security issue. We want to fix these issues for affected users, but doing so in separate security releases imposes a cost on all Go users. With each security release, the Go community needs to scramble to assess it and update. If security releases become too frequent, users will stop paying attention to them, and the ecosystem will suffer.

The introduction of the tracks helps the community assess their exposure in each point release, and merging the security and non-security patch releases will lead to fewer overall updates and a more predictable schedule.

Originally, the rationale for dedicated security releases was that there should be nothing in the way of applying a security patch, like concerns about the stability of other changes. However, since security releases are made on top of the previous minor release, this only works if systems were updated to the latest minor release in the time between that and the security release. This time is on average two weeks, which doesn’t feel like long enough to be valuable. It’s also important to note that only critical fixes are backported to minor releases in the first place.

fazalmajid commented Mar 11, 2021

Thanks for the outstanding work the Security Team has been doing, Filippo!

It would be helpful if the issues were also tagged to distinguish between:

  • those where the Go runtime itself is affected, i.e. any application compiled with a vulnerable version of Go or using vulnerable packages is also vulnerable
  • those where the Go build environment itself is at risk, but the compiled binaries are not (other than as a side effect of compromised code injected by attacks on the build process)

The distinction is not cut-and-dried in this era of automated CI/CD deployments and state-level actors engaging in supply-chain attacks, but it would help users assess whether a release warrants expedited deployment or not.

FiloSottile (Contributor, Author) commented Mar 11, 2021

Ah, that's a good idea, @fazalmajid. The two classes do require different preparations, sometimes even by different teams, so it makes sense to mention that in the pre-announcement.

We can use a statement like this in pre-announcements: "The upcoming Go 1.A.B and Go 1.X.Y releases include fixes for HIGH severity (per our policy at golang.org/security) vulnerabilities in the Go toolchain / in the Go standard library / in both the Go toolchain and the Go standard library."

p-rog commented Mar 11, 2021

Why do you want to use a three-tier severity scale when the common practice in the industry is a four-tier scale? Even the OpenSSL policy mentioned in the proposal uses four categories. Wouldn't it be better to follow common practice and use a four-tier system? You may want to look at the CVSSv3 specification and correlate your severity scale with the severity rating scale described there (https://www.first.org/cvss/specification-document).

The handling procedures could be similar for MEDIUM and HIGH severity issues; you would simply gain some flexibility to better express the potential impact of a vulnerability.

oiooj (Member) commented Mar 12, 2021

Thanks for the outstanding work, @FiloSottile!

For CRITICAL security issues, would it be possible to notify some major companies using Go (such as cloud vendors) in advance? Because they manage a large number of services, they need more time to prepare for changes.

FiloSottile (Contributor, Author) commented Mar 12, 2021

@p-rog I prefer to ask why introduce a fourth tier, when we wouldn't do anything differently for it? What benefit would it provide? How would we pick what's a MEDIUM and what's a HIGH? What would we communicate to users about how differently they should treat them?

The current criteria are clear: LOW are things we are comfortable fixing in public, CRITICAL are things we want to fix right now, HIGH is everything else. It's not an easy assessment to make, but it's a necessary and useful one. What would be the criteria for MEDIUM?

CVSS is an excellent example of how these scales break down when they try to apply more rigid and abstract criteria to software that's reused in diverse contexts. In my experience, CVSS is unusable for anything that is not a piece of software deployable on its own: for example, how do you pick remote vs. local exploitation for a library? If it's used on remote inputs, it's remote; if it's used on local inputs, it's local! (This is not a made-up example: different distributions scored the recent RCE in libgcrypt differently with CVSSv3 because they rated it local vs. remote. In our scale, it'd clearly be a "let's fix that right now", so a CRITICAL.) A standard library is the ultimate context-dependent software, so it would be especially meaningless for us to try to use criteria like the CVSS ones.
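
The local-vs-remote ambiguity can be made concrete with the CVSS v3.1 base-score equations themselves. The following Go sketch uses the published v3.1 constants (unchanged scope) and holds every metric fixed at AC:L, PR:N, UI:N, C:H/I:H/A:H, varying only the Attack Vector weight; the example vectors are illustrative, not a score of any particular CVE:

```go
package main

import (
	"fmt"
	"math"
)

// roundUp implements CVSS v3.1 "Roundup": the smallest number with one
// decimal place that is greater than or equal to the input.
func roundUp(x float64) float64 {
	i := int(math.Round(x * 100000))
	if i%10000 == 0 {
		return float64(i) / 100000
	}
	return (math.Floor(float64(i)/10000) + 1) / 10
}

// baseScore computes a CVSS v3.1 base score (unchanged scope) for the
// given Attack Vector weight, with AC:L (0.77), PR:N (0.85), UI:N (0.85)
// and full impact C:H/I:H/A:H (0.56 each).
func baseScore(av float64) float64 {
	const c, i, a = 0.56, 0.56, 0.56
	iss := 1 - (1-c)*(1-i)*(1-a)
	impact := 6.42 * iss
	exploitability := 8.22 * av * 0.77 * 0.85 * 0.85
	if impact <= 0 {
		return 0
	}
	return roundUp(math.Min(impact+exploitability, 10))
}

func main() {
	fmt.Println("AV:N (0.85):", baseScore(0.85)) // remote: 9.8, Critical
	fmt.Println("AV:L (0.55):", baseScore(0.55)) // local:  8.4, High
}
```

With identical impact metrics, flipping only AV:N to AV:L moves the score across the Critical/High boundary, which is exactly the kind of context-dependent disagreement described above for libraries.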

fweimer-rh commented Mar 15, 2021

I do not understand the go get policy. As far as I understand, go get will run the system toolchain if the downloaded package requests that. The system toolchain has not been designed to avoid code execution. This means that go get will always be reasonable-effort only in terms of avoiding code execution, and cannot provide any strong guarantees. Classifying code execution on go get as HIGH seems problematic, given that you can hide and patch only the Go parts.

p-rog commented Mar 15, 2021

@FiloSottile I understand your point of view. Your proposed severity levels are in direct relation to how you want to handle these cases, but that's not the purpose of a severity rating. A severity rating should show how serious the vulnerability is. How you handle a case is of course related to its severity level, but a severity scale reflects the potential risk of the discovered vulnerability. Maybe take a look at Red Hat's security ratings (https://access.redhat.com/security/updates/classification).

As for CVSS, it's not ideal, because not every use case can be captured in a single CVSS score for a vulnerability. But the worst-case scenario should be taken into consideration in the CVSS calculation; then CVSS makes sense. A flaw could be HIGH overall, while in some scenarios (for example, an application that uses only local inputs) the impact could be lower, and the application vendor would cover that with its own, different CVSS score. In other words, a flaw in the standard library should be analyzed against the worst possible scenario, and on that basis you should assign the severity level and express it in CVSS.

In my experience, a three-tier scale can't handle all cases; that's why a four-tier scale is more popular in the industry. I agree that it's sometimes difficult to decide between MEDIUM and HIGH, but looking ahead, it's still better than a three-tier scale.

FiloSottile (Contributor, Author) commented Mar 15, 2021

In other words, a flaw in the standard library should be analyzed against the worst possible scenario, and on that basis you should assign the severity level and express it in CVSS.

If analysed in the worst possible scenario, no vulnerability in the standard library (and arguably in any library) is ever going to be local, since applications might take remote input and pass it to the library, but that score is not going to be particularly useful to most users.

However, it's true that we might be misusing the concept of severity, especially if we'd score any non-CRITICAL vulnerability as LOW if it's already widely known and not worth fixing in private.

Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?

FiloSottile (Contributor, Author) commented Mar 15, 2021

I do not understand the go get policy. As far as I understand, go get will run the system toolchain if the downloaded package requests that. The system toolchain has not been designed to avoid code execution. This means that go get will always be reasonable-effort only in terms of avoiding code execution, and cannot provide any strong guarantees. Classifying code execution on go get as HIGH seems problematic, given that you can hide and patch only the Go parts.

That's a good point. I'd be open to declaring go get code execution protections best-effort, and rating those fixes LOW (or PUBLIC, or whatever equivalent rating). Most other language ecosystems have code execution at build time, so it's not a common security expectation. @rsc?

(In general, we should progressively document the security expectations of the various parts of the distribution, but that's beyond the scope of this proposal.)

p-rog commented Mar 15, 2021

Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?

If it's not called a severity scale, but rather a "handling scale" or "handling types", then it's a very good idea!
You could then assign a severity rating based on your own judgment when assigning the CVE, if you'd like to have severity ratings at all.

bcmills (Member) commented Mar 15, 2021

Most other language ecosystems have code execution at build time, so it's not a common security expectation.

If we have a vulnerability that can cause code execution while downloading (but not building or running) module dependencies, such as for go mod download or go get -d, then I would prefer that we treat those as HIGH severity. It's one thing to expect that users audit their dependencies; it's another altogether to expect them to audit their dependencies before they even download the source code.

ericsampson commented Mar 16, 2021

Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?

That reads a little odd to me; it feels too focused on the mechanism rather than the criticality. I think people who work with security concerns at their orgs, but are not deep in the Go ecosystem, would be confused to hear "a PRIVATE level security issue has been discovered and will be addressed in release X.Y on date Z".

The original LOW/HIGH/CRITICAL sounds fine to me, FWIW.

p-rog commented Mar 16, 2021

That reads a little odd to me; it feels too focused on the mechanism rather than the criticality. I think people who work with security concerns at their orgs, but are not deep in the Go ecosystem, would be confused to hear "a PRIVATE level security issue has been discovered and will be addressed in release X.Y on date Z".

The original LOW/HIGH/CRITICAL sounds fine to me, FWIW.

But the proposed severity scale is based on how cases will be handled; that's why it would be better to call it a "handling scale" or "handling types", with levels PUBLIC, PRIVATE, and URGENT.

The severity scale should be directly related to the impact of the flaws.

FiloSottile (Contributor, Author) commented Mar 16, 2021

To be clear, if we do switch to something like PUBLIC, PRIVATE, and URGENT, we will not surface those labels in announcements. We'll simply pre-announce an undisclosed vulnerability fix for PRIVATE vulnerabilities, and just list them in the release announcements for the rest.

ericsampson commented Mar 16, 2021

ah ok thanks :)

FiloSottile (Contributor, Author) commented Mar 24, 2021

I updated the proposal to refer to PUBLIC/PRIVATE/URGENT tracks rather than severity, based on the feedback in this thread.

rsc (Contributor) commented Mar 24, 2021

Based on the discussion above, this proposal seems like a likely accept.
— rsc for the proposal review group

@rsc rsc moved this from Incoming to Likely Accept in Proposals Mar 24, 2021
@rsc rsc moved this from Likely Accept to Accepted in Proposals Apr 1, 2021
rsc (Contributor) commented Apr 1, 2021

No change in consensus, so accepted. 🎉
This issue now tracks the work of implementing the proposal.
— rsc for the proposal review group

@rsc rsc changed the title proposal: new security policy new security policy Apr 1, 2021
@rsc rsc removed this from the Proposal milestone Apr 1, 2021