Advance disclosure of security vulnerabilities #3259
Conversation
I think it would be appropriate to include crates that are maintained by the Rust project but aren't Rust itself, such as …
To minimize confusion, I suggest adding the timezone to the things that should be mentioned in the public pre-announcement.
Will the list of notified organizations be shared publicly? From the public applications it's already possible to determine which organizations are on the list. Having this list public would ensure that everyone can gain a quick overview.
Will it be disclosed afterwards which additional organizations have been notified, if any? 🙃
Thanks for raising that. Distributions shipping packages of crates maintained by the project is something that slipped my mind when drafting this. I'll see how to incorporate that in the next few days.
Good point. We've always included the timezone in our communications up to this point, but making it explicit won't hurt.
There was a typo which made things confusing, but yes, the list of organizations will be shared publicly.
This looks awesome! I'm very happy to see these changes. I've left a couple of comments inline; in particular, I'd like to see this policy extended to internal-only toolchains as long as their existence can be proven to the WG.
> will be considered: if the WG thinks an organization that would not be eligible
> otherwise is trying to find a loophole to be able to apply, the WG will have
> the authority to reject the application.
I would have a requirement that an organisation be trustworthy. I realise the WG has final say and that this is impossible to judge objectively, but I think it is useful to have as a guide and to build trust with the community in general that the policy will be applied sensibly.
Hmm, I'm not opposed to it, but do you have some criteria in mind to decide whether an organization is trustworthy?
I would not; I would leave that totally to the WG's discretion. I would just add it to the text so there is some expectation that this is an aspect the WG would consider when deciding whether to include a company in the process.
I'm kinda skeptical of a vague criterion, as it risks becoming a catch-all for the WG applying this policy unevenly. Do you think something like "the organization must not have a history of confidentiality violations" would still cover your concerns?
Well, vague is kinda bad, but we shouldn't be afraid of explicitly leaving stuff to people's discretion or requiring over-precise definitions. We are dealing with people, not code, and people are vague and imprecise. I think having a guideline which sets expectations and helps guide the WG to make policy decisions is a good thing. "The organization must not have a history of confidentiality violations" seems necessary but not sufficient. Like if a cyber-crime group applied for the disclosure, you'd want to say 'no' even if they were really good at keeping secrets.
> Like if a cyber-crime group applied for the disclosure, you'd want to say 'no' even if they were really good at keeping secrets.
Ok that's a good example, and it lets me visualize better what you were worried about! I'll have a think about it.
> * Organizations distributing the Rust toolchain (as provided by the Rust
> project) or a fork of the Rust toolchain explicitly meant to be used by their
> external customers or users. Organizations shipping the toolchain just to
> internal customers/users are not eligible, nor are organizations publicly
Why have the caveat for internal users? I imagine a large user could have more internal users than a small public fork. I think that if an org is willing to share evidence with the WG that such a toolchain exists, even if that is not public, then that should be enough to qualify (assuming the org is trustworthy and in good standing, etc.).
Any organization could have internal users of Rust, and we can't include every organization that does so. I don't think we should be in the business of defining "big enough" companies in this regard.
The alternatives section covers the possibility of disclosing to major players in the ecosystem, and the upsides and downsides of doing so.
It's not about internal users, it's about an internal fork or internal distribution. I think that is likely to be few enough companies that we could support that. That's different from major players and has nothing to do with 'big enough'.
Do we actually want to create an incentive for people to create their own internal distribution or fork, though?
I don't think this is enough of an incentive to sway people's decision here. People have internal distributions already; the question is how secure we want them to be, and I think we want them to be as secure as possible. (And personally I think we want to incentivise people to use Rust, and whether that is internal only or public (or closed or open source) is secondary.)
Doing a small upfront investment in setting up an internal toolchain to continue getting full vulnerability details a week ahead of the public would be an easily justifiable expense, and something I could definitely see happening.
I'm not sure if you're agreeing or disagreeing with me :-) I agree that it should be easy to do this for an internal toolchain, but doing so requires getting the vulnerability notifications, even if there are no external customers.
> The reasoning I had behind this RFC is who your organization would be blocked on when updating their projects. If they're only blocked by another team inside the company, the company can probably figure out a way to speed the upgrade up. If instead they're blocked by a third-party toolchain vendor, there would be no way for them to upgrade until that vendor finished their patches and testing.
I'm not sure I agree with this. In very large companies, if two teams are in different orgs, they may as well be in different companies. Such large companies tend to develop bureaucracy too and become pretty rigid about not releasing internally until all the required testing, etc. is done.
> I'm not sure if you're agreeing or disagreeing with me :-) I agree that it should be easy to do this for an internal toolchain, but doing so requires getting the vulnerability notifications, even if there are no external customers.
What I'm saying is that it's not a huge investment to set up an internal toolchain with the sole purpose of receiving these notifications, as long as you don't carry custom patches and you mirror rust-lang/rust's CI configuration as much as possible. I don't think any of the companies maintaining an internal toolchain right now are maintaining it for this reason, but I could see other companies in the future doing that.
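As a rough sketch of how small that investment could be, assuming no custom patches and upstream build defaults (the install prefix below is illustrative):

```sh
# Fetch the upstream sources unmodified (no custom patches carried).
git clone https://github.com/rust-lang/rust
cd rust

# Choose an install location for the internal distribution;
# /opt/internal-rust is purely illustrative.
./configure --prefix=/opt/internal-rust

# Build and install the toolchain with upstream defaults, mirroring
# rust-lang/rust's own CI configuration as closely as possible.
./x.py build && ./x.py install
```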
The more we share vulnerabilities ahead of time, the more risk of leakage there will be, and I don't know of a fair way to define which internal toolchains would benefit from this without the risk of sending notifications to tens of companies as Rust's usage increases.
Hmm, I don't think anyone would have an internal toolchain without any changes at all, because what would be the point? :-) At the least, I would expect an internal toolchain to be slightly patched and/or built in some kind of interesting way. In addition to that, it is possible internal toolchain users may want to do some degree of extra testing or verification.
> The more we share vulnerabilities ahead of time, the more risk of leakage there will be
This is true, but to counter, I don't think there are likely to be more orgs with internal toolchains than those building their own for external use. I don't think those orgs are likely to be more or less trustworthy re leaking secrets than orgs with external users. I do think that those orgs with internal toolchains are likely to be vulnerable to security exploits (imagine a vuln in AWS or Azure code) and that such orgs are likely to take a long time to deploy re-compiled code. (Again, imagine deploying code to every server at AWS). So, I think although there is increased risk with increased numbers, internal toolchains are not riskier than external ones, and their needs for this kind of timely support may be greater.
> I don't know of a fair way to define which internal toolchains would benefit from this without the risk of sending notifications to tens of companies as Rust's usage increases.
Yeah, I sympathise with this. I'm not sure either. I think there needs to be some establishment of trust, but I'm not sure how to do that in a way that scales even to tens of orgs. Personally, I would leave it up to the WG's discretion - if the org can persuade you they are trustworthy and have a legitimate toolchain (e.g., by being well-respected and showing you the toolchain, or by joining the WG and helping with triage à la LLVM) then add them to the list. If there is any doubt, don't add them. I wouldn't over-index on fair (as long as we're avoiding glaring situations like saying yes to AWS and no to Google or something).
> Hmm, I don't think anyone would have an internal toolchain without any changes at all, because what would be the point? :-) At the least, I would expect an internal toolchain to be slightly patched and/or built in some kind of interesting way. In addition to that, it is possible internal toolchain users may want to do some degree of extra testing or verification.
It would indeed be fairly pointless thinking about just the toolchain itself, but if the "reward" is getting vulnerability reports ahead of time it might start to make more sense (and could be easily disguised as "we're building the binaries ourselves to mitigate trusting trust attacks", which on its own is a reasoning that would make sense).
> This is true, but to counter, I don't think there are likely to be more orgs with internal toolchains than those building their own for external use. I don't think those orgs are likely to be more or less trustworthy re leaking secrets than orgs with external users.
To be clear, I'm not saying that companies maintaining an internal toolchain are bad at keeping secrets from leaking! Each organization we add though, regardless of who they are, increases the chance of accidental leaks.
> I do think that those orgs with internal toolchains are likely to be vulnerable to security exploits (imagine a vuln in AWS or Azure code) and that such orgs are likely to take a long time to deploy re-compiled code. (Again, imagine deploying code to every server at AWS). So, I think although there is increased risk with increased numbers, internal toolchains are not riskier than external ones, and their needs for this kind of timely support may be greater.
Hmm, sure, but this is not specific to companies that maintain an internal toolchain. If a big company has a big internal Rust codebase, but it doesn't have a need to maintain an internal toolchain, they'd still meet the criteria you're mentioning here.
> by joining the WG and helping with triage à la LLVM
In my experience inside the LLVM Security Group, that's not a model that works well (can elaborate in DMs if you're curious).
> I wouldn't over-index on fair (as long as we're avoiding glaring situations like saying yes to AWS and no to Google or something).
The worry I have is that while it's easy to think the tech giants have a good reason for this, due to basically everyone knowing how many critical things they develop, companies that are not Google / Microsoft / Amazon / Meta / Apple might have even more reason to receive advance notification but simply aren't as well known. Saying yes to Google but not AWS might indeed be glaring, but the same could be said for saying yes to AWS but not an aerospace manufacturer using Rust in critical software, even though the chance of anyone even knowing the name of that manufacturer is fairly small.
> It would indeed be fairly pointless thinking about just the toolchain itself, but if the "reward" is getting vulnerability reports ahead of time it might start to make more sense (and could be easily disguised as "we're building the binaries ourselves to mitigate trusting trust attacks", which on its own is a reasoning that would make sense).
Doesn't the same apply to having an external-facing but unmodified fork? I don't think that making a toolchain publicly available is a huge barrier for malicious orgs.
> Each organization we add though, regardless of who they are, increases the chance of accidental leaks.
Agree. But while minimising numbers is good, I don't think that implies that external/internal facing is a good criterion for doing so.
> Hmm, sure, but this is not specific to companies that maintain an internal toolchain. If a big company has a big internal Rust codebase, but it doesn't have a need to maintain an internal toolchain, they'd still meet the criteria you're mentioning here.
Right, but having to patch, build, and test the internal toolchain adds time to the process. So there is a real need here.
> The worry I have is that while it's easy to think the tech giants have a good reason for this, due to basically everyone knowing how many critical things they develop, companies that are not Google / Microsoft / Amazon / Meta / Apple might have even more reason to receive advance notification but simply aren't as well known. Saying yes to Google but not AWS might indeed be glaring, but the same could be said for saying yes to AWS but not an aerospace manufacturer using Rust in critical software, even though the chance of anyone even knowing the name of that manufacturer is fairly small.
But if we can verify the manufacturer is legit, then we can give them the notifications too. If we don't share with any internal-only providers, then they still lose out :-)
To step back, I think your concern is essentially more participants = more risk, and supporting internal-only distros potentially doubles the number of participants. That is totally valid. I would summarise my counter-argument as: although I agree we should minimise the number of participants, whether the distro is internal or external facing doesn't feel like a good (as in risk-minimising or impact-maximising) or fair criterion for doing so. I don't see a win-win here, unfortunately, because I don't see how we can minimise risk in other ways. (Like, we could admit internal and external participants, but toss a coin for each one and only send notifications to heads. This is clearly fairer and I believe it would lower risk and increase impact of the scheme, but obviously it is unsatisfactory :-) )
In the Prior Art section, you might want to list the Xen Project: https://xenproject.org/developers/security-policy/

As a longstanding member of the Xen Project Security Team I read through the RFC with interest. Most of the important things are covered and I agree with the thrust of the RFC. It is IMO very important for a process like this to be fair, and seen to be fair (whatever that means to the people in the community). I think this RFC proposal achieves that fairness. I hope others will agree, and if not, raise their doubts via this process. Thanks!
Thanks for chiming in; I've added Xen to the prior art section! While the prior art section is not meant to collect all similar policies and is focused mostly on other programming languages, there are a couple of interesting points in Xen's policy that would benefit the discussion in this RFC.
I'll have some thinking to do, but I expect to incorporate some text around transparency and notifying similar projects in the coming days (there is a discussion above on notifying large production users already).
I don't have a major comment at this time, but I want to offer big 💜 for working on documenting and thinking through this policy!
> Namely, vulnerabilities in the project infrastructure, crates maintained by the
> Rust project, or other projects that are not shipped as part of the toolchain
> will not result in an advance notification, as there would be no update for the
> organizations to prepare in advance.
I'm not sure this is true? If (say) regex needs a security update, users need to prepare to bump just as much as they'd prepare for a toolchain upgrade, IMO.
I think the guiding line is whether the update is something we do or something users do: if users need to take any action, there should be a similar process for any of those actions, IMO.
> I'm not sure this is true? If (say) regex needs a security update, users need to prepare to bump just as much as they'd prepare for a toolchain upgrade, IMO.
This is a relic from a previous draft of this RFC that didn't include public pre-announcements of vulnerabilities, and it doesn't account for programmerjake's mention of distributions also shipping crates in addition to toolchains. I'll update the text shortly.
Co-authored-by: Josh Triplett <josh@joshtriplett.org>
Reading the text, it initially indicated to me that only toolchain issues get notified, and I also see @programmerjake's and @Mark-Simulacrum's related comments. I've been writing lately about the vendored OpenSSL that manages to stay out of the way, and about the caveat of static linking: people should be looking after their binaries and take into account that they need to re-build the whole binary whenever one of its components gets a vulnerability. Would different tiers or categories of notifications be introduced? E.g. one for cargo (as a whole, incl. deps) and one for the toolchain, etc. My question is whether the following case would get notified, and how:

Since static linking makes whole binaries vulnerable, unlike .so system components that can be replaced: let's say the minimum requirement for tar used by cargo was x.x in the Cargo manifest, but tar x.x was vulnerable, and the fix landed in x.y. Cargo gets bumped +1 and updates the tar requirement to x.y. Would we tell people to move to cargo +1, since the pre-bump dependency tar x.x was vulnerable, making the previously compiled cargo vulnerable with tar x.x, whereas cargo +1 includes tar x.y statically linked? Normally in distros they just update distinct system components, but static linking requires another level of thinking for the whole dependency chain.
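A hedged aside on tooling: the third-party RustSec project already surfaces this class of problem by checking a project's pinned dependencies against its advisory database (this is existing ecosystem tooling, not something proposed by this RFC):

```sh
# Install the third-party RustSec auditing tool (not part of this RFC).
cargo install cargo-audit

# Check Cargo.lock against the RustSec advisory database; a vulnerable
# pinned version of a statically linked crate (such as the hypothetical
# tar x.x above) is flagged even though only the final binary ships.
cargo audit
```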
Hello, curious about the progress here: am I reading correctly that there are no further points open? Can this RFC be merged? It has been a lot of work and effort; it would be a bit sad to leave it here 🙂
Update: I checked in with Pietro, and he said that he'll try to get back to this in a few months.
This RFC proposes a change to the project's security policy to expand the set of organizations that are notified with the full details of Rust security vulnerabilities ahead of public disclosure, and to pre-announce upcoming security releases to the general public.
Rendered