New RFC: signing registry index commits #2474

Closed

Conversation

@withoutboats
Contributor

withoutboats commented Jun 14, 2018

This RFC proposes signing the commits to the index of a registry, and for cargo to automatically verify these signatures. It additionally includes a proposed system for performing key rotations. It is intended to be a minimal, intermediate step toward improving the security of cargo registries (including crates.io) without completely replacing the existing index system.

Thanks to Henry de Valence, André Arko, Yehuda Katz and Tony Arcieri for providing feedback on this RFC prior to publishing.

Rendered

@withoutboats added the T-lang and T-cargo labels (Relevant to the language/Cargo teams, which will review and decide on the RFC) and removed the T-lang label Jun 14, 2018
2. A keys file exists for that registry.

An attempt to update the `HEAD` of a signed registry to a commit that is not
signed by one of the existing committer keys is a hard failure that will
Member

modulo key rotation I presume?

(May need to be clearer about "existing" here)

For that reason, crates.io will adopt policy that `can-rotate` keys are stored
in an offline medium, with due security precautions. As a future hardening, we
could also support a threshold signature scheme, requiring signatures from
multiple `can-rotate` keys to perform a key rotation, reducing the impact of
Member

This can also be handled out of band; you can have one (or potentially more) can-rotate key that is Shamir'd between core team members. While there is a single key, it is only accessible if (any) X core team members (or other keyholders) come together.

This scheme does have the vulnerability of introducing a step where the keyholders need to combine their Shamir shares and decrypt the signing key; once reconstructed, the key can still be leaked somehow and has zero protection after that.
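As a side note on the threshold idea quoted above (requiring signatures from multiple `can-rotate` keys), here is a minimal sketch in Rust of such a k-of-n policy check, with OpenPGP verification stubbed out behind a closure; this is an illustration only, not part of the RFC:

```rust
use std::collections::HashSet;

/// Hypothetical identifier for a can-rotate key (e.g. an OpenPGP fingerprint).
type KeyId = String;

/// Accept a key rotation only if at least `threshold` distinct, currently
/// trusted can-rotate keys have produced a valid signature over `payload`.
/// `is_valid` stands in for real OpenPGP signature verification.
fn rotation_approved(
    trusted_can_rotate: &HashSet<KeyId>,
    signatures: &[(KeyId, Vec<u8>)],
    payload: &[u8],
    threshold: usize,
    is_valid: impl Fn(&KeyId, &[u8], &[u8]) -> bool,
) -> bool {
    let mut approvers: HashSet<&KeyId> = HashSet::new();
    for (key_id, sig) in signatures {
        // Count only currently trusted rotation keys, and each key at most once.
        if trusted_can_rotate.contains(key_id) && is_valid(key_id, payload, sig) {
            approvers.insert(key_id);
        }
    }
    approvers.len() >= threshold
}
```

Because every approver is named explicitly, this keeps the accountability that a fully anonymizing threshold signature scheme would lose.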

@rvolgers

rvolgers commented Jun 14, 2018

There is one possible caveat to git's integrity guarantees. It doesn't apply to Cargo as used today, but I think it shows that while git does provide useful security properties, it is a flexible tool that is not necessarily designed to uphold those properties in every usage scenario, which could matter for future custom Cargo deployments and other new developments.

Apparently git over the file:// or rsync:// transports also copies the index files from the remote repository (and doesn't verify them), which means the integrity guarantees are weaker.

Not sure if this is still the case. I wasn't able to find any updated information quickly and wanted to leave this note here in case I forget.

@est31
Member

est31 commented Jun 15, 2018

Thanks @withoutboats for writing this beautiful RFC!

I like that it strikes a balance between improving security on the one hand and making pragmatic choices on the other, e.g. sticking to SHA-1 until the technology stack we use supports SHA-2.
There has recently been an update on progress on that issue on the git mailing list: link.

I also agree that a switch to TUF would indeed be a bad idea. To me, there seems to be little benefit in knowing that some particular author has authored a crates.io package vs knowing that the author logged in to crates.io successfully. TUF also dramatically increases the risks of individual key loss.

As an open question, I wonder how source replacement should be handled after the change. When working on Rust projects offline, for example, I need to edit the index to point to my localhost server for storage. It would be cool to still enable such local edits without requiring signatures, and without me needing to change the key set of the local registry or perform other possibly complicated operations. A flag that can be added to a source definition, like verify-signatures = false, would solve these issues.
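For illustration, the opt-out described above might look like this in a `.cargo/config` source-replacement section; the `verify-signatures` key is hypothetical (a suggestion in this comment, not something cargo or the RFC currently defines):

```toml
# Replace crates.io with a local mirror for offline work.
[source.crates-io]
replace-with = "local-mirror"

[source.local-mirror]
registry = "http://localhost:8000/index"
verify-signatures = false   # hypothetical opt-out; not an existing cargo option
```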

@burdges

burdges commented Jun 16, 2018

Just fyi, there is a Rust PGP implementation project: https://sequoia-pgp.org/

I support this direction overall, of course. In fact, PGP/GPG brings an enormous, if painful, array of useful tooling, e.g. smart cards, which sounds useful but might not work with these keys in .cargo.

We've moved past the pain of PGP with modern messaging tools like Signal, which largely replace careful key management with strong automatic forward secrecy measures. I'd propose an automatic forward security solution for repositories together with a slightly simpler PGP integration than proposed here:

We could give every repository with local commits an automatically generated private Ed25519 key in the .cargo directory, with the public key placed into the repository itself. We would then rotate this signing key with every commit (or publish, in this case), with the old key signing the next whole commit, including the new public key added to the repository.

These automatic keys would not, of course, restrict crates.io uploads, but they would provide useful forensic information, even for crates that never utilize PGP. An adversary who compromises a developer's machine creates a break in this chain of keys if they rewrite any commit history predating what is currently published. Also, we achieve cheap PGP integration with ordinary PGP-signed repository commits. In fact, crates can frequently benefit from PGP retroactively, assuming the developer never broke the association between the repository and the key in .cargo. We'd need a scheme to restrict crates.io index uploads to PGP-signed commits.

This automated scheme leaks more metadata about developers' workflow: any breaks in the signature chain indicate that repositories were cloned or copied between machines. If two signing chains remain active, it indicates the developer works from two different machines. Or, worse, the absence of a break may indicate they use synchronization tools that themselves create an attack vector, e.g. Dropbox.

tl;dr This RFC proposes a seemingly non-standard PGP integration that supports forward security properties, e.g. can-rotate. Instead, we should provide those forward security properties with a separate automatic always-on signing scheme that rotates keys with every commit, while using more standard git PGP integration for access control to crates.io and long-term key management.
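A minimal sketch of the per-publication rotation chain described above (assuming the ed25519-dalek and rand crates; the commit serialization is a placeholder, and none of this is part of the RFC):

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

/// One link in the automatic signing chain: the *previous* key signs the new
/// commit contents together with the *next* public key, then retires.
struct ChainLink {
    commit: Vec<u8>,           // placeholder for the serialized commit
    next_public: VerifyingKey, // key that will sign the following publication
    signature: Signature,      // made by the previous key over (commit || next_public)
}

fn publish(previous: &SigningKey, commit: Vec<u8>) -> (ChainLink, SigningKey) {
    // Generate the key that will sign the *next* publication.
    let next = SigningKey::generate(&mut OsRng);
    let next_public = next.verifying_key();

    // The outgoing key signs the commit plus the incoming public key, so any
    // later rewrite of already-published history breaks the chain.
    let mut message = commit.clone();
    message.extend_from_slice(next_public.as_bytes());
    let signature = previous.sign(&message);

    (ChainLink { commit, next_public, signature }, next)
}

fn verify(previous_public: &VerifyingKey, link: &ChainLink) -> bool {
    let mut message = link.commit.clone();
    message.extend_from_slice(link.next_public.as_bytes());
    previous_public.verify(&message, &link.signature).is_ok()
}
```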

@ishitatsuyuki
Contributor

The motivation for signing a repository is not clear; it's no more than one more layer of access control. Doing MITM on HTTPS is only common for corporate proxies, and in that case 1) there doesn't seem to be a threat model, and 2) if tampering is required, it's likely that the organisation will force an alternative key to be trusted.

We're also different from TUF, in the sense that we're not distributing updates, just metadata. The point is that a registry is full of user-uploaded content, in contrast to a package repository where only trusted users package the distributions.

Finally, putting ultimate trust in the centralised registry doesn't make sense, for the same reason as above. Blindly signing someone else's content is basically a false sense of security. What we need instead is user-signed crate tarballs, and maybe some "officially approved" keys (or signing chains) that help users identify genuine crates.

@withoutboats
Contributor Author

withoutboats commented Jun 16, 2018

@est31

As an open question, I wonder how source replacement should be handled after the change.

The supported source replacement mechanisms aren't impacted by this change; a registry replacement can be signed just like any registry can, whereas a local-registry or directory replacement has no additional security properties under this RFC.

What you're doing, though (having users edit their own index) would stop working. I'm surprised it works today; I guess cargo performs a merge for you entirely automatically when it updates the index again? Ideally your setup would have a solution that doesn't require users to edit their index. I'm not sure what that would look like, but it seems like a fruitful discussion to have on another thread.


@burdges

I didn't understand your comment. I think the miscommunication is around what git repositories this RFC is talking about. The index information for the crates.io repository, containing all of the information about crates.io packages and used in version resolution for those packages, is distributed as a git repository. End users are never intended to commit to this repository, they only fetch it.


@ishitatsuyuki

When you say that "doing MITM on HTTPS is only common for corporate proxies" I think you misunderstand the motivation of this RFC. You seem to be referring to the sort of aboveboard interception of HTTPS traffic, using a certificate the organisation has required users to trust, that is common in certain settings. But this RFC is talking about malicious attacks on the transport protocol used to obtain index updates in general; a MITM attack in which an attacker obtains a false certificate for github.com and then secretly intercepts traffic between the user and GitHub would be an example of this sort of attack.

In other words, this RFC is about hardening us in the event that an attacker somehow successfully breaches the security of TLS or SSH. This RFC provides a second layer of authentication between cargo and a registry service that isn't dependent on the transport layer.

@mark-i-m
Member

@withoutboats Thanks! This is a great step.

Is the "security considerations" section intended to be a threat model? Could we possibly make it more explicit and label it "threat model"?

Also, I am assuming that we are choosing not to handle a few things in this first pass: namely, corruption of the .cargo/pubkeys file (or whatever it is called). For example, if I can somehow cause that file to be corrupted, then the user just has to start over and they're stuck with the "trust the first download" thing. Alternatively, if I can run a build.rs that adds my key to your ring...

If we're not dealing with this now, that's fine. I still think this is an improvement, but I think those caveats should be more explicit.

(Also, one potential mitigation is to run build.rs as the "nobody" user and give .cargo/pubkeys a mode like the .ssh dir)

@withoutboats
Contributor Author

@mark-i-m Yeah, this RFC is not intended to protect against an attacker with unfettered access to a user's home directory and the ability to remotely execute code.

@burdges

burdges commented Jun 17, 2018

I described a scheme that applies to anything remotely resembling a repository, but I could surely have described it more clearly.

In this RFC, you propose signing crate publications, but suggest doing this by managing per-crate PGP keys outside the usual PGP ecosystem, right? You should do roughly this, except (a) use a less complex signature scheme that can be fully automated and rotates keys with every publication, and (b) authenticate these automatically generated chains of signing keys by signing one link using PGP.

@ishitatsuyuki
Contributor

But this RFC is talking about malicious attacks on the transport protocol used to obtain the index updates in general, and a MITM attack in which an attacker obtains a false certificate for github.com and then secretly intercepts traffic between the user and GitHub would be an example of this sort of attack.

If GitHub is compromised, we have other things to worry about. For example, you can steal an arbitrary user's session to gain access to crates.io. Also, to prevent the attack in your example, HPKP should be used for HTTPS, and SSH only trusts one key per host.

Also, I don't think that we're going to use Git indefinitely; downloading the full index doesn't scale, and what I propose is in rust-lang/cargo#5597. tl;dr: with that approach, per-crate/per-user signing makes much more sense.

@est31
Member

est31 commented Jun 17, 2018

HPKP should be used for HTTPS

Github doesn't seem to use HPKP.

@titanous

Github doesn't seem to use HPKP.

HPKP is deprecated: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/he9tr7p3rZ8

@withoutboats
Contributor Author

In this RFC, you propose signing crate publications, but suggest doing this by managing a per-crate PGP keys outside the usual PGP ecosystem, right?

This is not what the RFC proposes. This is about signing commits to the registry index, which cargo uses to resolve dependencies and which is stored as a git repository. It has nothing to do with per-crate PGP keys (I think you're confusing it with an earlier, unrelated proposal for end users signing their packages).

@withoutboats
Contributor Author

If GitHub is compromised, we have other things to worry about. For example, you can steal an arbitrary user's session to gain access to crates.io.

You can also create a new GitHub account to "gain access" to crates.io. I assume you mean that you could gain an access token with that user's privileges, which is true, but out of scope for this RFC. If you can impersonate a user's GitHub account, you can impersonate that user under crates.io's current auth model, it's true. But this RFC is about protecting against attempts to impersonate the registry, not any particular user.

@burdges

burdges commented Jun 17, 2018

I see. I skipped the opening sentence, which defines what a registry is. oops ;)

Yes, I believe this all looks good in that case. Go for it! :)

[Insert image saying "SIGN ALL THE REPOS"]

I've already pointed out the sequoia-pgp.org project, which might prove helpful.

Just to be clear, there might still be advantages in rotating keys rapidly here too, but we usually envision forward-security as protecting humans from themselves, and good policies can achieve the same ends for organizations like registries.

We can harden crates.io against attacks at this point by distributing the
current trusted keys with the rustup distribution, allowing any security
mechanisms we put in place for the distribution of Rust and cargo binaries to
also secure this operation.
Member

Something I've seen done in Linux package managers is a simple prompt that asks the user whether they want to add some key to their key store.

This would require users of cargo on e.g. CI (where almost every use is the first use) to explicitly acknowledge that they are not getting any security, or to set up the CI so that the keys are cached/signed correctly between runs.

@ishitatsuyuki
Contributor

What's your threat model?

First, there are only limited ways to compromise the registry.

  1. Compromising GitHub: very unlikely.
  2. MITM without gaining full access to the client: unlikely.
  3. Breaching a rust-lang admin's token: this can happen, and is the only way to write to crates.io-index.
  4. Compromising a rust-lang admin's account: harder. This allows hijacking the crates.io OAuth as well, allowing arbitrary crate uploads/updates.
  5. Breaching the S3 token: harder, I can think of two ways: 1. hack a dev's machine, 2. leak it from Heroku.
  6. Compromising crates.io: unlikely, but if so it's game over.

Second, if you could compromise the registry, what could you do? The real crate files are stored on S3, which means you need to compromise both the Git index and S3 to do real harm.

@est31
Member

est31 commented Jun 23, 2018

@ishitatsuyuki the git index includes the URL used to download the crate files. Right now it points to crates.io, but if you can freely modify the index to change crate checksums, you can also edit it to point to another URL.
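For context, the download location lives in the config.json at the root of the index; it looks roughly like this (exact contents may differ):

```json
{
  "dl": "https://crates.io/api/v1/crates",
  "api": "https://crates.io"
}
```

An attacker who can rewrite the index can therefore point dl at a host they control as well as adjust the per-crate checksums, so the checksums are only as trustworthy as the index itself.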

You also forgot another threat: breakage of the TLS connection between GitHub and the local cargo instance. This is more likely than you think, as many shady parties have access to SSL root certificates.

@burdges

burdges commented Jun 23, 2018

I'd consider it a "safe assumption" that for different users GitHub's CA is compromised, GitHub itself is compromised, Amazon is compromised, and some rust-lang accounts and servers are compromised.

Among consumer services, à la Yahoo, etc., such compromises must be considered a given, post-Snowden. About the only argument against this goes "we've seen APTs attack sysadmins, but not developers", which sounds pretty silly. It's also wrong, because the Juniper backdoor being shifted from NSA control to (presumably) Chinese control most likely occurred through developers being compromised.

@arielb1
Contributor

arielb1 commented Jun 30, 2018

@ishitatsuyuki

TLS is proxied often enough, and often by unreliable proxies (consider security appliances of dubious quality), that relying on it for verification is non-ideal.


An attempt to update the `HEAD` of a signed registry to a commit that is not
signed by one of the existing committer keys is a hard failure that will
prevent cargo from resolving dependencies in that registry any longer. Until
Contributor

Leaving malicious data in a git repository is dangerous because it might accidentally be read (consider all the apt-get problems with it skipping verification in some cases). Is there a way of preventing unverified contents from hitting the filesystem?

Contributor

Also, are we relying on git for downgrade protection? i.e., does git prevent you from updating a commit to a non-child commit?

Contributor Author

Is there a way of preventing unverified contents from hitting the filesystem?

I don't know if libgit2 exposes a way to fetch into memory without saving to the file system. In general, as long as cargo traverses the database from HEAD (after verifying the signature of HEAD for each cargo command), I don't think there's a lot of risk here.

It might be nice to reverse the fetch after verification fails, but we want to keep the repo in a failed state somehow to avoid this being a way of DoSing the index.

does git prevent you from updating a commit to a non-child commit?

We're just pulling, so we do a fetch followed by a merge, and we can make sure that the merge is a fast-forward. I'm not sure what you mean by downgrade protection; I think this suggests you're thinking we'd want invalid updates to be rejected and for the index to then continue working as normal. That is actually undesirable in my opinion: we want it to fail loudly, so that, for example, this can't be used as an avenue for keeping a user from receiving valid updates to the index (by always serving them invalid updates, which they would experience as having no update).

Contributor

I don't know if libgit2 exposes a way to fetch into memory without saving to the file system.

I don't mean avoiding use of the .git directory, just that we shouldn't check out the new branch before we verify its signature; there have been too many git vulnerabilities that relied on checking out files with carefully crafted names/contents.

Downgrade attacks

For the kinds of attacks on package managers, see the classic paper at https://isis.poly.edu/~jcappos/papers/cappos_mirror_ccs_08.pdf

Apparently that paper calls a downgrade a "replay attack". By that, I mean sending a version of the crates.io metadata that is older than the version the client has, hoping that the client will go back to an older version. I think git fetch + merge will not accept downgrades, but it needs to be checked and written down.

we want it to fail loudly, so that for example this can't be used as an avenue for keeping a user from receiving valid updates to the index

An attacker that can defeat the transport encryption can always conduct a freeze attack and pretend that no update had happened yet (up to any expiration dates we might have). I don't see an advantage in attacking using invalid data.

Still, if an invalid update is detected, cargo should report an error rather than silently not performing an update, just to keep us sane.

Contributor Author

I don't mean avoiding using the .git directory, just that we shouldn't check out the new branch before we verify its signature

Definitely! We should make sure cargo never checks out FETCH_HEAD.

An attacker that can defeat the transport encryption can always conduct a freeze attack and pretend that no update had happened yet (up to any expiration dates we might have).

You're right, and I had forgotten I had removed an expiration mechanism from this RFC.

Member

@est31 Jul 2, 2018

Cargo doesn't check out the index any more: rust-lang/cargo#4060. If you have a checkout at ~/.cargo/registry/index/github.com-1ecc6299db9ec823/ then it's because of an older version of cargo which still did a checkout.

Edit: it's a bit more complicated, sadly. rust-lang/cargo#4026 made cargo not check out the index, but that broke older cargo, so rust-lang/cargo#4060 made it check out the index again. If we want to make cargo not check out the index, we'd have to drop backwards compatibility.

Contributor Author

Right, we don't do real checkouts, but it's about managing what reference we use for our local HEAD. Right now we just fetch refs/remotes/origin/master; with this RFC we'll need to track a local master branch as well as origin/master (without doing a full checkout).

Contributor Author

@arielb1 Looking at the code, we use the refspec "refs/heads/master:refs/remotes/origin/master"; because this doesn't have a +, we only perform a fast-forward fetch.
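For concreteness, here is a rough sketch of the verify-before-advance flow being discussed, assuming the git2 crate (libgit2 bindings), an existing local master from a previously trusted state, and a placeholder verify_pgp closure standing in for whatever OpenPGP verification cargo would use; this is an illustration, not the RFC's implementation:

```rust
use git2::{Oid, Repository};

/// Fetch the index, verify the signature on the new tip, and only then
/// fast-forward the local branch. Nothing is ever checked out to a work tree.
fn update_index(
    repo: &Repository,
    verify_pgp: impl Fn(&[u8], &[u8]) -> bool,
) -> Result<(), git2::Error> {
    let mut remote = repo.find_remote("origin")?;
    // No leading '+', so this refspec only allows fast-forward updates of the
    // remote-tracking ref.
    remote.fetch(&["refs/heads/master:refs/remotes/origin/master"], None, None)?;

    let new_tip: Oid = repo.refname_to_id("refs/remotes/origin/master")?;
    let old_tip: Oid = repo.refname_to_id("refs/heads/master")?;

    // Downgrade / non-ancestor protection: the new tip must descend from the old one.
    if new_tip != old_tip && !repo.graph_descendant_of(new_tip, old_tip)? {
        return Err(git2::Error::from_str("index update is not a fast-forward"));
    }

    // Extract the gpgsig header and the signed payload from the commit object.
    let (signature, signed_data) = repo.extract_signature(&new_tip, None)?;
    if !verify_pgp(&signature, &signed_data) {
        return Err(git2::Error::from_str("signature on new HEAD did not verify"));
    }

    // Only after verification do we advance the local branch (still no checkout).
    repo.reference("refs/heads/master", new_tip, true, "verified index update")?;
    Ok(())
}
```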

signed (a 'signed registry'), the signature of HEAD is verified against the
public keys which are permitted to sign commits to the index.

## The keys TOML
@tarcieri Jul 20, 2018

This section (and by extension the key rotation section) is the only part of this proposal that's incompatible with TUF, because TUF defines a metadata format for the exact same thing.

TUF typically uses JSON (and Canonical JSON) for metadata, however I don't see any reason why it couldn't use TOML instead.

I think you could have a "minimum viable TUF" implementation that just uses a few more files and a slightly different structure for the data, which otherwise would work the exact same way, and wouldn't (at least initially) depend on anything more heavy-handed than parsing the TOML files and verifying signatures exactly the way you otherwise would in this proposal.


It would be trivial to convert the TOML to canonical JSON for signing.
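As a purely hypothetical illustration of that point (the actual keys file schema is defined in the RFC text, not here), a TOML entry and its canonical-JSON form for signing might look like:

```toml
[[keys]]
fingerprint = "0123456789ABCDEF"   # placeholder value
can-commit = true
can-rotate = false
```

```json
{"keys":[{"can-commit":true,"can-rotate":false,"fingerprint":"0123456789ABCDEF"}]}
```

Canonical JSON essentially fixes key ordering and whitespace so that every implementation signs byte-identical data.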

@withoutboats
Contributor Author

I have been implementing this RFC on a branch & I've mostly found it successful. I want to make a few small changes to the RFC based on my experience:

  1. Do not reject registries as broken when HEAD is signed but they're missing a keys file. This policy makes deciding to stop signing commits very disruptive for a registry & in retrospect provides little benefit: if they can delete your keys file, they can probably write their own public key into it.
  2. Rather than determining key rotation order by the commits the tags point to, which is potentially very expensive as you have to iterate over every parent ID since your last update, instead require registries to number their key rotations sequentially.
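For illustration, sequential numbering could look something like the following (file layout and field names are hypothetical, not the RFC's final format):

```toml
# rotations/1.toml: signed by a can-rotate key from the previous key set.
rotation = 1

# A later rotations/2.toml must carry rotation = 2; cargo applies rotations in
# order and rejects any gap or out-of-order number, so it never has to walk
# the commit graph to establish ordering.
```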

@trishankatdatadog

trishankatdatadog commented Aug 3, 2018

Hi @withoutboats,

Firstly, thanks for working on improving crates.io security! Someone has to start the hard work, and it's great that you're doing it.

@tarcieri, @SantiagoTorres and I have been discussing this. We think you should be able to transparently use the PEP 458 TUF security model we drafted for PyPI, but over git for crates.io. Allow me to elaborate a bit.

TUF is designed to handle the worst-case scenario where the software repository itself (in this case, crates.io) is compromised. I think it's safe to say that a repository compromise is no longer in the theoretical realm.

We understand your concerns that TUF looks a bit complicated, but there are good reasons for its design decisions:

  1. Separation of duties: different "roles" sign different types of metadata (with varying levels of importance) with different keys.
  2. Threshold signatures: metadata for really important roles are signed using m out of n keys.
  3. Explicit and implicit key rotation: there are mechanisms to recover from a key loss or compromise. TUF also provides a mechanism called delegations to transparently distribute and rotate keys for 3rd party developers.
  4. Minimizing risk with offline signatures: keys for really important roles are kept offline, or off the repository, in safe deposit boxes, for example, and used only rarely.
  5. Diversity of cryptographic algorithms: TUF uses multiple signing and hashing algorithms so that a compromise of any one of them is insufficient to cause a repository compromise.

The PEP 458 model basically starts with providing what we call the "minimum security model." In this simple model, automation running the repository (crates.io) will sign for all packages using online keys (or signing keys accessible automatically to the automation). All signed metadata (just JSON files) can be transported over any file transfer protocol, such as HTTP or even git. There are repository tools that will let the automation easily produce signed metadata, and I believe the rust-tuf client can parse this metadata (though @heartsucker should correct me if I am wrong).

Later on, we can talk about ramping up security with better security models such as PEP 480, but I strongly believe PEP 458 should not be too difficult to adapt for crates.io. Please let us know if you have questions, and we will be happy to help however we can.

Thanks,
Trishank

@withoutboats
Contributor Author

Could you provide more material and specific suggestions for how the security properties of this proposal could be improved? My problem with TUF is not that it's "a bit complicated," but that it is presented as a wholesale solution which is incompatible with our existing infrastructure. What material changes do you want to see made to this RFC, and how do you believe those changes would improve our security practices?

@heartsucker

@withoutboats Here are two papers that have info about the benefits of TUF.

@tarcieri

tarcieri commented Aug 29, 2018

I took a stab at modifying this proposal to use TUF file formats and concepts:

withoutboats#7

I think they map pretty naturally: I mapped the can-rotate privilege to the TUF root role, and the can-commit privilege to the TUF timestamp role.

I think there's a lot of value in using TUF for this purpose, particularly for a "Phase 2" of supporting end-to-end signatures on crates created by developers at the time the crates are published. It also doesn't require that many changes: really it's just changes to the file formats.

@burdges

burdges commented Aug 31, 2018

Avoid true threshold signatures here, à la BLS, because they anonymize the signers within the key set, which weakens accountability in any application like this or TUF's. There is nothing wrong with ordinary logic that says "approve rotation if k of these n keys sign", but you need the specific k signers to be designated in each signature. In fact, there is nothing even wrong with BLS aggregation via delinearization used as a fake threshold signature scheme, so that verification requires identifying the signers. Apologies for nit-picking about the terminology; maybe multi-signature sounds less ambiguous.

@tarcieri

@burdges the threshold signature implementation that I think might make sense here is using a sequence of signature packets within a single ASCII-armored OpenPGP message, particularly since it allows doing threshold signing in a backwards-compatible way, where existing tools can still verify the signature but ignore the additional signature packets in the same message. This approach works with the signature algorithm of your choice.

@burdges

burdges commented Sep 1, 2018

Yeah, that's harmless of course.

@Centril added the A-security label (Security related proposals & ideas) Nov 22, 2018
@ebkalderon

Quick question about key rotation and the handling of compromised keys. If a previously trusted key used to sign commits is suddenly known to be compromised or revoked, what is the procedure for recovering from that? For example, are the commits to the registry index signed by that key cherry-picked, verified by the original committer, and re-signed by a new trusted key? I could not determine from the text of this RFC how compromised keys are handled.

@tarcieri

tarcieri commented Dec 4, 2018

@ebkalderon glossing over the weaknesses in SHA-1 for a moment, signing any given commit authenticates the entire history, as the entire history forms a single hash-based data structure.

This is covered on L46-L51 of this proposal. L58 describes how authentication occurs: only the signature on the latest commit (i.e. HEAD) is checked. The other signatures may be invalid (and MUST be treated as invalid if they were created by a compromised key).

@alexcrichton
Member

@rfcbot fcp postpone

The Cargo team discussed this RFC last week during triage, and unfortunately we concluded that we don't have the bandwidth at this time to work through and review this RFC. As a result I'm going to propose that we close this as postponed. We're still very interested in the motivation behind this RFC, but at this time we're not quite equipped to handle it.

@rfcbot
Collaborator

rfcbot commented Oct 28, 2019

Team member @alexcrichton has proposed to postpone this. The next step is review by the rest of the tagged team members:

No concerns currently listed.

Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

@rfcbot added the proposed-final-comment-period label (Currently awaiting signoff of all team members in order to enter the final comment period) and the disposition-postpone label (This RFC is in PFCP or FCP with a disposition to postpone it) Oct 28, 2019
@ehuss
Contributor

ehuss commented Oct 28, 2019

@rfcbot reviewed

@rfcbot
Collaborator

rfcbot commented Oct 28, 2019

🔔 This is now entering its final comment period, as per the review above. 🔔

@rfcbot added the final-comment-period label (Will be merged/postponed/closed in ~10 calendar days unless new substantial objections are raised) and removed the proposed-final-comment-period label (Currently awaiting signoff of all team members in order to enter the final comment period) Oct 28, 2019
@tarcieri

I'd love to help work on a follow-up effort. That said I don't have bandwidth for this right now either, so no worries about postponing it.

@trishankatdatadog

Same

@rfcbot added the finished-final-comment-period label (The final comment period is finished for this RFC) and removed the final-comment-period label (Will be merged/postponed/closed in ~10 calendar days unless new substantial objections are raised) Nov 7, 2019
@rfcbot
Collaborator

rfcbot commented Nov 7, 2019

The final comment period, with a disposition to postpone, as per the review above, is now complete.

As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed.

The RFC is now postponed.

@rfcbot added the postponed label (RFCs that have been postponed and may be revisited at a later time) and removed the disposition-postpone label (This RFC is in PFCP or FCP with a disposition to postpone it) Nov 7, 2019
@rfcbot closed this Nov 7, 2019